Wednesday, December 11, 2013

Randy Wayne Schekman (born December 30, 1948) is an American cell biologist at the University of California, Berkeley,[6] and former editor-in-chief of Proceedings of the National Academy of Sciences.[2][7][8] In 2011 he was announced as the editor of eLife, a new high profile open access journal published by the Howard Hughes Medical Institute, the Max Planck Society and the Wellcome Trust launching in 2012.[9] He was elected to the National Academy of Sciences in 1992.[10] Schekman shared the 2013 Nobel Prize for Physiology or Medicine with James Rothman and Thomas C. Südhof.[5][11]

How journals like Nature, Cell and Science are damaging science | Randy Schekman
http://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science

I am a scientist. Mine is a professional world that achieves great things for humanity. But it is disfigured by inappropriate incentives. The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best. Those of us who follow these incentives are being entirely rational – I have followed them myself – but we do not always best serve our profession's interests, let alone those of humanity and society.
We all know what distorting incentives have done to finance and banking. The incentives my colleagues face are not huge bonuses, but the professional rewards that accompany publication in prestigious journals – chiefly Nature, Cell and Science.
These luxury journals are supposed to be the epitome of quality, publishing only the best research. Because funding and appointment panels often use place of publication as a proxy for quality of science, appearing in these titles often leads to grants and professorships. But the big journals' reputations are only partly warranted. While they publish many outstanding papers, they do not publish only outstanding papers. Neither are they the only publishers of outstanding research.
These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called "impact factor" – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.
It is common, and encouraged by many journals, for research to be judged by the impact factor of the journal that publishes it. But as a journal's score is an average, it says little about the quality of any individual piece of research. What is more, citation is sometimes, but not always, linked to quality. A paper can become highly cited because it is good science – or because it is eye-catching, provocative or wrong. Luxury-journal editors know this, so they accept papers that will make waves because they explore sexy subjects or make challenging claims. This influences the science that scientists do. It builds bubbles in fashionable fields where researchers can make the bold claims these journals want, while discouraging other important work, such as replication studies.
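To see why a journal-level average reveals little about any one paper, here is a minimal sketch in Python (mine, not Schekman's; the citation counts are invented, and a real impact factor is computed over a two-year window rather than from a simple list):

# Hypothetical citation counts for ten papers from one journal; most are
# cited a handful of times, one outlier is cited heavily.
citations = [2, 1, 0, 3, 2, 1, 4, 0, 2, 185]

mean_citations = sum(citations) / len(citations)           # the journal-level "score"
median_citations = sorted(citations)[len(citations) // 2]  # a typical paper

print(f"Journal average (impact-factor-style score): {mean_citations:.1f}")  # 20.0
print(f"Median paper's citations:                    {median_citations}")    # 2

# A high average driven by a few blockbusters says almost nothing about the
# quality of any individual paper published in the journal.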
In extreme cases, the lure of the luxury journal can encourage the cutting of corners, and contribute to the escalating number of papers that are retracted as flawed or fraudulent. Science alone has recently retracted high-profile papers reporting cloned human embryos, links between littering and violence, and the genetic profiles of centenarians. Perhaps worse, it has not retracted claims that a microbe is able to use arsenic in its DNA instead of phosphorus, despite overwhelming scientific criticism.
There is a better way, through the new breed of open-access journals that are free for anybody to read, and have no expensive subscriptions to promote. Born on the web, they can accept all papers that meet quality standards, with no artificial caps. Many are edited by working scientists, who can assess the worth of papers without regard for citations. As I know from my editorship of eLife, an open access journal funded by the Wellcome Trust, the Howard Hughes Medical Institute and the Max Planck Society, they are publishing world-class science every week.
Funders and universities, too, have a role to play. They must tell the committees that decide on grants and positions not to judge papers by where they are published. It is the quality of the science, not the journal's brand, that matters. Most importantly of all, we scientists need to take action. Like many successful researchers, I have published in the big brands, including the papers that won me the Nobel prize for medicine, which I will be honoured to collect tomorrow. But no longer. I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.
Just as Wall Street needs to break the hold of the bonus culture, which drives risk-taking that is rational for individuals but damaging to the financial system, so science must break the tyranny of the luxury journals. The result will be better research that better serves science and society.

Monday, October 28, 2013



Problems with scientific research
How science goes wrong




Scientific research has changed the world. Now it needs to change itself
Oct 19th 2013 | From the print edition

[[There are a lot of articles on this subject, but due to the deserved reputation of the Economist I am putting this one on the blog. D. G.]]
A SIMPLE idea underpins science: “trust, but verify”. Results should always be subject to challenge from experiment. That simple but powerful idea has generated a vast body of knowledge. Since its birth in the 17th century, modern science has changed the world beyond recognition, and overwhelmingly for the better.
But success can breed complacency. Modern scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity.
Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
What a load of rubbish
Even when flawed research does not put people’s lives at risk—and much of it is too far from the market to do so—it squanders money and the efforts of some of the world’s best minds. The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising.
One reason is the competitiveness of science. In the 1950s, when modern academic research took shape after its successes in the second world war, it was still a rarefied pastime. The entire club of scientists numbered a few hundred thousand. As their ranks have swelled, to 6m-7m active researchers on the latest reckoning, scientists have lost their taste for self-policing and quality control. The obligation to “publish or perish” has come to rule over academic life. Competition for jobs is cut-throat. Full professors in America earned on average $135,000 in 2012—more than judges did. Every year six freshly minted PhDs vie for every academic post. Nowadays verification (the replication of other people’s results) does little to advance a researcher’s career. And without verification, dubious findings live on to mislead.
Careerism also encourages exaggeration and the cherry-picking of results. In order to safeguard their exclusivity, the leading journals impose high rejection rates: in excess of 90% of submitted manuscripts. The most striking findings have the greatest chance of making it onto the page. Little wonder that one in three researchers knows of a colleague who has pepped up a paper by, say, excluding inconvenient data from results “based on a gut feeling”. And as more research teams around the world work on a problem, the odds shorten that at least one will fall prey to an honest confusion between the sweet signal of a genuine discovery and a freak of the statistical noise. Such spurious correlations are often recorded in journals eager for startling papers. If they touch on drinking wine, going senile or letting children play video games, they may well command the front pages of newspapers, too.
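The "odds shorten" point can be made concrete with a back-of-the-envelope calculation (mine, not the Economist's): if each test of a hypothesis that is actually false has the conventional 5% chance of clearing the p < 0.05 bar, the chance that at least one of several independent teams reports a spurious "discovery" rises quickly.

# Chance that at least one of n independent teams reports a false positive,
# if each test of a true null hypothesis has a 5% chance of crossing p < 0.05.
alpha = 0.05

for n_teams in (1, 5, 10, 20, 50):
    p_any_false_positive = 1 - (1 - alpha) ** n_teams
    print(f"{n_teams:2d} teams -> P(at least one spurious 'discovery') = {p_any_false_positive:.2f}")

# 1 team -> 0.05, 10 teams -> 0.40, 20 teams -> 0.64, 50 teams -> 0.92:
# the more groups chase the same fashionable question, the more likely it is
# that someone mistakes statistical noise for a genuine signal.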
Conversely, failures to prove a hypothesis are rarely even offered for publication, let alone accepted. “Negative results” now account for only 14% of published papers, down from 30% in 1990. Yet knowing what is false is as important to science as knowing what is true. The failure to report failures means that researchers waste money and effort exploring blind alleys already investigated by other scientists.
The hallowed process of peer review is not all it is cracked up to be, either. When a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.
If it’s broke, fix it
All this makes a shaky foundation for an enterprise dedicated to discovering the truth about the world. What might be done to shore it up? One priority should be for all disciplines to follow the example of those that have done most to tighten standards. A start would be getting to grips with statistics, especially in the growing number of fields that sift through untold oodles of data looking for patterns. Geneticists have done this, and turned an early torrent of specious results from genome sequencing into a trickle of truly significant ones.
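One standard device behind that tightening, sketched here only for illustration (the leader does not spell out the method), is to shrink the per-test significance threshold in proportion to the number of hypotheses examined, as in the Bonferroni-style "genome-wide significance" convention:

# Naive thresholds drown a large screen in false positives; a corrected
# threshold keeps the overall error budget fixed across all tests.
n_tests = 1_000_000        # e.g. variants examined in a genome-wide scan
alpha = 0.05               # conventional per-test threshold

expected_false_hits = n_tests * alpha
corrected_threshold = alpha / n_tests   # Bonferroni-style correction

print(f"Expected false positives at p < 0.05: {expected_false_hits:,.0f}")  # 50,000
print(f"Corrected per-test threshold:         {corrected_threshold:.0e}")   # 5e-08
# A threshold of roughly 5e-8 is the "genome-wide significance" convention
# that helped turn a torrent of specious genetic associations into a trickle
# of reproducible ones.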
Ideally, research protocols should be registered in advance and monitored in virtual notebooks. This would curb the temptation to fiddle with the experiment’s design midstream so as to make the results look more substantial than they are. (It is already meant to happen in clinical trials of drugs, but compliance is patchy.) Where possible, trial data also should be open for other researchers to inspect and test.
The most enlightened journals are already becoming less averse to humdrum papers. Some government funding agencies, including America’s National Institutes of Health, which dish out $30 billion on research each year, are working out how best to encourage replication. And growing numbers of scientists, especially young ones, understand statistics. But these trends need to go much further. Journals should allocate space for “uninteresting” work, and grant-givers should set aside money to pay for it. Peer review should be tightened—or perhaps dispensed with altogether, in favour of post-publication evaluation in the form of appended comments. That system has worked well in recent years in physics and mathematics. Lastly, policymakers should ensure that institutions using public money also respect the rules.
Science still commands enormous—if sometimes bemused—respect. But its privileged status is founded on the capacity to be right most of the time and to correct its mistakes when it gets things wrong. And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.


Sunday, October 6, 2013





Evolution, Speeded by Computation

‘Probably Approximately Correct’ Explores Nature’s Algorithms

Our daily lives are growing ever more dependent on algorithms, those omnipresent computational procedures that run programs on our laptops, our smartphones, our GPS devices and much else. Algorithms influence our decisions, too: when we choose a movie on Netflix or a book on Amazon, we are presented with recommendations churned out by sophisticated algorithms that take into account our past choices and those of other users determined (by still other algorithms) to be similar to us.

The importance of these algorithms in the modern world is common knowledge, of course. But in his insightful new book “Probably Approximately Correct,” the Harvard computer scientist Leslie Valiant goes much further: computation, he says, is and has always been “the dominating force on earth within all its life forms.” Nature speaks in algorithms.
Dr. Valiant believes that phenomena like evolution, adaptation and learning are best understood in terms of “ecorithms,” a term he has coined for algorithms that interact with and benefit from their environment. Ecorithms are at play when children learn how to distinguish cats from dogs, when we navigate our way in a new city — but more than that, Dr. Valiant writes, when organisms evolve and when brain circuits are created and maintained.
Here is one way he illustrates this complex idea. Suppose we want to distinguish between two types of flowers by measuring their petals. For each petal we have two numbers, x for length and y for width. The task is to find a way to tell which type of flower a petal with given measurements x and y belongs to.
To achieve this, the algorithm is fed a set of examples meant to “train” it to come up with a good criterion for distinguishing the two flowers. The algorithm does not know the criterion in advance; it must “learn” it using the data that are fed to it.
So it starts with a hypothesis and tests it on the first example. (Say flower No. 1's petals can be described by the formula 2x - 3y > 2, while for Flower No. 2 it's 2x - 3y < 2.) If the hypothesis works as applied to a new example, we proceed to the next example. If it misclassifies the new example, the hypothesis is updated by a certain precise rule. That is literally how a learning algorithm goes along.
A striking mathematical theorem is that if a rule separating the two flowers exists (within the class of criteria we are considering, such as linear inequalities), then our algorithm will find it after a finite number of steps, no matter what the starting hypothesis was.
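The procedure described above is essentially the classical perceptron algorithm; the following minimal Python sketch (my reconstruction for illustration, not code from Dr. Valiant's book) shows the mistake-driven update loop and the fact that it halts once a separating rule is found.

# A minimal perceptron-style learner for the two-flower example: each petal
# is a pair (length, width); the label says which flower it belongs to.
# Toy, linearly separable training data: (length, width, label).
examples = [
    (4.0, 1.0, +1), (5.0, 1.5, +1), (4.5, 1.2, +1),   # flower no. 1
    (1.0, 2.0, -1), (1.5, 2.5, -1), (0.8, 1.8, -1),   # flower no. 2
]

# Start from an arbitrary hypothesis: the criterion
# "w1*length + w2*width + b > 0 means flower no. 1", with everything zero.
w1, w2, b = 0.0, 0.0, 0.0

converged = False
while not converged:
    converged = True
    for length, width, label in examples:
        prediction = +1 if (w1 * length + w2 * width + b) > 0 else -1
        if prediction != label:
            # Misclassified: update the hypothesis by a precise rule,
            # nudging the separating line toward the mistaken example.
            w1 += label * length
            w2 += label * width
            b += label
            converged = False

print(f"Learned rule: {w1:.1f}*length + ({w2:.1f})*width + ({b:.1f}) > 0  =>  flower no. 1")

What the article describes matches the classical perceptron convergence theorem: if any separating linear rule exists, this mistake-driven update finds one after finitely many corrections, whatever the starting hypothesis.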
And Dr. Valiant argues that similar mechanisms are at work in nature. An organism can adapt to a new environment by starting with a hypothesis about the environment, then testing it against new data and, based on the feedback, gradually improving the hypothesis by using an ecorithm, to behave more effectively.
“Probably Approximately Correct,” Dr. Valiant’s winsome title, is his quantitative framework for understanding how these ecorithms work. In nature, there is nothing so neat as our idealized flower algorithm; we cannot really hope to get a precise rule distinguishing between two types of flowers, but can hope only to have one that gives an approximate result with high probability.
The evolution of species, as Darwin taught us, relies on natural selection. But Dr. Valiant argues that if all the mutations that drive evolution were simply random and equally distributed, it would proceed at an impossibly slow and inefficient pace.
Darwin’s theory “has the gaping gap that it can make no quantitative predictions as far as the number of generations needed for the evolution of a behavior of a certain complexity,” he writes. “We need to explain how evolution is possible at all, how we got from no life, or from very simple life, to life as complex as we find it on earth today. This is the BIG question.”
[[Shades of Thomas Nagel!! - D.G.]]
Dr. Valiant proposes that natural selection is supplemented by ecorithms, which enable organisms to learn and adapt more efficiently. Not all mutations are realized with equal probability; those that are more beneficial are more likely to occur. In other words, evolution is accelerated by computation.
This is an ambitious proposal, sure to ignite controversy. But what I find so appealing about this discussion, and the book in general, is that Dr. Valiant fearlessly goes to the heart of the “BIG” questions. (I don’t know him, though we share an editor at Basic Books.) He passionately argues his case, but is also first to point out the parts of his theory that are incomplete, eager to anticipate and confront possible counterarguments. This is science at its best, driven not by dogma and blind belief, but by the desire to understand, intellectual integrity and reliance on facts.
Many other topics are discussed, from computational complexity to intelligence (artificial and not), and even an algorithmic perspective on teaching. The book is written in a lively, accessible style and is surprisingly entertaining. It’s funny how your perception of even mundane tasks can change after reading it — you start thinking algorithmically, confirming Dr. Valiant’s maxim that computer science is “more about humans than about computers.”
Edward Frenkel, a professor of mathematics at the University of California, Berkeley, is the author of the new book “Love and Math.”

Sunday, August 11, 2013

Current Limits of Science - a few of very many.

About six months ago I had a tragi-comic exchange with the principal of a North American day school. One of his students, sixteen years old, challenged his teacher and the rest of the class with the words: "Since science can explain everything, there is no need to believe in G-d." I suppose one should both laugh and cry: laugh at the colossal ignorance of the student, and cry for the damage done by the false impression of the scope of science. Just in case that student is not alone, here is a [very partial] list of some outstanding mysteries for which science has no clue. [I invite readers to write to me with more that I can add.]

1. Dark matter. http://en.wikipedia.org/wiki/Dark_matter
In astronomy and cosmology, dark matter is a type of matter hypothesized to account for a large part of the total mass in the universe. Dark matter cannot be seen directly with telescopes; evidently it neither emits nor absorbs light or other electromagnetic radiation at any significant level.[1] Instead, its existence and properties are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. According to the Planck mission team, and based on the standard model of cosmology, the total mass–energy of the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy.[2][3] Thus, dark matter is estimated to constitute 84.5% of the total matter in the universe and 26.8% of the total content of the universe.[4] ... According to consensus among cosmologists, dark matter is composed primarily of a not yet characterized type of subatomic particle.[6][7] The search for this particle, by a variety of means, is one of the major efforts in particle physics today.[8]
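The two figures quoted for dark matter (26.8% of everything, 84.5% of the matter) are consistent with each other; a quick check of the arithmetic, using only the Planck numbers above:

# Planck figures quoted above, as fractions of the universe's total mass-energy.
ordinary_matter = 0.049
dark_matter = 0.268
dark_energy = 0.683

dark_share_of_matter = dark_matter / (ordinary_matter + dark_matter)
print(f"Dark matter as a share of all matter: {dark_share_of_matter:.1%}")  # ~84.5%
print(f"Sum of the three components:          {ordinary_matter + dark_matter + dark_energy:.1%}")  # 100.0%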

2. Dark energy. http://en.wikipedia.org/wiki/Dark_energy
In physical cosmology and astronomy, dark energy is a hypothetical form of energy that permeates all of space and tends to accelerate the expansion of the universe.[1] Dark energy is the most accepted hypothesis to explain observations since the 1990s that indicate that the universe is expanding at an accelerating rate. According to the Planck mission team, and based on the standard model of cosmology, the total mass–energy of the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy.[2][3][4]
Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously,[5] and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. [[In other words, two contradictory proposals mean that we do not understand what it is.]]

[[Taking 1 and 2 together, all our science explains at most 4.9% of the total mass-energy of the universe. And by the way, not all is well with the 4.9% either: http://www.space.com/15936-astronomy-mysteries-science-countdown.html ("Where are the Missing Baryons?")
Dark energy and dark matter combine to occupy approximately 95 percent of the universe, with regular matter making up the remaining 5 percent. But researchers have been puzzled to find that more than half of this regular matter is missing.

This missing matter is called baryonic matter, and it is composed of particles such as protons and electrons that make up the majority of the mass of the universe's visible matter.

Some astrophysicists suspect that missing baryonic matter may be found between galaxies, in material known as warm-hot intergalactic medium, but the universe's missing baryons remain a hotly debated topic.]]


3. See the posts on this blog under the label "evolution" for the many failures of that theory.

Explanatory gap

From Wikipedia, the free encyclopedia
The explanatory gap is a term introduced by philosopher Joseph Levine for the difficulty that physicalist theories of mind have in explaining how physical properties give rise to the way things feel when they are experienced.[1] In the 1983 paper in which he first used the term, he used as an example the sentence, "Pain is the firing of C fibers", pointing out that while it might be valid in a physiological sense, it does not help us to understand how pain feels.
The explanatory gap has vexed and intrigued philosophers and AI researchers alike for decades and caused considerable debate. Bridging this gap (that is, finding a satisfying mechanistic explanation for experience and qualia) is known as "the hard problem".[2]
To take an example of a phenomenon in which there is no gap, imagine a modern computer: as marvelous as these devices are, their behavior can be fully explained by their circuitry, and vice versa. By contrast, it is thought by many mind-body dualists (e.g. René Descartes, David Chalmers) that subjective conscious experience constitutes a separate effect that demands another cause, a cause that is either outside the physical world (dualism) or due to an as yet unknown physical phenomenon (see for instance Quantum mind, Indirect realism).
Proponents of dualism claim that the mind is substantially and qualitatively different from the brain and that the existence of something metaphysically extra-physical is required to 'fill the gap'.
The nature of the explanatory gap has been the subject of some debate. For example, some consider it to simply be a limit on our current explanatory ability.[3] They argue that future findings in neuroscience or future work from philosophers could close the gap. However, others have taken a stronger position and argued that the gap is a definite limit on our cognitive abilities as humans—no amount of further information will allow us to close it.[4] There has also been no consensus regarding what metaphysical conclusions the existence of the gap provides. Those wishing to use its existence to support dualism have often taken the position that an epistemic gap—particularly if it is a definite limit on our cognitive abilities—necessarily entails a metaphysical gap.[5]
Others, such as Joseph Levine, have wished to either remain silent on the matter or argue that no such metaphysical conclusion should be drawn.[1] He agrees that conceivability (as used in the Zombie and inverted spectrum arguments) is flawed as a means of establishing metaphysical realities; but he points out that even if we come to the metaphysical conclusion that qualia are physical, they still present an explanatory problem.
While I think this materialist response is right in the end, it does not suffice to put the mind-body problem to rest. Even if conceivability considerations do not establish that the mind is in fact distinct from the body, or that mental properties are metaphysically irreducible to physical properties, still they do demonstrate that we lack an explanation of the mental in terms of the physical.
However, such an epistemological or explanatory problem might indicate an underlying metaphysical issue—the non-physicality of qualia, even if not proven by conceivability arguments is far from ruled out.
In the end, we are right back where we started. The explanatory gap argument doesn't demonstrate a gap in nature, but a gap in our understanding of nature. Of course a plausible explanation for there being a gap in our understanding of nature is that there is a genuine gap in nature. But so long as we have countervailing reasons for doubting the latter, we have to look elsewhere for an explanation of the former.[6]
Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. But if consciousness is subjective and not visible from the outside, why do the vast majority of people believe that other people are conscious, but rocks and trees are not?[41] This is called the problem of other minds.[42] It is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness.[43]
The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior: we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do.[44] There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe.[44] 
In 2004, eight neuroscientists felt it was too soon for a definition. They wrote an apology in "Human Brain Function":[82]
"We have no idea how consciousness emerges from the physical activity of the brain and we do not know whether consciousness can emerge from non-biological systems, such as computers ... At this point the reader will expect to find a careful and precise definition of consciousness. You will be disappointed. Consciousness has not yet become a scientific term that can be defined in this way. Currently we all use the term consciousness in many different and often ambiguous ways. Precise definitions of different aspects of consciousness will emerge ... but to make precise definitions at this stage is premature."

Mind–body problem


Illustration of dualism by René Descartes. Inputs are passed by the sensory organs to the pineal gland and from there to the immaterial spirit.
The first influential philosopher to discuss this question specifically was Descartes and the answer he gave is known as Cartesian dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things which he called res extensa (the realm of extension).[27] He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland.[28]
Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed.[29] Alternative solutions, however, have been very diverse. They can be divided broadly into two categories: dualist solutions that maintain Descartes' rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics) and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought truly exists and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these camps.[30]

[[This list of alternative theories shows how far we are from an understanding of consciousness.]]

4. The origin of life. [Not to be confused with the theory of evolution which tries to explain only how life develops, not how it originated.] 
For a very thorough review of the decades of scientific despair in dealing with this problem, look here: http://torahexplorer.com./

And look at this summary of the various proposals http://en.wikipedia.org/wiki/Abiogenesis

  • Current models
  • Other models

  • The February 2013 issue of Scientific American contains the following iconoclastic articles:

  • 5. Europe's winter weather



  • New Simulations Question the Gulf Stream’s Role in Tempering Europe’s Winters

    It's the flow of warm tropical water across the Atlantic that keeps European winters mild, right? Maybe not
    For a century, schoolchildren have been taught that the massive ocean current known as the Gulf Stream carries warm water from the tropical Atlantic Ocean to northwestern Europe. As it arrives, the water heats the air above it. That air moves inland, making winter days in Europe milder than they are in the northeastern U.S.
    It might be time to retire that tidy story. The explosion of interest in global climate has prompted scientists to closely study the climatic effects of the Gulf Stream only to discover that those effects are not as clear as conventional wisdom might suggest. Based on modeling work and ocean data, new explanations have emerged for why winter in northern Europe is generally less bitter than winter at the same latitudes in the northeastern U.S. and Canada—and the models differ on the Gulf Stream's role. One of the explanations also provides insight into why winter in the U.S. Northwest is warmer than it is across the Pacific in eastern Russia.
    At the same time, recent studies have been casting doubt on the popular conjecture made a few years ago that melting of Arctic ice could “shut down” the Gulf Stream, thereby wreaking havoc with Europe's weather. Yet the studies do suggest that climate change could at least affect the strength of the Gulf Stream, which could lessen the impact of global warming on northern Europe.
    ...later on in the article they say the Gulf Stream theory is refuted...

    6. How does the brain store memory? 


    A Single Brain Cell Stores a Single Concept [Preview]

    Each concept—each person or thing in our everyday experience—may have a set of corresponding neurons assigned to it

    ...in which are described the two theories of memory - millions of cells widely distributed in the brain, or narrow simple storage - that are still in competition after several decades. New discoveries lend some support to the latter, but memory storage in the brain is still not understood.
    7. What causes aging? 
    Is the Free-Radical Theory of Aging Dead? [Preview]
    The hallowed notion that oxidative damage causes aging and that vitamins might preserve our youth is now in doubt
    David Gems's life was turned upside down in 2006 by a group of worms that kept on living when they were supposed to die. As assistant director of the Institute of Healthy Aging at University College London, Gems regularly runs experiments on Caenorhabditis elegans, a roundworm that is often used to study the biology of aging. In this case, he was testing the idea that a buildup of cellular damage caused by oxidation—technically, the chemical removal of electrons from a molecule by highly reactive compounds, such as free radicals—is the main mechanism behind aging. According to this theory, rampant oxidation mangles more and more lipids, proteins, snippets of DNA and other key components of cells over time, eventually compromising tissues and organs and thus the functioning of the body as a whole.

    Gems genetically engineered the roundworms so they no longer produced certain enzymes that act as naturally occurring antioxidants by deactivating free radicals. Sure enough, in the absence of the antioxidants, levels of free radicals in the worms skyrocketed and triggered potentially damaging oxidative reactions throughout the worms' bodies. [[But they did not die any faster than normal worms. The whole understanding of the process of aging is now in doubt.]]
    8. Human origins. 

    Will Scientists Ever Be Able to Piece Together Humanity's Early Origins? [Preview]

    New fossil discoveries complicate the already devilish task of identifying our most ancient progenitors
    By Katherine Harmon 

    From a distance, you probably would have assumed her to be human. Although she stood only about a meter tall, with long arms and a small head, she walked, if perhaps slightly inelegantly, upright on two legs, as we, alone among living mammals, do. This familiar yet strange individual is Lucy, a member of the species Australopithecus afarensis, who lived some 3.2 million years ago. She is one of the oldest creatures presumed to have strode on the evolutionary path leading to our species, Homo sapiens.
    When Lucy was uncovered in 1974, evidence of bipedal locomotion virtually guaranteed her kind a spot in the human family tree. And although scientists had an inkling that other branches of humans coexisted more recently alongside our own, early human evolution appeared to be a simple affair, with Lucy and the other ancient bipeds that eventually came to light belonging to the same lone lineage. Thus, the discoveries seemed to uphold the notion of human evolution as a unilinear “march of progress” from a knuckle-walking chimplike ape to our striding, upright form—a schema that has dominated paleoanthropology for the past century. Yet as researchers dig back further in time, our origins are turning out to be a lot more complicated than that iconic image would suggest.

    [[...indeed, in the body of the article it is suggested that it is unrealistic to expect that the lineage of homo sapiens will ever be established.]]
    end of articles from the Scientific American.