Wednesday, June 23, 2021

Severe Limits on AI

 

Same or Different? The Question Flummoxes Neural Networks

For all their triumphs, AI systems can’t seem to generalize the concepts of “same” and “different.” Without that, researchers worry, the quest to create truly intelligent machines may be hopeless.

https://www.quantamagazine.org/same-or-different-ai-cant-tell-20210623/

 

An array of blocky orange objects of all different shapes, with a single blue blob in one of the rows and columns.

Samuel Velasco/Quanta Magazine

John Pavlus

Contributing Writer


June 23, 2021



The first episode of Sesame Street in 1969 included a segment called “One of These Things Is Not Like the Other.” Viewers were asked to consider a poster that displayed three 2s and one W, and to decide — while singing along to the game’s eponymous jingle — which symbol didn’t belong. Dozens of episodes of Sesame Street repeated the game, comparing everything from abstract patterns to plates of vegetables. Kids never had to relearn the rules. Understanding the distinction between “same” and “different” was enough.

Machines have a much harder time. One of the most powerful classes of artificial intelligence systems, known as convolutional neural networks or CNNs, can be trained to perform a range of sophisticated tasks better than humans can, from recognizing cancer in medical imagery to choosing moves in a game of Go. But recent research has shown that CNNs can tell whether two simple visual patterns are identical only under very limited conditions. Vary those conditions even slightly, and the network’s performance plunges.

These results have caused debate among deep-learning researchers and cognitive scientists. Will better engineering produce CNNs that understand sameness and difference in the generalizable way that children do? Or are CNNs’ abstract-reasoning powers fundamentally limited, no matter how cleverly they’re built and trained? Whatever the case, most researchers seem to agree that understanding same-different relations is a crucial hallmark of intelligence, artificial or otherwise.




“Not only do you and I succeed at the same-different task, but a bunch of nonhuman animals do, too — including ducklings and bees,” said Chaz Firestone, who studies visual cognition at Johns Hopkins University.

The ability to succeed at the task can be thought of as a foundation for all kinds of inferences that humans make. Adam Santoro, a researcher at DeepMind, said that the Google-owned AI lab is “studying same-different relations in a holistic way,” not just in visual scenes but also in natural language and physical interactions. “When I ask an [AI] agent to ‘pick up the toy car,’ it is implied that I am talking about the same car we have been playing with, and not some different toy car in the next room,” he explained. A recent survey of research on same-different reasoning also stressed this point. “Without the ability to recognize sameness,” the authors wrote, “there would seem to be little hope of realizing the dream of creating truly intelligent visual reasoning machines.”

Same-different relations have dogged neural networks since at least 2013, when the pioneering AI researcher Yoshua Bengio and his co-author, Caglar Gulcehre, showed that a CNN could not tell if groups of blocky, Tetris-style shapes were identical or not. But this blind spot didn’t stop CNNs from dominating AI. By the end of the decade, convolutional networks had helped AlphaGo beat the world’s best Go player, and nearly 90% of deep-learning-enabled Android apps relied on them.

Getting any machine to learn same-different distinctions may require a breakthrough in the understanding of learning itself.

This explosion in capability reignited some researchers’ interest in exploring what these neural networks couldn’t do. CNNs learn by roughly mimicking the way mammalian brains process visual input. One layer of artificial neurons detects simple features in raw data, such as bright lines or differences in contrast. The network passes these features along to successive layers, which combine them into more complex, abstract categories. According to Matthew Ricci, a machine-learning researcher at Brown University, same-different relations seemed like a good test of CNNs’ limits because they are “the simplest thing you can ask about an image that has nothing to do with its features.” That is, whether two objects are the same doesn’t depend on whether they’re a pair of blue triangles or identical red circles. The relation between features matters, not the features themselves.
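As a rough sketch of the idea (my own illustration, not code from any of the studies discussed), a single convolutional layer can be written in a few lines: a small kernel slides across the image and responds wherever its feature appears. Deeper layers apply the same operation to these response maps, which is how the more abstract categories get built up.

```python
# Minimal sketch of one convolutional layer: a 3x3 vertical-edge kernel
# slides over a tiny grayscale image, responding where brightness changes
# from left to right. (Illustrative only; real CNNs learn their kernels.)

def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation) on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A bright vertical bar on a dark background.
image = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

# Classic vertical-edge kernel: positive on the left, negative on the right.
edge_kernel = [
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
]

response = conv2d(image, edge_kernel)
# Each output row is [-3, 0, 3]: strong responses flank the bar's two edges.
```

A full network would stack many such layers, with later ones combining edge responses into detectors for corners, shapes and whole objects.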

In 2018, Ricci and collaborators Junkyung Kim and Thomas Serre tested CNNs on images from the Synthetic Visual Reasoning Test (SVRT), a collection of simple patterns designed to probe neural networks’ abstract reasoning skills. The patterns consisted of pairs of irregular shapes drawn in black outline on a white square. If the pair was identical in shape, size and orientation, the image was classified “same”; otherwise, the pair was labeled “different.”

The researchers found that a CNN trained on many examples of these patterns could distinguish “same” from “different” with up to 75% accuracy when shown new examples from the SVRT image set. But modifying the shapes in two superficial ways — making them larger, or placing them farther apart from each other — made the CNNs’ accuracy go “down, down, down,” Ricci said. The researchers concluded that the neural networks were still fixated on features, instead of learning the relational concept of “sameness.”
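A toy illustration (my own construction, not the SVRT code) of the distinction the researchers are drawing: a rule that compares shapes as shapes keeps working when a shape is moved, while a rule tied to raw pixel locations, the kind of shortcut a feature-bound network can fall into, collapses under the same shift.

```python
# Each "shape" is a set of (row, col) cells on a grid.

def crop(shape_cells):
    """Normalize a shape to its bounding box, so only its form
    remains and its position on the grid is discarded."""
    rows = [r for r, _ in shape_cells]
    cols = [c for _, c in shape_cells]
    return frozenset((r - min(rows), c - min(cols)) for r, c in shape_cells)

def relational_same(shape_a, shape_b):
    """Relational rule: 'same' iff the position-normalized forms match."""
    return crop(shape_a) == crop(shape_b)

def pixel_same(shape_a, shape_b):
    """Feature-bound rule: 'same' iff the raw cell coordinates match
    exactly -- a shortcut that breaks as soon as a shape moves."""
    return set(shape_a) == set(shape_b)

# An L-shaped triomino at two locations, plus a genuinely different bar.
l_at_origin = {(0, 0), (1, 0), (1, 1)}
l_shifted   = {(5, 7), (6, 7), (6, 8)}   # same form, different place
bar         = {(3, 3), (3, 4), (3, 5)}   # different form

print(relational_same(l_at_origin, l_shifted))  # True: the relation survives the shift
print(relational_same(l_at_origin, bar))        # False: different forms
print(pixel_same(l_at_origin, l_shifted))       # False, despite identical forms
```

The relational rule generalizes because it compares the shapes to each other; the pixel rule fails because it is anchored to where the features happen to sit, which is roughly the fixation the researchers diagnosed.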

Last year, Christina Funke and Judy Borowski of the University of Tübingen showed that increasing the number of layers in a neural network from six to 50 raised its accuracy above 90% on the SVRT same-different task. However, they didn’t test how well this “deeper” CNN performed on examples outside the SVRT data set, as Ricci’s group had. So the study didn’t provide any evidence that deeper CNNs could generalize the concepts of same and different.



Guillermo Puebla and Jeffrey Bowers, cognitive scientists at the University of Bristol, investigated in a follow-up study earlier this year. “Once you grasp a relation, you can apply it to whatever comes to you,” said Puebla. CNNs, he maintains, should be held to the same standard.

Puebla and Bowers trained four CNNs with various initial settings (including some of the same ones used by Funke and Borowski) on several variations of the SVRT same-different task. They found that subtle changes in the low-level features of the patterns — like changing the thickness of a shape’s outline from one pixel to two — were often enough to cut a CNN’s performance in half, from near perfect to barely above chance.

What this means for AI depends on whom you ask. Firestone and Puebla think the recent results offer empirical evidence that current CNNs lack a fundamental reasoning capability that can’t be shored up with more data or cleverer training. Despite their ever-expanding powers, “it’s very unlikely that CNNs are going to solve this problem” of discriminating same from different, Puebla said. “They might be part of the solution if you add something else. But by themselves? It doesn’t look like it.”

Funke agrees that Puebla’s results suggest that CNNs are still not generalizing the concept of same-different. “However,” she said, “I recommend being very careful when claiming that deep convolutional neural networks in general cannot learn the concept.” Santoro, the DeepMind researcher, agrees: “Absence of evidence is not necessarily evidence of absence, and this has historically been true of neural networks.” He noted that neural networks have been mathematically proved to be capable, in principle, of approximating any function. “It is a researcher’s job to determine the conditions under which a desired function is learned in practice,” Santoro said.

Ricci thinks that getting any machine to learn same-different distinctions will require a breakthrough in the understanding of learning itself. Kids understand the rules of “One of These Things Is Not Like the Other” after a single Sesame Street episode, not extensive training. Birds, bees and people can all learn that way — not just when learning to tell “same” from “different,” but for a variety of cognitive tasks. “I think that until we figure out how you can learn from a few examples and novel objects, we’re pretty much screwed,” Ricci said.


Thursday, June 17, 2021

Challenging the central dogma in biology

 

New discovery shows human cells can write RNA sequences into DNA



Credit: CC0 Public Domain

Cells contain machinery that duplicates DNA into a new set that goes into a newly formed cell. That same class of machines, called polymerases, also build RNA messages, which are like notes copied from the central DNA repository of recipes, so they can be read more efficiently into proteins. But polymerases were thought to only work in one direction: DNA into DNA, or DNA into RNA. This prevents RNA messages from being rewritten back into the master recipe book of genomic DNA. Now, Thomas Jefferson University researchers provide the first evidence that RNA segments can be written back into DNA, which potentially challenges the central dogma in biology and could have wide implications affecting many fields of biology.

"This work opens the door to many other studies that will help us understand the significance of having a mechanism for converting RNA messages into DNA in our own cells," says Richard Pomerantz, Ph.D., associate professor of biochemistry and molecular biology at Thomas Jefferson University. "The reality that a human polymerase can do this with high efficiency raises many questions." For example, this finding suggests that RNA messages can be used as templates for repairing or re-writing genomic DNA.

The work was published June 11th in the journal Science Advances.

Together with first author Gurushankar Chandramouly and other collaborators, Dr. Pomerantz's team started by investigating one very unusual polymerase, called polymerase theta. Of the 14 DNA polymerases in mammalian cells, only three do the bulk of the work of duplicating the entire genome to prepare for cell division. The remaining 11 are mostly involved in detecting and making repairs when there's a break or error in the DNA strands. Polymerase theta repairs DNA, but is very error-prone and makes many errors or mutations. The researchers noticed that some of polymerase theta's "bad" qualities were ones it shared with another cellular machine, albeit one more common in viruses—the reverse transcriptase. Like Pol theta, HIV reverse transcriptase acts as a DNA polymerase, but can also bind RNA and read RNA back into a DNA strand.

In a series of elegant experiments, the researchers tested polymerase theta against the reverse transcriptase from HIV, which is one of the best studied of its kind. They showed that polymerase theta was capable of converting RNA messages into DNA, performing as well as HIV reverse transcriptase, and that it actually did a better job at this than at duplicating DNA into DNA. Polymerase theta was more efficient and introduced fewer errors when using an RNA template to write new DNA messages than when duplicating DNA into DNA, suggesting that this function could be its primary purpose in the cell.

The group collaborated with Dr. Xiaojiang S. Chen's lab at USC and used X-ray crystallography to define the structure and found that this molecule was able to change shape in order to accommodate the more bulky RNA molecule—a feat unique among polymerases.

"Our research suggests that polymerase theta's main function is to act as a reverse transcriptase," says Dr. Pomerantz. "In healthy cells, the purpose of this molecule may be toward RNA-mediated DNA repair. In unhealthy cells, such as cancer cells, polymerase theta is highly expressed and promotes cancer cell growth and drug resistance. It will be exciting to further understand how polymerase theta's activity on RNA contributes to DNA repair and cancer-cell proliferation."


Monday, June 14, 2021

Real fossil discontinuity

 

Is There Discontinuity in Biology — And How Would We Know?

https://evolutionnews.org/2021/06/is-there-discontinuity-in-biology-and-how-would-we-know/
Casey Luskin

Photo credit: t4berlin, via Pixabay.

Recently a correspondent of mine raised the issue of whether we should assume that there is 100 percent continuity throughout the tree of life — what is often called “universal common ancestry” (UCA) — until demonstrated otherwise. In the debate over UCA, such a framing would shift the burden of proof to those who claim that there are discontinuities. I get the feeling that UCA-proponents want to put UCA-skeptics into an “extraordinary claims require extraordinary evidence” box, and the idea that there is discontinuity in biology is being implicitly classed as an “extraordinary claim.” 

For my part, I think it’s better to approach the data without assumptions and to let the evidence speak for itself. No claim about whether discontinuities exist in the tree of life should be handicapped as “extraordinary,” although I think a good case could be made that UCA is the more “extraordinary” claim. Why? Because all we directly observe today are discrete groups that don’t interbreed (these are like leaves on a tree), while common ancestry (like the branches of the tree) is inferred based upon various methodological justifications. Those methodologies often are inconsistent, contradictory, or exclude non-tree-like data. But I can leave that alone for now. I would say that no viewpoint should be handicapped and both pro-UCA and anti-UCA views should be treated equally. Thus, I reject attempts to frame the issue or shift burdens of proof. Let’s just follow the evidence where it leads. 

And What Is That Evidence?

The core postulate of evolutionary biology is “descent with modification.” Crucial to assessing this postulate is the adequacy of evolutionary mechanisms. ID proponents have raised many reasonable mathematical and biological challenges to the adequacy of these mechanisms to account for transitions that are claimed to have occurred in the history of life. 

For example, consider ID’s inquiry into waiting times: whale fossils are supposed to provide some of the best examples of “transitional forms” in the fossil record, demonstrating common descent between whales and land mammals. Whale intermediates have become a favorite argument for common descent. I distinctly recall one of my professors teaching us that the evolution of whales happened “incredibly fast” — at an almost unbelievably rapid pace. ID researchers are looking at the genetic changes necessary to transform a land mammal into a whale, and when you apply the mathematics of population genetics to the time available from the fossil record, there simply is not enough time for standard evolutionary mechanisms to produce those genetic changes. 
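As a purely illustrative sketch of the general shape of a waiting-time estimate (my own toy example, not the published analyses, and with hypothetical placeholder parameters rather than values for whale evolution): under a simple neutral model, the expected wait for one specific single-site mutation is roughly the time for it to arise somewhere in the population, about 1/(2Nu) generations, plus the time for it to drift to fixation, about 4N generations.

```python
# Toy waiting-time calculation for ONE specific neutral mutation.
# Hypothetical parameters; a real analysis would model multiple
# coordinated changes, selection, and population structure.

def waiting_time_years(pop_size, mutation_rate, generation_time_years):
    """Expected years for a specific single-site neutral mutation to
    arise in a diploid population and then drift to fixation."""
    origination = 1.0 / (2 * pop_size * mutation_rate)  # generations until it first appears
    fixation = 4.0 * pop_size                           # generations for neutral drift to fix it
    return (origination + fixation) * generation_time_years

# Placeholder values: N = 50,000 individuals, per-site mutation rate
# u = 1e-8 per generation, 10-year generation time.
years = waiting_time_years(50_000, 1e-8, 10)
print(f"{years:,.0f} years")  # -> 2,010,000 years for these toy values
```

Even in this single-mutation toy case the wait runs to millions of years; requiring several specific changes together lengthens the expected wait dramatically, which is the crux of the waiting-times objection.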

So what ID theorists are doing is showing that standard evolutionary mechanisms are incapable of producing the descent and modifications that are claimed to have taken place. If such an argument would not constitute a mathematically demonstrated discontinuity in the tree of life, I don’t know what would. Our critiques of the mechanisms of evolution give very adequate evidence-based reasons to be skeptical of the tree of life, and to suspect that discontinuities exist. 

Koonin and Company

Some leading biologists agree that there is evidence for discontinuity:

Major transitions in biological evolution show the same pattern of sudden emergence of diverse forms at a new level of complexity. The relationships between major groups within an emergent new class of biological entities are hard to decipher and do not seem to fit the tree pattern that, following Darwin’s original proposal, remains the dominant description of biological evolution. 

EUGENE V. KOONIN, “THE BIOLOGICAL BIG BANG MODEL FOR THE MAJOR TRANSITIONS IN EVOLUTION,” BIOLOGY DIRECT, 2:21 (AUGUST 20, 2007)

Koonin sees a lot of evidence for “discontinuity” (his word) in the tree of life. Here’s what he writes:

Below I list the most conspicuous instances of this pattern of discontinuity in the biological and pre-biological domains, and outline the central aspects of the respective evolutionary transitions.

1. Origin of protein folds

There seem to exist ~1,000 or, by other estimates, a few thousand distinct structural folds the relationships between which (if existent) are unclear.

2. Origin of viruses

For several major classes of viruses, notably, positive strand RNA viruses and nucleo-cytoplasmic large DNA viruses (NCLDV) of eukaryotes, substantial evidence of monophyletic origin has been obtained. However, there is no evidence of a common ancestry for all viruses.

3. Origin of cells 

The two principal cell types (the two prokaryotic domains of life), archaea and bacteria, have chemically distinct membranes, largely, non-homologous enzymes of membrane biogenesis, and also, non-homologous core DNA replication enzymes. This severely complicates the reconstruction of a cellular ancestor of archaea and bacteria and suggests alternative solutions.

4. Origin of the major branches (phyla) of bacteria and archaea

Although both bacteria and archaea show a much greater degree of molecular coherence within a domain than is seen between the domains (in particular, the membranes and the replication machineries are homologous throughout each domain), the topology of the deep branches in the archaeal and, especially, bacterial phylogenetic trees remains elusive. The trees conspicuously lack robustness with respect to the gene(s) analyzed and methods employed, and despite the considerable effort to delineate higher taxa of bacteria, a consensus is not even on the horizon. The division of the archaea into two branches, euryarchaeota and crenarchaeota is better established but even this split is not necessarily reproduced in trees, and further divisions in the archaeal domain remain murky.

5. Origin of the major branches (supergroups) of eukaryotes

Despite many ingenious attempts to decipher the branching order near the root of the phylogenetic tree of eukaryotes, there has been little progress, and an objective depiction of the state of affairs seems to be a “star” phylogeny, with the 5 or 6 supergroups established with reasonable confidence but the relationship between them remaining unresolved.

6. Origin of the animal phyla

The Cambrian explosion in animal evolution during which all the diverse body plans appear to have emerged almost in a geological instant is a highly publicized enigma. Although molecular clock analysis has been invoked to propose that the Cambrian explosion is an artifact of the fossil record whereas the actual divergence occurred much earlier, the reliability of these estimates appears to be questionable. In an already familiar pattern, the relationship between the animal phyla remains controversial and elusive.

KOONIN, “THE BIOLOGICAL BIG BANG MODEL FOR THE MAJOR TRANSITIONS IN EVOLUTION,” EMPHASES ADDED; CITATIONS OMITTED

Multiple Independent Converging Lines 

Koonin ultimately adopts a fairly orthodox evolutionary position, but he certainly shows that those who see discontinuity aren’t unjustified in doing so. I could cite additional evidence for discontinuity between groups. In the 2017 volume Theistic Evolution, I contributed a chapter critiquing UCA by looking at five common lines of evidence. The chapter critiques UCA in a manner that responds to the precise form of the argument that evolutionary biologists make: an argument from multiple independent converging lines of evidence. For example, in a 2010 Nature article, Douglas Theobald writes:

UCA [universal common ancestry] is now supported by a wealth of evidence from many independent sources, including: (1) the agreement between phylogeny and biogeography; (2) the correspondence between phylogeny and the palaeontological record; (3) the existence of numerous predicted transitional fossils; (4) the hierarchical classification of morphological characteristics; (5) the marked similarities of biological structures with different functions (that is, homologies); and (6) the congruence of morphological and molecular phylogenies.

DOUGLAS L. THEOBALD, “A FORMAL TEST OF THE THEORY OF UNIVERSAL COMMON ANCESTRY,” NATURE 465 (MAY 13, 2010): 219-222; EMPHASIS ADDED

A Comprehensive Critique

This is a form of argument found not just in the technical literature but also in many biology textbooks. So in my chapter in Theistic Evolution, titled “Universal Common Descent: A Comprehensive Critique,” I framed a critique of UCA by looking at “evidence from many independent sources” — specifically, biogeography, paleontology, molecular and morphological phylogenies, and embryology. Here’s what I found:

  • In biogeography, evolutionists appeal to unlikely and speculative explanations where species must raft across vast oceans in order for common descent to account for their unexpected locations.
  • Paleontology fails to reveal the continuous branching pattern predicted by common ancestry, and the fossil record is dominated by abrupt explosions of new life forms. 
  • Regarding molecular and morphology-based trees, conflicting phylogenies have left the “tree of life” in tatters. Inconsistent phylogenetic methods predict that shared similarity indicates common inheritance, except for when it doesn’t.
  • Similar inconsistent methodological problems exist in embryology, where significant differences exist between embryos in their early stages, leading evolutionary biologists to predict that similarities will exist between vertebrate embryos, except for when we find differences, and then it predicts those too.

The collective evidence cited above shows that those who believe the tree of life is not 100 percent continuous across all organisms aren’t crazy. Whatever burdens of proof need to be met to have our view taken seriously, we’ve far exceeded them. 

At the very least, I think that the framing where the default assumption should be “total continuity” among organisms (UCA) until some “extraordinary evidence” comes along to show otherwise, is not appropriate framing. There’s plenty of evidence of discontinuity in biology, and this alone should allow us to have a real conversation about the data, where both viewpoints can converse on equal footing. 

Scientific arguments should be based upon rhetorical symmetry: if a piece of evidence counts in favor of a theory, then its opposite should count against it. So if certain findings count as evidence for common descent, then contrary findings should count against common descent. And indeed we find much evidence contrary to the predictions of UCA. Those who believe in discontinuities have plenty of evidence to back their view. They should not be rhetorically handicapped as they participate in this conversation.

Sunday, May 23, 2021

Noncoding “Junk” DNA Is Important for Limb Formation

 

Noncoding “Junk” DNA Is Important for Limb Formation

https://evolutionnews.org/2021/05/noncoding-junk-dna-is-important-for-limb-formation/
Casey Luskin

Image credit: Schäferle via Pixabay.

A 2021 article in Nature, “Non-coding deletions identify Maenli lncRNA as a limb-specific En1 regulator,” reports important new functions for non-coding or “junk” DNA that underlie limb formation. Before we get to the paper itself, consider a description of it on the Proceedings of the National Academy of Sciences “Journal Club” blog. The latter describes the research in terms that sound like they could have come directly from an intelligent design source: 

Genes that code for proteins make up only about 2% of the human genome. Many researchers once dismissed the other 98% of the genome as “junk DNA,” but geneticists now know these noncoding regions help to regulate the activity of the 20,000 or so protein-coding genes identified.

A new study in Nature underscores just how important noncoding DNA can be for human development. The authors show that deletions in a noncoding region of DNA on chromosome 2 cause severe congenital limb abnormalities. This is the first time a human disease has been definitively linked to mutations in noncoding DNA, says lead author Stefan Mundlos, head of the development and disease research group at the Max Planck Institute for Molecular Genetics in Berlin, Germany.

“Severe Congenital Limb Malformation” 

The technical paper in Nature describes the research. The investigators examined the chromosomes of people who had naturally occurring limb malformation, and found that these people had deletions of DNA encoding long non-coding RNA sequences (lncRNAs) from human chromosome 2. They deleted corresponding DNA sequences in mice and found similar “severe congenital limb malformation,” suggesting these lncRNA sequences are functionally important:

Here we show that genetic ablation of a lncRNA locus on human chromosome 2 causes a severe congenital limb malformation. We identified homozygous 27–63-kilobase deletions located 300 kilobases upstream of the engrailed-1 gene (EN1) in patients with a complex limb malformation featuring mesomelic shortening, syndactyly and ventral nails (dorsal dimelia). Re-engineering of the human deletions in mice resulted in a complete loss of En1 expression in the limb and a double dorsal-limb phenotype that recapitulates the human disease phenotype. Genome-wide transcriptome analysis in the developing mouse limb revealed a four-exon-long non-coding transcript within the deleted region, which we named Maenli. Functional dissection of the Maenli locus showed that its transcriptional activity is required for limb-specific En1 activation in cis, thereby fine-tuning the gene-regulatory networks controlling dorso-ventral polarity in the developing limb bud. 

In the discussion, the article explains how important it is that we seek to understand the key functions of non-coding DNA sequences that encode lncRNAs:

In the era of whole-genome sequencing, the findings described here underscore the need for a systematic annotation and functional characterization of lncRNA loci to interpret and classify non-coding genetic variants. They highlight the importance of elucidating the complex diversity of lncRNA modes of action to assess their role in organ development and disease.

Over 130,000 Functional “Junk DNA” Elements!

So just how are we progressing in the task of determining the functions of non-coding DNA elements? Some defenders of evolutionary orthodoxy would have us believe that we’ve only found a handful of non-coding DNA sequences that have function — exceptions to the rule that non-coding DNA is usually useless junk. Another 2021 article in Nature shows why it’s no longer tenable for evolutionists to hide behind such an argument from ignorance. The article explains that over 130,000 functional “genomic elements, previously called junk DNA” have now been discovered, highlighting how important these “junk” segments have turned out to be:

[I]t is now appreciated that the majority of functional sequences in the human genome do not encode proteins. Rather, elements such as long non-coding RNAs, promoters, enhancers and countless gene-regulatory motifs work together to bring the genome to life. Variation in these regions does not alter proteins, but it can perturb the networks governing protein expression. With the HGP draft in hand, the discovery of non-protein-coding elements exploded. So far, that growth has outstripped the discovery of protein-coding genes by a factor of five, and shows no signs of slowing. Likewise, the number of publications about such elements also grew in the period covered by our data set. For example, there are thousands of papers on non-coding RNAs, which regulate gene expression.

The article also observes that prior to the Human Genome Project, which was completed in 2003, there was “great debate” over whether it was “worth mapping the vast non-coding regions of genome that were called junk DNA, or the dark matter of the genome.” Under a paradigm informed by intelligent design, debates over whether to investigate junk DNA would have ended much sooner with an emphatic Yes!, furthering our knowledge of genetics and medicine. How much sooner would these 130,000+ “genomic elements, previously called junk DNA” have been uncovered if an ID paradigm had been governing biology research? 

Sunday, May 9, 2021

Human Origins

 

Review: Most human origins stories are not compatible with known fossils

by American Museum of Natural History

https://phys.org/news/2021-05-human-stories-compatible-fossils.html

 

The last common ancestor of chimpanzees and humans represents the starting point of human and chimpanzee evolution. Fossil apes play an essential role when it comes to reconstructing the nature of our ape ancestry. Credit: Printed with permission from © Christopher M. Smith

In the 150 years since Charles Darwin speculated that humans originated in Africa, the number of species in the human family tree has exploded, but so has the level of dispute concerning early human evolution. Fossil apes are often at the center of the debate, with some scientists dismissing their importance to the origins of the human lineage (the "hominins"), and others conferring them starring evolutionary roles. A new review out on May 7 in the journal Science looks at the major discoveries in hominin origins since Darwin's works and argues that fossil apes can inform us about essential aspects of ape and human evolution, including the nature of our last common ancestor.

Humans diverged from apes—specifically, the chimpanzee lineage—at some point between about 9.3 million and 6.5 million years ago, towards the end of the Miocene epoch. To understand hominin origins, paleoanthropologists aim to reconstruct the physical characteristics, behavior, and environment of the last common ancestor of humans and chimps.

"When you look at the narrative for hominin origins, it's just a big mess—there's no consensus whatsoever," said Sergio Almécija, a senior research scientist in the American Museum of Natural History's Division of Anthropology and the lead author of the review. "People are working under completely different paradigms, and that's something that I don't see happening in other fields of science."

There are two major approaches to resolving the human origins problem: "Top-down," which relies on analysis of living apes, especially chimpanzees; and "bottom-up," which puts importance on the larger tree of mostly extinct apes. For example, some scientists assume that hominins originated from a chimp-like knuckle-walking ancestor. Others argue that the human lineage originated from an ancestor more closely resembling, in some features, some of the strange Miocene apes.

In reviewing the studies surrounding these diverging approaches, Almécija and colleagues with expertise ranging from paleontology to functional morphology and phylogenetics discuss the limitations of relying exclusively on one of these opposing approaches to the hominin origins problem. "Top-down" studies sometimes ignore the reality that living apes (humans, chimpanzees, gorillas, orangutans, and hylobatids) are just the survivors of a much larger, and now mostly extinct, group. On the other hand, studies based on the "bottom-up" approach are prone to giving individual fossil apes an important evolutionary role that fits a preexisting narrative.

The positional repertoire preceding human bipedalism is unknown (so it is still in some living apes). Credit: © Sergio Almécija

"In The Descent of Man in 1871, Darwin speculated that humans originated in Africa from an ancestor different from any living species. However, he remained cautious given the scarcity of fossils at the time," Almécija said. "One hundred fifty years later, possible hominins—approaching the time of the human-chimpanzee divergence—have been found in eastern and central Africa, and, some claim, in Europe. In addition, more than 50 fossil ape genera are now documented across Africa and Eurasia. However, many of these fossils show mosaic combinations of features that do not match expectations for ancient representatives of the modern ape and human lineages. As a consequence, there is no scientific consensus on the evolutionary role played by these fossil apes."

Overall, the researchers found that most stories of human origins are not compatible with the fossils that we have today.

"Living ape species are specialized species, relicts of a much larger group of now extinct apes. When we consider all evidence—that is, both living and fossil apes and hominins—it is clear that a human evolutionary story based on the few ape species currently alive is missing much of the bigger picture," said study co-author Ashley Hammond, an assistant curator in the Museum's Division of Anthropology.

Kelsey Pugh, a Museum postdoctoral fellow and study co-author adds, "The unique and sometimes unexpected features and combinations of features observed among fossil apes, which often differ from those of living apes, are necessary to untangle which features hominins inherited from our ape ancestors and which are unique to our lineage."

Living apes alone, the authors conclude, offer insufficient evidence. "Current disparate theories regarding ape and human evolution would be much more informed if, together with early hominins and living apes, Miocene apes were also included in the equation," says Almécija. "In other words, fossil apes are essential to reconstruct the 'starting point' from which humans and chimpanzees evolved."

 


Wednesday, April 21, 2021

AI state of the artifice


The Myth of Artificial Intelligence by Erik Larson

https://www.amazon.com/Myth-Artificial-Intelligence-Computers-Think/dp/0674983513

“If you want to know about AI, read this book…it shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence.”―Peter Thiel

A cutting-edge AI researcher and tech entrepreneur debunks the fantasy that superintelligence is just a few clicks away―and argues that this myth is not just wrong, it’s actively blocking innovation and distorting our ability to make the crucial next leap.

Futurists insist that AI will soon eclipse the capacities of the most gifted human mind. What hope do we have against superintelligent machines? But we aren’t really on the path to developing intelligent machines. In fact, we don’t even know where that path might be.

A tech entrepreneur and pioneering research scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape of AI to show how far we are from superintelligence, and what it would take to get there. Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake. AI works on inductive reasoning, crunching data sets to predict outcomes. But humans don’t correlate data sets: we make conjectures informed by context and experience. Human intelligence is a web of best guesses, given what we know about the world. We haven’t a clue how to program this kind of intuitive reasoning, known as abduction. Yet it is the heart of common sense. That’s why Alexa can’t understand what you are asking, and why AI can only take us so far.
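Larson's contrast between induction and abduction can be made concrete with a toy sketch (my illustration, not an example from the book): a bigram word predictor is pure induction. It can only echo correlations already tallied from its data; when the data runs out, it has nothing to offer, whereas an abductive reasoner would still venture a best guess from context.

```python
# Toy illustration of induction: a bigram "language model" that predicts
# the next word purely from observed co-occurrence counts. It has no
# model of meaning or context, only tallies of the data it has seen.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count word -> next-word transitions (induction: tally the data).
transitions = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    transitions[word][nxt] += 1

def predict(word):
    """Return the most frequently observed successor, or None if unseen."""
    if word not in transitions:
        return None  # no data, no prediction: induction cannot conjecture
    return transitions[word].most_common(1)[0][0]

print(predict("sat"))   # 'on' -- observed twice in the corpus
print(predict("cat"))   # 'sat'
print(predict("sofa"))  # None -- a human would still hazard a guess
```

The gap Larson points to is exactly the `None` branch: scaling up the counts improves the predictions for seen words, but it never supplies the conjectural leap that abduction makes when the data is silent.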

Larson argues that AI hype is both bad science and bad for science. A culture of invention thrives on exploring unknowns, not overselling existing methods. Inductive AI will continue to improve at narrow tasks, but if we want to make real progress, we will need to start by more fully appreciating the only true intelligence we know―our own.

Review

“If you want to know about AI, read this book. For several reasons―most of all because it shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence.”―Peter Thiel

“Artificial intelligence has always inspired outlandish visions, but now Elon Musk and other authorities assure us that those sci-fi visions are about to become reality. Artificial intelligence is going to destroy us, save us, or at the very least radically transform us. In The Myth of Artificial Intelligence, Erik Larson exposes the vast gap between the actual science underlying AI and the dramatic claims being made for it. This is a timely, important, and even essential book.”―John Horgan, author of The End of Science

“Erik Larson offers an expansive look at the field of AI, from its early history to recent prophecies about the advent of superintelligent machines. Engaging, clear, and highly informed, The Myth of Artificial Intelligence is a terrific book.”―Oren Etzioni, CEO of the Allen Institute for AI

“A fantastic tour of AI, at once deeply enlightening and eminently readable, that challenges the overwrought vision of a technology that revolutionizes everything and also threatens our existence. Larson, the thinking person’s tech entrepreneur, explores the philosophical and practical implications of AI as never before and reminds us that wishing for something is not the same as building it.”―Todd C. Hughes, technology executive and former DARPA official

“A discussion of general human intelligence versus the current state of artificial intelligence, and how progress in a narrowly defined, specialized area (how to play chess) does not necessarily mean we are getting closer to human-like thinking machines. So take a rain check on the impending arrival of the robot overlords; that is going to have to wait a while.”―Elizabeth Obee, Towards Data Science

About the Author

Erik J. Larson is a computer scientist and tech entrepreneur. The founder of two DARPA-funded AI startups, he is currently working on core issues in natural language processing and machine learning. He has written for The Atlantic and for professional journals and has tested the technical boundaries of artificial intelligence through his work with the IC2 tech incubator at the University of Texas at Austin.

