Thursday, August 8, 2019

10 differences between artificial intelligence and human intelligence

Today I want to tell you what is artificial about artificial intelligence. There is, of course, the obvious, which is that the brain is warm, wet, and wiggly, while a computer is not. But more importantly, there are structural differences between human and artificial intelligence, which I will get to in a moment.

Before we can talk about this though, I have to briefly tell you what “artificial intelligence” refers to.

What goes by the name “artificial intelligence” today are neural networks. A neural network is a computer algorithm that imitates certain functions of the human brain. It contains virtual “neurons” that are arranged in “layers” and connected with each other. The neurons pass on information and thereby perform calculations, much like the neurons in the human brain do.

In the neural net, the neurons are just numbers in the code, typically they have values between 0 and 1. The connections between the neurons also have numbers associated with them, and those are called “weights”. These weights tell you how much the information from one layer matters for the next layer.
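This setup can be made concrete with a toy sketch in plain Python (my own illustration, not code from the post): one layer of neurons is computed from the previous layer's values, the connection weights, and a squashing function that keeps every neuron's value between 0 and 1.

```python
import math

def sigmoid(x):
    # squashes any number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """Compute one layer's neuron values from the previous layer.

    weights[j][i] is the weight of the connection from input
    neuron i to output neuron j.
    """
    return [
        sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

# two input neurons feeding a layer of three neurons
# (all numbers here are made up, purely for illustration)
inputs  = [0.5, 0.9]
weights = [[0.1, -0.4], [0.8, 0.3], [-0.6, 0.2]]
biases  = [0.0, 0.1, -0.2]
hidden  = layer_forward(inputs, weights, biases)
# every neuron value lies between 0 and 1, as described above
```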

The values of the neurons and the weights of the connections are essentially the free parameters of the network. And by training the network you want to find those values of the parameters that minimize a certain function, called the “loss function”.

So it’s really an optimization problem that neural nets solve. In this optimization, the magic of neural nets happens through what is known as backpropagation. This means that if the net gives you a result that is not particularly good, you go back and adjust the weights of the connections between the neurons. This is how the net can “learn” from failure. Again, this plasticity mimics that of the human brain.
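A minimal sketch of this loop, again in plain Python and again my own illustration: a single sigmoid neuron is trained by gradient descent on the logical AND function. With only one neuron there are no hidden layers, so the backward step reduces to a single gradient formula (here the gradient of the cross-entropy loss), but the pattern is the same: produce an output, measure the error, and push the weights in the direction that reduces it.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# training data: the logical AND function
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w, b = [0.0, 0.0], 0.0   # the free parameters: weights and bias
rate = 1.0               # learning rate (an illustrative value)

for _ in range(5000):
    for inputs, target in data:
        out = sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
        # backward step: for cross-entropy loss the gradient with
        # respect to each weight is simply (out - target) * input
        err = out - target
        for i, xi in enumerate(inputs):
            w[i] -= rate * err * xi
        b -= rate * err

# after training, the neuron classifies all four cases correctly
predictions = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + b))
               for inputs, _ in data]
```

In a real multi-layer net, backpropagation applies this same idea layer by layer, using the chain rule to pass the error backwards through the connections.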

For a great introduction to neural nets, I can recommend this 20-minute video by 3Blue1Brown.

Having said this, here are the key differences between artificial and real intelligence.

1. Form and Function

A neural net is software running on a computer. The “neurons” of an artificial intelligence are not physical. They are encoded in bits and strings on hard disks or silicon chips and their physical structure looks nothing like that of actual neurons. In the human brain, in contrast, form and function go together.

2. Size

The human brain has about 100 billion neurons. Current neural nets typically have a few hundred or so.

3. Connectivity

In a neural net each layer is usually fully connected to the previous and next layer. But the brain doesn’t really have layers. It instead relies on a lot of pre-defined structure. Not all regions of the human brain are equally connected and the regions are specialized for certain purposes.

4. Power Consumption

The human brain is dramatically more energy-efficient than any existing artificial intelligence. The brain uses around 20 watts, which is comparable to what a standard laptop uses today. But with that power the brain handles a million times more neurons.

5. Architecture

In a neural network, the layers are neatly ordered and are addressed one after the other. The human brain, on the other hand, does a lot of parallel processing and not in any particular order.

6. Activation Potential

In the real brain, neurons either fire or they don’t. In a neural network, the firing is mimicked by continuous values instead, so the artificial neurons can slide smoothly from off to on, which real neurons can’t.
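The difference is easy to see side by side (a toy comparison of my own): a step function that is either off or on, versus the smooth sigmoid commonly used for artificial neurons.

```python
import math

def step(x):
    # a real neuron either fires or it doesn't
    return 1 if x >= 0 else 0

def sigmoid(x):
    # an artificial neuron can slide smoothly from off to on
    return 1.0 / (1.0 + math.exp(-x))

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(x, step(x), round(sigmoid(x), 3))
# the step function jumps straight from 0 to 1, while the
# sigmoid passes through every value in between
```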

7. Speed

The human brain is much, much slower than any artificially intelligent system. A standard computer performs some 10 billion operations per second. Real neurons, on the other hand, fire at a frequency of at most a thousand times per second.

8. Learning Technique

Neural networks learn by producing output, and if this output scores poorly according to the loss function, the net responds by changing the weights of the connections between the neurons. No one knows in detail how humans learn, but that’s not how it works.

9. Structure

A neural net starts from scratch every time. The human brain, on the other hand, has a lot of structure already wired into its connectivity, and it draws on models which have proved useful during evolution.

10. Precision

The human brain is much more noisy and less precise than a neural net running on a computer. This means the brain basically cannot run the same learning mechanism as a neural net and it’s probably using an entirely different mechanism.

A consequence of these differences is that artificial intelligence today needs a lot of training with a lot of carefully prepared data, which is very unlike how human intelligence works. Neural nets do not build models of the world; instead, they learn to classify patterns, and this pattern recognition can fail with only small changes. A famous example is that you can add a small amount of noise to an image, so small that your eyes will not see a difference, but an artificially intelligent system might be fooled into thinking a turtle is a rifle.

Neural networks are also presently not good at generalizing what they have learned from one situation to the next, and their success very strongly depends on defining just the correct “loss function”. If you don’t think about that loss function carefully enough, you will end up optimizing something you didn’t want. Like this simulated self-driving car trained to move at constant high speed, which learned to rapidly spin in a circle.

But neural networks excel at some things, such as classifying images or extrapolating data that doesn’t have any well-understood trend. And maybe the point of artificial intelligence is not to make it all that similar to natural intelligence. After all, the most useful machines we have, like cars or planes, are useful exactly because they do not mimic nature. Instead, we may want to build machines specialized in tasks we are not good at.

Wednesday, August 7, 2019

'Spin' found in over half of clinical trial abstracts published in top psychiatry journals

'Spin'—exaggerating the clinical significance of a particular treatment without the statistics to back it up—is apparent in more than half of clinical trial abstracts published in top psychology and psychiatry journals, finds a review of relevant research in BMJ Evidence-Based Medicine.
The findings raise concerns about the potential impact this might be having on treatment decisions, as the evidence to date suggests that abstract information alone is capable of changing doctors' minds, warn the study authors.
Randomised controlled trials serve as the gold standard of evidence, and as such, can have a major impact on clinical care. But although researchers are encouraged to report their findings comprehensively, in practice they are free to interpret the results as they wish.
In an abstract, which is supposed to summarise the entire study, researchers may be rather selective with the information they choose to highlight, so misrepresenting or 'spinning' the findings.
To find out how common spin might be in abstracts, the study authors trawled the research database PubMed for randomised controlled trials of psychiatric and behavioural treatments published between 2012 and 2017 in six top psychology and psychiatry journals.
They reviewed only those trials (116) in which the primary results had not been statistically significant, and used a previously published definition of spin to see how often researchers had 'spun' their findings.
They found evidence of spin in the abstracts of more than half (65; 56%) of the published trials. This included titles (2%), results sections (21%), and conclusion sections (49%).
In 17 trials (15%), spin was identified in both the results and conclusion sections of the abstract.
Spin was more common in trials that compared a particular drug/behavioural approach with a dummy (placebo) intervention or usual care.
Industry funding was not associated with a greater likelihood of spinning the findings: only 10 of the 65 clinical trials in which spin was evident had some level of industry funding.
The study authors accept that their findings may not be widely applicable to clinical trials published in all psychiatry and psychology journals, and despite the use of objective criteria to define spin, inevitably, their assessments would have been subjective.
Nevertheless, they point out: "Researchers have an ethical obligation to honestly and clearly report the results of their research. Adding spin to the abstract of an article may mislead physicians who are attempting to draw conclusions about a treatment for patients. Most physicians read only the article abstract the majority of the time."
They add: "Those who write clinical trial manuscripts know that they have a limited amount of time and space in which to capture the attention of the reader. Positive results are more likely to be published, and many manuscript authors have turned to questionable reporting practices in order to beautify their results."

Special Breakthrough Prize awarded for Supergravity
Sabine Hossenfelder

The Breakthrough Prize is an initiative founded by billionaire Yuri Milner, now funded by a group of rich people which includes, besides Milner himself, Sergey Brin, Anne Wojcicki, and Mark Zuckerberg. The Prize is awarded in three different categories, Mathematics, Fundamental Physics, and Life Sciences. Today, a Special Breakthrough Prize in Fundamental Physics has been awarded to Sergio Ferrara, Dan Freedman, and Peter van Nieuwenhuizen for the invention of supergravity in 1976. The Prize of 3 million US$ will be split among the winners.

Interest in supergravity arose in the 1970s when physicists began to search for a theory of everything that would combine all four known fundamental forces into one. By then, string theory had been shown to require supersymmetry, a hypothetical new symmetry which implies that all the already known particles have – so far undiscovered – partner particles. Supersymmetry, however, initially only worked for the three non-gravitational forces, that is, the electromagnetic force and the strong and weak nuclear forces. With supergravity, gravity could be included too, thereby bringing physicists one step closer to their goal of unifying all the interactions.

In supergravity, the gravitational interaction is associated with a messenger particle – the graviton – and this graviton has a supersymmetric partner particle called the “gravitino”. There are several types of supergravitational theories, because there are different ways of realizing the symmetry. Supergravity in the context of string theory always requires additional dimensions of space, which have not been seen. The gravitational theory one obtains this way is also not the same as Einstein’s General Relativity, because one gets additional fields that can be difficult to bring into agreement with observation. (For more about the problems with string theory, please watch my video.)

To date, we have no evidence that supergravity is a correct description of nature. Supergravity may one day become useful to calculate properties of certain materials, but so far this research direction has not led to much.

The works by Ferrara, Freedman, and van Nieuwenhuizen have arguably been influential, if by influential you mean that papers have been written about them. Supergravity and supersymmetry are mathematically very fertile ideas. They lend themselves to calculations that would otherwise not be possible, and that is how, in the past four decades, physicists have successfully built a beautiful, supersymmetric math-castle on nothing but thin air.

Awarding a scientific prize, especially one accompanied by so much publicity, for an idea that has no evidence speaking for it, sends the message that in the foundations of physics contact to observation is no longer relevant. If you want to be successful in my research area, it seems, what matters is that a large number of people follow your footsteps, not that your work is useful to explain natural phenomena. This Special Prize doesn’t only signal to the public that the foundations of physics are no longer part of science, it also discourages people in the field from taking on the hard questions. Congratulations. 

Thursday, August 1, 2019

Back-to-Back, Failed Visions of the “Brain as a Supercomputer” 
July 25, 2019, 5:14
It’s a delicious failed prediction. As neuroscientist Henry Markram summarized at the end of a TED Talk, “I hope that you are at least partly convinced that it is not impossible to build a brain. We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you. Thank you.” If he had asked anyone now gathered at Discovery Institute’s Walter Bradley Center, I think they would have advised him not to go out on that particular tree branch.
As Ed Yong points out at The Atlantic, Dr. Markram recorded his talk in July 2009, now just past a decade ago. “It’s been exactly 10 years,” Yong notes, adding perhaps superfluously, “He did not succeed.”

The Brain as a Supercomputer
The title of the talk was, “A brain in a supercomputer.” Well, maybe the prophecy failed because the brain is not just a computer, super- or otherwise, and because nothing like real consciousness will be available to a machine, now or perhaps ever. Sure, a machine can give a TED Talk — as you would have guessed if you’ve seen the Hall of Presidents attraction at Walt Disney World, introduced in 1971 — but whether it would understand what it was saying is the real question.
Another Anniversary

Over at Mind Matters, Walter Myers has an excellent post reflecting on another anniversary, this one the publication of an iconic book, Gödel, Escher, Bach: An Eternal Golden Braid (1979), 40 years old next month. I’d never read it and, out of curiosity, I picked up a copy for myself and started in on this huge work, which won a Pulitzer Prize for author Douglas Hofstadter. As Dr. Myers recalls, many readers got to the end and completely misunderstood Hofstadter’s point.
It’s not really about Kurt Gödel, M.C. Escher, or J.S. Bach, or about math, art, and music and their interplay. As Hofstadter clarified in a preface to the 20th anniversary edition, he was arguing in much the same vein as Henry Markram, that the brain can be understood in rules-bound machine terms, with consciousness dancing on top as an “emergent” property.

The book was intended to ask the fundamental question of how the animate can emerge from the inanimate, or, more specifically, how consciousness arises from inanimate, physical material. As philosopher and cognitive scientist David Chalmers has eloquently asked, “How does the water of the brain turn into the wine of consciousness?”

Hofstadter believes he has the answer: the conscious “self” of the human mind emerges from a system of specific, hierarchical patterns of sufficient complexity within the physical substrate of the brain. The self is a phenomenon that rides on top of this complexity to a large degree but is not entirely determined by its underlying physical layers.

[[Gee – I taught that book in a Philosophy of Mathematics course at Johns Hopkins in the seventies. I had no clue the real hidden subject was consciousness. He hid it so well that the “real subject” had zero effect on the debates about consciousness….]]

In the 1999 preface, he notes an apparent contradiction. When we look at computers, we see inflexible, unintelligent, rule-following beasts with no internal desires, which he describes as “the epitome of unconsciousness.” Is it a contradiction that intelligent behavior can be programmed into unintelligent machines? Is there an “unbreachable gulf” between intelligence and non-intelligence?

Hofstadter believes that through large sets of formal rules and levels of rules generated by AI, we can finally program these inflexible computers to be flexible, thinking machines. If so, we were wrong in thinking that there is a marked difference between human minds and intelligent machines.
The Culture of Materialism
Or to put it another way, a brain is a supercomputer. Forty years later, that assertion remains just that, an assertion. Walter Myers concludes:

[T]he view that human consciousness is something unique is the most tenable philosophical position unless we learn definitively otherwise.
There is, quite simply, no mechanical explanation of how the human mind has emerged from brawling chimpanzees over the course of millions of years of evolution.

The idea of the mind as a “meat machine” retains its hold on smart people for reasons other than neuroscience. It’s not science but the culture of materialism speaking. Read the rest at Mind Matters. And if you have not done so yet, watch Episode 2 of Science Uprising, which deals concisely with the issue.