Sunday, May 22, 2016

The empty brain

Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer

by Robert Epstein
What’s in a brain? Photo by Gallery Stock
Robert Epstein is a senior research psychologist at the American Institute for Behavioral Research and Technology in California. He is the author of 15 books, and the former editor-in-chief of Psychology Today.
4,200 words
Edited by Pam Weintraub
No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.
Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.
To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.
A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.
Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
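To make the encoding concrete, here is a minimal, purely illustrative Python sketch (the byte values are just standard ASCII; the snippet itself is an editorial illustration, not part of the essay):

    # Each letter of 'dog' becomes one 8-bit byte - a physical pattern of
    # ones and zeroes that the computer can store, copy and move around.
    word = "dog"
    for byte in word.encode("ascii"):
        print(f"{chr(byte)!r} -> {byte:3d} -> {byte:08b}")
    # 'd' -> 100 -> 01100100
    # 'o' -> 111 -> 01101111
    # 'g' -> 103 -> 01100111

Side by side, those three stored byte patterns are the word dog to the machine – precisely the kind of physical token the essay says the brain does not contain.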
Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.
Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.
Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?
In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.
In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.
The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.
By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics.
This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.
Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013) exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.
The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.
But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.
Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.
The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
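Spelled out in first-order form (one way to formalise it, with C for ‘is a computer’, I for ‘behaves intelligently’ and P for ‘is an information processor’):

$$\forall x\,(C(x)\rightarrow I(x)),\qquad \forall x\,(C(x)\rightarrow P(x))\;\not\vdash\;\forall x\,(I(x)\rightarrow P(x))$$

Both premises constrain only computers; they say nothing about intelligent things that are not computers. A one-element countermodel shows the gap: a single organism that behaves intelligently but is neither a computer nor an information processor makes both premises vacuously true and the conclusion false.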
Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, some day, the IP metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors as silly.
If the IP metaphor is so silly, why is it so sticky? What is stopping us from brushing it aside, just as we might brush aside a branch that was blocking our path? Is there a way to understand human intelligence without leaning on a flimsy intellectual crutch? And what price have we paid for leaning so heavily on this particular crutch for so long? The IP metaphor, after all, has been guiding the writing and thinking of a large number of researchers in multiple fields for decades. At what cost?
In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.
Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):
And here is the drawing she subsequently made with a dollar bill present:
Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.
What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?
Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.
A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active. In a 2016 study of survivors of a plane crash by the University of Toronto neuropsychologist Brian Levine and others, recalling the crash increased neural activity in ‘the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex’ of the passengers.
The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?
So what is occurring when Jinny draws the dollar bill in its absence? If Jinny had never seen a dollar bill before, her first drawing would probably have not resembled the second drawing at all. Having seen dollar bills before, she was changed in some way. Specifically, her brain was changed in a way that allowed her to visualise a dollar bill – that is, to re-experience seeing a dollar bill, at least to some extent.
The difference between the two drawings reminds us that visualising something (that is, seeing something in its absence) is far less accurate than seeing something in its presence. This is why we’re much better at recognising than recalling. When we re-member something (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before.
Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she hadn’t made a deliberate effort to ‘memorise’ the details. Had she done so, you might argue, she could presumably have drawn the second image without the bill being present. Even in this case, though, no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music.
From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour – one in which the brain isn’t completely empty, but is at least empty of the baggage of the IP metaphor.
As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.
We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.
Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.
A few years ago, I asked the neuroscientist Eric Kandel of Columbia University – winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something – how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’ I didn’t think to ask him whether he thought the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable – that the metaphor is not indispensable.
A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.
My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
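The geometry behind this is easy to check numerically. McBeath’s LOT strategy is two-dimensional, so the sketch below illustrates its simpler one-dimensional cousin (Chapman’s classic observation about fly balls, not McBeath’s own analysis): for a drag-free fly ball watched from the catch point, the optical quantity tan(elevation) rises exactly linearly in time, so the fielder can succeed just by moving in whatever way keeps that one quantity changing at a steady rate. All numbers below are made up for illustration:

    # Drag-free fly ball, viewed from the landing point: tan(elevation)
    # climbs by the same amount at every time step.
    g = 9.8                           # gravity (m/s^2)
    vx, vy = 18.0, 22.0               # launch velocity components (m/s)
    T = 2 * vy / g                    # time of flight
    D = vx * T                        # landing (catch) point
    for i in range(1, 7):
        t = 0.5 * i
        x = vx * t                    # ball's horizontal position
        y = vy * t - 0.5 * g * t * t  # ball's height
        print(f"t={t:.1f}s  tan(elevation) = {y / (D - x):.3f}")
    # Each value exceeds the previous one by the same 0.136:
    # tan(elevation) = (g / (2*vx)) * t, a straight line in time.
    # No trajectory model, no initial conditions - one visual variable.

Keeping a single optical variable on a steady course is the whole job; no physics engine in the head is required.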
Two determined psychology professors at Leeds Beckett University in the UK – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework. They have been blogging for years about what they call a ‘more coherent, naturalised approach to the scientific study of human behaviour… at odds with the dominant cognitive neuroscience approach’. This is far from a movement, however; the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor.
One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. This concept drove the plot of the dystopian movie Transcendence (2014) starring Johnny Depp as the Kurzweil-like scientist whose mind was downloaded to the internet – with disastrous results for humanity.
Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing.
Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.
This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story) they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above).
This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.
Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.
Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.)
Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.
We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.

Sunday, March 13, 2016

Mordechai Linzer has kindly volunteered to transcribe one of my central shiurim. Here it is:

Science and the Age of the Universe
Rabbi Dovid Gottlieb
When we say science we could mean one of two things: we could mean a method for investigating, or we could mean a body of information which has been discovered and established – the kind of thing you get in a science textbook or in a science museum. I’m talking about the latter; I’m not going to talk about the scientific method, I’m going to talk about what science teaches us and in general how reliable it is, and then I’ll talk about the age of the universe.
You can distinguish four different levels of scientific information. I suppose you could do it otherwise, but this is the way I do it. Sometimes science describes repeatable, observable phenomena – things that happen over and over again, or things that you could make happen over and over again – and all that science does is tell you that when these and these things are there, that and that’s what happens. So the growing of the grass every spring, and the hard-boiling of an egg in water, and the flight of birds, and the behavior of animals generally, and breaking glass, which unfortunately happens all too often, and things that fall. Repeatable, observable phenomena. Science says when you do this and this, you should expect to see that and that. That’s where science is at its strongest.
Even there you could make mistakes, because you might not get the conditions exactly right. Water boils at 100 centigrade. Yeah, but not if you go up a mountain. If you go up a mountain it boils at a lower temperature and your hard-boiled egg comes out soft-boiled. And not if it’s got other stuff in it, salt or dirt – it’s gotta be pure water. And not if it’s in a pressure cooker; in a pressure cooker it boils at 150 or 200 degrees centigrade. And not if it’s in motion; if it’s in motion it won’t boil at 100 degrees centigrade. So you gotta be very careful to get the conditions right; otherwise even here you could make a mistake.
If you have water in a circular container and it’s absolutely still and you open a small hole in the very center of the base of the container, then as the water goes out it will rotate counter-clockwise. Now you try this experiment in New York and in Paris and in Moscow or in Beijing or in the Philippines, and every time it goes around counter-clockwise. So, being a good scientist, you say, we learned something about the universe. Look at that, the water goes down the drain, it rotates counter-clockwise. Until you go to Johannesburg. I mean, I’m not recommending that you go to Johannesburg, but if you should ever find yourself there by accident or against your will, you’ll find that it goes down clockwise.
Would anybody think that the laws of the universe change between the northern hemisphere and the southern hemisphere? You wouldn’t think it, but it’s true, and if you go take a look you’ll see that you can get it wrong. This is where science is at its strongest, and even here it is possible for science to make mistakes.
One step away is called interpolation. Here’s the idea: I’m doing an experiment. In my experiment I take cubes of sugar and drop them into glasses of water and see how long it takes them to dissolve, and I’m checking the effect of temperature on how fast the sugar dissolves. You could probably figure it out – the hotter it is the faster it’s going to dissolve. So I did an experiment at 10 degrees and I did an experiment at 40 degrees, and I saw that at 10 degrees it dissolves slowly and at 40 degrees it dissolves faster. Now I ask, how fast will it dissolve at 25? The truth is, strictly speaking, we don’t know, because we never tried it. We only tried 10 and 40, we didn’t try 25. Now, because I’m asking about 25 and I have on record 10 and 40 – I have one that’s less and one that’s more – this is called interpolation, because you’re asking about something in between.
Now the fact that you haven’t actually tested it is not going to frustrate any scientist that’s worth his salt because this is what he’ll do: he’ll draw a graph, and here’s 10 degrees and here’s how fast it dissolves, here’s 40 degrees and here’s how fast it dissolves. When you ask about 25 that’s in between, do you know what he’s going to do with those two dots? He’s going to play connect the dots. That’s what he’s going to do, he’s going to draw a line between the two dots. And then he’s going to say 25 ends up right here and that’s what’ll happen at 25, even though strictly speaking he hasn’t tested it yet.
The question will be raised: you play connect the dots by joining the two dots with a line, but who says it should be a straight line? Maybe it should be a lazy curve, a sine curve, or maybe it should be a cosine curve, or maybe it should be a jagged line. How many different ways are there to connect the dots? You could probably guess it’s unlimited; the strict answer is there’s an uncountable infinity, it’s beyond all belief.
So why do you pick the straight line? The official answer is that it’s the simplest way to connect the dots. That’s true and it is accepted; the trouble is there’s no precise definition of simplicity. No one has been able to define what simplicity is and there’s no official explanation why simplicity is right. Indeed, if you ask scientists – who typically are very bad philosophers of science; they’re very good at doing science but they’re very bad at thinking about what science means – many scientists will tell you it’s aesthetically pleasing; it just looks nicer.
Now, suppose there’s a bomb and the bomb is timed to go off when the sugar finishes dissolving, and I need to know if there’s going to be enough time to get the population out of the room or not. And I have my curve here and I’m betting that at 25 it will dissolve at this speed because the curve looks nicer. Is that a reason to play with people’s lives? I don’t think so. Nevertheless, we will all accept the line. The reason can’t be because it looks nicer. So simplicity is a problem, but it’s only a philosophical problem so we can safely ignore it; it really isn’t very important. In interpolation, where you tested at less and more and you’re asking about one in between, we all trust the line.
The next step away is extrapolation. That’s where you go outside the ones you tested. So we had 10 and 40, suppose I ask about -5 degrees centigrade. That’s outside; we have larger but we don’t have smaller. Now here you could in principle play the same game – listen, we have our graph, and when you connect the dots, there’s no reason to stop at the dot, you keep the line going. So here’s the axis and here’s the two points and here’s the line and just keep it going down to -5, and that’ll tell you how fast the sugar will dissolve at -5.
Except that there’s going to be a bit of a difficulty, because water at -5 degrees centigrade is ice, and when you drop the sugar cube in it’s just going to sit on top and it’s not going to dissolve at all. So the prediction you get from the line is dead wrong. Okay, so the chemists will know that there’s such a thing as supercooling a liquid. If you keep a liquid very still you can cool it to below its freezing point and it stays liquid. That’s true, but the instant you drop the sugar in, you’re disturbing it and it will crystallize immediately and, again, the sugar will not dissolve.
When you extrapolate you are risking a qualitative change in the phenomena, the whole thing can go haywire and you get something brand new which you didn’t anticipate at all. So extrapolation is a big step beyond just telling me what you already tested.
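Both moves are easy to see in miniature. Here is a minimal sketch with made-up numbers – a straight line through the two measured points, read first between them and then pushed beyond them:

    # Straight line through (10 C, 120 s) and (40 C, 30 s) - invented
    # data standing in for the sugar-cube experiment.
    def dissolve_time(temp_c):
        t1, s1 = 10.0, 120.0          # measured: slow at 10 degrees
        t2, s2 = 40.0, 30.0           # measured: fast at 40 degrees
        return s1 + (s2 - s1) * (temp_c - t1) / (t2 - t1)

    print(dissolve_time(25))   # 75.0 s  - interpolation: we all trust the line
    print(dissolve_time(-5))   # 165.0 s - extrapolation: the line happily
                               # returns a number, but at -5 the water is ice
                               # and the sugar never dissolves at all

The formula knows nothing about the qualitative change at 0 degrees; that is exactly the hazard just described.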
A gigantic step beyond all three – repeatable, observable phenomena, interpolation and extrapolation – is deep theory. In deep theory you make up a story about something that you can’t see. So it’s always just a story. And you say, you know why these things happen, it happens because there are these little doodads that I can’t see and they’re doing something or other, and because they do something or other that’s why things happen. This is a gigantic step beyond because you can’t see them; you’re just making it up.
For example, you take a closed container filled with air and you heat it up. You heat it up and you heat it up and you heat it up, eventually it will burst. Why? Because as you heat up the gas in a closed container, the pressure gets greater and greater and greater. Sooner or later the pressure is so great that it will bust the walls of the container. [Okay, I suppose you could have super strong containers, I’m not talking about that, this is not material science.] Why is it that when you heat up the gas the pressure gets greater and greater and greater on the walls? So here’s the theory: The gas is made of tiny little balls. You never saw any of them? That’s right, because they’re invisible, they’re much too tiny for you to ever see. And since we like Greek we’ll give them a Greek name – molecules. Doesn’t that sound scientific? Now these little balls are in constant motion; they’re banging around the whole time. When you heat up the air the molecules move faster and faster. Indeed, the average random motion of a molecule is the heat; that’s what official science will tell you. So let’s suppose the molecules start moving faster and faster. What’s going to happen to the walls? First of all you’re going to get a lot more bangs. If in a second you had a thousand bangs, now in a second you’ll have a million bangs. And furthermore, each molecule is going faster so it hits harder; that’s what increases the pressure.
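For what it’s worth, the standard kinetic-theory formulas (ordinary textbook physics, added here for reference) say the same thing in symbols:

$$P=\frac{1}{3}\,\frac{N}{V}\,m\,\langle v^{2}\rangle,\qquad \frac{1}{2}\,m\,\langle v^{2}\rangle=\frac{3}{2}\,k_{B}T\quad\Longrightarrow\quad P=\frac{N k_{B} T}{V}$$

so at fixed volume the pressure on the walls grows in direct proportion to the temperature: faster balls, more bangs, harder bangs.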
Now that is a great story. It just leaves over one little question: are gases really made of little balls, since after all you can’t see them? So you devise tests and you try hypotheses and you check them, but in deep theory you are going beyond anything that you can see.
Now, if science can get the best category wrong – the category of repeatable, observable phenomena, category number one – you could imagine that interpolation will go wrong slightly more often, extrapolation will go wrong considerably more often and deep theory will go wrong very, very often, because you’re getting further and further away from what you actually see.
The first moral is this: if somebody tells you science has discovered X – and of course you couldn’t possibly discover something that isn’t there, could you? Not if you really discovered it – so when they say science has discovered X, they’re telling you, “We Know, capital K, that it’s really there.” The first thing you should ask is, “How much repeatable, observable phenomena go into this discovery, and how much interpolation and how much extrapolation and how much deep theory go into this discovery?” And the more stuff he has further out on the list, the weaker the discovery will be.
I hope that you develop good memories. Because when you get to my age you will have stocked in your mind examples of all sorts of discoveries that were later undiscovered, because certain mistakes were made in, so called, discovering them. You’re already into the age where people have realized that margarine is no better for you than butter, but for 30 years margarine was sold on the grounds of health. It’s just that they didn’t do the studies correctly, and some long-term studies showed that margarine is no healthier for you than butter.
You probably haven’t heard this, it died so fast, but about 15 years ago there was a whole theory of A-type personalities and B-type personalities vis-à-vis heart attacks. The A’s are aggressive, nervous, hyper, tense – they’re New Yorkers – and the B’s are relaxed and calm and take-it-as-you-go – Californians – and the A’s had a much higher incidence of heart attacks, and there were all sorts of programs set up to try to transform an A into a B. But it turned out that there were some statistical mistakes made and there is no such phenomenon.
Falling sperm rates. About 8 years ago there were reports of falling sperm rates all over the world. And of course that was the result of pollution and it meant the end of the human race. And again, they made some mistakes with the numbers and they took the samples in the wrong places and it turned out not to be true.
This sort of thing is going on all the time, sometimes long term and sometimes short term. So one has to be very careful.
Vestigial organs. They love those big words; that makes sure that 90% of listeners don’t understand what they’re talking about. There are supposed to be things in your body that don’t do anything. Why? Because according to evolution your ancestors were once fish. Now, as you carry along becoming an animal you may carry along some of the old fish stuff, which isn’t very useful when you move onto land. But not everything changes, so it sticks around. And according to evolution, therefore, you expect there to be things which have no use. That’s what you expect. And then if you haven’t found a use and you are desperate to wave the evolution flag, because that’s how you get your next grant, you say we discovered a vestigial organ. But if you would ask people in an honest, sober moment, if you could find one: how much do we know about what the body does? How much do we really know? Can we really map out everything the body does? They would have to admit to you, no, we really are very ignorant about a lot of things the body does. And then we could follow it up with the question, “Well then how do you know that this thing isn’t doing anything?”
And I think if it was an honest, sober moment, if you could find one, they would have to say, “We don’t really know; we’re just saying it because that’s what gets headlined in the New York Times. We don’t really know.”
When I was a kid, if you got repeated infections in your tonsils, they would take out your tonsils, and while they were in there they took out the adenoids as well, because, you know, you’re in there anyway, and you’ve got a nice sharp knife, and the adenoids don’t do anything and they too could get infected, so why don’t you just take them out because you’re there. They don’t do that anymore, because they discovered that the adenoids do do something – they play some role in the immune system. They just didn’t know that when I was a kid, and I lost both in one shot, my tonsils and my adenoids.
So here is a good example of using a deep theory, called evolution, to derive a conclusion and discovering that the conclusion is very unstable. It’s very unstable because you’re relying on something which you really can’t see, you really can’t test, and using it to draw conclusions. So the first thing you should ask in science is: how much of what they’re telling me is really describing what they saw happen in the laboratory, and how much requires a little bit of interpolation and a generous dose of extrapolation and deep theory? The more of that stuff you got, the less stable the conclusion is going to be. That’s moral number one.
Brief moral number two. I’m talking so far about science as gathering evidence for conclusions and what types of evidence exist. But we have to remember that science is done by flesh-and-blood scientists, and they’re subject to prejudice and to wishful thinking and to political pressure as much as anybody else. And this introduces a considerable level of distortion – sometimes vicious, willful distortion and sometimes sloppy, incompetent, irresponsible thinking.
Here’s a term that you won’t learn in your science classes, called “data massage”. No, this is not the newest Japanese technique. Data is what you get when you do an experiment, and massage means you massage it. You see, here’s how it works: I’m testing a theory. Now, in my heart of hearts I know that the theory is true, and I know they’ll publish my paper if I can show the theory is true. So I do my experiment and the results I get aren’t exactly right. Now, these experiments are quite complicated, and chances are that in my experimental design I made some mistakes. Isn’t it obvious that the places where my data disproved the theory must be due to my experimental design? It’s not due to the wrongness of the theory; no, it must be that I did a lousy experiment. So in order to protect myself, and only to report the truth, I change the data, because where the data disagrees with the theory, it’s not that that’s the real data, it’s because I designed the experiment wrongly; surely that’s true. So I change the data and then I get my paper published. And this is so popular in the field that it even has a title, “data massage”, only in your science courses they don’t teach you that, because they’d like you to respect the field and not know about some of its problems.
Another problem, which I think is endemic and which you have to be aware of throughout the scientific world, is the tendency to jump to conclusions. And this is not just the people who watch test tubes; it involves the very top people in the field. They’ll jump to conclusions on the basis of insufficient data.
In 1903 there was a convention of physicists, and one of America’s greatest physicists, Albert Michelson of the famous Michelson-Morley experiment, said physics is over. It’s over, it’s finished. We know everything we need to know; all that needs to be done is calculate the values to the eighth decimal point or the twelfth decimal point. He discouraged his graduate students from going into physics because it’s a dead field. Gosh, it didn’t turn out that way, did it?
In 1948 Max Born said that physics would be finished in six months, because at that time they thought there were only three particles – the electron, the proton and the neutron (or maybe at that time they hadn’t gotten to the neutron yet). Dirac had found the equation for the electron; surely somebody in six months was going to find the equation for the proton and it would be over. It didn’t turn out that way.
Those of you who are from Los Angeles know about the La Brea Tar Pits. I was there in ’69 and ’71; I don’t know if they’re still showing it, they’ve probably changed it since then. They had a movie where they interviewed famous paleontologists, these people who dig up bones. They had an interview with the guy who discovered the brontosaurus. Now the brontosaurus is pretty big, and at the time it was the biggest. So they interviewed this guy and they said to him, “Was it a tremendous discovery, a very big animal?” “Yes.” “Do you think there could be a bigger land animal than the brontosaurus?” He said, “No.” “Why not?” “Well, according to our theory the thing was so big it had to spend all of its time in water, otherwise it couldn’t walk; it had to have water holding it up.” That turned out to be wrong also; that’s another story. “And it had to spend all of its time eating, because otherwise it couldn’t feed itself. To think of one bigger – impossible to imagine.” Good. A few years later they discovered one that was bigger. So they interviewed the guy that discovered this. “Is this a big discovery?” “Very big discovery, gigantic.” “Do you think there could be anything bigger than this?” and he said, “No.” And they asked him why. Now listen, I’m glad you’re sitting down. He said, “This has gotta be the biggest, because we used to think that the brontosaurus was the biggest and this is even bigger.” That was his answer. I almost fell off my chair in the theater; I couldn’t believe that an intelligent human being could say that – we were wrong once, we couldn’t possibly be wrong twice, it’s just inconceivable to be wrong twice.
Now, maybe you fall prey to the prejudice that of course these are theoretical scientists – we know about them, head in the clouds, feet in the clouds, completely disconnected from reality – but practical science, technology, there everything has to be checked and triple-checked, everything has to be investigated, everything has to be experimented on; surely in practical science and technology everything’s nailed down. Let me just let you in on some of the great stories of history.
When I was a kid, when you went to buy a pair of shoes you put on the new pair of shoes and you went over to a contraption, stuck your feet in a slot, an open slot, and then you looked down through goggles and you pressed the button and you could see the bones of your foot inside the shoe. You would get an x-ray to check your shoe size. We don’t do that anymore. Can you guess why? Because they discovered that x-rays really aren’t so good for you. But they didn’t know that then and we were doing x-rays for shoe size.
In the ’40s and ’50s there were tens of thousands of lobotomies performed – cutting out a certain section of the brain, especially for epileptics, because it was supposed to help stabilize them. We don’t do that anymore either, because it turned out not to be effective. And then there was thalidomide, which caused horrible, horrible birth defects.
So this is a problem which you have to be worried about throughout the whole discipline, this tendency to jump to conclusions before adequate data come in. If you take any science book, even a popular science book, from twenty or twenty-five years ago and look up neutrino – I’m not recommending this, I’m just saying hypothetically, if you were to do it, maybe you shouldn’t waste your time – but if you look up neutrino it says in the text “a particle with no mass.” Yep, that’s what it says. And then about 20 years ago they began to wonder – maybe it does have some mass. And they did some observations and they decided probably it does have mass, and then they decided probably it doesn’t have mass. And then they did an experiment about 10 years ago which says it’s supposed to have mass. And it is now a big problem. But 25 years ago there was no doubt, it was just obvious.
Dinosaurs were thought to be cold-blooded, dumb, sluggish brutes because they were thought to be related to reptiles, and that pretty much describes reptiles. Until the ’70s, when a guy named Robert Bakker reinterpreted all the old evidence to show that either all of them or a great proportion of them must have been warm-blooded, and they worked together in social organization and they hunted together as packs and they were very energetic. He didn’t do it on the basis of new discoveries; he reinterpreted all the old discoveries. Which means that the old position was held because of a lack of imagination. Everybody saw it one way and nobody thought of it another way.
At any rate, you should be very cautious about accepting the latest thing that science does.
Now, let’s come to the age of the universe. This is one of the questions that’s most prominently asked. How could it be that science says the universe is 14 billion years old, give or take a billion – you know, which is small change – and the Torah says that it’s 5,763, which is considerably different from 14 billion? How is it possible to reconcile these two very widely divergent dates?
The truth is that there are two different ways to do it. I’ll start with the one that panders to your prejudices and then I’ll tell you the other one. One way to do it is to say that the universe as a whole could be 14 billion years old, it could be as old as you like, there’s no limit on how old the universe is. Ay, it says in Genesis that God created the world in six days? Those six days might not be 24 hour periods. They might be much, much longer.
Now listen. If you remember one thing from today, I want you to remember what I’m going to say now. We are not changing the verses to fit science. That is not what’s going on here. We’re not changing the understanding of the verses to fit science. We’re not reinterpreting the verses to fit science. We’re not doing that; that is absolutely invalid. That’s invalid, you do not do that. Only if there are internal sources, internal to the Jewish tradition, which would allow you to say it’s longer, can you say it’s longer. We do not read the Torah with our eyes over our shoulders on science and say, “Oh, they discovered X, let’s put X in over here.” We do not do that. We’re not changing the understanding of Genesis to fit science; we’re not doing that; we are relying on internal sources.
What kind of internal sources? Well, first of all, what is a day? What does the word “day” mean? This holds in Hebrew and in English. Day means one cycle of the sun vis-à-vis the earth; that’s what it means. Sunrise to sunrise, sunset to sunset. One cycle of the sun vis-à-vis the earth. Day does not mean 24 hours. I’ll prove it to you: science now thinks that the day is getting longer, because the rotation of the earth is slowing down. That does not mean that 24 hours is getting longer. 24 hours can’t get longer because it’s a certain amount of time, I hope this is obvious.
So day means one sun-cycle with respect to the earth. According to the first chapter of Genesis, when is the sun created and put where it is today? On day four. So I ask you, what was day 3? One thing’s for sure: it was not what we call day. Because day means a sun cycle vis-à-vis the earth, and that you didn’t have.
So day 3 is not what we call day. Ay, the Torah uses the word “day”? So the Torah means there’s some analogy, some similarity, between what we call day and what went on on the 3rd “day”. But it’s not the same, it’s not identical; it can’t be identical. So now you have to look for the analogy. You are free, if you like, if you choose to say the analogy is 24 hours, you can say that if you choose. But that’s because you’re choosing, not because it has to be that way. And indeed some commentators do say that.
The only thing I know in the Torah itself about this thing called day is that it’s an alternation of light and dark. That I know – evening, morning, there is an alternation between light and dark. But light and dark what? On day 3 it wasn’t the body of the earth obscuring the sun and making dark, that wasn’t what it was, the sun wasn’t where it is. So you are open, if you like, to take the first 3 days of Genesis and understand them as alternation of something else called light and dark and it can be as long as you like.
Ah, but then you’ll ask, what about days 4, 5, and 6? And I will answer you that each of the 6 days in Genesis ends with a description – it was evening, it was morning, day such-and-such. And nowhere in the succeeding 999 pages of the Tanach do you ever have that phrase again, never. Not in the Psalms and not in Job and not in Isaiah and not in Deuteronomy and not in Joshua and not in Samuel and not in Writings, nowhere.
Now look at it as a book. The author or authors of this book are telling you something. The first six days have a certain character that no time period has in the rest of the entire book. So if I have sufficient reason to regard the first 3 as much longer periods of time it is acceptable on literary grounds to say that days 4, 5, and 6 also have that longer period of time. And therefore I can say that the 6 “yamim” that the Torah describes were a much longer period of time. And indeed there are Midrashim that say this and there are Kabbalistic works that say this – that the world is much older than 5,763 years.
So the first reconciliation is the scientists are right – the universe is 14 billion years old, or whatever number they come up with tomorrow, and our date only goes back to Adam. Because from Adam’s life on, there you have a calculation of years, overlapping genealogies, and there the date is fixed.
The next problem will be, they’ll tell you, but human beings are much older than 5,763 years. Human beings are 200,000 years old or 2 million years old, depending upon who you ask and how exactly you define it. What are we going to do with the paleontological data, the bones of our ancestors that we found in the rocks? You say if it’s got 7 syllables, it’s got to be right.
Here it’s crucial to decide what you mean by human. Since it’s our date – 5,763 – we get to define what we mean by human, we don’t have to follow their definition. 5,763 in our dating takes us back to Adam. What kind of creature was Adam? Well, Adam had a certain bodily structure similar to ours, and he had a certain level of intelligence, and he had concepts of morality and spirituality, because God spoke to him, because God gave him a command, because he was held responsible for violating the command and indeed punished for it. For us to be human, a descendant of Adam, means 4 characteristics: body, mind, morality and spirituality.
So the question ought to be: do we have any evidence that there are other creatures older than 5,763 that have all 4 characteristics? Well, let’s see: body – if you find bones, you can infer certain things about the body of the creature that had the bones. And if you find tools and habitations and those enchanting cave pictures drawn in France from 25 or 35 thousand years ago, you can certainly infer intelligence. How would you infer morality and spirituality – right and wrong, good and evil, some transcendent being, transcendent values? Where’re you going to infer that from?
The answer is, you can only infer that from language. If they wrote something and we can decipher what they wrote then we would have a basis for saying that they knew the difference between right and wrong and they had a concept of spirituality. Without writing, nothing you’re going to find in bones and in artifacts and in habitations is going to be good evidence that the creature possessed morality and spirituality. And the oldest writing ever discovered is about 5,300 years old. There is no older writing of which we are aware, anywhere on the planet. So as of this moment, there is no reason to say that anything was here older than 5,763 years that was what we call human.
Ay, you’ll tell me, but those creatures that painted the pictures, they clearly had some great intelligence, and some great sensitivity and they’re only 20 thousand years removed from us. Are you telling me that in that short space of 20 thousand years, or 15 thousand years, such a tremendous difference took place? Well, even in evolutionary terms, which I’m echoing now, the case is not open and shut, because many people feel that those creatures that painted the cave pictures don’t stand in our line. They’re not our grandfathers, they’re our cousins. Indeed there were up to four different what you could call semi-human species running around at the same time and only one survived. They’re not our ancestors, they’re part of a branch that came to a dead end. So they aren’t directly in our line at all, and therefore they have nothing to do with us.
The first solution says the universe is 14 billion years old, 5,763 goes back to Adam, Adam has 4 characteristics, body, mind, spirit and values and there’s no evidence of anything older than Adam that had those 4 characteristics. That’s solution number one.
Solution number two. The universe, capital U, the whole shebang, is 5,763 years old, period. There ain’t no more, that’s the finish. How could that be? And science tells us it’s 14 billion? The answer is this: God created the universe looking older than it is. He did a mock-up job, it’s a Hollywood job. You set the stage with stuff and you make it look old. But it isn’t really old; it was created looking old. If you would saw down a tree in the Garden of Eden, you’d find tree rings. Even though tree rings usually mean that that’s the number of years the tree was growing – but not these trees; these trees were created with tree rings inside.
Adam was created not as a newborn infant, much less as a fertilized cell; Adam was created as an adult. You meet him 5 minutes after he was created and he looks 30 years old. But he isn’t 30 years old, he’s 5 minutes old. He was just created looking older.
Now, similarly, the universe as a whole is created looking much older than it is. The scientists have correctly followed the clues and drawn the correct conclusions, they just didn’t know that the clues were planted and aren’t genuine.
Now, the usual critique of that idea is: why would God do that? And if you answer, to deceive us, then the intelligentsia, including Dawkins, will tell you that that means you believe in a trickster God, a jokester God, and isn’t that foolish, and isn’t that belittling even to religion? Even Elliott Sober, who’s a very good philosopher, fell into this trap.
Now listen, I can’t resist scratching an itch here, because I’m a logician. This question is not relevant. Suppose I said God created the universe looking older than it is, and if you ask me why, I don’t know. I don’t know why He did it; I haven’t got the foggiest idea why He did it. Does that mean that my solution is wrong? What does that have to do with the solution? Must I have a complete biography or psychology of God to say that He did one thing? It’s just not relevant.
But as a matter of fact we do have an answer, and it’s an answer not concocted for this particular occasion. This is where Sober reveals a certain kind of naiveté. From our point of view, one of the principles of creation is that God hides His presence. He hides His presence in the seeming laws of nature; He hides His presence in the suffering that goes on in the world. One of the principles of creation is to hide His presence, and this could be just another reflection of hiding His presence. So for us this is not a difficult question.
The difficult question, or at least the more difficult one, will be this. Look, the critic will say, you’re using a technique. The technique is: you have a raft of evidence for X, and you say, I don’t accept X, I don’t agree with X, because the evidence is phony, the evidence is all phony, and that’s why I don’t have to accept the conclusion. The critic will say: if you follow that method, couldn’t you short-circuit every investigation, every inquiry, every conclusion? Couldn’t you always say that God, especially an all-powerful God, put evidence there but it isn’t really true? “I know you thought you saw me shoot bullets into your car and cause it to explode, but that’s just because God caused you to hallucinate. It wasn’t really me; I was inside reading a book. I know you saw it, but couldn’t God do that? Yes, so I’m free; you can’t take me to court, because God just made you think that.” If you use that method you could short-circuit every inquiry, every investigation, every conclusion, and, the critic will say, any method that short-circuits every investigation, every inquiry and every conclusion is a wrong method. It’s a method that stops you from doing anything.
Now, there’s an answer to this critique, and the answer is this: you’re right, if I used this method without limit, without some kind of limits that are reasonable and justified, it would be illegitimate. But here I have a reason, a reason that enables me to limit it.
Now, I’ll give you an analogy and then I’ll show you how it applies. George is accused of committing murder. What we have is George’s footprints outside the window where the crime was committed and his fingerprints inside; George has a motive; and we found a weapon in George’s possession that matches the weapon that committed the murder. That’s a considerable amount of evidence.
Let’s suppose George’s attorney says, “My defense of my client is, it’s a frame-up. He’s being framed.” That’s all he says, one sentence, to the judge or the jury, “My client is being framed.”
Is that going to be a successful defense? No. The reason it’s not successful is this: you could say that in every case. And if you thought that that was a successful defense, you could never convict anybody of anything. So of course it’s stupid, you can’t use that as a defense.
But now let’s suppose that in addition to the footprints and the fingerprints and the motive and the weapon (those things are produced by the prosecution), the defense produces a witness who says he saw George at the time of the crime 100 miles away. Now you have a problem, because now you have a contradiction in the evidence.
And now the defense lawyer says, “I have a witness who says he saw him 100 miles away, and I tell you that all the rest of it is planted, all the rest of it is a frame-up.” Now it would be worthwhile investigating a frame-up, because frame-ups do happen. They’re rare, but they do happen. So when you have a reason to think that there’s a contradiction, then asserting that there’s a frame-up is a reasonable hypothesis.
Now, our attitude towards the age of the universe is this: the scientists produce a raft of evidence, and they say that the universe is 14 billion years old. We have evidence for the truth of the Jewish tradition. Evidence! We don’t believe it just because it makes us feel good, because we like cholent. We have evidence that it’s true. So if I look at my world and survey the sum total, I see evidence on one side and evidence on the other side. If there’s evidence on both sides, then to suggest that one is the result of a frame-up is not irrational. And that’s exactly what we’re suggesting. We’re suggesting that the scientists are responding to a frame-up: God framed the world to look much older than it really is, and therefore the universe really is 5,763 years old.
So I think that this is also a reasonable solution and that means we have two reasonable solutions, and any problem for which I have two reasonable solutions is a problem over which I don’t lose too much sleep.
Question: What do Christians believe about the world being as old as the Torah says? They also believe in the Torah, right?
Answer: They’re split. There are fundamentalist Christians who take it literally and believe it’s that old, and there are reform Christians who don’t believe that. Both of the opinions that I just gave you, you’ll find among Christian thinkers as well. They face, at the outset, the same problem.
Question: So you’re saying that dinosaurs are only 5,763 years old?
Answer: According to the second solution, there never were any dinosaurs – giant creatures roaming the earth, screaming and shaking the ground and chewing up elephants and things – there are just bones. God created bones in the ground.
You could sell the first answer. The first answer definitely meets your prejudices much better; it protects science – thank God! – it’s really 14 billion years, and it just goes back to Adam. That’s easier to sell to people; that’s why I put it first.
First of all, you don’t need to know the reason to say that it happened. Listen, your next-door neighbor: you don’t know the reasons for half the things he does, but that doesn’t mean he doesn’t do them. Knowing the reason why has nothing to do with whether he did it or not. Those are two entirely different questions. And if you find people who are so prejudiced they won’t think about the second answer, give them the first answer. That’s why I put it first: it usually washes better, because people are too prejudiced to hear the second answer.
Question: If you believe that it is a frame-up, what would be the motivation for God to do that?
Answer: It’s a principle of the creation that He creates it in such a way that He hides His activity. So here, if we have a book that says the world is 5,763 years old and scientifically it seems to be much older, that’s a lot of evidence against the book, and that would be a way of hiding His activity.

It’s a little bit like this: we say the universe has a beginning. That’s for sure; no matter how old it is, it has a beginning. All educated opinion until the middle of the 20th century believed that the universe has no beginning. That was always the belief. That was the belief of Aristotle, and that was the belief of the Newtonians – the universe has no beginning. Can you imagine that in 1965 scientific opinion swung over to say it does have a beginning? That was pretty shocking. That’s just a little glimpse. Now, just imagine they recalculated everything and said, “Yeah, 5,763.” It would be all over.