Thursday, March 28, 2019


Atheism Is Inconsistent with the Scientific Method, Prizewinning Physicist Says

[[ I received two emails criticizing my endorsement of this article on the grounds that some of the beliefs of the author violate the Torah. My interest in the article is not that it expresses a Torah point of view. My interest is in the admissions of a scientist - marked in bold - that are useful in debate with non-Orthodox people. Those admissions significantly weaken their critique of Torah. DG]] 

In conversation, the 2019 Templeton Prize winner does not pull punches on the limits of science, the value of humility and the irrationality of nonbelief
By Lee Billings on March 20, 2019

Theoretical physicist Marcelo Gleiser, recipient of the 2019 Templeton Prize. Credit: Eli Burakian, Dartmouth College
Marcelo Gleiser, a 60-year-old Brazil-born theoretical physicist at Dartmouth College and prolific science popularizer, has won this year’s Templeton Prize. Valued at just under $1.5 million, the award from the John Templeton Foundation annually recognizes an individual “who has made an exceptional contribution to affirming life’s spiritual dimension.” Its past recipients include scientific luminaries such as Sir Martin Rees and Freeman Dyson, as well as religious or political leaders such as Mother Teresa, Desmond Tutu and the Dalai Lama.
Across his 35-year scientific career, Gleiser’s research has covered a wide breadth of topics, ranging from the properties of the early universe to the behavior of fundamental particles and the origins of life. But in awarding him its most prestigious honor, the Templeton Foundation chiefly cited his status as a leading public intellectual revealing “the historical, philosophical and cultural links between science, the humanities and spirituality.” He is also the first Latin American to receive the prize.

Scientific American spoke with Gleiser about the award, how he plans to advance his message of consilience, the need for humility in science, why humans are special, and the fundamental source of his curiosity as a physicist.
 [An edited transcript of the interview follows.]

Scientific American: First off, congratulations! How did you feel when you heard the news?
Marcelo Gleiser: It was quite a shocker. I feel tremendously honored, very humbled and kind of nervous. It’s a cocktail of emotions, to be honest. I put a lot of weight on the fact that I’m the first Latin American to get this. That, to me anyway, is important—and I’m feeling the weight on my shoulders now. I have my message, you know. The question now is how to get it across as efficiently and clearly as I can, now that I have a much bigger platform to do that from.
You’ve written and spoken eloquently about the nature of reality and consciousness, the genesis of life, the possibility of life beyond Earth, the origin and fate of the universe, and more. How do all those disparate topics synergize into one, cohesive message for you?
To me, science is one way of connecting with the mystery of existence. And if you think of it that way, the mystery of existence is something that we have wondered about ever since people began asking questions about who we are and where we come from. So while those questions are now part of scientific research, they are much, much older than science. I’m not talking about the science of materials, or high-temperature superconductivity, which is awesome and super important, but that’s not the kind of science I’m doing. I’m talking about science as part of a much grander and older sort of questioning about who we are in the big picture of the universe. To me, as a theoretical physicist and also someone who spends time out in the mountains, this sort of questioning offers a deeply spiritual connection with the world, through my mind and through my body. Einstein would have said the same thing, I think, with his cosmic religious feeling.

Right. So which aspect of your work do you think is most relevant to the Templeton Foundation’s spiritual aims?
Probably my belief in humility. I believe we should take a much humbler approach to knowledge, in the sense that if you look carefully at the way science works, you’ll see that yes, it is wonderful — magnificent! — but it has limits. And we have to understand and respect those limits. And by doing that, by understanding how science advances, science really becomes a deeply spiritual conversation with the mysterious, about all the things we don’t know. So that’s one answer to your question. And that has nothing to do with organized religion, obviously, but it does inform my position against atheism. I consider myself an agnostic.

Why are you against atheism?

I honestly think atheism is inconsistent with the scientific method. What I mean by that is, what is atheism? It’s a statement, a categorical statement that expresses belief in nonbelief. “I don’t believe even though I have no evidence for or against, simply I don’t believe.” Period. It’s a declaration. But in science we don’t really do declarations. We say, “Okay, you can have a hypothesis, you have to have some evidence against or for that.” And so an agnostic would say, look, I have no evidence for God or any kind of god (What god, first of all? The Maori gods, or the Jewish or Christian or Muslim God? Which god is that?) But on the other hand, an agnostic would acknowledge no right to make a final statement about something he or she doesn’t know about. “The absence of evidence is not evidence of absence,” and all that. This positions me very much against all of the “New Atheist” guys—even though I want my message to be respectful of people’s beliefs and reasoning, which might be community-based, or dignity-based, and so on. And I think obviously the Templeton Foundation likes all of this, because this is part of an emerging conversation. It’s not just me; it’s also my colleague the astrophysicist Adam Frank, and a bunch of others, talking more and more about the relation between science and spirituality.

So, a message of humility, open-mindedness and tolerance. Other than in discussions of God, where else do you see the most urgent need for this ethos?
You know, I’m a “Rare Earth” kind of guy. I think our situation may be rather special, on a planetary or even galactic scale. So when people talk about Copernicus and Copernicanism—the ‘principle of mediocrity’ that states we should expect to be average and typical—I say, “You know what? It’s time to get beyond that.” When you look out there at the other planets (and the exoplanets that we can make some sense of), when you look at the history of life on Earth, you will realize this place called Earth is absolutely amazing. And maybe, yes, there are others out there, possibly—who knows, we certainly expect so—but right now what we know is that we have this world, and we are these amazing molecular machines capable of self-awareness, and all that makes us very special indeed. And we know for a fact that there will be no other humans in the universe; there may be some humanoids somewhere out there, but we are unique products of our single, small planet’s long history.
The point is, to understand modern science within this framework is to put humanity back into kind of a moral center of the universe, in which we have the moral duty to preserve this planet and its life with everything that we’ve got, because we understand how rare this whole game is and that for all practical purposes we are alone. For now, anyways. We have to do this! This is a message that I hope will resonate with lots of people, because to me what we really need right now in this increasingly divisive world is a new unifying myth. I mean “myth” as a story that defines a culture. So, what is the myth that will define the culture of the 21st century? It has to be a myth of our species, not about any particular belief system or political party. How can we possibly do that? Well, we can do that using astronomy, using what we have learned from other worlds, to position ourselves and say, “Look, folks, this is not about tribal allegiance, this is about us as a species on a very specific planet that will go on with us—or without us.” I think you know this message well.

I do. But let me play devil’s advocate for a moment, only because earlier you referred to the value of humility in science. Some would say now is not the time to be humble, given the rising tide of active, open hostility to science and objectivity around the globe. How would you respond to that?
This is of course something people have already told me: “Are you really sure you want to be saying these things?” And my answer is yes, absolutely. There is a difference between “science” and what we can call “scientism,” which is the notion that science can solve all problems. To a large extent, it is not science but rather how humanity has used science that has put us in our present difficulties. Because most people, in general, have no awareness of what science can and cannot do. So they misuse it, and they do not think about science in a more pluralistic way. So, okay, you’re going to develop a self-driving car? Good! But how will that car handle hard choices, like whether to prioritize the lives of its occupants or the lives of pedestrian bystanders? Is it going to just be the technologist from Google who decides? Let us hope not! You have to talk to philosophers, you have to talk to ethicists. And to not understand that, to say that science has all the answers, to me is just nonsense. We cannot presume that we are going to solve all the problems of the world using a strict scientific approach. It will not be the case, and it hasn’t ever been the case, because the world is too complex, and science has methodological powers as well as methodological limitations.
And so, what do I say? I say be honest. There is a quote from the physicist Frank Oppenheimer that fits here: “The worst thing a son of a bitch can do is turn you into a son of a bitch.” Which is profane but brilliant. I’m not going to lie about what science can and cannot do because politicians are misusing science and trying to politicize the scientific discourse. I’m going to be honest about the powers of science so that people can actually believe me for my honesty and transparency. If you don’t want to be honest and transparent, you’re just going to become a liar like everybody else. Which is why I get upset by misstatements, like when you have scientists—Stephen Hawking and Lawrence Krauss among them—claiming we have solved the problem of the origin of the universe, or that string theory is correct and that the final “theory of everything” is at hand. Such statements are bogus. So, I feel as if I am a guardian for the integrity of science right now; someone you can trust because this person is open and honest enough to admit that the scientific enterprise has limitations—which doesn’t mean it’s weak!

You mentioned string theory, and your skepticism about the notion of a final “theory of everything.” Where does that skepticism come from?
It is impossible for science to obtain a true theory of everything. And the reason for that is epistemological. Basically, the way we acquire information about the world is through measurement. It’s through instruments, right? And because of that, our measurements and instruments are always going to tell us a lot of stuff, but they are going to leave stuff out. And we cannot possibly ever think that we could have a theory of everything, because we cannot ever think that we know everything that there is to know about the universe. This relates to a metaphor I developed that I used as the title of a book, The Island of Knowledge. Knowledge advances, yes? But it’s surrounded by this ocean of the unknown. The paradox of knowledge is that as it expands and the boundary between the known and the unknown changes, you inevitably start to ask questions that you couldn’t even ask before.
I don’t want to discourage people from looking for unified explanations of nature because yes, we need that. A lot of physics is based on this drive to simplify and bring things together. But on the other hand, it is the blank statement that there could ever be a theory of everything that I think is fundamentally wrong from a philosophical perspective. This whole notion of finality and final ideas is, to me, just an attempt to turn science into a religious system, which is something I disagree with profoundly. So then how do you go ahead and justify doing research if you don’t think you can get to the final answer? Well, because research is not about the final answer, it’s about the process of discovery. It’s what you find along the way that matters, and it is curiosity that moves the human spirit forward.

Speaking of curiosity… You once wrote, “Scientists, in a sense, are people who keep curiosity burning, trying to find answers to some of the questions they asked as children.” As a child, was there a formative question you asked, or an experience you had, that made you into the scientist you are today? Are you still trying to answer it?
I’m still completely fascinated with how much science can tell about the origin and evolution of the universe. Modern cosmology and astrobiology have most of the questions I look for—the idea of the transition from nonlife, to life, to me, is absolutely fascinating. But to be honest with you, the formative experience was that I lost my mom. I was six years old, and that loss was absolutely devastating. It put me in contact with the notion of time from a very early age. And obviously religion was the thing that came immediately, because I’m Jewish, but I became very disillusioned with the Old Testament when I was a teenager, and then I found Einstein. That was when I realized, you can actually ask questions about the nature of time and space and nature itself using science. That just blew me away. And so I think it was a very early sense of loss that made me curious about existence. And if you are curious about existence, physics becomes a wonderful portal, because it brings you close to the nature of the fundamental questions: space, time, origins. And I’ve been happy ever since.



Sunday, March 17, 2019

A New Discovery Upends What We Know About Viruses

A plant virus distributes its genes into eight separate segments that can all reproduce, even if they infect different cells.


It is a truth universally acknowledged among virologists that a single virus, carrying a full set of genes, must be in want of a cell. A virus is just a collection of genes packaged into a capsule. It infiltrates and hijacks a living cell to make extra copies of itself. Those daughter viruses then bust out of their ailing host, and each finds a new cell to infect. Rinse, and repeat. This is how all viruses, from Ebola to influenza, are meant to work.
But Stéphane Blanc and his colleagues at the University of Montpellier have shown that one virus breaks all the rules.
Faba bean necrotic stunt virus, or FBNSV for short, infects legumes, and is spread through the bites of aphids. Its genes are split among eight segments, each of which is packaged into its own capsule. And, as Blanc’s team has now shown, these eight segments can reproduce themselves, even if they infect different cells. FBNSV needs all of its components, but it doesn’t need them in the same place. Indeed, this virus never seems to fully come together. It is always distributed, its existence spread between capsules and split among different host cells.
“This is truly a revolutionary result in virology,” says Siobain Duffy of Rutgers University, who wasn’t involved in the study. “Once again, viruses prove that they’ve had the evolutionary time to try just about every reproductive strategy, even ones that are hard for scientists to imagine.”


Thursday, March 14, 2019


Kept in Mind
Language in Our Brain: The Origins of a Uniquely Human Capacity
by Angela Friederici
The MIT Press, 304 pp., $45.00.
https://inference-review.com/article/kept-in-mind

Juan Uriagereka is a linguist at the University of Maryland.

[[All emphasis below is mine. D.G.]]

ANGELA FRIEDERICI is the director of the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig. An internationally renowned neuropsychologist, she is known as well for her expertise in linguistics. Language in Our Brain thus offers an insider’s account of the play between language and the neurosciences. In his endorsement, David Poeppel describes her book as a “masterful summary of decades of work on the neurobiological foundations of language.” He goes on to remark that it “develops a comprehensive account of how this most complex of human computational functions is organized, providing a detailed and lucid perspective on the neuroscience of language.”1
In Language in Our Brain, we are dealing with an obviously important book.
Drums and Symbols
AFTER STUDYING Alexis St. Martin’s stomach through a gastric fistula in the 1820s, US Army surgeon William Beaumont determined that digestion is more chemical than mechanical. Muscular contractions of the stomach mashed his patient’s food, but it was the stomach’s gastric acid that dissolved it. Beaumont is today known as the father of gastric physiology, if only because two centuries ago, physicians had no idea how digestion worked.
At the beginning of the twentieth century, Santiago Ramón y Cajal advanced the daring hypothesis that neurons form a discrete lattice. If neurons are discrete, synaptic transmission follows inevitably. One neuron can do little, after all, and if many neurons can do more, they must be in touch with one another. This is precisely the conclusion that Cajal drew.
In 1906, Cajal shared the Nobel Prize with Camillo Golgi, who argued that the brain comprises a single continuous but reticulated network. Cajal was correct and Golgi wrong. Or so it seems today.
In the mid-1950s, Noam Chomsky solidified theoretical syntax. Thinking in general, and language in particular, he argued, boils down to symbolic manipulation—a perspective known as the computational theory of mind. It is a point of view deeply indebted to the work undertaken by the great logicians of the 1930s: Kurt Gödel, Alonzo Church, Alan Turing, and Emil Post. For all the sophistication of their ideas, surprisingly simple questions remain. Which part of our brain carries information forward in time? No one knows. For that matter, no one knows what a symbol is, or where symbolic interactions take place. The formal structures of linguistics and neurophysiology are disjoint, a point emphasized by Poeppel and David Embick in a widely cited study.2 There is an incommensurability between theories of the brain, TB, and theories of the mind, TM. This is the sort of granularity issue that concerned Poeppel and Embick. TM deals with formal devices and how they interact, while TB deals with waves of different frequencies and amplitudes, and how they overlap in time sequences across brain regions. In the absence of a common vocabulary and conceptual space, TM and TB are, at best, conceptual strangers. Are they elementarily equivalent, or is one an extension of the other? Do they share a common model? Is there a mapping between them such that one is interpretable in the other? Language in Our Brain does not attempt explicitly to ask such questions. It is worth considering whether they have answers, and if not, whether they are correctly posed.
Friederici takes linguistics seriously, and this is all to the good. Few neuropsychologists have studied how sentences break down into phrases, or how words carry meanings, or why speech is more than just sound. No one has distinguished one thought from another by dissecting brains. Neuroimaging tells us only when some areas of the brain light up selectively. Brain wave frequencies may suggest that different kinds of thinking are occurring, but a suggestion is not an inference—even if there is a connection between certain areas of the brain and seeing, hearing, or processing words. Connections of this sort are not nothing, of course, but neither are they very much. Is this because techniques have not yet been developed to target individual neurons? Or is it because thinking is more subtle than previously imagined?
We may not figure this out within our lifetimes.
There have historically been many theories that rival the computational theory of mind. In his famous review of B. F. Skinner’s Verbal Behavior, Chomsky demonstrated that no such theory could explain the fact that human language is compositional, representational, and recursive.3 It is within the space marked by the computational theory of mind that these properties receive an explanation. Progress is slow. Language in Our Brain talks of information or representations, but the corresponding entries are not in the glossary or index. The book says little about them. When Friederici writes about the “fast computation of the phonological representation,” an obvious inferential lapse is involved.4 Some considerable distance remains between the observation that the brain is doing something and the claim that it is manipulating various linguistic representations. Friederici notes the lapse. “How information content is encoded and decoded,” she remarks, “in the sending and receiving brain areas is still an open issue—not only with respect to language, but also with respect to the neurophysiology of information processing in general.”5
At the Limits of Neuroimaging
ANY DISCUSSION about language and the brain must be focused on human language, and throughout her book Friederici assumes that something like the minimalist program is its underlying theory. Minimalism is a streamlined version of generative grammar, and it is precisely because of this theoretical streamlining that finding syntax within the brain is even possible. Neuroimaging techniques depict the brain as it digests information. This is material that Friederici handles expertly. The techniques that she describes yield a variety of markers typically signaling time in milliseconds, and the electrophysiological polarity in the signal. There has been some progress in determining precisely where what is taking place takes place. Overall oscillation packages in brain waves can also be studied through some of the same techniques. Even more recent techniques allow the analysis of neurotransmitters or even single neurons. The results suggest the existence of neural pathways connecting brain regions, or representational networks.
“[E]ven during task-dependent functional magnetic resonance imaging,” Friederici acknowledges, “only about 20 percent of the activation is explained by the specific task whereas about 80 percent of the low frequency fluctuation is unrelated.”6 A lot is going on at any given time within a given brain, and experimenters have to ingeniously subtract what is irrelevant from whatever task is observed. This is familiar enough from daily life. We do many things at once. With present technology, there is no way to determine what each neuron is doing at any given moment, or whether neuronal teams are firing together to perform a given task. Below a millisecond, sensing techniques yield noise.
Cognitive scientists cannot say how the mass or energy of the brain is related to the information it carries. Everyone expects that more activity in a given area means more information processing. No one has a clue whether it is more information or more articulated information, or more interconnected information, or whether, for that matter, the increased neuro-connectivity signifies something else entirely. Friederici remarks:
The picture that can be drawn from the studies reviewed here is neuroanatomically quite concise with respect to sensory processes and those cognitive processes that follow fixed rules, namely, the syntactic processes. Semantic processes that involve associative aspects are less narrowly localized.7
And then there are event-related potential effects, or stereotyped electrophysiological responses to a stimulus:
Acoustic processes and processes of speech sound categorization take place around 100 ms (N100). Initial phrase structure building takes place between 120 and 250 ms (ELAN), while the processing of semantic, thematic, and syntactic relations is performed between 300 and 500 ms (LAN, N400). Integration of different information types takes place around 600 ms (P600). These processes mainly involve the left hemisphere.8
Markers like N100, N400, or P600 signal whether the electrophysiological reading is positive (P) or negative (N). No one knows what such polarities entail. It is the functions of brain areas and timeframes, Friederici assumes, that determine whether something is early or late, anterior or posterior, lateral or bilateral. If the perception of a signal presupposes some sensory modality, the modality must swing into action before computation begins. Language in Our Brain is written in the expectation, or the hope, that a division of labor into phonetics, morphology, syntax, semantics, and pragmatics more or less corresponds to the tasks the brain executes in aggregating representations from more elementary bits.
Minimalism
MERGE IS THE essential operation of Chomsky’s minimalism, because it is the simplest way of putting linguistic items together. “Merge,” Friederici assumes, “has a well-defined localization in the human brain.”9
Localized? Localized where?
“[I]n the most ventral anterior portion of the BA 44.”10
The data, Friederici writes, citing Poeppel’s work, “suggest that neural activation reflects the mental construction of hierarchical linguistic structures.”11 But hierarchical linguistic structures are one thing, and Merge is quite another. It is by being merged that the and ship yield the ship. To go beyond that to sail the ship involves the merger of sail and the ship. In what order do these two mergers take place? The merger of the ship must take place before that particular ship sets sail because the ship is a phrase, and there is no word to which sail could have merged within this sentence. Sail the is not a phrase of English.
There is a difference between the temporal order of events in neurophysiology and the logical order of events in syntax. It is obvious that, in the phrase sail the ship, we first pronounce sail and next the, with ship coming last. What could it mean to say that I first merged the and ship, and then merged them with sail? When I have heard or read sail the ship I have encountered sail the… first, and at that point I cannot know whether what is next is going to be ship, boat, or even skies.
Consider:
  1.
    a. The man sailed the ship.
    b. [[NP The man ]NP [VP sailed [NP the ship ]NP ]]
Sentence 1a has a subject, the noun phrase the man, and a predicate, the verb phrase sailed the ship. There is a logical order in which a sentence like this is assembled, in terms of what grammarians call thematic relations. Friederici is sensitive to the apparatus of modern syntax. Thematic relations, she writes, express “the relation between the function denoted by a noun and the meaning of the action expressed by the verb.”12 It follows that the relationship between sailed and what (the) ship denotes is logically prior to that between what (the) man denotes and the rest of the sentence. In 1b the ship is merged first, but what is first said is the man. The speech sequence (as perceived) and the syntactic sequence (as generated) are at odds.
Generative grammar addresses this sort of orthogonality by separating competence from performance. Competence reveals that in 1a Merge works from the bottom up, following the brackets in 1b. That what is first encountered in speech is the man is a fact of performance, a matter of parsing. This poses a serious puzzle. Hearing or reading a sentence is an affair from before to after. It is not bottom up. Parsing even something as simple as 1a is a gambit. After the phrase the man has been parsed, it is held on a memory buffer in order to allow the mental parser to concentrate on what comes next, so as to establish thematic integration. In 1a, that happens to be sailed the ship, but consider:
  2. The man
    a. sailed a balloon.
    b. sailed a kite.
    c. sailed a space-probe.
These are all sailings, but rather different actions are asserted for the subject, which is assigned dissimilar thematic roles depending on information that is only accessed upon parsing the direct object of each sentence. Neuroimaging cannot possibly determine whether theta relations are at work in such an elaborate parsing, or whether considerations of memory and attention are paramount instead. How would one decide that whatever is going on at BA 44 is Merge, as opposed to, for instance, the processed phrase being assigned to an active memory buffer? Merge involves systematic and phrasally complex combinatorial information, which is why language recognizers routinely invoke such notions as a memory stack. As far as I can see, present-day observational technology does not seem capable of teasing apart these different components of syntax at work, so it seems to me premature to claim that the observables localize Merge.
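The contrast between parsing order and Merge order can be made concrete with a toy sketch. The grammar, category labels, and reduce rules below are my own hypothetical simplifications, not anything proposed by Friederici or by minimalism; the point is only that a shift-reduce parser consumes words left to right (the performance order) while its reduce steps, standing in here for Merge, apply bottom-up, so that the ship is assembled before sailed combines with anything:

```python
# Toy shift-reduce parse of "the man sailed the ship".
# Shift order is the speech sequence (performance); reduce order,
# standing in for Merge, is bottom-up (competence).

LEXICON = {"the": "D", "man": "N", "ship": "N", "sailed": "V"}
RULES = [(("D", "N"), "NP"),   # the + ship -> [the ship]
         (("V", "NP"), "VP"),  # sailed + [the ship] -> [sailed [the ship]]
         (("NP", "VP"), "S")]  # [the man] + VP -> sentence

def parse(words):
    stack, merge_log = [], []
    for w in words:
        stack.append((LEXICON[w], w))       # shift: left to right
        changed = True
        while changed:                      # reduce greedily: bottom-up
            changed = False
            for pair, cat in RULES:
                if len(stack) >= 2 and (stack[-2][0], stack[-1][0]) == pair:
                    right, left = stack.pop(), stack.pop()
                    stack.append((cat, (left, right)))
                    merge_log.append(cat)   # record the Merge order
                    changed = True
                    break
    return stack, merge_log
```

Running parse("the man sailed the ship".split()) leaves a single S on the stack and logs the merges as NP, NP, VP, S: the subject [the man] is assembled early and then waits on the stack (the memory buffer of the passage above) while [the ship] and the verb phrase are built, and [the ship] is indeed merged before sailed combines with anything, as the logical order requires.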
The Functional Language Network
THERE IS evidence, Friederici suggests, that different neuronal networks support early and late syntactic processes. These networks are bound together by fiber tracts. There is also a language network at the molecular level. “Information flows,” Friederici writes, “from the inferior frontal gyrus back to the posterior temporal cortex via the dorsal pathway.”13 This is, of course, inferential: no one has seen information flowing, if only because no one has ever seen information. But brain events cohere at different levels into a pattern, which is consistent with what can be surmised from brain deficits and injuries. A functional language network, if more abstract than the digestive system, is no less real.
The question is how the thing works; indeed, the question of what the functional language network might be doing should, in my view, be subordinated to the distinction between competence and performance. What the mind must know and what the brain must process are very generally orthogonal. Consider the feat involved in recognizing a word’s syntactic category, distinguishing transform from transformation and either of those from transformational. The grammatical morphemes -tion and -al come at the tail of the word. We process words from their onset, trans first, then form, and finally the suffixes. So what does the mind actually do as it encounters each of these, in that sequential order?
Faced with such considerations, Morris Halle and Kenneth Stevens pioneered the concept of analysis by synthesis in 1962.14 Thomas Bever and Poeppel remind us how this “heuristic model emphasizes a balance of bottom-up and knowledge-driven, top-down, predictive steps in speech perception and language comprehension.”15 In their view, a model integrating the orthogonality of narrow competence-driven computations and broad performative strategies is computationally tractable and biopsychologically plausible. In processing transformationalize, say, the model may make a first-pass prediction upon parsing transform that needs to be adjusted upon processing -tion, then -al, then -ize.
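To illustrate just the predict-and-revise aspect of such a model (not the acoustic prediction that analysis by synthesis also involves), here is a minimal sketch; the stem and suffix inventories are invented for the example, and the orthographic suffix "ation" stands in for the review's -tion:

```python
# Caricature of the predict-and-revise loop over the morphemes of
# "transformationalize". The stem is recognized bottom-up at the
# word's onset; each suffix then forces a revision of the top-down
# category prediction: V -> N -> A -> V.

STEM_CAT = {"transform": "V"}                      # invented mini-lexicon
SUFFIX_CAT = {"ation": "N", "al": "A", "ize": "V"}

def category_history(word):
    stem = next(s for s in STEM_CAT if word.startswith(s))
    rest = word[len(stem):]
    history = [(stem, STEM_CAT[stem])]             # first-pass prediction
    while rest:
        sfx = next(s for s in SUFFIX_CAT if rest.startswith(s))
        rest = rest[len(sfx):]
        history.append((sfx, SUFFIX_CAT[sfx]))     # revise on each suffix
    return history
```

For "transformationalize" the history runs transform (V), ation (N), al (A), ize (V): each incoming suffix overturns the category the parser had just settled on, which is the kind of first-pass prediction and adjustment the Halle and Stevens model describes.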
A functional language network is, no doubt, playing some kind of role in such processes, but whether the activity that imaging techniques reveal when our brain entertains these symbolic dependencies involves the grammarian’s Merge, or something else entirely, no one really knows. Language in Our Brain begins by quoting Paul Flechsig: “[I]t is rather unlikely that psychology, on its own, will arrive at the real, lawful characterization of the structure of the mind, as long as it neglects the anatomy of the organ of the mind.”16 I am left wondering whether neurobiology shouldn’t have to take in all seriousness the central results of cognitive psychology—including the competence/performance divide—if seeking a lawful understanding of the human mind.
  1. MIT Press, “Language in Our Brain: The Origins of a Uniquely Human Capacity.” 
  2. David Poeppel and David Embick, “Defining the Relation between Linguistics and Neuroscience,” in Twenty-first Century Psycholinguistics: Four Cornerstones, ed. Anne Cutler (Mahwah: Lawrence Erlbaum, 2005), 103–20. 
  3. Noam Chomsky, “A Review of B. F. Skinner’s Verbal Behavior,” Language 35 no. 1 (1959): 26–58. 
  4. Angela Friederici, Language in Our Brain: The Origins of a Uniquely Human Capacity (Cambridge, MA: MIT Press, 2017), 20. 
  5. Ibid., 121. 
  6. Ibid., 127. 
  7. Ibid., 82. 
  8. Ibid. 
  9. Ibid., 4. 
  10. Ibid., 42. 
  11. Ibid., 53. 
  12. Ibid., 237. 
  13. Ibid., 129. 
  14. Morris Halle and Kenneth Stevens, “Speech Recognition: A Model and a Program for Research,” IRE Transactions on Information Theory 8 (1962): 155–59. 
  15. Thomas Bever and David Poeppel, “Analysis by Synthesis: A (Re-)Emerging Program of Research for Language and Vision,” Biolinguistics 4, no. 2–3 (2010): 174. 
  16. Angela Friederici, Language in Our Brain: The Origins of a Uniquely Human Capacity (Cambridge, MA: MIT Press, 2017), v. 
Published on March 1, 2019 in Volume 4, Issue 3.