Wednesday, May 16, 2018




To Build Truly Intelligent Machines, Teach Them Cause and Effect
https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515/?utm_source=Quanta+Magazine&utm_campaign=1267952a81-RSS_Daily_Computer_Science&utm_medium=email&utm_term=0_f0cb61321c-1267952a81-389846569&mc_cid=1267952a81&mc_eid=61275b7d81

Judea Pearl, a pioneering figure in artificial intelligence, argues that AI has been stuck in a decades-long rut. His prescription for progress? Teach machines to understand the question why.

[[Note the critique of the limitations of current AI - DG.]]





May 15, 2018
Artificial intelligence owes a lot of its smarts to Judea Pearl. In the 1980s he led efforts that allowed machines to reason probabilistically. Now he’s one of the field’s sharpest critics. In his latest book, “The Book of Why: The New Science of Cause and Effect,” he argues that artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is.
Three decades ago, a prime challenge in artificial intelligence research was to program machines to associate a potential cause to a set of observable conditions. Pearl figured out how to do that using a scheme called Bayesian networks. Bayesian networks made it practical for machines to say that, given a patient who returned from Africa with a fever and body aches, the most likely explanation was malaria. In 2011 Pearl won the Turing Award, computer science’s highest honor, in large part for this work.
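The inference at the heart of that work is Bayes' rule chained across a network of conditional probabilities. A minimal single-step sketch in Python (the probabilities below are invented purely for illustration, not clinical figures):

    # One step of diagnostic reasoning: P(malaria | fever) for a traveler
    # returning from a malaria-endemic region. All numbers are illustrative.
    p_malaria = 0.02                 # prior P(malaria) for such a traveler
    p_fever_given_malaria = 0.95     # P(fever | malaria)
    p_fever_given_no_malaria = 0.10  # P(fever | no malaria): flu, other causes

    p_fever = (p_fever_given_malaria * p_malaria
               + p_fever_given_no_malaria * (1 - p_malaria))
    p_malaria_given_fever = p_fever_given_malaria * p_malaria / p_fever
    print(f"P(malaria | fever) = {p_malaria_given_fever:.2f}")  # ~0.16

A Bayesian network scales this idea up: each node stores only its probability conditioned on its parents, and evidence entered anywhere in the network updates beliefs everywhere else.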
But as Pearl sees it, the field of AI got mired in probabilistic associations. These days, headlines tout the latest breakthroughs in machine learning and neural networks. We read about computers that can master ancient games and drive cars. Pearl is underwhelmed. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “All the impressive achievements of deep learning amount to just curve fitting,” he said recently.
In his new book, Pearl, now 81, elaborates a vision for how truly intelligent machines would think. The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever. Once this kind of causal framework is in place, it becomes possible for machines to ask counterfactual questions — to inquire how the causal relationships would change given some kind of intervention — which Pearl views as the cornerstone of scientific thought. Pearl also proposes a formal language in which to make this kind of thinking possible — a 21st-century version of the Bayesian framework that allowed machines to think probabilistically.
Pearl expects that causal reasoning could provide machines with human-level intelligence. They’d be able to communicate with humans more effectively and even, he explains, achieve status as moral entities with a capacity for free will — and for evil. Quanta Magazine sat down with Pearl at a recent conference in San Diego and later held a follow-up interview with him by phone. An edited and condensed version of those conversations follows.
Why is your new book called “The Book of Why”?
It is meant to be a summary of the work I’ve been doing the past 25 years about cause and effect, what it means in one’s life, its applications, and how we go about coming up with answers to questions that are inherently causal. Oddly, those questions have been abandoned by science. So I’m here to make up for the neglect of science.


That’s a dramatic thing to say, that science has abandoned cause and effect. Isn’t that exactly what all of science is about?
Of course, but you cannot see this noble aspiration in scientific equations. The language of algebra is symmetric: If X tells us about Y, then Y tells us about X. I’m talking about deterministic relationships. There’s no way to write in mathematics a simple fact — for example, that the upcoming storm causes the barometer to go down, and not the other way around.
Mathematics has not developed the asymmetric language required to capture our understanding that if X causes Y that does not mean that Y causes X. It sounds like a terrible thing to say against science, I know. If I were to say it to my mother, she’d slap me.
But science is more forgiving: Seeing that we lack a calculus for asymmetrical relations, science encourages us to create one. And this is where mathematics comes in. It turned out to be a great thrill for me to see that a simple calculus of causation solves problems that the greatest statisticians of our time deemed to be ill-defined or unsolvable. And all this with the ease and fun of finding a proof in high-school geometry.
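Pearl's barometer example can be made concrete with a toy structural causal model. The sketch below is an illustration of the idea, not something from the book: conditioning on a low barometer reading makes a storm likely, but intervening on the barometer (forcing its needle down by hand, Pearl's "do" operator) tells us nothing about the weather.

    import random

    # Toy structural causal model following the article's framing: storm -> barometer.
    def sample(do_barometer=None):
        storm = random.random() < 0.3                    # exogenous cause
        barometer_low = storm if do_barometer is None else do_barometer
        return storm, barometer_low

    observed = [sample() for _ in range(100_000)]
    forced = [sample(do_barometer=True) for _ in range(100_000)]

    # Seeing: P(storm | barometer low) -- the association runs both ways
    p_seeing = sum(s for s, b in observed if b) / max(1, sum(b for _, b in observed))
    # Doing: P(storm | do(barometer low)) -- forcing the effect leaves the cause alone
    p_doing = sum(s for s, _ in forced) / len(forced)
    print(p_seeing, p_doing)  # ~1.0 vs. ~0.3 (the prior probability of a storm)

The asymmetry only appears once the model distinguishes causes from effects; no amount of observational data alone encodes it.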
You made your name in AI a few decades ago by teaching machines how to reason probabilistically. Explain what was going on in AI at the time.
The problems that emerged in the early 1980s were of a predictive or diagnostic nature. A doctor looks at a bunch of symptoms from a patient and wants to come up with the probability that the patient has malaria or some other disease. We wanted automatic systems, expert systems, to be able to replace the professional — whether a doctor, or an explorer for minerals, or some other kind of paid expert. So at that point I came up with the idea of doing it probabilistically.
Unfortunately, standard probability calculations required exponential space and exponential time. I came up with a scheme called Bayesian networks that required polynomial time and was also quite transparent.
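The computational savings come from factoring the joint distribution along the network's links. A back-of-the-envelope comparison, assuming binary variables and at most k parents per node:

    # A full joint distribution over n binary variables needs 2**n - 1 parameters.
    # A Bayesian network whose nodes each have at most k parents needs at most
    # n * 2**k parameters: one small conditional probability table per node.
    n, k = 30, 3
    print(2**n - 1)   # 1_073_741_823 parameters for the raw joint distribution
    print(n * 2**k)   # 240 parameters for the network

Pearl's belief-propagation algorithm exploits the same sparseness: on tree-like networks it answers diagnostic queries in time linear in the number of nodes.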
Yet in your new book you describe yourself as an apostate in the AI community today. In what sense?
In the sense that as soon as we developed tools that enabled machines to reason with uncertainty, I left the arena to pursue a more challenging task: reasoning with cause and effect. Many of my AI colleagues are still occupied with uncertainty. There are circles of research that continue to work on diagnosis without worrying about the causal aspects of the problem. All they want is to predict well and to diagnose well.
I can give you an example. All the machine-learning work that we see today is conducted in diagnostic mode — say, labeling objects as “cat” or “tiger.” They don’t care about intervention; they just want to recognize an object and to predict how it’s going to evolve in time.
I felt like an apostate when I developed powerful tools for prediction and diagnosis, knowing already that this is merely the tip of human intelligence. If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.
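In Pearl's framework, a counterfactual such as "What if I had finished high school?" is evaluated by a three-step recipe: abduction (infer the unobserved background factors from what actually happened), action (surgically set the variable in question), and prediction. A toy linear sketch; the model and its coefficients are invented for illustration:

    # Toy structural causal model (illustrative coefficients):
    #   salary = 20_000 + 15_000 * years_of_school_beyond_8th_grade + u
    # where u stands for this person's unobserved background factors.
    def salary(schooling, u):
        return 20_000 + 15_000 * schooling + u

    observed_schooling, observed_salary = 2, 40_000   # left school after 10th grade

    u = observed_salary - salary(observed_schooling, 0)   # 1. abduction: u = -10_000
    counterfactual_schooling = 4                          # 2. action: finish 12th grade
    print(salary(counterfactual_schooling, u))            # 3. prediction: 70_000

Because u is carried over from the actual world, the answer is about this individual rather than the population average, which is what distinguishes a counterfactual from an ordinary prediction.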
People are excited about the possibilities for AI. You’re not?
As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.

The way you talk about curve fitting, it sounds like you’re not very impressed with machine learning.
No, I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future — what next? Can you have a robot scientist that would plan an experiment and find new answers to pending scientific questions? That’s the next step. We also want to conduct some communication with a machine that is meaningful, and meaningful means matching our intuition. If you deprive the robot of your intuition about cause and effect, you’re never going to communicate meaningfully. Robots could not say “I should have done better,” as you and I do. And we thus lose an important channel of communication.
What are the prospects for having machines that share our intuition about cause and effect?
We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans.
The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.
Robots, too, will communicate with each other and will translate this hypothetical world, this wild world, of metaphorical models. 
When you share these ideas with people working in AI today, how do they react?
AI is currently split. First, there are those who are intoxicated by the success of machine learning and deep learning and neural nets. They don’t understand what I’m talking about. They want to continue to fit curves. But when you talk to people who have done any work in AI outside statistical learning, they get it immediately. I have read several papers written in the past two months about the limitations of machine learning.
Are you suggesting there’s a trend developing away from machine learning?
Not a trend, but a serious soul-searching effort that involves asking: Where are we going? What’s the next step?
That was the last thing I wanted to ask you.
I’m glad you didn’t ask me about free will.
In that case, what do you think about free will?
We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable.
In what way?
You have the sensation of free will; evolution has equipped us with this sensation. Evidently, it serves some computational function.
Will it be obvious when robots have free will?
I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t.
Now that you’ve brought up free will, I guess I should ask you about the capacity for evil, which we generally think of as being contingent upon an ability to make choices. What is evil?
It’s the belief that your greed or grievance supersedes all standard norms of society. For example, a person has something akin to a software module that says “You are hungry, therefore you have permission to act to satisfy your greed or grievance.” But you have other software modules that instruct you to follow the standard laws of society. One of them is called compassion. When you elevate your grievance above those universal norms of society, that’s evil.
So how will we know when AI is capable of committing evil?
When it becomes obvious to us that the robot consistently ignores the advice of certain software components, the ones maintaining norms of behavior that have been programmed into it or that we expect it to have acquired from past learning, while following the advice of others.


Tuesday, May 15, 2018


Alice and Bob Meet the Wall of Fire
How a new black hole paradox has set the physics world ablaze.

An illustration of a galaxy with a supermassive black hole shooting out jets of radio waves.
Illustration by NASA/JPL-Caltech
Alice and Bob, beloved characters of various thought experiments in quantum mechanics, are at a crossroads. The adventurous, rather reckless Alice jumps into a very large black hole, leaving a presumably forlorn Bob outside the event horizon — a black hole’s point of no return, beyond which nothing, not even light, can escape.
Conventionally, physicists have assumed that if the black hole is large enough, Alice won’t notice anything unusual as she crosses the horizon. In this scenario, colorfully dubbed “No Drama,” the gravitational forces won’t become extreme until she approaches a point inside the black hole called the singularity. There, the gravitational pull will be so much stronger on her feet than on her head that Alice will be “spaghettified.”
Now a new hypothesis is giving poor Alice even more drama than she bargained for. If this alternative is correct, as the unsuspecting Alice crosses the event horizon, she will encounter a massive wall of fire that will incinerate her on the spot. As unfair as this seems for Alice, the scenario would also mean that at least one of three cherished notions in theoretical physics must be wrong.
When Alice’s fiery fate was proposed this summer, it set off heated debates among physicists, many of whom were highly skeptical. “My initial reaction was, ‘You’ve got to be kidding,’” admitted Raphael Bousso, a physicist at the University of California, Berkeley. He thought a forceful counterargument would quickly emerge and put the matter to rest. Instead, after a flurry of papers debating the subject, he and his colleagues realized that this had the makings of a mighty fine paradox.
The ‘Menu From Hell’
Paradoxes in physics have a way of clarifying key issues. At the heart of this particular puzzle lies a conflict between three fundamental postulates beloved by many physicists. The first, based on the equivalence principle of general relativity, leads to the No Drama scenario: Because Alice is in free fall as she crosses the horizon, and there is no difference between free fall and inertial motion, she shouldn’t feel extreme effects of gravity. The second postulate is unitarity, the assumption, in keeping with a fundamental tenet of quantum mechanics, that information that falls into a black hole is not irretrievably lost. Lastly, there is what might be best described as “normality,” namely, that physics works as expected far away from a black hole even if it breaks down at some point within the black hole — either at the singularity or at the event horizon.
Together, these concepts make up what Bousso ruefully calls “the menu from hell.” To resolve the paradox, one of the three must be sacrificed, and nobody can agree on which one should get the ax.
Physicists don’t lightly abandon time-honored postulates. That’s why so many find the notion of a wall of fire downright noxious. “It is odious,” John Preskill of the California Institute of Technology declared earlier this month at an informal workshop organized by Stanford University’s Leonard Susskind. For two days, 50 or so physicists engaged in a spirited brainstorming session, tossing out all manner of crazy ideas to try to resolve the paradox, punctuated by the rapid-fire tap-tap-tap of equations being scrawled on a blackboard. But despite the collective angst, even the firewall’s fiercest detractors have yet to find a satisfactory solution to the conundrum.

Joseph Polchinski, a string theorist at the University of California, Santa Barbara, is the “P” in the “AMPS” team that presented a new hypothesis about black hole firewalls.
According to Joseph Polchinski, a string theorist at the University of California, Santa Barbara, the simplest solution is that the equivalence principle breaks down at the event horizon, thereby giving rise to a firewall. Polchinski is a co-author of the paper that started it all, along with Ahmed Almheiri, Donald Marolf and James Sully — a group often referred to as “AMPS.” Even Polchinski thinks the idea is a little crazy. It’s a testament to the knottiness of the problem that a firewall is the least radical potential solution.
If there is an error in the firewall argument, the mistake is not obvious. That’s the hallmark of a good scientific paradox. And it comes at a time when theorists are hungry for a new challenge: The Large Hadron Collider has failed to turn up any data hinting at exotic physics beyond the Standard Model. “In the absence of data, theorists thrive on paradox,” Polchinski quipped.
If AMPS is wrong, according to Susskind, it is wrong in a really interesting way that will push physics forward, hopefully toward a robust theory of quantum gravity. Black holes are interesting to physicists, after all, because both general relativity and quantum mechanics can apply, unlike in the rest of the universe, where objects are governed by quantum mechanics at the subatomic scale and by general relativity on the macroscale. The two “rule books” work well enough in their respective regimes, but physicists would love to combine them to shed light on anomalies like black holes and, by extension, the origins of the universe.
An Entangled Paradox
The issues are complicated and subtle — if they were simple, there would be no paradox — but a large part of the AMPS argument hinges on the notion of monogamous quantum entanglement: You can only have one kind of entanglement at a time. AMPS argues that two different kinds of entanglement are needed in order for all three postulates on the “menu from hell” to be true. Since the rules of quantum mechanics don’t allow you to have both entanglements, one of the three postulates must be sacrificed.
Entanglement — which Albert Einstein ridiculed as “spooky action at a distance” — is a well-known feature of quantum mechanics (in the thought experiment, Alice and Bob represent an entangled particle pair). When subatomic particles collide, they can become invisibly connected, though they may be physically separated. Even at a distance, they are inextricably interlinked and act like a single object. So knowledge about one partner can instantly reveal knowledge about the other. The catch is that you can only have one entanglement at a time.
Under classical physics, as Preskill explained on Caltech’s Quantum Frontiers blog, Alice and Bob can both have copies of the same newspaper, which gives them access to the same information. Sharing this bond of sorts makes them “strongly correlated.” A third person, “Carrie,” can also buy a copy of that newspaper, which gives her equal access to the information it contains, thereby forging a correlation with Bob without weakening his correlation with Alice. In fact, any number of people can buy a copy of that same newspaper and become strongly correlated with one another.

With quantum correlations, Bob can be highly entangled with Alice or with Carrie, but not both.
Illustration courtesy of John Preskill
But with quantum correlations, that is not the case. For Bob and Alice to be maximally entangled, their respective newspapers must have the same orientation, whether right side up, upside down or sideways. So long as the orientation is the same, Alice and Bob will have access to the same information. “Because there is just one way to read a classical newspaper and lots of ways to read a quantum newspaper, the quantum correlations are stronger than the classical ones,” Preskill said. That makes it impossible for Bob to become as strongly entangled with Carrie as he is with Alice without sacrificing some of his entanglement with Alice.
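For readers who want the textbook statement behind the newspaper analogy: if Bob's qubit is maximally entangled with Alice's, the joint state of the three parties must factor so that Carrie is left out entirely, and for three qubits the Coffman-Kundu-Wootters inequality quantifies the trade-off (C denotes the concurrence, a standard measure of two-qubit entanglement):

    |\Psi\rangle_{ABC} = |\Phi^{+}\rangle_{AB} \otimes |\chi\rangle_{C},
    \qquad
    C^{2}_{B|A} + C^{2}_{B|C} \le C^{2}_{B|AC} \le 1,
    \qquad
    |\Phi^{+}\rangle_{AB} = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr).

With C_{B|A} = 1, the inequality forces C_{B|C} = 0: monogamy is a theorem of quantum mechanics, not a preference.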
This is problematic because there is more than one kind of entanglement associated with a black hole, and under the AMPS hypothesis, the two come into conflict. There is an entanglement between Alice, the in-falling observer, and Bob, the outside observer, which is needed to preserve No Drama. But there is also a second entanglement that emerged from another famous paradox in physics, one related to the question of whether information is lost in a black hole. In the 1970s, Stephen Hawking realized that black holes aren’t completely black. While nothing might seem amiss to Alice as she crosses the event horizon, from Bob’s perspective, the horizon would appear to be glowing like a lump of coal — a phenomenon now known as Hawking radiation.

The entanglement of particles in the No Drama scenario: Bob, outside the event horizon (dotted lines), is entangled with Alice just inside the event horizon, at point (b). Over time Alice (b’) drifts toward the singularity (squiggly line) while Bob (b”) remains outside the black hole.
Illustration courtesy of Joseph Polchinski
This radiation results from virtual particle pairs popping out of the quantum vacuum near a black hole. Normally they would collide and annihilate into energy, but sometimes one of the pair is sucked into the black hole while the other escapes to the outside world. The mass of the black hole, which must decrease slightly to counter this effect and ensure that energy is still conserved, gradually winks out of existence. How fast it evaporates depends on the black hole’s size: The bigger it is, the more slowly it evaporates.
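The size dependence can be made quantitative. For an uncharged, non-rotating black hole of mass M, the standard textbook formulas (quoted here for reference; they are not derived in the article) are:

    T_{H} = \frac{\hbar c^{3}}{8\pi G M k_{B}},
    \qquad
    t_{\mathrm{evap}} \sim \frac{5120\,\pi\,G^{2} M^{3}}{\hbar c^{4}},

so a black hole twice as massive is half as hot and lives eight times as long; a solar-mass black hole would need roughly 10^67 years to evaporate, vastly longer than the age of the universe.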
Hawking assumed that once the radiation evaporated altogether, any information about the black hole’s contents contained in that radiation would be lost. “Not only does God play dice, but he sometimes confuses us by throwing them where they can’t be seen,” he famously declared. He and the Caltech physicist Kip Thorne even made a bet with a dubious Preskill in the 1990s about whether information is lost in a black hole. Preskill insisted that information must be conserved; Hawking and Thorne believed that information would be lost. Physicists eventually realized that it is possible to preserve the information at a cost: As the black hole evaporates, the Hawking radiation must become increasingly entangled with the area outside the event horizon. So when Bob observes that radiation, he can extract the information.
But what happens if Bob were to compare his information with Alice’s after she has passed beyond the event horizon? “That would be disastrous,” Bousso explained, “because Bob, the outside observer, is seeing the same information in the Hawking radiation, and if they could talk about it, that would be quantum Xeroxing, which is strictly forbidden in quantum mechanics.”
Physicists, led by Susskind, declared that the discrepancy between these two viewpoints of the black hole is fine so long as it is impossible for Alice and Bob to share their respective information. This concept, called complementarity, simply holds that there is no direct contradiction because no single observer can ever be both inside and outside the event horizon. If Alice crosses the event horizon, sees a star inside that radius and wants to tell Bob about it, general relativity has ways of preventing her from doing so.
Susskind’s argument that information could be recovered without resorting to quantum Xeroxing proved convincing enough that Hawking conceded his bet with Preskill in 2004, presenting the latter with a baseball encyclopedia from which, he said, “information can be retrieved at will.” But perhaps Thorne, who refused to concede, was right to be stubborn.

The Hawking radiation is the result of virtual particle pairs popping into existence near the event horizon, with one partner falling in and the other escaping. The black hole’s mass decreases as a result and is emitted as radiation.
Illustration courtesy of Joseph Polchinski
Bousso thought complementarity would come to the rescue yet again to resolve the firewall paradox. He soon realized that it was insufficient. Complementarity is a theoretical concept developed to address a specific problem, namely, reconciling the two viewpoints of observers inside and outside the event horizon. But the firewall is just the tiniest bit outside the event horizon, giving Alice and Bob the same viewpoint, so complementarity won’t resolve the paradox.
Toward Quantum Gravity
If they wish to get rid of the firewall and preserve No Drama, physicists need to find a new theoretical insight tailored to this unique situation or concede that perhaps Hawking was right all along, and information is indeed lost, meaning Preskill might have to return his encyclopedia. So it was surprising to find Preskill suggesting that his colleagues at the Stanford workshop at least reconsider the possibility of information loss. Although we don’t know how to make sense of quantum mechanics without unitarity, “that doesn’t mean it can’t be done,” he said. “Look in the mirror and ask yourself: Would I bet my life on unitarity?”
Polchinski argues persuasively that you need Alice and Bob to be entangled to preserve No Drama, and you need the Hawking radiation to be entangled with the area outside the event horizon to conserve quantum information. But you can’t have both. If you sacrifice the entanglement of the Hawking radiation with the area outside the event horizon, you lose information. If you sacrifice the entanglement of Alice and Bob, you get a firewall.
Video: David Kaplan explores black hole physics and the problem of quantum gravity in this In Theory video.
That consequence arises from the fact that entanglement between the area outside the event horizon and the Hawking radiation must increase as the black hole evaporates. When roughly half the mass has radiated away, the black hole is maximally entangled and essentially experiences a mid-life crisis. Preskill explained: “It’s as if the singularity, which we expected to find deep inside the black hole, has crept right up to the event horizon when the black hole is old.” And the result of this collision between the singularity and the event horizon is the dreaded firewall.
The mental image of a singularity migrating from deep within a black hole to the event horizon provoked at least one exasperated outburst during the Stanford workshop, a reaction Bousso finds understandable. “We should be upset,” he said. “This is a terrible blow to general relativity.”
Yet for all his skepticism about firewalls, he is thrilled to be part of the debate. “This is probably the most exciting thing that’s happened to me since I entered physics,” he said. “It’s certainly the nicest paradox that’s come my way, and I’m excited to be working on it.”
Alice’s death by firewall seems destined to join the ranks of classic thought experiments in physics. The more physicists learn about quantum gravity, the more different it appears to be from our current picture of how the universe works, forcing them to sacrifice one cherished belief after another on the altar of scientific progress. Now they must choose to sacrifice either unitarity or No Drama, or undertake a radical modification of quantum field theory. Or maybe it’s all just a horrible mistake. Any way you slice it, physicists are bound to learn something new.







How Many Genes Do Cells Need? Maybe Almost All of Them
An ambitious study in yeast shows that the health of cells depends on the highly intertwined effects of many genes, few of which can be deleted together without consequence.
https://www.quantamagazine.org/how-many-genes-do-cells-need-maybe-almost-all-of-them-20180419/ 

The activities of genes in complex organisms, including humans, may be deeply interrelated.




April 19, 2018


By knocking out genes three at a time, scientists have painstakingly deduced the web of genetic interactions that keeps a cell alive. Researchers long ago identified essential genes that yeast cells can’t live without, but new work, which appears today in Science, shows that looking only at those gives a skewed picture of what makes cells tick: Many genes that are inessential on their own become crucial as others disappear. The result implies that the true minimum number of genes that yeast — and perhaps, by extension, other complex organisms — need to survive and thrive may be surprisingly large.
About 20 years ago, Charles Boone and Brenda Andrews decided to do something slightly nuts. The yeast biologists, both professors at the University of Toronto, set out to systematically destroy or impair the genes in yeast, two by two, to get a sense of how the genes functionally connected to one another. Only about 1,000 of the 6,000 genes in the yeast genome, or roughly 17 percent, are considered essential for life: If a single one of them is missing, the organism dies. But it seemed that many other genes whose individual absence was not enough to spell the end might, if destroyed in tandem, sicken or kill the yeast. Those genes were likely to do the same kind of job in the cell, the biologists reasoned, or to be involved in the same process; losing both meant the yeast could no longer compensate.
Boone and Andrews realized they could use this idea to figure out what various genes were doing. They and their collaborators went about it deliberately, by first generating more than 20 million strains of yeast that were each missing two genes — almost all of the unique combinations of knockouts among those 6,000 genes. The researchers then scored how healthy each of the double mutant strains was and investigated how the missing genes could be related. The results let the researchers sketch a map of the shadowy web of interactions that underlie life. Two years ago, they reported the details of the map and revealed that it had already allowed researchers to discover previously unknown roles for genes.
Along the way, however, they realized that a surprising number of genes in the experiment didn’t have any obvious interactions with others. “Maybe, in some cases, deleting two genes isn’t enough,” Andrews said, reflecting on their thoughts at the time. Elena Kuzmin, a graduate student in the lab who is now a postdoc at McGill University, decided to go one step further by knocking out a third gene.
In the paper out today in Science, Kuzmin, Boone, Andrews and their collaborators at the University of Toronto, the University of Minnesota and elsewhere report that this effort has yielded a deeper and more detailed map of the cell’s inner workings. Unlike in the double mutant experiments, the researchers did not make every possible combination of mutations — there are about 36 billion different ways to knock out three genes in yeast. Instead, they looked at the pairs of genes they’d already knocked out and ranked their interactions according to severity. They took a number of those pairs, whose effects ranged from making cells grow a little slower to making them significantly impaired, and matched them up one by one with knockouts of other genes, generating about 200,000 triple mutant strains. They monitored how quickly colonies of the mutant yeast grew, and after noting which mutants were struggling, they checked databases to see what the disabled genes were thought to do.
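The scale of the screen is easy to check with a little combinatorics; this back-of-the-envelope sketch treats the genome as exactly 6,000 genes:

    from math import comb

    genes = 6_000
    print(comb(genes, 2))   # 17,997,000 possible double knockouts (~18 million)
    print(comb(genes, 3))   # 35,982,002,000 possible triple knockouts (~36 billion)

Hence the strategy of building only about 200,000 triple mutants, seeded from gene pairs that were already known to interact.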
Charles Boone and Brenda Andrews, genomics researchers at the University of Toronto, oversaw the effort to systematically delete pairs and triplets of genes from yeast cells for insights into the genes’ functions. They found that even seemingly unrelated genes can have crucially interconnected effects.
Michael Schertzberg
As the scientists built their new map, several things became clear. For one, in about two-thirds of the triple mutants that showed an additional genetic interaction, knocking out the third gene tended to intensify the problems that the double mutant had. Pairs of genes might already show some interaction with each other, Andrews said, “but it was much more severe when we deleted a third gene.” Boone says that these are likely to be situations in which the loss of a third gene is dealing a critical blow to an already faltering system.
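Screens like this one typically score an interaction as the gap between a mutant's measured fitness and what a multiplicative model of the single-mutant fitnesses predicts, with a negative gap meaning the combination is sicker than expected. A minimal sketch of that logic; the multiplicative model is standard in the yeast-interaction literature, but the specific numbers below are invented:

    # Digenic interaction score under the multiplicative model:
    #   epsilon_ab = f_ab - f_a * f_b,  where f_x is a mutant's relative fitness
    # (its colony growth rate compared with wild type).
    def digenic_score(f_a, f_b, f_ab):
        return f_ab - f_a * f_b

    # A trigenic score can be defined analogously, subtracting the expected triple
    # fitness and the contributions of the three underlying pairs (one common form):
    def trigenic_score(f_a, f_b, f_c, f_ab, f_ac, f_bc, f_abc):
        return (f_abc - f_a * f_b * f_c
                - digenic_score(f_a, f_b, f_ab) * f_c
                - digenic_score(f_a, f_c, f_ac) * f_b
                - digenic_score(f_b, f_c, f_bc) * f_a)

    # Invented example: two single mutants each grow at 90% of wild type, so the
    # expected double-mutant fitness is 0.81; a measured 0.55 signals a strong
    # negative interaction.
    print(digenic_score(0.9, 0.9, 0.55))   # -0.26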
However, a third of the interactions were completely new. And they tended to involve more disparate processes. In double mutants, the functional connections between genes tended to be tight: A gene involved in DNA repair usually had links with other genes that are also involved in DNA repair, and genes that had interactions with each other usually interacted with the same other genes. With the triple mutants, however, more far-flung tasks started to get linked together. The constellation of connected cellular tasks shifted and morphed subtly.
“Perhaps what we’re sampling here,” Andrews said, “are some functional connections in the cell that we weren’t able to see before.”
One set of new connections, for example, was between genes involved in transporting proteins and genes involved in DNA repair. On the surface, it’s difficult to see what would connect these two functions. And in fact, the researchers still don’t have a mechanistic explanation. But they are sure there is one. “Our immediate reaction was, ‘Well, that’s kind of random,’” Andrews said. “But we’ve learned over the course of doing this project that it’s not random. We just don’t understand how the cell is connected.”
Their group has just started probing that link between protein transport and DNA repair, but according to Andrews, if you look closely at those yeast cells, they do in fact show a great deal of DNA damage. The map of connections helped draw their attention to it: “There would have been no reason to look before,” she said.
Yeast geneticists were never under the impression that only essential genes mattered. But the new paper reinforces the idea that simplistic interpretations of just what is important in the yeast genome are likely to be flawed. The reality is more complicated, Boone and Andrews say. They suggest that when double and triple interactions are taken into account, the number of genes that a yeast cell truly can’t do without jumps. As their paper notes, the minimum genome needed for yeast cells to avoid a substantial defect “may nearly approach the complete set of genes encoded in the genome.”

This figure maps the interactions among various genes (represented as dots) in the yeast genome. Genes with linked effects are connected by lines; genes with more strongly correlated effects are closer together. The color of the dots corresponds to the biological processes and organelles in which the genes are involved.
Anastasia Baryshnikova, University of Toronto
Indeed, experimental efforts to devise a minimal genome for a microorganism — to pinpoint the smallest number of genes that a cell would need to survive, as a step toward making artificial genomes — have shown it to be surprisingly difficult to remove genes and still have a thriving creature.
In 2016, researchers at the J. Craig Venter Institute (JCVI) reported the creation of a synthetic minimal cell, built by winnowing the genome of the bacterium Mycoplasma mycoides down to just 473 genes, even fewer than the 525 genes of Mycoplasma genitalium, the organism with the smallest known genome that can be grown in pure culture. But negative effects from removing seemingly inessential genes were indeed a serious issue, according to Clyde A. Hutchison III, a biochemist and distinguished professor at JCVI involved in the work. “That was the main problem for choosing a gene set to design for a minimal genome,” he said.
Joel Bader, a systems biologist at Johns Hopkins University, says that the current work suggests an intriguing connection to an idea in human genetics — that a wide array of genes may be subtly influencing traits that we don’t normally associate with them. “[The] closer we are able to look, the more we are able to see that perturbing one gene or pathway has effects that propagate throughout the entire system,” he said. “The effects get weaker, but they can still be measured.”
Ignorant as science may still be about certain happenings in yeast, it’s dwarfed by our ignorance of what is going on in our own cells. Part of what makes a project like this one at the University of Toronto possible is that yeast has been heavily studied and its genes intricately annotated by several generations of biologists, to a degree not yet reached with the human genome, which is comparatively enormous, rambling and full of mysteries. Still, the researchers say that they hope that as gene-editing technology for human cells advances, these kinds of experiments can help reveal more about the workings of cells and how the genes within a genome relate to one another. “I think there are many basic rules of genome biology we have not discovered,” Andrews said.
Correction: This article was updated on April 20 to include mention of the contributions of scientists at the University of Minnesota to the new Science paper. The credit for the map of gene interactions was also corrected on April 23 to read “Anastasia Baryshnikova, University of Toronto.”