Monday, November 9, 2020

Incompetent Experts and Bad Government

Alexander William Salter 

https://www.aier.org/article/incompetent-experts-and-bad-government/

The late Richard Feynman, one of the 20th century’s eminent physicists, famously said, “Science is the belief in the ignorance of experts.” Unfortunately, the response of experts to the coronavirus pandemic has vindicated Feynman’s claim. 

Experts in the supposedly scientific fields of public health and economics have made a mess of things. Their failures would be comedic, were the consequences not so tragic. Instead of capable service for the public’s welfare, the American people have been made to suffer incompetence and malfeasance. Unless we critically examine the failure of experts, we invite similar blunders in the future. 

In terms of incompetence, it would be difficult to top this country’s public health officials. Case in point: the Centers for Disease Control and Prevention. The CDC, as it describes itself, “saves lives and protects people from health, safety, and security threats.” In truth, it has directly contributed to the opposite.

One glaring failure of the United States with regard to the pandemic is testing. We lag significantly behind other countries. This is largely the CDC’s fault. The virus genome was mapped in January, and private tests were available not long after. But the CDC ordered these private labs to halt testing while it developed its own test, which proved to be defective.

In fact, many of the testing kits assembled by the CDC were contaminated! Had the CDC not dropped the ball, we could have moved towards mass testing much sooner. This in turn would have lessened the economic harm of the several states’ stay-at-home orders. In other words, the CDC’s incompetence is directly responsible for thousands of lives lost and trillions of dollars in economic damage.

But as bad as the CDC is, it can only be charged with incompetence, not malfeasance. The latter indictment applies to America’s economic experts, especially those in charge of monetary policy. The Federal Reserve is charged with managing the country’s money supply in the service of full employment and price stability. Its mandate is strictly monetary policy: making sure markets have adequate liquidity to operate at their full potential. But ever since the 2007-8 crisis, the Fed has flirted with crossing the line between monetary and fiscal policy.

With the coronavirus pandemic, the Fed has brazenly stepped over that line. Ostensibly to support the economy, the Fed is buying corporate bonds, commercial paper, and municipal bonds. The planned size of this largesse (so far) is a cool $2 trillion. In other words, the Fed is picking winners and losers. 

It has definitively switched from referee to player in the game, and since the Fed has a monopoly on money creation, it’s the biggest player around. This is malfeasance, because monetary policymakers can and do know better. They are not supposed to promote a particular allocation of resources. Their job is to give the market what it needs to allocate resources for itself. Direct resource allocation—fiscal policy—is the exclusive prerogative of the people’s representatives, in Congress assembled. Thus, the Fed is usurping a key feature of Congress’s Constitutional authority. Since Fed officials are not even subject to the relatively weak discipline of elections, this is a particularly egregious transgression of the most basic norms of republican democracy. The essence of the Fed’s malfeasance is that it is now operating outside of the rule of law.

Ignorance and malfeasance: this is what the American people have been forced to endure from those who govern them. We have no reason to expect things will be different in the future, unless we unequivocally demand it otherwise. Experts have a role to play, but they are properly the servants of the people, not the masters. Americans are entitled to competent governance and their Constitutional rights. Rule by experts threatens both.


 

Brain Cell DNA Refolds Itself to Aid Memory Recall

 


https://www.quantamagazine.org/brain-cell-dna-refolds-itself-to-aid-memory-recall-20201102/


Researchers see structural changes in genetic material that allow memories to strengthen when remembered.

[Figure caption: In this cross-section of a mouse’s brain, the yellow structure near the top is the hippocampus. The yellow reveals the presence of engram cells that were active in both the formation and recall of a memory.]

Researchers have discovered that memory formation is linked to large-scale changes in the chromatin of neurons. More than a century ago, the zoologist Richard Semon coined the term “engram” to designate the physical trace a memory must leave in the brain, like a footprint. Since then, neuroscientists have made progress in their hunt for exactly how our brains form memories. They have learned that specific brain cells activate as we form a memory and reactivate as we remember it, strengthening the connections among the neurons involved. That change ingrains the memory and lets us keep memories we recall more often, while others fade. But the precise physical alterations within our neurons that bring about these changes have been hard to pin down — until now.

In a study published last month, researchers at the Massachusetts Institute of Technology tracked an important part of the memory-making process at the molecular scale in engram cells’ chromosomes. Neuroscientists already knew that memory formation is not instantaneous, and that the act of remembering is crucial to locking a memory into the brain. These researchers have now discovered some of the physical embodiment of that mechanism.

The MIT group worked with mice that had a fluorescent marker spliced into their genome to make their cells glow whenever they expressed the gene Arc, which is associated with memory formation. The scientists placed these mice in a novel location and trained them to fear a specific noise, then returned them to this location several days later to reactivate the memory. In the brain area called the hippocampus, the engram cells that formed and recalled this memory lit up with color, which made it easy to sort them out from other brain cells under the microscope during a postmortem examination.

Peering into the nuclei of these engram cells, the researchers spotted fine-grained changes in the architecture of the chromatin — the complex of DNA and regulatory proteins that makes up chromosomes — as the memory took shape. Parts of the chromatin reorganized in such a way that memory-associated genes could more easily spring into action to strengthen and preserve a memory. “Basically, the entire memory formation process is a priming event,” said Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and the senior author on the study.

Warming Up for a Memory

This conclusion wasn’t clear from the beginning of the experiment. Right after the memory formed, there weren’t huge differences in how the engram cells expressed their genes. But the researchers did notice some structural changes to the cells’ chromatin: Certain regions of the DNA became more accessible, shifting so that chromatin proteins and other stretches of DNA weren’t covering them up. This made the genes in that DNA more accessible to enhancers, genetic elements that can increase the activation of genes.

A few days later, the researchers spotted more alterations. The DNA had rearranged itself further so that many of these enhancers were closer to the specific genes they targeted. Nevertheless, there still weren’t dramatic changes in the way genes were expressed. “I was really depressed at that time,” said Asaf Marco, a postdoctoral associate at MIT and the lead author of the research. “It didn’t make sense at all.”

But when the mice were placed back in the environment where they originally formed this memory, a surge of gene expression followed. The structural changes to enhancers aligned with these activation patterns, leading to stronger connections between the neurons involved. That’s when Marco realized that the architectural changes to the chromatin were preparing the cells to reinforce the memories when they were recalled.

“It’s almost like warming up for a workout,” explained Steve Ramirez, an assistant professor of psychological and brain sciences at Boston University. As a memory forms, engram cells gear up to express genes that will create and strengthen connections among them. Cells can only take full advantage of these latent changes, however, when the memory is called to mind again. “They’re ready to run and enable the process of recollection,” he said. “That idea is very tantalizing.”

Over the last decade or so, several groups conducting engram research have begun to suspect that structural changes in the chromatin prime the cell to make and preserve memories. “We all thought about it, but this is a really awesome paper actually showing it,” said Iva Zovkic, an assistant professor of psychology at the University of Toronto. Moreover, the MIT group’s research solidified the concept with new kinds of evidence, separating the stages of memory formation and recall to see when these structural changes play a role. “That’s really a much more direct way of showing it than anything that’s been done before,” Zovkic said.

New technologies that can analyze genetic and cellular changes on a very small scale have brought about a renaissance in engram neuroscience over the last few years, Ramirez said. Connecting molecular changes to brain systems to behavior is newly possible. “One of the most exciting things about this paper was that it really zoomed in at this unprecedented level,” he said. “It really is magical to see this kind of resolution.”

Studying the Architecture

Still, even the most cutting-edge tools can’t track memory formation this closely in live animals, so the process can’t yet be observed directly in humans. It was studied in mice, and human cells may not follow the same patterns while encoding more complex and overlapping memories. “At this stage, it’s very hard to evaluate how much can be translated to human research,” said Shawn Liu, an assistant professor of physiology and cellular biophysics at Columbia University.

But mice and humans do have some memory circuitry in common. This study tracked cells in the hippocampus, a curved structure near the center of the brain in both species that’s vital for learning and memory. Differences between the human and mouse versions of the hippocampus temper the applicability of the study’s results, but within this new subfield, they are compelling data points. “Priming as a model to explain memory formation is very attractive,” Tsai said.

More experiments like this one can narrow down which brain cells follow these patterns, and if the patterns are the same for different kinds of memories, Ramirez said — whether those are emotional moments, physical skills or visual information that your brain is holding on to. That could bring into view a broader principle of how memories form, which could in turn point toward therapies for conditions like post-traumatic stress disorder or Alzheimer’s disease, in which memories are too persistent or not persistent enough. Understanding at the molecular level how the brain cements some memories and loses others could create opportunities to influence aging, learning and other essential processes.

There’s much more to learn about these changes in chromatin architecture. Many kinds of environmental factors, such as nutrition or stress, can alter the arrangement of DNA and proteins in chromatin, with downstream effects if the DNA is expressed and influences cell behavior. Further studies could also examine the plentiful regions of DNA that don’t direct the creation of proteins or have other obvious effects in the brain.

“We are currently ignoring 95% of the genome,” Marco said. He was taught to call it junk DNA. But like the enhancers that drive this aspect of memory encoding, the rest of these sequences may take on crucial roles as well. “Although we mapped the genome, we still don’t understand most of it,” he said.


Monday, November 2, 2020

The speed of light has never been measured!!

Veritasium - a source of fascinating and extremely clear videos explaining a wide variety of scientific topics.


https://www.youtube.com/watch?v=pTn6Ewhb27k

Sunday, November 1, 2020

Energy is not conserved, strictly speaking - Sabine Hossenfelder

http://backreaction.blogspot.com/2020/10/what-is-energy-is-energy-conserved.html


 Why save energy if physics says energy is conserved anyway? Did Einstein really say that energy is not conserved? And what does energy have to do with time? This is what we will talk about today.

I looked up “energy” in the Encyclopedia Britannica and it told me that energy is “the capacity for doing work”. Which brings up the question, what is work? The Encyclopedia says work is “a measure of energy transfer.” That seems a little circular. And as if that wasn’t enough, the Encyclopedia goes on to say, well, actually not all types of energy do work, and also energy is always associated with motion, which actually it is not because E equals m c squared. I hope you are sufficiently confused now to hear how to make sense of this.

A good illustration for energy conservation is a roller-coaster. At the starting point, it has only potential energy, that comes from gravity. As it rolls down, the gravitational potential energy is converted into kinetic energy, meaning that the roller-coaster speeds up. At the lowest point it moves the fastest. And as it climbs up again, it slows down because the kinetic energy is converted back into potential energy. If you neglect friction, energy conservation means the roller-coaster should have just exactly the right total energy to climb back up to the top where it started. In reality of course, friction cannot be neglected. This means the roller-coaster loses some energy into heating the rails or creating wind. But this energy is not destroyed. It is just no longer useful to move the roller coaster.
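
To put numbers on this (a worked example of my own; the figures are illustrative, not from the video): for a car of mass m released from rest at height h, energy conservation without friction reads

    mgh = \tfrac{1}{2}mv^2 \quad\Rightarrow\quad v = \sqrt{2gh},

so a frictionless drop from h = 30 m gives v = \sqrt{2 \cdot 9.8 \cdot 30} \approx 24 m/s at the lowest point, whatever the mass of the car.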

This simple example tells us two things right away. First, there are different types of energy, and they can be converted into each other. What is conserved is only the total of these energies. Second, some types of energy are more, others less useful to move things around.

But what really is this energy we are talking about? There was indeed a lot of confusion about this among physicists in the 19th century, but it was cleared up beautifully by Emmy Noether in 1915. Noether proved that if you have a system whose equations do not change in time, then this system has a conserved quantity. Physicists would say such a system has time-translation invariance. Energy is then by definition the quantity that is conserved in a system with time-translation invariance.

What does this mean? Time-translation invariance does not mean the system itself does not change in time. Even if the equations do not change in time, the solutions to these equations, which are what describe the system, usually will depend on time. Time-translation invariance just means that the change of the system depends only on the amount of time that passed since you started an experiment, but you could have started it at any moment and gotten the same result. Whether you fall off a roof at noon or at midnight, it will take the same time for you to hit the ground. That’s what “time-translation invariance” means.

So, energy is conserved by definition, and Noether’s theorem gives you a concrete mathematical procedure to derive what energy is. Okay, I admit it is a little more complicated, because if you have some quantity that is conserved, then any function of that quantity is also conserved. The missing ingredient is that energy times time has to have the dimension of Planck’s constant. Basically, it has to have the right units.
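
For readers who want the compact statement (standard textbook form; the derivation is not spelled out in the transcript): for a system described by a Lagrangian L(q, \dot{q}) with no explicit time dependence, the conserved quantity that Noether’s procedure delivers is the Hamiltonian

    H = \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L, \qquad \frac{dH}{dt} = -\frac{\partial L}{\partial t} = 0,

and for the familiar case L = \tfrac{1}{2}m\dot{q}^2 - V(q) this gives H = \tfrac{1}{2}m\dot{q}^2 + V(q), kinetic plus potential energy, exactly the total that stayed constant for the roller-coaster.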

I know this sounds rather abstract and mathematical, but the relevant point is just that physicists have a way to define what energy is, and it’s by definition conserved, which means it does not change in time. If you look at a simple system, for example that roller coaster, then the conserved energy is as usual the kinetic energy plus the potential energy. And if you add air molecules and the rails to the system, then their temperature would also add to the total, and so on.

But. If you look at a system with many small constituents, like air, then you will find that not all configurations of such a system are equally good at causing a macroscopic change, even if they have the same energy. A typical example would be setting fire to coal. The chemical bonds of the coal-molecules store a lot of energy. If you set fire to it, this causes a chain reaction between the coal and the oxygen in the air. In this reaction, energy from the chemical bonds is converted into kinetic energy of air molecules. This just means the air is warm, and since it’s warm, it will rise. You can use this rising air to drive a turbine, which you can then use to, say, move a vehicle or feed it into the grid to create electricity.

But suppose you don’t do anything with this energy, you just sit there and burn coal. This does not change anything about the total energy in the system, because that is conserved. The chemical energy of the coal is converted into kinetic energy of air molecules which distributes into the atmosphere. Same total energy. But now the energy is useless. You can no longer drive any turbine with it. What’s the difference?

The difference between the two cases is entropy. In the first case, you have the energy packed into the coal and entropy is small. In the latter case, you have the energy distributed in the motion of air molecules, and in this case the entropy is large.

A system that has energy in a state of low entropy is one whose energy you can use to create macroscopic changes, for example driving that turbine. Physicists call this useful energy “free energy” and say it “does work”. If the energy in a system is instead at high entropy, the energy is useless. Physicists then call it “heat” and heat cannot “do work”. The important point is that while energy is conserved, free energy is not conserved.
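
The standard thermodynamic way to formalize this (my gloss, not part of the transcript) is the Helmholtz free energy

    F = U - TS,

where U is the total internal energy, T the temperature and S the entropy. For a system held at temperature T, the second law says S tends to increase, so while U stays conserved, F can only go down, and the maximum work you can extract is bounded by the drop in F, not by U. Burning the coal leaves U unchanged but raises S, which is exactly why the energy becomes useless.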

So, if someone says you should “save energy” by switching off the light, they really mean you should “save free energy”, because if you leave the light on when you do not need it you convert useful free energy, from whatever is your source of electricity, into useless heat, that just warms the air in your room.

Okay, so we have seen that the total energy is by definition conserved, but that free energy is not conserved. Now what about the claim that Einstein actually told us energy is not conserved? That is correct. I know this sounds like a contradiction, but it’s not. Here is why.

Remember that energy is defined by Noether’s theorem, which says that energy is that quantity which is conserved if the system has a time-translation invariance, meaning, it does not really matter just at which moment you start an experiment.

But now remember that Einstein’s theory of general relativity tells us that the universe expands. And if the universe expands, it does matter when you start an experiment. An expanding universe is not time-translation invariant. So, Noether’s theorem does not apply. Now, strictly speaking this does not mean that energy is not conserved in the expanding universe, it means that energy cannot be defined. However, you can take the thing you called energy when you thought the universe did not expand and ask what happens to it now that you know the universe does expand. And the answer is, well, it’s just not conserved.

A good example for this is cosmological redshift. If you have light of a particular wavelength early in the universe, then the wave-length of this light will increase when the universe expands, because it stretches. But the wave-length of light is inversely proportional to the energy of the light. So if the wave-length of light increases with the expansion of the universe, then the energy decreases. Where does the energy go? It goes nowhere, it just is not conserved. No, it really isn’t.
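
In symbols (standard relations, consistent with the description above): a photon’s energy is

    E = \frac{hc}{\lambda}, \qquad \lambda \propto a(t) \quad\Rightarrow\quad E \propto \frac{1}{a(t)},

where a(t) is the scale factor of the expanding universe. Light emitted at redshift z arrives with \lambda_{\text{obs}} = (1+z)\,\lambda_{\text{emit}}, so its energy has dropped by a factor of 1+z, with no compensating gain anywhere else.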

However, this non-conservation of energy in Einstein’s theory of general relativity is a really tiny effect that for all practical purposes plays absolutely no role here on Earth. It is really something that becomes noticeable only if you look at the universe as a whole. So, it is technically correct that energy is not conserved in Einstein’s theory of General Relativity. But this does not affect our earthly affairs.

In summary: The total energy of a system is conserved as long as you can neglect the expansion of the universe. However, the amount of useful energy, which is what physicists call “free energy,” is in general not conserved because of entropy increase.

Thanks for watching, see you next week. And remember to switch off the light.


Tuesday, October 20, 2020

 

The Closest Black Hole to Earth May Not Actually Be a Black Hole After All

MICHELLE STARR

20 OCTOBER 2020

https://www.sciencealert.com/the-closest-black-hole-to-earth-has-been-reidentified-as-a-very-special-pair-of-stars

[[I am not interested in all the details of this case particularly. I think it illustrates the type of complex reasoning inherent in many of the claims made in astronomy and indicates that we should look at their "discoveries" and "findings" with much caution.]]


An object identified earlier this year as the closest black hole we've ever discovered may have just been demoted. After reanalysing the data, separate teams of scientists have concluded that the system in question, named HR 6819, does not include a black hole after all.

Instead, they have found that it's likely just two stars with a slightly unusual binary orbit that makes it difficult to interpret.

HR 6819, located around 1,120 light-years away, has been a bit of a puzzle for some time. Initially, it was thought to be a single star of the Be spectral type.

This is a hot, blue-white star on the main sequence whose spectrum contains a strong hydrogen emission line, interpreted as evidence of a disc of circumstellar gas ejected by the star as it rotates at an equatorial velocity of around 200 kilometres per second.

In the 1980s, astronomers noticed that the object seemed also to be exhibiting the light signature of a second type of B-type star, a B3 III star. This was found in 2003 to mean that HR 6819 was not one, but two stars, although they could not be individually resolved.

Further analysis revealed that the B3 III star, clocking in at an estimated 6 solar masses, had a roughly 40-day orbit - but the Be star, also estimated to be around 6 solar masses, seemed to be motionless. That is the puzzle: in an equal-mass binary, both stars should swing around their mutual centre of gravity with comparable velocities, not one star orbiting an apparently stationary companion.
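
The logic turns on a standard relation that the article leaves implicit: in any binary, the stars’ distances from the centre of mass, and hence their radial-velocity amplitudes K, are inversely proportional to their masses,

    M_1 a_1 = M_2 a_2 \quad\Rightarrow\quad \frac{K_1}{K_2} = \frac{M_2}{M_1}.

Equal masses would mean equal and opposite wobbles. A 6-solar-mass star on a clear 40-day orbit beside an apparently motionless Be star therefore forces a choice: either the moving star circles something massive and unseen, or it is far less massive than assumed. The rest of the article is the contest between these two readings.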

After conducting careful calculations, a team of astronomers concluded that the B3 III star could be orbiting another, third object, one that couldn't be seen. A black hole.

But, other astronomers argue, that's far from the only possibility. What if we have miscalculated the masses of the stars?

"The presence of a Be star component in the spectrum of HR 6819 suggests another interpretation of the system," wrote astronomers Douglas Gies and Luqian Wang of Georgia State University in their paper.

"It is possible that the B3 III stellar component is actually a low mass, stripped down star that is still relatively young and luminous. In this case, the Be star would be the companion in the 40-day binary instead of a black hole."

In other words, the much lower-mass B3 III star would whizz around the Be star. If this were the case, that orbital motion could be detectable in the hydrogen gas surrounding the Be star - it would move almost imperceptibly as it was tugged by the smaller star. This is what Gies and Wang went looking for.

They carefully studied the hydrogen emission in the system's spectrum, and found that the hydrogen disc around the Be star did indeed display a 40-day periodicity in both Doppler shift and emission line shape. This is consistent with the B3 III star's orbit - just as would be expected if the system were an unequal-mass binary.
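
As a sketch of how such a periodicity is extracted from noisy, unevenly sampled velocity measurements, here is a minimal Python example using the Lomb-Scargle periodogram, a standard tool for exactly this job. The data are simulated; the amplitude, noise level and sampling are placeholders, not the actual HR 6819 measurements:

    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 400, 80))   # observation epochs, in days
    period_true = 40.0                     # injected orbital period, days
    K = 5.0                                # radial-velocity semi-amplitude, km/s
    rv = K * np.sin(2 * np.pi * t / period_true) + rng.normal(0.0, 1.0, t.size)

    # Periodogram power over trial periods from 5 to 200 days
    frequency, power = LombScargle(t, rv, dy=1.0).autopower(
        minimum_frequency=1 / 200, maximum_frequency=1 / 5)
    best_period = 1 / frequency[np.argmax(power)]
    print(f"strongest periodicity: {best_period:.1f} days")  # expect ~40

The same machinery, applied to the Doppler shifts of the Be star’s hydrogen emission, is what reveals whether the disc is genuinely wobbling with the 40-day orbit.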

"This indicates," they wrote, "that HR 6819 is a binary system consisting of a massive Be star and a low-mass companion that is the stripped down remnant of a former mass donor star in a mass transfer binary."

In other words, the Be star slurped up a whole bunch of material from the B3 III star, leaving it much smaller. There is, the team noted, recent evidence that suggests many Be stars are the product of this process. According to their calculations, the Be star would be about 6 solar masses, as previously found; but the B3 III star would be between 0.4 and 0.8 solar masses.

But it gets more interesting. Gies and Wang were not the only researchers looking into this idea. In a second paper, a team of astronomers led by Julia Bodensteiner of KU Leuven in Belgium independently examined the hydrogen emission of the Be star, and performed an orbital analysis of the system. She and her colleagues came to almost exactly the same conclusion.

"We infer spectroscopic masses of 0.4 [solar masses] and 6 [solar masses] for the primary and secondary," they wrote in their paper. "This indicates that the primary might be a stripped star rather than a B-type giant. Evolutionary modelling suggests that a possible progenitor system would be a tight B+B binary system that experienced conservative mass transfer… In the framework of this interpretation, HR 6819 does not contain a BH."

And, in a third paper, currently in preprint, astronomers Kareem El-Badry and Eliot Quataert of UC Berkeley also independently analysed the system's spectra, obtaining masses of 0.47 and 6.7 solar masses for the B3 III and Be stars respectively.

"We argue that the B star is a bloated, recently stripped helium star with mass ≈ 0.5 solar masses that is currently contracting to become a hot subdwarf," El-Badry and Quataert wrote.

"The orbital motion of the Be star obviates the need for a black hole to explain the B star's motion. A stripped-star model reproduces the observed luminosity of the system, while a normal star with the B star's temperature and gravity would be more than 10 times too luminous."
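
That last step can be unpacked with two standard relations (my reconstruction of the argument, not a derivation from the paper). A spectrum fixes a star’s effective temperature T_eff and surface gravity g = GM/R^2, and the Stefan-Boltzmann law fixes its luminosity:

    L = 4\pi R^2 \sigma T_{\text{eff}}^4, \qquad R^2 = \frac{GM}{g} \quad\Rightarrow\quad L = \frac{4\pi \sigma G\, T_{\text{eff}}^4}{g}\, M.

At fixed temperature and gravity, luminosity scales linearly with mass, so a roughly 6-solar-mass star would be about twelve times brighter than a 0.5-solar-mass stripped star, which is the “more than 10 times too luminous” discrepancy quoted above.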

So the future looks grim for the black hole interpretation, although it’s not settled quite yet. Future observations could help resolve any lingering questions. But, Gies and Wang argue, the binary system could be more interesting than a black hole.

"The luminous and low-mass companion in the HR 6819 system may represent a rare and important case in which the companion has recently completed mass transfer and has yet to descend to the white dwarf cooling stage of evolution," they wrote.

So, either way, we have not yet heard the last from HR 6819.

Gies and Wang’s research was published in The Astrophysical Journal Letters. Bodensteiner et al.’s research was published in Astronomy & Astrophysics. El-Badry and Quataert’s paper has been submitted to the Monthly Notices of the Royal Astronomical Society and is available on arXiv.


Monday, October 12, 2020

Breaking the Kuzari - Reply revised Oct 12, 2020

 

Revised Oct 12, 2020 new addition at the end


About a year ago several people mentioned the book Breaking the Kuzari by Shraga Lowenstein. He tries to refute the Kuzari Principle, which plays the key role in my argument that there is enough evidence to regard the revelation at Sinai as a real historical event. The book is long and very detailed - it represents a considerable and serious effort to make the case. Nevertheless, he does not succeed. I originally planned a very detailed and complete reply, but other matters have since become more important. I hope the document linked below will be enough to convince the reader that Lowenstein’s efforts are not fruitful. Should anyone read a section of his book that seems persuasive, write to me and I will try to add a reply to this document.

https://docs.google.com/document/d/1xiM_rXbEj00FQSOrv9dOqOC-WQphDvyplvIiMgZc7WQ/edit?usp=sharing


Tuesday, September 22, 2020

The Astonishing Connectome


https://mindmatters.ai/2020/09/the-human-brain-has-given-researchers-a-big-surprise/


Gray matter isn't the big story. Connection—the connectome—is the astonishing feature of the brain.

NEWS SEPTEMBER 16, 2020

 

We hear a good deal about the gray matter (the neurons) in the brain; it is often considered synonymous with thinking. For a long time, it was believed that the white matter did not do very much, and its signals were generally excluded from brain mapping studies as noise. But that has all changed in recent years:

You might think that the brain is mostly gray matter, as it certainly looks that way, but in actuality there is more white matter in the brain. White matter is the infrastructure of the brain and includes the long nerve axons and their protective layer of fat, called myelin. Gray matter, on the other hand, is composed of the neurons themselves. Scientists have long thought that white matter didn’t play an active role in the brain, but new research has shown that this is untrue and that white matter actively affects both how the brain learns and how it dysfunctions.


From the little we understand about our hundred-billion-neuron brains, connection is everything. Thus the Human Connectome Project (HCP), launched in 2009, seeks to understand some of those connections better by surveying the brain imaging data from hundreds of people.

The challenge? The unthinkably large number of connections:

As of yet, scientists have only identified one connectome: that of a nematode (Caenorhabditis elegans). Its modest nervous system consists of 300 neurons. In the 1970s and 1980s, a team of researchers traced a map of its 7,000 interneural connections. The name for that map, as we mentioned before, is the connectome. Obviously, human beings are much more complex, with more than 100 billion neurons and 10 thousand times more connections.

The surprise? The brain is quite orderly, not the haphazard accumulations of aeons of evolution that the researchers expected:

London’s streets are a mess. Roads bend sharply, end abruptly, and meet each other at unlikely angles. Intuitively, you might think that the cells of our brain are arranged in a similarly haphazard pattern, forming connections in random places and angles. But a new study suggests that our mental circuitry is more like Manhattan’s organised grid than London’s chaotic tangle. It consists of sheets of fibres that intersect at right angles, with no diagonals anywhere to be seen.

Van Wedeen from Massachusetts General Hospital, who led the study, says that his results came as a complete shock. “I was expecting it to be a pure mess,” he says. Instead, he found a regular criss-cross pattern like the interlocking fibres of a piece of cloth. …

Wedeen’s maps may not reveal all the details about the brain’s network, but they do show how that network is structured. “If you look at brain connections in an adult human, it’s really a massive puzzle how something so complex can emerge,” says Behrens. “If we can establish any sort of organisation, we get a clue about how these things grow. If it obeys some rules, you could start to work out how it follows those rules. You have something to hang onto.”


At Medical Daily, we learn more about the project itself:

The project comprises 36 investigators, including biologists, physicians, physicists, and computer scientists, at 11 institutions across the nation. The primary centers of research are USC’s Laboratory of Neuroimaging, Massachusetts General Hospital’s Martinos Center, Washington University’s Van Essen Lab, and the University of Minnesota’s Center for Magnetic Resonance Research.

The project was carried out in two phases. During Phase I, which spanned the years 2010 through 2012, research teams designed the project’s 16 major components. During Phase II, which ranged from 2012 through this past summer, the various scientists performed the actual work of gathering data. More importantly, however, during the most recent phase investigators made their datasets publicly available at regular intervals so that scientists around the world could begin to use them in their own projects.


But the surprise factor has not abated:

What’s been discovered so far?

Many surprises. Scientists have been amazed to see that, instead of chaos, the connecting fibers are organized into an orderly 3D grid, where axons run up and down and left and right, minus any diagonals or tangles. Science magazine compares the brain’s 3D layout to New York City, with its streets running in two directions and buildings’ elevators running up and down. Strangely, in flat areas of the grid, the fibers overlap at precise 90 degree angles and weave together much like a fabric, the scientists say.

Susan Scutti, “10 FAQ About the Human Connectome Project, the Astonishing Sister of NIH’s Human Genome Project” at Medical Daily (November 24, 2015)

While there is much more to learn—the project was described to Medical Daily as “a 30,000-foot fly-by view”—new findings amount to teasing out the innumerable details of a fundamentally orderly structure. But trying to completely understand the brain would be like trying to completely understand New York City.

Help with understanding schizophrenia?

Because the brain is an unexpectedly orderly structure, close examination of the connectome can help medical researchers see what is going wrong when our brains don’t co-operate with us. For example, there is some evidence that schizophrenia has a basis in faulty brain connections:

Researchers have consistently found patterns of abnormally high or low connectivity in the brains of schizophrenic patients. So delusions, hallucinations, and depression-like symptoms might not be a result of one region acting strangely—they could instead arise from flawed communication among regions.


Brains of persons who suffer from schizophrenia also lack “small-worldness.” That’s the quality by which most brain nodes cluster into thickly connected modules, with certain hub nodes carrying long-range connections across the network. It’s somewhat like a local club where one member volunteers for the duty of communicating with the central organization and the local media. But what if communications are less frequent and more haphazard?

But while the healthy brain is a small-world network, the schizophrenic brain is measurably less so—it can still be organized into modules, but those modules aren’t as densely connected. If small-worldness helps the brain undertake a variety of processes effectively and efficiently, its lack in the schizophrenic brain could someday help to explain the disease’s symptoms.
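
The small-world idea is easy to make concrete with the Watts-Strogatz model, the standard formalization of this property. The Python sketch below uses toy graphs, not brain data, and all parameters are illustrative; it shows the signature combination of high clustering and short paths:

    import networkx as nx

    n, k = 1000, 10  # nodes, and neighbours per node in the underlying ring

    # Rewiring probability p interpolates between a regular lattice (p=0),
    # a small-world network (small p), and a random graph (p=1).
    graphs = {
        "lattice (p=0)": nx.connected_watts_strogatz_graph(n, k, p=0.0),
        "small-world (p=0.1)": nx.connected_watts_strogatz_graph(n, k, p=0.1),
        "random (p=1)": nx.connected_watts_strogatz_graph(n, k, p=1.0),
    }

    for name, g in graphs.items():
        C = nx.average_clustering(g)            # local cliquishness of modules
        L = nx.average_shortest_path_length(g)  # typical hops between nodes
        print(f"{name:22s} clustering={C:.3f}  mean path length={L:.2f}")

The small-world case keeps clustering near the lattice’s value while its path length drops toward the random graph’s: densely connected modules plus long-range shortcuts. On this picture, the schizophrenia findings above amount to the clustering side of that balance being measurably weaker.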


By itself, that finding doesn’t point to a cure. But as knowledge of unexpected patterns accumulates, a clearer picture of the problem is forming. Down the road, that ever more precise picture will suggest possible treatments.

Materialism may keep us from important insights. Computational neuroscientist Sebastian Seung, a rising star in the study of the connectome (connectomics), announced at TED, “I am my connectome.” No, he isn’t his connectome or his brain either. Or his brain and body. The unexpected orderly structure of the brain suggests a bigger picture. And it is within that structure that answers will be found.