Sunday, November 24, 2019

What is Dark Energy?

What’s the difference between dark energy and dark matter? What does dark energy have to do with the cosmological constant and is the cosmological constant really the worst prediction ever?

First things first, what is dark energy? Dark energy is what causes the expansion of the universe to accelerate. It’s not only that astrophysicists think the universe expands, but that the expansion is actually getting faster. And, here’s the important thing, matter alone cannot do that. If there were only matter in the universe, the expansion would slow down. To make the expansion of the universe accelerate, it takes negative pressure, and neither normal matter nor dark matter has negative pressure – but dark energy has it.
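
The link between negative pressure and acceleration can be made explicit with the second Friedmann equation (this is standard cosmology, not something specific to this post):

```latex
\frac{\ddot{a}}{a} \;=\; -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
```

Accelerated expansion means the scale factor satisfies ä > 0, which requires ρ + 3p/c² < 0, that is, p < −ρc²/3. Ordinary matter (p ≈ 0) and radiation (p = ρc²/3) both keep the right-hand side negative and so decelerate the expansion; only something with sufficiently negative pressure can speed it up.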

We do not actually know that dark energy is really made of anything, so interpreting this pressure in the usual way, as caused by particles bumping into each other, may be misleading. This negative pressure is really just something that we write down mathematically and that fits the observations. It is similarly misleading to call dark energy “dark”, because “dark” suggests that it swallows light like, say, black holes do. But neither dark matter nor dark energy is actually dark in this sense. Instead, light just passes through them, so they are really transparent, not dark.

What’s the difference between dark energy and dark matter? Dark energy is what makes the expansion of the universe accelerate; dark matter is what makes galaxies rotate faster than expected. Dark matter does not have the funny negative pressure that is characteristic of dark energy. The two things really are different and have different effects. There are of course some physicists speculating that dark energy and dark matter might have a common origin, but we don’t know whether that is really the case.

What does dark energy have to do with the cosmological constant? The cosmological constant is the simplest type of dark energy. As the name says, it’s really just a constant, it doesn’t change in time. Most importantly this means that it doesn’t change when the universe expands. This sounds innocent, but it is a really weird property. Think about this for a moment. If you have any kind of matter or radiation in some volume of space and that volume expands, then the density of the energy and pressure will decrease just because the stuff dilutes. But dark energy doesn’t dilute! It just remains constant.
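
The dilution argument can be sketched numerically. The scaling exponents below follow from the standard continuity equation, ρ ∝ a^(−3(1+w)), for an equation of state p = wρ; the code is just an illustration, not something from the post:

```python
# How energy densities change as the universe expands.
# For an equation of state p = w * rho, the continuity equation
# gives rho ∝ a^(-3(1+w)), where a is the scale factor.

def density(rho_0, a, w):
    """Density at scale factor a, given density rho_0 at a = 1."""
    return rho_0 * a ** (-3 * (1 + w))

# Double the linear size of the universe (a: 1 -> 2):
for name, w in [("matter", 0.0), ("radiation", 1 / 3), ("Lambda", -1.0)]:
    print(f"{name:9s} density changes by factor {density(1.0, 2.0, w):.4f}")
# Matter dilutes by 1/8 (volume), radiation by 1/16 (volume plus redshift),
# and the cosmological constant (w = -1) does not dilute at all.
```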

Doesn’t this violate energy conservation? I get this question a lot. The answer is yes, and no. Yes, it does violate energy conservation in the way that we normally use the term. That’s because if the volume of space increases but the density of dark energy remains constant, then it seems that there is more energy in that volume. But energy just is not a conserved quantity in general relativity, if the volume of space can change with time. So, no, it does not violate energy conservation because in general relativity we have to use a different conservation law, that is the local conservation of all kinds of energy densities. And this conservation law is fulfilled even by dark energy. So the mathematics is all fine, don’t worry.
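
The local conservation law in question is the continuity equation of an expanding universe, and it is easy to check that a constant dark energy density satisfies it (again, standard textbook material):

```latex
\dot{\rho} + 3H\left(\rho + \frac{p}{c^{2}}\right) = 0,
\qquad
p_{\Lambda} = -\rho_{\Lambda} c^{2}
\;\Longrightarrow\;
\dot{\rho}_{\Lambda} + 3H\,(\rho_{\Lambda} - \rho_{\Lambda}) = \dot{\rho}_{\Lambda} = 0 .
```

So a dark energy density that stays constant while space expands is not an inconsistency; it is exactly what the conservation law demands once the pressure equals minus the energy density.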

The cosmological constant was famously already introduced by Einstein and then discarded again. But astrophysicists today think that it is necessary to explain observations, and that it has a small, positive value. I often hear physicists claim that if you try to calculate the value of the cosmological constant, the result is 120 orders of magnitude larger than what we observe. This, so the story has it, is supposedly the worst prediction ever.

Trouble is, that’s not true! It just isn’t a prediction. If it were a prediction, I ask you, what theory was ruled out by it being so terribly wrong? None, of course. The reason is that this constant which you can calculate – the one that is 120 orders of magnitude too large – is not observable. It doesn’t correspond to anything we can measure. The actually measurable cosmological constant is a free parameter of Einstein’s theory of general relativity that cannot be calculated with the theories we currently have.

Dark energy, then, is a generalization of the cosmological constant. This generalization allows the energy density and pressure of dark energy to change with time and maybe also with space. In this case, dark energy really is some kind of field that fills the whole universe.

What observations speak for dark energy? Dark energy in the form of a cosmological constant is one of the parameters in the concordance model of cosmology. This model is also sometimes called ΛCDM. The Λ (Lambda) in this name is the cosmological constant and CDM stands for cold dark matter.

The cosmological constant in this model is not extracted from any one observation in particular, but from a combination of observations: notably, the distribution of matter in the universe, the properties of the cosmic microwave background, and supernova redshifts. Dark energy is necessary to make the concordance model fit the data.
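
To see how the cosmological constant makes the concordance model accelerate, here is a small sketch with illustrative parameters Ωm = 0.3 and ΩΛ = 0.7 (close to, but not exactly, the measured values); the formulas are the standard ones for a flat ΛCDM universe:

```python
# Where does the expansion start accelerating in flat Lambda-CDM?
# Illustrative density parameters (close to the measured concordance values):
OMEGA_M, OMEGA_L = 0.3, 0.7

def deceleration(a):
    """Deceleration parameter q(a); q < 0 means accelerated expansion."""
    e2 = OMEGA_M * a**-3 + OMEGA_L          # (H/H0)^2 for a flat universe
    return 0.5 * OMEGA_M * a**-3 / e2 - OMEGA_L / e2

# Acceleration begins when Lambda overtakes half the (diluting) matter density:
a_acc = (OMEGA_M / (2 * OMEGA_L)) ** (1 / 3)
z_acc = 1 / a_acc - 1
print(f"q(today) = {deceleration(1.0):+.3f}")       # negative: accelerating now
print(f"onset of acceleration at z = {z_acc:.2f}")  # a redshift below one
```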

At least that’s what most physicists say. But some of them claim that the data has been wrongly analyzed and the expansion of the universe doesn’t speed up after all. Isn’t science fun? If I get around to it, I’ll tell you something about this new paper next week, so stay tuned.

Sunday, November 17, 2019

Cells That ‘Taste’ Danger Set Off Immune Responses
Taste and smell receptors in unexpected organs monitor the state of the body’s natural microbial health and raise an alarm over invading parasites.

[[Everything is much more complicated than we thought.]]

Cells with taste receptors sometimes develop inside the lungs of animals infected with influenza. By “tasting” the presence of certain pathogens, these cells may act as sentinels for the immune system.

November 15, 2019

When the immunologist De’Broski Herbert at the University of Pennsylvania looked deep inside the lungs of mice infected with influenza, he thought he was seeing things. He had found a strange-looking cell with a distinctive thatch of projections like dreadlocks atop a pear-shaped body, and it was studded with taste receptors. He recalled that it looked just like a tuft cell — a cell type most often associated with the lining of the intestines.
But what would a cell covered with taste receptors be doing in the lungs? And why did it only appear there in response to a severe bout of influenza?
Herbert wasn’t alone in his puzzlement over this mysterious and little-studied group of cells that keep turning up in unexpected places, from the thymus (a small gland in the chest where pathogen-fighting T cells mature) to the pancreas. Scientists are only just beginning to understand them, but it is gradually becoming clear that tuft cells are an important hub for the body’s defenses precisely because they can communicate with the immune system and other sets of tissues, and because their taste receptors allow them to identify threats that are still invisible to other immune cells.

De’Broski Herbert, an immunology researcher at the University of Pennsylvania, was the first to notice the emergence of tuft cells, which are rich in “taste” receptors, developing in the infected lungs of sick mice.
Researchers around the world are tracing the ancient evolutionary roots that olfactory and taste receptors (collectively called chemosensory receptors or nutrient receptors) share with the immune system. A flurry of work in recent years shows that their paths cross far more often than anyone anticipated, and that this chemosensory-immunological network plays a role not just in infection, but in cancer and at least a handful of other diseases.
This system, says Richard Locksley, an immunologist at the University of California, San Francisco, helps direct a systematic response to potential dangers throughout the body. Research focusing on the interactions of the tuft cell could offer a glimpse of how organ systems work together. He describes the prospects of what could come from the studies of these receptors and cells as “exciting,” but cautions that “we’re still in the very early days” of figuring it out.
Not Merely Taste and Smell Receptors
One of life’s fundamental challenges is to find food that’s good to eat and avoid food that isn’t. Outside of our modern world of prepackaged food on grocery store shelves, it’s a perilous task. Taking advantage of a new type of food could mean the difference between starvation and survival, or it could mean an early death from accidental self-poisoning. Chemosensory receptors help us make this distinction. They’re so essential that even single-celled bacteria such as Escherichia coli carry a type of this receptor.
Despite the near universality of these receptors and their centrality to survival, scientists didn’t discover the big family of genes that encode for olfactory receptors until 1991, with the ones for taste receptors following in 2000. (The olfactory receptor discovery brought the researchers Richard Axel and Linda Buck a Nobel Prize in 2004.) Olfactory receptors and taste receptors for bitter, sweet and umami (savory) are all part of a large family of proteins called G protein-coupled receptors (GPCRs) that are embedded in cell membranes. Although the precise details vary from receptor to receptor, when a GPCR binds to the proper molecule, it sets off a signaling cascade within the cell. For taste and olfactory receptors in the mouth and nose, this cascade causes neurons to fire and enables us to recognize everything from the rich sweetness of a chocolate chip cookie to the nose-wrinkling stench of a passing skunk.
The discoveries of these receptors were momentous, groundbreaking advances, says Jennifer Pluznick, a physiologist at Johns Hopkins University. But in her view, labeling them as olfactory and taste receptors rather than as chemosensory receptors entrenched the idea that they function specifically and exclusively in smell and taste. If scientists found signs of these receptors in cells outside the nose and mouth, it was easy to write them off as mistakes or anomalies. [[Why was it easy? Because we already know everything, so this can’t change that.]] She herself was shocked to find an olfactory receptor called Olfr78 in kidney cells, a finding that she reported in 2009.
“I think I even famously said something to my postdoc adviser, like, ‘I don’t even know that I can trust this data, you know?’” Pluznick recalled. “Olfactory receptors in the kidney? Come on.”
This wasn’t the first time these receptors had shown up in unexpected tissues. For example, in 2005, the University of Liverpool biochemist Soraya Shirazi-Beechey showed in a paper published in Biochemical Society Transactions that taste receptors could be found in the small intestine as well as the mouth. Their presence was surprising, but it made a certain sense that the intestine might use a taste receptor to monitor the food it was digesting.
But then in 2010, the laboratory of Stephen Liggett, who was then at the University of Maryland School of Medicine, reported that smooth muscle in the airways of the lungs expresses receptors for bitter taste. Moreover, they showed that these receptors were involved in a dilation response of the airways that helped to clear out obstructions.
Receptors for sweetness also turned up on the cells lining the airways. In 2012, a research group led by Herbert’s colleague Noam Cohen at the University of Pennsylvania found that the sugars coating the respiratory pathogen Pseudomonas aeruginosa activated those receptors and caused the cells to beat their hairlike cilia more rapidly, a process that can sweep away invading bacteria and prevent infections.
Meanwhile, Pluznick and her colleagues had continued to study the role of the Olfr78 receptor in the kidneys. They demonstrated in 2013 that it responded to molecules secreted by intestinal microorganisms, and that signals from that response helped to direct the kidney’s secretion of the hormone renin, which regulates blood pressure. “Other labs finding similar things in other tissues was both very encouraging and very exciting,” Pluznick said.
These studies and a torrent of others from labs around the world drove home the message that these seemingly misplaced olfactory and taste receptors serve important and often vital functions. And a theme common to many of those functions was that the chemosensory receptors often seemed to be alerting tissues to the presence and condition of microbes in the body. In hindsight, that application for the receptors made a lot of sense. For example, as Herbert notes, being able to “taste” and “smell” minute traces of pathogens gives the body more chances to respond to infections before microbes overwhelm the host’s defenses.
A Job for Tuft Cells
In researchers’ assays for chemosensory receptors in tissues throughout the body, a cell type that kept popping up was a relatively rare, largely unstudied one called a tuft cell. Tuft cells had been known to science since the mid-1950s, when microscopy studies found them in the lining of practically every organ in the body, including the gut, the lungs, the nasal passages, the pancreas and the gallbladder. The passage of a half-century, however, hadn’t led to any greater understanding of what tuft cells do. The further discovery of taste receptors on many tuft cells only deepened the mystery: Given their locations in the body, they certainly weren’t contributing to our sense of taste.
As a postdoc at Harvard University in the lab of Wendy Garrett in 2011, Michael Howitt became fascinated with tuft cells, especially those found in the intestines. “They were these really intriguing, weird cells that didn’t really have a clear function in terms of the normal physiology,” said Howitt, who is now an immunologist at Stanford University. He set out to learn the enigmatic cells’ function, and he eventually got his answer — through an unexpected discovery involving the mouse microbiome.
Because some studies had hinted at a link between taste receptors and immune function, Howitt wondered whether the receptor-studded tuft cells in the intestines might respond to the microbiome population of bacteria living in the gut. To find out, he turned to a strain of mice that other Harvard researchers had bred to lack a wide variety of bacterial pathogens.
But surprisingly, when he inspected a small sample of intestinal tissue from the mice, Howitt found that they had 18 times the number of tuft cells previously reported. When he looked more closely, he found that the mice carried more protozoa in their guts than expected — specifically, a common single-celled parasite called Tritrichomonas muris.
Howitt realized that T. muris wasn’t an accidental infection but rather a normal part of the microbiome in mice — something that neither he nor Garrett had thought much about. “We weren’t looking for protozoa,” Howitt said. “We were focused on bacteria.”
To confirm the relationship between the presence of the protozoa and the elevated numbers of tuft cells, Howitt ordered another set of similarly pathogen-free mice from a different breeding facility and fed them some of the protozoan-rich intestinal contents of the Harvard mice. The number of tuft cells in the new mice soared as the parasites colonized their intestines, too.

The numbers of tuft cells also climbed when Howitt infected mice with parasitic worms. But the increase didn’t happen in mice with defects in the biochemical pathways underpinning their taste receptors, including those on the tuft cells.
Howitt’s findings were significant because they pointed to a possible role for tuft cells in the body’s defenses — one that would fill a conspicuous hole in immunologists’ understanding. Scientists understood quite a bit about how the immune system detects bacteria and viruses in tissues. But they knew far less about how the body recognizes invasive worms, parasitic protozoa and allergens, all of which trigger so-called type 2 immune responses. Howitt and Garrett’s work suggested that tuft cells might act as sentinels, using their abundant chemosensory receptors to sniff out the presence of these intruders. If something seems wrong, the tuft cells could send signals to the immune system and other tissues to help coordinate a response.
At the same time that Howitt was working, Locksley and his postdoc Jakob von Moltke (who now runs his own lab at the University of Washington) were homing in on that finding from another direction by studying some of the chemical signals (cytokines) involved in allergies. Locksley had discovered a group of cells called group 2 innate lymphoid cells (ILC2s) that secrete these cytokines. ILC2s, he found, release cytokines after receiving a signal from a chemical called IL-25. Locksley and von Moltke used a fluorescent tag to mark intestinal cells that produced IL-25. The only cells that gave off a red glow in their experiments were tuft cells. Locksley had barely even heard of them.
“Even textbooks of [gastrointestinal] medicine had no idea what these cells did,” he said.

Andrew Vaughan, a lung researcher at the University of Pennsylvania, notes that even if the sudden emergence of tuft cells in infected tissues is part of the body’s defenses, it could still cause its own pathologies.
Courtesy of University of Pennsylvania School of Veterinary Medicine
The Howitt-Garrett and Locksley-von Moltke papers were prominently featured in Science and Nature, respectively. Together with a third paper in Nature by Philippe Jay of the Institute for Functional Genomics at the National Center for Scientific Research in France and his colleagues, these studies provided the first explanation for what tuft cells do: They recognize parasites by means of a small molecule called succinate, an end product of parasite metabolism. Once succinate binds to a tuft cell, it triggers the release of IL-25, which alerts the immune system to the problem. As part of the defensive cascade, the IL-25 also helps to initiate the production of mucus by nearby goblet cells and triggers muscle contractions to remove the parasites from the gut.
For the first time, biologists had found at least one explanation for what tuft cells do. Before this, “people just kind of ignored them or didn’t even realize that they were there,” said Megan Baldridge, a molecular microbiologist at Washington University in St. Louis.
As groundbreaking as this trio of studies was, the work focused on intestinal cells. No one knew at first whether the tuft cells appearing elsewhere throughout the body play the same anti-parasitic role. Answers soon began to roll in, and it became clear that tuft cells respond to more than succinate and do more than help repel the body’s invaders. In the thymus (a small globular outpost of the immune system nestled behind the breastbone), tuft cells help teach the immune system’s maturing T cells the difference between self proteins and non-self proteins. Kathleen DelGiorno, now a staff scientist at the Salk Institute for Biological Studies, helped to show that tuft cells can help protect against pancreatic cancer by detecting cellular injury. And in Cohen’s studies of chronic nasal and sinus infection, he discovered that recognition of bacterial pathogens such as Pseudomonas aeruginosa by receptors for bitterness on tuft cells causes neighboring cells to pump out microbe-killing chemicals.
As a lung biologist and a colleague of Herbert’s at the University of Pennsylvania, Andrew Vaughan followed these tuft-cell discoveries with interest. In many cases, tuft cells appeared to be intimately involved with the part of the immune response known as inflammation. Vaughan was studying how tissue deep in the lungs repairs itself after inflammation caused by the flu virus. After reading about some of the new findings, Vaughan began to wonder whether tuft cells might be involved in the lungs’ recovery from influenza. He and Herbert infected mice with the influenza virus and searched the lungs of those with severe symptoms for signs of tuft cells.

“Sure enough, they were all over the place,” Vaughan said. But the tuft cells only appeared after influenza infection, which made Vaughan believe that he and Herbert were “basically seeing a cell type where [it’s] not supposed to be.” Although he’s unsure exactly why this proliferation of tuft cells happens after the flu, Vaughan speculates that it might be an aspect of the body’s attempt to repair damage from the virus as part of the broader type 2 immune response.
The researchers don’t yet know what the tuft cells are doing in the lungs or what they are sensing, but Herbert believes that their ability to continually “taste” the environment for different compounds provides a key opportunity for the body to respond to even minute threats.
The tuft cell, Herbert said, is constantly sensing the metabolic products present in microenvironments within the body. “Once some of those metabolic products go out of whack … bam! Tuft cells can recognize it and make a response if something is wrong.”
Newly discovered connections between tuft cells and the immune and nervous systems provide further evidence that chemosensory receptors are multipurpose tools like Swiss Army knives, with evolved functions beyond taste and smell. It isn’t clear which function evolved first, though, or whether they all evolved in tandem, Howitt says. Just because scientists became aware of “taste” receptors on the tongue first, “that doesn’t mean that’s the order in which it evolved.”

In fact, a preliminary study in rats hints that the receptors’ immune functions may have evolved first. Two groups of immune cells known as monocytes and macrophages use formyl peptide receptors on their membranes to detect chemical cues from pathogens, and a group of Swiss scientists showed that rats use these same receptors to detect pheromone odors. Those facts suggest that at some point in history, the ancestors of rats made scent receptors out of the immunological molecules. The evolutionary history of other groups of olfactory and taste receptors has yet to be deciphered.
Whatever their history, scientists now say that a major role of these receptors is to monitor the molecules in our body, tasting and smelling them for any sign that they might be from a pathogen. Then, with help from tuft cells and other parts of the immune system, the body can fight off the invaders before they’ve gotten a foothold. But Vaughan cautioned that the sudden emergence of tuft cells in tissues like the lungs, where they are not always present, might also cause its own pathologies.
“You may not always want to have the ability to [defensively] overreact,” he said. That could be part of what goes wrong in conditions like allergies and asthma: There could be dangers “if you have too many of these cells and they’re too poised to respond to the external environment.”

Tuesday, November 12, 2019

A Chance Discovery Changes Everything We Know About Biblical Israel
[[Absolutely revolutionary. And bear in mind it was published in Haaretz.]]
The discovery of a powerful nomad kingdom in Israel's Arava desert upends our notion of the role of archaeology in understanding Ancient Israel
Erez Ben-Yosef
An excavation in the Timna Valley. Sagi Babi/Courtesy of Erez Ben-Yosef
Is the Bible true? Do at least some biblical narratives have a historical foundation? Can archaeology answer that question? To what degree can broken walls, fragments and stones tell the whole story? When it comes to the nomads of the biblical period, new findings show that the ability of archaeology to assist in constructing historical models has been extremely limited. This has dramatic implications for understanding the genesis of Ancient Israel, when – according to the biblical account – it was nomadic tribes that created a kingdom.
Until now, the scholarly literature has perceived non-sedentary societies of this region – peoples that did not live in permanent settlements – as Bronze and Iron Age “Bedouin”: simple societies that were ever on the margins of historical events, void of political power, and which cannot be identified with kingdoms or political entities wielding extra-regional influence. New archaeological discoveries in the Arava desert in southern Israel and in Jordan, however, show that this approach is mistaken and that nomads were able to forge complex political structures that differed substantively from the conventional “Bedouin model.”
But the Arava case is exceptional in world archaeology. Nomads almost never leave behind significant archaeological evidence. It follows that many archaeological “pronouncements” concerning the period of the entry into Canaan, the era of Judges and the incipient monarchy have no sound grounds. Now, at least with regard to historical processes involving mobile populations, it is time to acknowledge the limitations of archaeology’s contribution. The focal point of the discussion should revert to biblical criticism – the study of the text and its contexts – and research biases originating in simplistic archaeological pronouncements should be corrected.

Kingdom of nomads
In the stark deserts of southern Israel and western Jordan, ancient copper mines can be found on both sides of the Arava: in the Timna Valley near present-day Eilat, and in Wadi Faynan (biblical Punon), in Jordan.
On the basis of studies conducted up to the 1950s, the remains throughout the Arava were dated to the middle of the 10th century B.C.E. and associated with “King Solomon’s mines.” In 1969, following the discovery of an Egyptian shrine in the heart of the Timna Valley, the excavator, Beno Rothenberg, redated the remains of the mining in the valley to the period of the New Kingdom in Egypt, approximately 300 years before the period identified with King Solomon.
The dating for Wadi Faynan was also revised, in this case to a later period, when permanent settlement first appeared in the region under the influence of the Assyrian Empire, during the late 8th and 7th centuries B.C.E. In other words, the systematic study of the region differentiated chronologically between Timna and Faynan, and it attributed the activity in both areas to imperial rule: Egypt in the southern Arava in the 13th and 12th centuries B.C.E., and Assyria in the northern Arava in the 7th century B.C.E.
It’s understandable why scholars considered the remains to be evidence of imperial projects. More than 10,000 mine shafts were discovered in Timna alone, indicating an organized enterprise and an orderly, systematic search for subterranean deposits. Some of the shafts are more than 40 meters deep (in Faynan, more than 70 meters!), and many develop into complex tunnel systems. Scattered in the smelting camps, where the furnaces operated and raw copper was produced, are dozens of heaps of slag and other industrial waste, some more than six meters high.
The remains attest to production on a vast scale using advanced technological knowhow. The hypothesis, based on this evidence, was that only powerful empires could have managed a project on this scale, organized the work in the mines and at the smelting camps, and channeled the product to extra-regional trade.
But whose empire was behind this vast mining endeavor? New research in the region has once again revised the picture. Dozens of high-precision radiocarbon dates from intensive excavations in Faynan and Timna showed that the mines were active at the same time in both regions, from the 11th to the 9th centuries B.C.E. – that is, after the Egyptians left Timna and withdrew from the whole of Canaan, in the mid-12th century B.C.E. This was also before the intervention of the Assyrian Empire in the area in the late 8th century B.C.E. If not the Egyptians or the Assyrians, then who?
In the absence of permanent settlements across the entire area, the necessary conclusion is that this prodigious project must be attributed to the region’s nomadic population. These nomads must have created a complex social-political organization – if not an empire (as the remains had been interpreted until now), then at least a hierarchical, centralized kingdom that dominated the Arava and adjacent areas.
This is the first evidence of a strong, nomad-based kingdom. That kingdom – a tribal coalition centered around the large oases of Faynan – should be identified with biblical Edom, whose population later settled in permanent sites in southern Transjordan and in their capital, Bozrah (modern Busayra).
The new excavations at Faynan and Timna have enriched our knowledge of the Edomite society that operated the mines, providing direct evidence that the entire Arava was under a uniform leadership at least from the 11th century B.C.E.
But there is also a far broader contribution here, having to do with methodology. We know about the existence of a strong nomadic kingdom in the Arava solely because of its copper production; if the economy of the nomadic tribes of the Kingdom of Edom had been based only on commerce and agriculture, archaeologists would probably have reconstructed an “occupation gap” in the Arava region. Nomads typically do not leave behind significant archaeological remains (which is why the use of Bedouin ethnography dominates research on biblical-era nomads).
To your tents, O Israel
The basic premise in biblical archaeology – that nomads could not create complex social structures (still less an actual kingdom) – led to an absurd situation in scholarship, in which accounts in the biblical text itself that explicitly mention tent dwellings were ignored. Such accounts appear with regard to the early days of the Israelites in Canaan, but also in reference to the United Monarchy of David and Solomon and even a few generations afterward. For example, in the famous scene of the split between Judah and Israel, the northern tribes angrily returned to “their tents” (1 Kings 12:16).
So entrenched was this premise that the proposal was made, and institutionalized, that the use of the word “tent” in a text referring to events after the establishment of a kingdom was meant figuratively. But if the text is taken at face value, it is clear that in the period of the early monarchy, Israel’s population was still mixed, with some residing in fixed structures in the central cities (formerly Canaanite), and the majority dwelling in tents in the surrounding areas.
This possibility is reinforced by the fact that in the biblical accounts of later periods, use of the term “tent” was significantly reduced and its contexts were made more specific. However, the most meaningful support for the reconstruction of a central nomadic element in the Hill Country during the period of David and Solomon actually comes from archaeology.
A few years ago, Joseph Livni, an independent researcher specializing in the demography of early societies, noted the existence of a substantive problem in the archaeological reconstruction of the size of the Hill Country’s population from the 10th century B.C.E., the period of David and Solomon, to the 8th century B.C.E. On the basis of a comparison to what is known about pre-industrial societies, Livni showed that the population growth between the start and end of the period could not be explained by natural increase alone. In the Judean Hills, for example, the population increased from about 5,000 to almost 40,000.
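
The scale of the problem Livni identified can be checked with simple arithmetic. The figures of 5,000 and 40,000 and the span of roughly 200 years are taken from the text above; the benchmark of about 0.1 percent per year for sustained pre-industrial growth is a commonly cited ballpark, not a number from the article:

```python
# Implied average annual growth rate for the Judean Hills population:
# from ~5,000 (10th century B.C.E.) to ~40,000 (8th century B.C.E.),
# i.e. an 8-fold increase over roughly 200 years.

def annual_growth_rate(start, end, years):
    """Constant exponential rate r such that start * (1 + r)**years = end."""
    return (end / start) ** (1 / years) - 1

r = annual_growth_rate(5_000, 40_000, 200)
print(f"implied growth rate: {r:.2%} per year")  # roughly one percent per year

# Sustained pre-industrial growth is often estimated at ~0.1% per year,
# an order of magnitude lower -- hence the demographic puzzle.
```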
One possible solution was to attribute this improbable rate of population increase to an error in the observations and calculations regarding the first part of the period (this was also proposed as a response to the claim that “5,000 people cannot constitute a basis for any sort of kingdom”). Another solution attributed the phenomenon to new migration into the region throughout the period.
However, the simplest and most persuasive explanation is the reconstruction of a large but archaeologically invisible population that lived alongside the residents of the permanent settlements. Most of this population also shifted to a sedentary way of life, but did so gradually, in a process lasting many years.
Walls and invisibility
It’s easy to understand how conceptions that see the transition to permanent settlement as a rapid process, and the nomads of the period as a historically negligible population, took hold in biblical archaeology. After all, because nomads are archaeologically invisible, any other point of departure would have diminished archaeology’s very ability to contribute to the period’s historical reconstructions. Today, archaeology’s status as a “supreme judge” concerning the historicity of various events in the Bible’s stories is almost unchallenged. One reason for this is the prevalence of simplistic approaches to nomads. Of course, this state of affairs is not the product of a deliberate “strategy,” but of a natural process that occurs in every institutionalized sphere of research.
In 1996, Prof. Israel Finkelstein, of Tel Aviv University, published a new observation about the dating of the archaeological layers at Tel Megiddo, and in so doing, touched off one of the fiercest controversies in the history of biblical archaeology (“the debate between high and low chronology”). The intensity of the debate stems from the ostensible implications of the new observation: if correct, the period of David and Solomon turns out to be represented not by monumental stone structures but by quite meager remains, which in turn casts doubt on the historicity of the relevant biblical accounts.
The use of walls and of the monumentality of stone-built structures as a key to reconstructing the period of Ancient Israel, including the scale of power of the United Monarchy (or its very existence), is shared by both central schools of biblical archaeology: the “minimalist” school, which in principle does not consider the Bible as a basis for historical reconstruction, and which in Israel is identified with Tel Aviv University; and the “maximalist” school, which as a principle accepts the existence of a historical core in the Scriptures, and is identified with the Hebrew University of Jerusalem. In historical reconstructions, the proponents of both schools attribute crucial weight to the very existence of permanent settlements and to the form of their construction; but neither school takes into account the possibility that nomadic societies, or mixed societies consisting of a population both mobile and fixed (as was apparently the case in the period of the early monarchy), forged complex social-political structures. The absolute and simplistic reliance on the remains of stone-built structures in reconstructing the history of the biblical period can be termed the “architectural bias” in biblical archaeology.
In other words, even without the recent discoveries – such as the fortified settlement from the early 10th century B.C.E. at Khirbet Qeiyafa (which some consider a Philistine site), or the permanent sites and other construction remains that Prof. Yosef Garfinkel, from the Hebrew University, is studying in an attempt to demonstrate archaeologically the existence of the United Monarchy – there could have been a kingdom in the Hill Country during the period of David and Solomon that consisted mainly of a mobile population – and hence is archaeologically transparent. In this context it’s important to return again to the Bible itself, because, in many cases archaeological research has constructed a straw man in the form of a tremendous kingdom made up of fortified cities, stone palaces and fortifications from the Euphrates to the River of Egypt, whereas in practice the United Monarchy is described in far more modest terms (as are the neighboring kingdoms, including the Edomite). It was a tribal kingdom that was based on mechanisms that are completely different from those of city-states or empires and that are not necessarily reflected in archaeology.
For example, the description of the subjugation of Edom to Jerusalem in the period of David (2 Samuel 8:13-14) can be seen as a tax-increase agreement that was upheld with a threat of war; payment could have been collected in tents (netzivim in the Hebrew) that were erected on the trade routes. Naturally, all these elements, though they could have constituted the most important basis for Jerusalem’s economy in light of what we know today about the Edomites’ involvement in copper production, are not visible in archaeology.

As opposed to the meagerness of the archaeological findings in Judah and the Hill Country in general in the period associated with the United Monarchy (10th century B.C.E.), the impressive contemporaneous remains of the Philistine sites, notably Tell es-Safi, which is identified with the biblical Gath, stand out. The excavations uncovered a vast walled city. This contrast between the Hill Country and Philistia is cited in the research literature as evidence – if not as irrefutable proof – that if a kingdom even existed in Jerusalem, it was weak and of negligible historical impact.
However, if we take into account the different social-cultural background of the two populations – in the Hill Country and in the coastal plain – together with the substantial disparity in the character of the archaeological evidence that each of them could be expected to leave, it’s obvious that there is a basic methodological flaw here. Whereas the social-political entity that developed in the hill region bore a tribal-nomadic character, the Philistines (and the Canaanites alongside them) had an urban background. The archaeological disparity reflects this difference above all, and its translation into geopolitical power and historical influence is necessarily simplistic.
The archaeological contrast between the regions is expressed not only in the existence or absence of walls and stone structures, but also in “small findings” (objects and other small remains from everyday life), which are immeasurably richer in Philistia than those found in the Hill Country. Here, too, though, the difference could be due to a divergent way of life: the preservation of findings of this sort depends on a significant accumulation of garbage, which does not occur in temporary tent camps. The understandable bias “in favor of” Philistia in preservation has implications for the reconstructions of key processes in the development of the Hill Country society.
For example, the conclusion of Finkelstein and Prof. Benjamin Sass that the appearance of writing in Judah occurred relatively late relies principally on a comparison of the quantity of written remains between this region and Philistia. In light of what was noted above, it’s possible that writing was no less widespread in Judah, only that the archaeological “mirror” created a research distortion.
A European viewpoint
Even before the advent of systematic archaeological research of the “Holy Land,” and the search for traces of the Bible in the ground, Bible scholars tried to understand the significance of the tribal-nomadic existence described in the Scriptures. With little awareness, indeed almost uncritically, the biblical nomad was imagined in the garb of the nomadic Arab Bedouin of the deserts of the Ottoman Empire. This overlay was the work of scholars in Western Europe, primarily in late 18th-century and 19th-century Germany, where biblical criticism was then developing intensively. The parallel that was drawn between biblical nomads and latter-day Bedouin was based on scraps of information provided by adventurers returning from journeys to the East. (In the frontier regions, where Bedouin dwelled, acquaintance with the tribal feuds was crucial in order to navigate one’s way safely and to pay baksheesh to the right person, and the journeys were risk-filled forays into a wild, lawless land.) The impression gleaned was of a simple, diffuse society incapable of creating centralized, stable political bodies – notions that were projected directly onto the approach to the biblical accounts.
For example, the biblical description of David as heading a tribal coalition with a nomadic background led one of the greatest 19th-century biblical scholars, Julius Wellhausen, to view the king as no more than a “Bedouin sheikh.” The work produced by Wellhausen, who was also a scholar of Islam and the orient, emerged from a Western frame of reference, which gave rise to ingrained biases in the representation of the oriental world in research (and in other realms, such as art). By the same token, biblical research as a whole “suffered” (and is still suffering, some would say) from orientalism, as defined by Edward Said; drawing on the Bedouin to understand the nomads of the period rooted the orientalist distortion even more deeply through the Romantic concept of the “noble Bedouin,” whose exotic way of life seemed to embody ancient traditions.
In light of this background, it is manifest that with the development of biblical archaeology, whose initial practitioners were actually theologians and biblical scholars, the parallel between nomads and Bedouin became a permanent fixture of the new discipline – and it remains unchallenged even after more than a century of research. Accordingly, in order to “decide” whether David was a “Bedouin sheikh” or the leader of a powerful kingdom, archaeologists look for walls large and small and argue over their dating. An example is the debate in the literature about whether to date the monumental stone structure (“palace”) that was found in Jerusalem a few years ago to the 10th century B.C.E. (signifying that David was a strong king) or to the 9th century B.C.E. (signifying that he was weak or that there was no kingdom in Judah at all).
But could David have ruled a population that did not build fortified cities of stone and monumental structures, and most of whom lived in tents – yet who were subjects of a strong kingdom nevertheless? Is it possible that David should be understood in dissociation from the Bedouin model and seen as a tribal-nomadic leader who more closely resembles the Nabatean kings or Genghis Khan?

The archaeology of the Arava shows explicitly that this is possible and that a similar situation existed in the neighboring kingdom to the south in that same period. As noted, we know about the existence of this kingdom only because of its copper production, but its existence may attest to a broader phenomenon that is applicable to all the region’s nascent kingdoms, including Israel. Support for this approach comes from an examination of the characteristics of the period against the background of the country’s history over the past thousands of years (the “longue durée,” or long duration). It turns out that this exact time, when Ancient Israel and neighboring kingdoms sprang up, was unique in the history of the Land of Israel, witnessing exceptional conditions that made it possible for marginal groups, such as frontier nomads, to accumulate social-political power.
Around the middle of the 12th century B.C.E., a crisis extending across and beyond the region led to the collapse of the great empires of the Ancient Near East, together with the entire existing world order of the time. The southern Levant was liberated from centuries-long Egyptian hegemony and a political vacuum was created that could have been exploited by typically weak population groups (such as nomads) that had previously been distant from the urban power centers that enjoyed Egyptian protection. In addition, evidence from recent years indicates that the source of the crisis lay in climate change, and that the 12th and 11th centuries B.C.E. – the period of the “entry into Canaan” and “the Judges” – were extremely dry. This change in environmental conditions in itself could also have contributed to the rising power of the nomads, for it must have been easier for them to cope with desertification than it was for an agricultural-urban society.
The acute crisis also disrupted trade arrangements and brought about the fall of Cyprus, the region’s largest, monopolistic exporter of copper. This turn of events undoubtedly had a direct implication for social developments in the Arava, as an extraordinary opportunity arose for the tribes in the region to supplant the island and earn a fortune from the efficient exploitation of the local deposits (through unification under a centralist political structure). In the regional perspective, the very fact that the Arava became a tremendous center for the manufacture and export of copper (Arava copper has recently been found in Greece and Egypt) renders the period singular in the neighboring areas as well, for clearly all the political entities that dominated the trade routes to the north and the west profited from the commerce itself. This can explain the prosperity of Philistia with its urban centers and it is also a possible background for the accumulation of wealth and the development of a social elite in Judah.
The importance of that last point can hardly be overstated, because until today, models that seek to reconstruct the economy of Ancient Israel have been based on calculations of the output of pasture land and agriculture, whereas the introduction of copper into the equation changes the picture radically. It goes without saying that copper that passes through the hands of merchants or tax collectors is not expected to leave behind direct archaeological evidence, even in urban societies.
Breaking the fixation
For decades, biblical archaeology has been in the forefront of the attempt to understand better the period of the Bible, and as a veteran field its research activity has become institutionalized within a set mold that dictates how archaeological evidence is to be uncovered and in what way it is amenable to interpretation. Over the course of time, in light of the perception of archaeology as an “objective science” that relies on extra-biblical observations from “the field,” its central role in the discussion of the historicity of the Bible also achieved uncontested status. To this day, scholars from abutting disciplines (biblical criticism and history, for example) turn to archaeology for answers to key questions about the period.
As the scholars from these disciplines, like the members of the general public, are not familiar with the gamut of methodological difficulties that underlie archaeological interpretation, they accept unreservedly the conclusions arrived at by the “professionals.” Although archaeologists customarily argue among themselves (frequently, and usually quite emotionally) about these conclusions, all the sides behave according to the same “rules of the game” and there is almost no discussion within the discipline, still less outside it, of the interpretive framework itself.
The chance discovery in the Arava of a strong kingdom that is not based on permanent settlements means that we are back to square one, at least with regard to periods in which there were – or could have been – societies that possessed a nomadic component. In light of this, the “Bedouin model” fixation must end and consideration be given to an interpretation of nomadic societies as multidimensional, and capable of creating strong political bodies without leaving archaeological evidence of their existence.
This may indirectly support the maximalist school, which finds in the biblical account – in which nomadic tribes play a central role – an essential historical core. But in practice, more than buttressing one school or another, the new understanding of nomads undermines archaeology’s very role as the pivotal factor in the discourse about the historical truth of the Bible. Recognition of its limitations in the study of nomads tips the scales back to the side of biblical criticism and obliges archaeologists to be more modest in their pronouncements.
Erez Ben-Yosef is associate professor of archaeology at Tel Aviv University and director of the Central Timna Valley Project. This essay is based on an article he published in Vetus Testamentum, a journal devoted to Old Testament studies.