More about Modeling and Science

This post is based on a paper that I wrote 38 years ago. The subject then was the bankruptcy of warfare models, which shows through in parts of this post. I am trying here to generalize the message to encompass all complex, synthetic models (defined below). For ease of future reference, I have created a page that includes links to this post and to the many related posts listed at the bottom.


Alfred North Whitehead said in Science and the Modern World (1925) that “the certainty of mathematics depends on its complete abstract generality” (p. 25). The attraction of mathematical models is their apparent certainty. But a model is only a representation of reality, and its fidelity to reality must be tested rather than assumed. And even if a model seems faithful to reality, its predictive power is another thing altogether. We are living in an era when models that purport to reflect reality are given credence despite their lack of predictive power. Ironically, those who dare point this out are called anti-scientific and science-deniers.

To begin at the beginning, I am concerned here with what I will call complex, synthetic models of abstract variables like GDP and “global” temperature. These are open-ended, mathematical models that estimate changes in the variable of interest by attempting to account for many contributing factors (parameters) and describing mathematically the interactions between those factors. I call such models complex because they have many “moving parts” — dozens or hundreds of sub-models — each of which is a model in itself. I call them synthetic because the estimated changes in the variables of interest depend greatly on the selection of sub-models, the depictions of their interactions, and the values assigned to the constituent parameters of the sub-models. That is to say, compared with a model of the human circulatory system or an internal combustion engine, a synthetic model of GDP or “global” temperature rests on incomplete knowledge of the components of the systems in question and the interactions among those components.

Modelers seem ignorant of or unwilling to acknowledge what should be a basic tenet of scientific inquiry: the complete dependence of logical systems (such as mathematical models) on the underlying axioms (assumptions) of those systems. Kurt Gödel addressed this dependence in his incompleteness theorems:

Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic….

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.

There is the view that Gödel’s theorems aren’t applicable in fields outside of mathematical logic. But any quest for certainty about the physical world necessarily uses mathematical logic (which includes statistics).

This doesn’t mean that the results of computational exercises are useless. It simply means that they are only as good as the assumptions that underlie them; for example, assumptions about relationships between parameters, assumptions about the values of the parameters, and assumptions as to whether the correct parameters have been chosen (and properly defined) in the first place.

There is nothing new in that, certainly nothing that requires Gödel’s theorems by way of proof. It has long been understood that a logical argument may be valid — the conclusion follows from the premises — but untrue if the premises (axioms) are untrue. But it bears repeating — and repeating.


There have been mathematical models of one kind and another for centuries, but formal models weren’t used much outside the “hard sciences” until the development of microeconomic theory in the 19th century. Then came F.W. Lanchester, who during World War I devised what became known as Lanchester’s laws (or Lanchester’s equations), which are

mathematical formulae for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two [opponents’] strengths A and B as a function of time, with the function depending only on A and B.

Lanchester’s equations are nothing more than abstractions that must be given a semblance of reality by the user, who is required to make myriad assumptions (explicit and implicit) about the factors that determine the “strengths” of A and B, including but not limited to the relative killing power of various weapons, the effectiveness of opponents’ defenses, the importance of the speed and range of movement of various weapons, intelligence about the location of enemy forces, and commanders’ decisions about when, where, and how to engage the enemy. It should be evident that the predictive value of the equations, when thus fleshed out, is limited to small, discrete engagements, such as brief bouts of aerial combat between two (or a few) opposing aircraft. Alternatively — and in practice — the values are selected so as to yield results that mirror what actually happened (in the “replication” of a historical battle) or what “should” happen (given the preferences of the analyst’s client).
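To see how heavily the equations lean on assumed coefficients, here is a minimal numerical sketch of the square-law form of Lanchester's equations. The force sizes and effectiveness coefficients are purely illustrative stand-ins for the assumptions described above, not values drawn from any real engagement:

```python
# Minimal Euler-integration sketch of Lanchester's square law:
#   dA/dt = -beta * B   (A's losses are driven by B's strength)
#   dB/dt = -alpha * A
# The coefficients alpha and beta stand in for the myriad assumptions
# (weapon effectiveness, defenses, intelligence, etc.) that the user
# must supply; small changes in them flip the predicted outcome.

def lanchester(a, b, alpha, beta, dt=0.01, steps=100_000):
    """Integrate until one side is annihilated; return the survivors."""
    for _ in range(steps):
        if a <= 0 or b <= 0:
            break
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0.0), max(b, 0.0)

# Equal strengths, equal assumed effectiveness: both sides grind down
# together toward mutual annihilation.
print(lanchester(1000, 1000, 0.01, 0.01))

# A mere 10% edge in one assumed coefficient decides the battle outright,
# leaving hundreds of survivors on the favored side.
print(lanchester(1000, 1000, 0.011, 0.01))
```

The point is not that the arithmetic is hard; it is that the answer is almost entirely a function of coefficients the analyst chooses.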

More complex (and realistic) mathematical modeling (also known as operations research) had seen limited use in industry and government before World War II. Faith in the explanatory power of mathematical models was burnished by their use during the war, when such models aided the design of more effective tactics and weapons.

But the foundation of that success wasn’t the mathematical character of the models. Rather, it was the fact that the models were tested against reality. Philip M. Morse and George E. Kimball put it well in Methods of Operations Research (1946):

Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. [Op. cit., p. 10]

A mathematical model doesn’t represent scientific knowledge unless its predictions can be and have been tested. Even then, a valid model can represent only a narrow slice of reality. The expansion of a model beyond that narrow slice requires the addition of parameters whose interactions may not be well understood and whose values will be uncertain.

Morse and Kimball accordingly urged “hemibel thinking”:

Having obtained the constants of the operations under study … we compare the value of the constants obtained in actual operations with the optimum theoretical value, if this can be computed. If the actual value is within a hemibel ( … a factor of 3) of the theoretical value, then it is extremely unlikely that any improvement in the details of the operation will result in significant improvement. [When] there is a wide gap between the actual and theoretical results … a hint as to the possible means of improvement can usually be obtained by a crude sorting of the operational data to see whether changes in personnel, equipment, or tactics produce a significant change in the constants. [Op. cit., p. 38]

Should we really attach little significance to differences of less than a hemibel? Consider a five-parameter model involving the conditional probabilities of detecting, shooting at, hitting, and killing an opponent — and surviving, in the first place, to do any of these things. Such a model can easily yield a cumulative error of a hemibel (or greater), given a twenty-five percent error in the value of each parameter. (Mathematically, 1.25⁵ ≈ 3.05; alternatively, 0.75⁵ ≈ 0.24, or about one-fourth.)
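The compounding is easy to verify. A minimal sketch, using the same parameter count and per-parameter error as above:

```python
# How a modest per-parameter error compounds across a five-parameter
# model in which the parameters multiply (e.g., a chain of conditional
# probabilities of detecting, shooting at, hitting, killing, and
# surviving to do any of these things).

n_params = 5
per_param_error = 0.25  # a 25% error in each parameter's value

high = (1 + per_param_error) ** n_params  # all errors on the high side
low = (1 - per_param_error) ** n_params   # all errors on the low side

print(round(high, 2))  # ≈ 3.05, a full hemibel (factor of 3) too high
print(round(low, 2))   # ≈ 0.24, roughly a factor of 4 too low
```

Five individually tolerable errors, stacked multiplicatively, span more than an order of magnitude between the extremes.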


What does this say about complex, synthetic models such as those of economic activity or “climate change”? Any such model rests on the modeler’s assumptions as to the parameters that should be included, their values (and the degree of uncertainty surrounding them), and the interactions among them. The interactions must be modeled based on further assumptions. And so assumptions and uncertainties — and errors — multiply apace.

But the prideful modeler (I have yet to meet a humble one) will claim validity if his model has been fine-tuned to replicate the past (e.g., changes in GDP, “global” temperature anomalies). But the model is useless unless it predicts the future consistently and with great accuracy, where “great” means accurately enough to validly represent the effects of public-policy choices (e.g., setting the federal funds rate, investing in CO2 abatement technology).

Macroeconomic Modeling: A Case Study

In macroeconomics, for example, there is Professor Ray Fair, who teaches macroeconomic theory, econometrics, and macroeconometric modeling at Yale University. He has been plying his trade at prestigious universities since 1968, first at Princeton, then at MIT, and since 1974 at Yale. Professor Fair has since 1983 been forecasting changes in real GDP — not decades ahead, just four quarters (one year) ahead. He has made 141 such forecasts, the earliest of which covers the four quarters ending with the second quarter of 1984, and the most recent of which covers the four quarters ending with the second quarter of 2019. The forecasts are based on a model that Professor Fair has revised many times over the years. (The current model is here; his forecasting track record is here.) How has he done? Here’s how:

1. The median absolute error of his forecasts is 31 percent.

2. The mean absolute error of his forecasts is 69 percent.

3. His forecasts are rather systematically biased: too high when real, four-quarter GDP growth is less than 3 percent; too low when real, four-quarter GDP growth is greater than 3 percent.

4. His forecasts have generally grown worse — not better — with time; his most recent forecasts are somewhat better, but still far from the mark.
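For readers who want to check such statistics themselves, here is a minimal sketch of how the two error measures are computed. The forecast and actual values below are hypothetical stand-ins, not Professor Fair's data:

```python
# Illustrative computation of median and mean absolute error, where the
# error of each forecast is measured as a percentage of the actual value.
# The numbers are hypothetical, NOT taken from Prof. Fair's record.

from statistics import mean, median

# Hypothetical four-quarter real-GDP-growth forecasts vs. actuals (%).
forecasts = [3.1, 2.4, 4.0, 1.8, 2.9]
actuals   = [2.5, 3.0, 2.2, 2.0, 1.0]

abs_pct_errors = [abs(f - a) / abs(a) * 100
                  for f, a in zip(forecasts, actuals)]

print(f"median absolute error: {median(abs_pct_errors):.0f}%")
print(f"mean absolute error:   {mean(abs_pct_errors):.0f}%")
```

Note how a single badly missed quarter drags the mean far above the median, which is one reason both statistics are worth reporting.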


This and the next two graphs were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing, as noted in the caption.

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

It fails a crucial test, in that it doesn’t reflect the downward trend in economic growth:

General Circulation Models (GCMs) and “Climate Change”

As for climate models, Dr. Tim Ball writes about a

fascinating 2006 paper by Essex, McKitrick, and Andresen that asked, “Does a Global Temperature Exist?” Their introduction sets the scene:

It arises from projecting a sampling of the fluctuating temperature field of the Earth onto a single number (e.g. [3], [4]) at discrete monthly or annual intervals. Proponents claim that this statistic represents a measurement of the annual global temperature to an accuracy of ±0.05 °C (see [5]). Moreover, they presume that small changes in it, up or down, have direct and unequivocal physical meaning.

The word “sampling” is important because, statistically, a sample has to be representative of a population. There is no way that a sampling of the “fluctuating temperature field of the Earth” is possible….

… The reality is we have fewer stations now than in 1960 as NASA GISS explain (Figure 1a, # of stations and 1b, Coverage)….

Not only that, but the accuracy is terrible. US stations are supposedly the best in the world, but as Anthony Watts’s project showed, only 7.9% of them achieve better than a 1°C accuracy. Look at the quote above. It says the temperature statistic is accurate to ±0.05°C. In fact, for most of the roughly 400 years since instrumental measures of temperature became available (circa 1612), they were incapable of yielding measurements better than 0.5°C.

The coverage numbers (1b) are meaningless because there are only weather stations for about 15% of the Earth’s surface. There are virtually no stations for

  • 70% of the world that is oceans,
  • 20% of the land surface that are mountains,
  • 20% of the land surface that is forest,
  • 19% of the land surface that is desert and,
  • 19% of the land surface that is grassland.

The result is we have inadequate measures in terms of the equipment and how it fits the historic record, combined with a wholly inadequate spatial sample. The inadequacies are acknowledged by the creation of the claim by NASA GISS and all promoters of anthropogenic global warming (AGW) that a station is representative of a 1200 km radius region.

I plotted an illustrative example on a map of North America (Figure 2).


Figure 2

Notice that the claim for the station in eastern North America includes the subarctic climate of southern James Bay and the subtropical climate of the Carolinas.

However, it doesn’t end there because this is only a meaningless temperature measured in a Stevenson Screen between 1.25 m and 2 m above the surface….

The Stevenson Screen data [are] inadequate for any meaningful analysis or as the basis of a mathematical computer model in this one sliver of the atmosphere, but there [are] even less [data] as you go down or up. The models create a surface grid that becomes cubes as you move up. The number of squares in the grid varies with the naïve belief that a smaller grid improves the models. It would if there [were] adequate data, but that doesn’t exist. The number of cubes is determined by the number of layers used. Again, theoretically, more layers would yield better results, but it doesn’t matter because there are virtually no spatial or temporal data….

So far, I have talked about the inadequacy of the temperature measurements in light of the two- and three-dimensional complexities of the atmosphere and oceans. However, one source identifies the most important variables for the models used as the basis for energy and environmental policies across the world.

Sophisticated models, like Coupled General Circulation Models, combine many processes to portray the entire climate system. The most important components of these models are the atmosphere (including air temperature, moisture and precipitation levels, and storms); the oceans (measurements such as ocean temperature, salinity levels, and circulation patterns); terrestrial processes (including carbon absorption, forests, and storage of soil moisture); and the cryosphere (both sea ice and glaciers on land). A successful climate model must not only accurately represent all of these individual components, but also show how they interact with each other.

The last line is critical and yet impossible. The temperature data [are] the best we have, and yet [they are] completely inadequate in every way. Pick any of the variables listed, and you find there [are] virtually no data. The answer to the question, “what are we really measuring,” is virtually nothing, and what we measure is not relevant to anything related to the dynamics of the atmosphere or oceans.

I am especially struck by Dr. Ball’s observation that the surface-temperature record applies to about 15 percent of Earth’s surface. Not only that, but as suggested by Dr. Ball’s figure 2, that 15 percent is poorly sampled.
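A rough arithmetic check puts the 1200-kilometer claim in perspective. Taking Earth's surface area as roughly 510 million square kilometers (a standard figure), one can ask how much area a single station is being asked to represent:

```python
# Back-of-the-envelope check on the "1200 km radius" claim: even if each
# station really did represent a circle of that radius, how many such
# ideally placed, non-overlapping circles would it take to tile Earth?
# Earth's surface area (~510 million km^2) is a standard figure; the
# rest follows from the claim itself.

import math

earth_surface_km2 = 510e6             # ≈ 510 million km^2
station_area_km2 = math.pi * 1200**2  # area "represented" by one station

share_per_station = station_area_km2 / earth_surface_km2
stations_to_cover = earth_surface_km2 / station_area_km2

print(f"one station 'covers' {share_per_station:.1%} of Earth's surface")
print(f"≈ {stations_to_cover:.0f} ideally placed stations to tile it")
```

Each station is thus being asked to stand for nearly one percent of the planet's surface, a region that (as Dr. Ball's James Bay-to-Carolinas example shows) can span radically different climates.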

And yet the proponents of CO2-forced “climate change” rely heavily on that flawed temperature record because it is the only one that goes back far enough to “prove” the modelers’ underlying assumption, namely, that it is anthropogenic CO2 emissions which have caused the rise in “global” temperatures. See, for example, Dr. Roy Spencer’s “The Faith Component of Global Warming Predictions”, wherein Dr. Spencer points out that the modelers

have only demonstrated what they assumed from the outset. It is circular reasoning. A tautology. Evidence that nature also causes global energy imbalances is abundant: e.g., the strong warming before the 1940s; the Little Ice Age; the Medieval Warm Period. This is why many climate scientists try to purge these events from the historical record, to make it look like only humans can cause climate change.

In fact the models deal in temperature anomalies, that is, departures from a 30-year average. The anomalies — which range from -1.41 to +1.68 degrees C — are so small relative to the errors and uncertainties inherent in the compilation, estimation, and model-driven adjustments of the temperature record, that they must fail Morse and Kimball’s hemibel test. (The model-driven adjustments are, as Dr. Spencer suggests, downward adjustments of historical temperature data for consistency with the models which “prove” that CO2 emissions induce a certain rate of warming. More circular reasoning.)

They also fail, and fail miserably, the acid test of predicting future temperatures with accuracy. This failure has been pointed out many times. Dr. John Christy, for example, has testified to that effect before Congress (e.g., this briefing). Defenders of the “climate change” faith have attacked Dr. Christy’s methods and findings, but the rebuttals to one such attack merely underscore the validity of Dr. Christy’s work.

This is from “Manufacturing Alarm: Dana Nuccitelli’s Critique of John Christy’s Climate Science Testimony”, by Marlo Lewis Jr.:

Christy’s testimony argues that the state-of-the-art models informing agency analyses of climate change “have a strong tendency to over-warm the atmosphere relative to actual observations.” To illustrate the point, Christy provides a chart comparing 102 climate model simulations of temperature change in the global mid-troposphere to observations from two independent satellite datasets and four independent weather balloon data sets….

To sum up, Christy presents an honest, apples-to-apples comparison of modeled and observed temperatures in the bulk atmosphere (0-50,000 feet). Climate models significantly overshoot observations in the lower troposphere, not just in the layer above it. Christy is not “manufacturing doubt” about the accuracy of climate models. Rather, Nuccitelli is manufacturing alarm by denying the models’ growing inconsistency with the real world.

And this is from Christopher Monckton of Brenchley’s “The Guardian’s Dana Nuccitelli Uses Pseudo-Science to Libel Dr. John Christy”:

One Dana Nuccitelli, a co-author of the 2013 paper that found 0.5% consensus to the effect that recent global warming was mostly manmade and reported it as 97.1%, leading Queensland police to inform a Brisbane citizen who had complained to them that a “deception” had been perpetrated, has published an article in the British newspaper The Guardian making numerous inaccurate assertions calculated to libel Dr John Christy of the University of Alabama in connection with his now-famous chart showing the ever-growing discrepancy between models’ wild predictions and the slow, harmless, unexciting rise in global temperature since 1979….

… In fact, as Mr Nuccitelli knows full well (for his own data file of 11,944 climate science papers shows it), the “consensus” is only 0.5%. But that is by the bye: the main point here is that it is the trends on the predictions compared with those on the observational data that matter, and, on all 73 models, the trends are higher than those on the real-world data….

[T]he temperature profile [of the oceans] at different strata shows little or no warming at the surface and an increasing warming rate with depth, raising the possibility that, contrary to Mr Nuccitelli’s theory that the atmosphere is warming the ocean, the ocean is instead being warmed from below, perhaps by some increase in the largely unmonitored magmatic intrusions into the abyssal strata from the 3.5 million subsea volcanoes and vents most of which Man has never visited or studied, particularly at the mid-ocean tectonic divergence boundaries, notably the highly active boundary in the eastern equatorial Pacific. [That possibility is among many which aren’t considered by GCMs.]

How good a job are the models really doing in their attempts to predict global temperatures? Here are a few more examples….

Mr Nuccitelli’s scientifically illiterate attempts to challenge Dr Christy’s graph are accordingly misconceived, inaccurate and misleading.

I have omitted the bulk of both pieces because this post is already longer than needed to make my point. I urge you to follow the links and read the pieces for yourself.

Finally, I must quote a brief but telling passage from a post by Pat Frank, “Why Roy Spencer’s Criticism is Wrong”:

[H]ere’s NASA on clouds and resolution: “A doubling in atmospheric carbon dioxide (CO2), predicted to take place in the next 50 to 100 years, is expected to change the radiation balance at the surface by only about 2 percent. … If a 2 percent change is that important, then a climate model to be useful must be accurate to something like 0.25%. Thus today’s models must be improved by about a hundredfold in accuracy, a very challenging task.”

Frank’s very long post substantiates what I say here about the errors and uncertainties in GCMs — and the multiplicative effect of those errors and uncertainties. I urge you to read it. It is telling that “climate skeptics” like Spencer and Frank will argue openly, whereas “true believers” work clandestinely to present a united front to the public. It’s science vs. anti-science.


In the end, complex, synthetic models can be defended only by resorting to the claim that they are “scientific”, which is a farcical claim when models consistently fail to yield accurate predictions. It is a claim based on a need to believe in the models — or, rather, what they purport to prove. It is, in other words, false certainty, which is the enemy of truth.

Newton said it best:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

Just as Newton’s self-doubt was not an attack on science, neither have I essayed an attack on science or modeling — only on the abuses of both that are too often found in the company of complex, synthetic models. It is too easily forgotten that the practice of science (of which modeling is a tool) is in fact an art, not a science. With this art we may portray vividly the few pebbles and shells of truth that we have grasped; we can but vaguely sketch the ocean of truth whose horizons are beyond our reach.

Related pages and posts:

Climate Change
Modeling and Science

Modeling Is Not Science
Modeling, Science, and Physics Envy
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
The Limits of Science (II)
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Rationalism, Empiricism, and Scientific Knowledge
Ty Cobb and the State of Science
Is Science Self-Correcting?
Mathematical Economics
Words Fail Us
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
The Fragility of Knowledge
Global-Warming Hype
Hurricane Hysteria
Deduction, Induction, and Knowledge
A (Long) Footnote about Science
The Balderdash Chronicles
Analytical and Scientific Arrogance
The Pretence of Knowledge
Wildfires and “Climate Change”
Why I Don’t Believe in “Climate Change”
Modeling Is Not Science: Another Demonstration
Ad-Hoc Hypothesizing and Data Mining
Analysis vs. Reality

Expressing Certainty (or Uncertainty)

I have waged war on the misuse of probability for a long time. As I say in the post at the link:

A probability is a statement about a very large number of like events, each of which has an unpredictable (random) outcome. Probability, properly understood, says nothing about the outcome of an individual event. It certainly says nothing about what will happen next.

From a later post:

It is a logical fallacy to ascribe a probability to a single event. A probability represents the observed or computed average value of a very large number of like events. A single event cannot possess that average value. A single event has a finite number of discrete and mutually exclusive outcomes. Those outcomes will not “average out” — only one of them will obtain, like Schrödinger’s cat.

To say that the outcomes will average out — which is what a probability implies — is tantamount to saying that Jack Sprat and his wife were neither skinny nor fat because their body-mass indices averaged to a normal value. It is tantamount to saying that one can’t drown by walking across a pond with an average depth of 1 foot, when that average conceals the existence of a 100-foot-deep hole.

But what about hedge words that imply “probability” without saying it: certain, uncertain, likely, unlikely, confident, not confident, sure, unsure, and the like? I admit to using such words, which are common in discussions about possible future events and the causes of past events. But what do I, and presumably others, mean by them?

Hedge words are statements about the validity of hypotheses about phenomena or causal relationships. There are two ways of looking at such hypotheses, frequentist and Bayesian:

While for the frequentist, a hypothesis is a proposition (which must be either true or false) so that the frequentist probability of a hypothesis is either 0 or 1, in Bayesian statistics, the probability that can be assigned to a hypothesis can also be in a range from 0 to 1 if the truth value is uncertain.

Further, as discussed above, there is no such thing as the probability of a single event. For example, the Mafia either did or didn’t have JFK killed, and that’s all there is to say about that. One might claim to be “certain” that the Mafia had JFK killed, but one can be certain only if one is in possession of incontrovertible evidence to that effect. But that certainty isn’t a probability, which can refer only to the frequency with which many events of the same kind have occurred and can be expected to occur.

A Bayesian view about the “probability” of the Mafia having JFK killed is nonsensical. Even if a Bayesian is certain, based on incontrovertible evidence, that the Mafia had JFK killed, there is no probability attached to the occurrence. It simply happened, and that’s that.

Lacking such evidence, a Bayesian (or an unwitting “man on the street”) might say “I believe there’s a 50-50 chance that the Mafia had JFK killed”. Does that mean (1) there’s some evidence to support the hypothesis, but it isn’t conclusive, or (2) that the speaker would bet X amount of money, at even odds, that if incontrovertible evidence ever surfaces it will prove that the Mafia had JFK killed? In the first case, attaching a 50-percent probability to the hypothesis is nonsensical; how does the existence of some evidence translate into a statement about the probability of a one-off event that either occurred or didn’t occur? In the second case, the speaker’s willingness to bet on the occurrence of an event at certain odds tells us something about the speaker’s preference for risk-taking but nothing at all about whether or not the event occurred.

What about the familiar use of “probability” (a.k.a., “chance”) in weather forecasts? Here’s my take:

[W]hen you read or hear a statement like “the probability of rain tomorrow is 80 percent”, you should mentally translate it into language like this:

X guesses that Y will (or will not) happen at time Z, and the “probability” that he attaches to his guess indicates his degree of confidence in it.

The guess may be well-informed by systematic observation of relevant events, but it remains a guess, as most Americans have learned and relearned over the years when rain has failed to materialize or has spoiled an outdoor event that was supposed to be rain-free.

Further, it is true that some things happen more often than other things but

only one thing will happen at a given time and place.

[A] clever analyst could concoct a probability of a person’s being shot by writing an equation that includes such variables as his size, the speed with which he walks, the number of shooters, their rate of fire, and the distance across the shooting range.

What would the probability estimate mean? It would mean that if a very large number of persons walked across the shooting range under identical conditions, approximately S percent of them would be shot. But the clever analyst cannot specify which of the walkers would be among the S percent.

Here’s another way to look at it. One person wearing head-to-toe bullet-proof armor could walk across the range a large number of times and expect to be hit by a bullet on S percent of his crossings. But the hardy soul wouldn’t know on which of the crossings he would be hit.

Suppose the hardy soul became a foolhardy one and made a bet that he could cross the range without being hit. Further, suppose that S is estimated to be 0.75; that is, 75 percent of a string of walkers would be hit, or a single (bullet-proof) walker would be hit on 75 percent of his crossings. Knowing the value of S, the foolhardy fellow offers to pay out $1 million (from himself or his estate) if he is shot, and to claim $3 million if he crosses the range unscathed — one time. Given the odds, that’s a fair bet, isn’t it?

No it isn’t….

The bet should be understood for what it is, an either-or proposition. The foolhardy walker will either lose $1 million or win $3 million. The bettor (or bettors) who take the other side of the bet will either win $1 million or lose $3 million.

As anyone with elementary reading and reasoning skills should be able to tell, those possible outcomes are not the same as the outcome that would obtain (approximately) if the foolhardy fellow could walk across the shooting range 1,000 times. If he could, he would come very close to breaking even, as would those who bet against him.
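The distinction between a long-run frequency and a single crossing can be made concrete with a short simulation, using the S = 0.75 from the example above:

```python
# Simulating the bullet-proof walker: each crossing is hit with
# probability S = 0.75. Over many crossings the hit fraction settles
# near S, but any single crossing yields only "hit" or "not hit";
# no crossing yields three-quarters of a hit.

import random

random.seed(42)  # fixed seed so the run is reproducible
S = 0.75
crossings = [random.random() < S for _ in range(100_000)]

hit_fraction = sum(crossings) / len(crossings)
print(f"fraction of crossings on which he was hit: {hit_fraction:.3f}")
print("outcome of a single crossing:",
      "hit" if crossings[0] else "not hit")
```

The aggregate converges on S; the individual crossing never does. That is the whole of the argument against ascribing a probability to a single event.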

I omitted from the preceding quotation a sentence in which I used “more likely”:

If a person walks across a shooting range where live ammunition is being used, he is more likely to be killed than if he walks across the same patch of ground when no one is shooting.

Inasmuch as “more likely” is a hedge word, I seem to have contradicted my own position about the probability of a single event, such as being shot while walking across a shooting range. In that context, however, “more likely” means that something could happen (getting shot) that wouldn’t happen in a different situation. That’s not really a probabilistic statement. It’s a statement about opportunity; thus:

  • Crossing a firing range generates many opportunities to be shot.
  • Going into a crime-ridden neighborhood certainly generates some opportunities to be shot, but their number and frequency depend on many variables: which neighborhood, where in the neighborhood, the time of day, who else is present, etc.
  • Sitting by oneself, unarmed, in a heavy-gauge steel enclosure generates no opportunities to be shot.

The “chance” of being shot is, in turn, “more likely”, “likely”, and “unlikely” — or a similar ordinal pattern that uses “certain”, “confident”, “sure”, etc. But the ordinal pattern, in any case, can never (logically) include statements like “completely certain”, “completely confident”, etc.

An ordinal pattern is logically valid only if it conveys the relative number of opportunities to attain a given kind of outcome — being shot, in the example under discussion.

Ordinal statements about different types of outcome are meaningless. Consider, for example, the claim that the probability that the Mafia had JFK killed is higher than (or lower than or the same as) the probability that the Moon is made of green cheese. First, and to repeat myself for the nth time, the phenomena in question are one-of-a-kind and do not lend themselves to statements about their probability, nor even about the frequency of opportunities for the occurrence of the phenomena. Second, the use of “probability” is just a hifalutin way of saying that the Mafia could have had a hand in the killing of JFK, whereas it is known (based on ample scientific evidence, including eye-witness accounts) that the Moon isn’t made of green cheese. So the ordinal statement is just a cheap rhetorical trick that is meant to (somehow) support the subjective belief that the Mafia “must” have had a hand in the killing of JFK.

Similarly, it is meaningless to say that the “average person” is “more certain” of being killed in an auto accident than in a plane crash, even though one may have many opportunities to die in an auto accident or a plane crash. There is no “average person”; the incidence of auto travel and plane travel varies enormously from person to person; and the conditions that conduce to fatalities in auto travel and plane travel vary just as enormously.

Other examples abound. Be on the lookout for them, and avoid emulating them.

Regarding Napoleon Chagnon

Napoleon Alphonseau Chagnon (1938-2019) was a noted anthropologist to whom the label “controversial” was applied. Some of the story is told in this surprisingly objective New York Times article about Chagnon’s life and death. Matthew Blackwell gives a more complete account in “The Dangerous Life of an Anthropologist” (Quillette, October 5, 2019).

Chagnon’s sin was his finding that “nature” trumped “nurture”, as demonstrated by his decades-long ethnographic field work among the Yanomamö, indigenous Amazonians who live in the border area between Venezuela and Brazil. As Blackwell tells it,

Chagnon found that up to 30 percent of all Yanomamö males died a violent death. Warfare and violence were common, and duelling was a ritual practice, in which two men would take turns flogging each other over the head with a club, until one of the combatants succumbed. Chagnon was adamant that the primary causes of violence among the Yanomamö were revenge killings and women. The latter may not seem surprising to anyone aware of the ubiquity of ruthless male sexual competition in the animal kingdom, but anthropologists generally believed that human violence found its genesis in more immediate matters, such as disputes over resources. When Chagnon asked the Yanomamö shaman Dedeheiwa to explain the cause of violence, he replied, “Don’t ask such stupid questions! Women! Women! Women! Women! Women!” Such fights erupted over sexual jealousy, sexual impropriety, rape, and attempts at seduction, kidnap and failure to deliver a promised girl….

Chagnon would make more than 20 fieldwork visits to the Amazon, and in 1968 he published Yanomamö: The Fierce People, which became an instant international bestseller. The book immediately ignited controversy within the field of anthropology. Although it commanded immense respect and became the most commonly taught book in introductory anthropology courses, the very subtitle of the book annoyed those anthropologists, who preferred to give their monographs titles like The Gentle Tasaday, The Gentle People, The Harmless People, The Peaceful People, Never in Anger, and The Semai: A Nonviolent People of Malaya. The stubborn tendency within the discipline was to paint an unrealistic façade over such cultures—although 61 percent of Waorani men met a violent death, an anthropologist nevertheless described this Amazonian people as a “tribe where harmony rules,” on account of an “ethos that emphasized peacefulness.”…

These anthropologists were made more squeamish still by Chagnon’s discovery that the unokai of the Yanomamö—men who had killed and assumed a ceremonial title—had about three times more children than others, owing to having twice as many wives. Drawing on this observation in his 1988 Science article “Life Histories, Blood Revenge, and Warfare in a Tribal Population,” Chagnon suggested that men who had demonstrated success at a cultural phenomenon, the military prowess of revenge killings, were held in higher esteem and considered more attractive mates. In some quarters outside of anthropology, Chagnon’s theory came as no surprise, but its implication for anthropology could be profound. In The Better Angels of Our Nature, Steven Pinker points out that if violent men turn out to be more evolutionarily fit, “This arithmetic, if it persisted over many generations, would favour a genetic tendency to be willing and able to kill.”…

Chagnon considered his most formidable critic to be the eminent anthropologist Marvin Harris. Harris had been crowned the unofficial historian of the field following the publication of his all-encompassing work The Rise of Anthropological Theory. He was the founder of the highly influential materialist school of anthropology, and argued that ethnographers should first seek material explanations for human behavior before considering alternatives, as “human social life is a response to the practical problems of earthly existence.” Harris held that the structure and “superstructure” of a society are largely epiphenomena of its “infrastructure,” meaning that the economic and social organization, beliefs, values, ideology, and symbolism of a culture evolve as a result of changes in the material circumstances of a particular society, and that apparently quaint cultural practices tend to reflect man’s relationship to his environment. For instance, prohibition on beef consumption among Hindus in India is not primarily due to religious injunctions. These religious beliefs are themselves epiphenomena to the real reasons: that cows are more valuable for pulling plows and producing fertilizers and dung for burning. Cultural materialism places an emphasis on “-etic” over “-emic” explanations, ignoring the opinions of people within a society and trying to uncover the hidden reality behind those opinions.

Naturally, when the Yanomamö explained that warfare and fights were caused by women and blood feuds, Harris sought a material explanation that would draw upon immediate survival concerns. Chagnon’s data clearly confirmed that the larger a village, the more likely fighting, violence, and warfare were to occur. In his book Good to Eat: Riddles of Food and Culture Harris argued that fighting occurs more often in larger Yanomamö villages because these villages deplete the local game levels in the rainforest faster than smaller villages, leaving the men no option but to fight with each other or to attack outside groups for meat to fulfil their protein macronutrient needs. When Chagnon put Harris’s materialist theory to the Yanomamö they laughed and replied, “Even though we like meat, we like women a whole lot more.” Chagnon believed that smaller villages avoided violence because they were composed of tighter kin groups—those communities had just two or three extended families and had developed more stable systems of borrowing wives from each other.

There’s more:

Survival International … has long promoted the Rousseauian image of a traditional people who need to be preserved in all their natural wonder from the ravages of the modern world. Survival International does not welcome anthropological findings that complicate this harmonious picture, and Chagnon had wandered straight into their line of fire….

For years, Survival International’s Terence Turner had been assisting a self-described journalist, Patrick Tierney, as the latter investigated Chagnon for his book, Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon. In 2000, as Tierney’s book was being readied for publication, Turner and his colleague Leslie Sponsel wrote to the president of the American Anthropological Association (AAA) and informed her that an unprecedented crisis was about to engulf the field of anthropology. This, they warned, would be a scandal that, “in its scale, ramifications, and sheer criminality and corruption, is unparalleled in the history of Anthropology.” Tierney alleged that Chagnon and Neel had spread measles among the Yanomamö in 1968 by using compromised vaccines, and that Chagnon’s documentaries depicting Yanomamö violence were faked by using Yanomamö to act out dangerous scenes, in which further lives were lost. Chagnon was blamed, inter alia, for inciting violence among the Yanomamö, cooking his data, starting wars, and aiding corrupt politicians. Neel was also accused of withholding vaccines from certain populations of natives as part of an experiment. The media were not slow to pick up on Tierney’s allegations, and the Guardian ran an article under an inflammatory headline accusing Neel and Chagnon of eugenics: “Scientists ‘killed Amazon Indians to test race theory.’” Turner claimed that Neel believed in a gene for “leadership” and that the human genetic stock could be upgraded by wiping out mediocre people. “The political implication of this fascistic eugenics,” Turner told the Guardian, “is clearly that society should be reorganised into small breeding isolates in which genetically superior males could emerge into dominance, eliminating or subordinating the male losers.”

By the end of 2000, the American Anthropological Association announced a hearing on Tierney’s book. This was not entirely reassuring news to Chagnon, given their history with anthropologists who failed to toe the party line….

… Although the [AAA] taskforce [appointed to investigate Tierney’s accusations] was not an “investigation” concerned with any particular person, for all intents and purposes, it blamed Chagnon for portraying the Yanomamö in a way that was harmful and held him responsible for prioritizing his research over their interests.

Nonetheless, the most serious claims Tierney made in Darkness in El Dorado collapsed like a house of cards. Elected Yanomamö leaders issued a statement in 2000 stating that Chagnon had arrived after the measles epidemic and saved lives, “Dr. Chagnon—known to us as Shaki—came into our communities with some physicians and he vaccinated us against the epidemic disease which was killing us. Thanks to this, hundreds of us survived and we are very thankful to Dr. Chagnon and his collaborators for help.” Investigations by the American Society of Human Genetics and the International Genetic Epidemiology Society both found Tierney’s claims regarding the measles outbreak to be unfounded. The Society of Visual Anthropology reviewed the so-called faked documentaries, and determined that these allegations were also false. Then an independent preliminary report released by a team of anthropologists dissected Tierney’s book claim by claim, concluding that all of Tierney’s most important assertions were either deliberately fraudulent or, at the very least, misleading. The University of Michigan reached the same conclusion. “We are satisfied,” its Provost stated, “that Dr. Neel and Dr. Chagnon, both among the most distinguished scientists in their respective fields, acted with integrity in conducting their research… The serious factual errors we have found call into question the accuracy of the entire book [Darkness in El Dorado] as well as the interpretations of its author.” Academic journal articles began to proliferate, detailing the mis-inquiry and flawed conclusions of the 2002 taskforce. By 2005, only three years later, the American Anthropological Association voted to withdraw the 2002 taskforce report, re-exonerating Chagnon.

A 2000 statement by the leaders of the Yanomamö and their Ye’kwana neighbours called for Tierney’s head: “We demand that our national government investigate the false statements of Tierney, which taint the humanitarian mission carried out by Shaki [Chagnon] with much tenderness and respect for our communities.” The investigation never occurred, but Tierney’s public image lay in ruins and would suffer even more at the hands of historian of science Alice Dreger, who interviewed dozens of people involved in the controversy. Although Tierney had thanked a Venezuelan anthropologist for providing him with a dossier of information on Chagnon for his book, the anthropologist told Dreger that Tierney had actually written the dossier himself and then misrepresented it as an independent source of information.

A “dossier” and its use to smear an ideological opponent. Where else have we seen that?

Returning to Blackwell:

Scientific American has described the controversy as “Anthropology’s Darkest Hour,” and it raises troubling questions about the entire field. In 2013, Chagnon published his final book, Noble Savages: My Life Among Two Dangerous Tribes—The Yanomamö and the Anthropologists. Chagnon had long felt that anthropology was experiencing a schism more significant than any difference between research paradigms or schools of ethnography—a schism between those dedicated to the very science of mankind, anthropologists in the true sense of the word, and those opposed to science; either postmodernists vaguely defined, or activists disguised as scientists who seek to place indigenous advocacy above the pursuit of objective truth. Chagnon identified Nancy Scheper-Hughes as a leader in the activist faction of anthropologists, citing her statement that we “need not entail a philosophical commitment to Enlightenment notions of reason and truth.”

Whatever the rights and wrongs of his debates with Marvin Harris across three decades, Harris’s materialist paradigm was a scientifically debatable hypothesis, which caused Chagnon to realize that he and his old rival shared more in common than they did with the activist forces emerging in the field: “Ironically, Harris and I both argued for a scientific view of human behavior at a time when increasing numbers of anthropologists were becoming skeptical of the scientific approach.”…

Both Chagnon and Harris agreed that anthropology’s move away from being a scientific enterprise was dangerous. And both believed that anthropologists, not to mention thinkers in other fields of social sciences, were disguising their increasingly anti-scientific activism as research by using obscurantist postmodern gibberish. Observers have remarked at how abstruse humanities research has become and even a world famous linguist like Noam Chomsky admits, “It seems to me to be some exercise by intellectuals who talk to each other in very obscure ways, and I can’t follow it, and I don’t think anybody else can.” Chagnon resigned his membership of the American Anthropological Association in the 1980s, stating that he no longer understood the “unintelligible mumbo jumbo of postmodern jargon” taught in the field. In his last book, Theories of Culture in Postmodern Times, Harris virtually agreed with Chagnon. “Postmodernists,” he wrote, “have achieved the ability to write about their thoughts in a uniquely impenetrable manner. Their neo-baroque prose style with its inner clauses, bracketed syllables, metaphors and metonyms, verbal pirouettes, curlicues and figures is not a mere epiphenomenon; rather, it is a mocking rejoinder to anyone who would try to write simple intelligible sentences in the modernist tradition.”…

The quest for knowledge of mankind has in many respects become unrecognizable in the field that now calls itself anthropology. According to Chagnon, we’ve entered a period of “darkness in cultural anthropology.” With his passing, anthropology has become darker still.

I recount all of this for three reasons. First, Chagnon’s findings testify to the immutable urge to violence that lurks within human beings, and to the dominance of “nature” over “nurture”. That dominance is evident not only in the urge to violence (pace Steven Pinker), but in the strong heritability of such traits as intelligence.

The second reason for recounting Chagnon’s saga is to underline the corruption of science in the service of left-wing causes. The underlying problem is always the same: When science — testable and tested hypotheses based on unbiased observations — challenges left-wing orthodoxy, left-wingers — many of them so-called scientists — go all out to discredit real scientists. And they do so by claiming, in good Orwellian fashion, to be “scientific”. (I have written many posts about this phenomenon.) Leftists are, in fact, delusional devotees of magical thinking.

The third reason for my interest in the story of Napoleon Chagnon is a familial connection of sorts. He was born in a village where his grandfather, also Napoleon Chagnon, was a doctor. My mother was one of ten children, most of them born and all of them raised in the same village. When the tenth child was born, he was given Napoleon as his middle name, in honor of Doc Chagnon.

Certainty about Uncertainty

Words fail us. Numbers, too, for they are only condensed words. Words and numbers are tools of communication and calculation. As tools, they cannot make certain that which is uncertain, though they often convey a false sense of certainty.

Yes, arithmetic seems certain: 2 + 2 = 4, always and everywhere (in base-10 notation). But that is only because the conventions of arithmetic require 2 + 2 to equal 4. Neither arithmetic nor any higher form of mathematics reveals the truth about the world around us, though mathematics (and statistics) can be used to find approximate truths — approximations that are useful in practical applications like building bridges, finding effective medicines, and sending rockets into space (though the practicality of that last has always escaped me).

But such practical things are possible only because the uncertainty surrounding them (e.g., the stresses that may cause a bridge to fail) is hedged against by making things more robust than they would need to be under perfect conditions. And, even then, things sometimes fail: bridges collapse, medicines have unforeseen side effects, rockets blow up, etc.

I was reminded of uncertainty by a recent post by Timothy Taylor (Conversable Economist):

For the uninitiated, “statistical significance” is a way of summarizing whether a certain statistical result is likely to have happened by chance, or not. For example, if I flip a coin 10 times and get six heads and four tails, this could easily happen by chance even with a fair and evenly balanced coin. But if I flip a coin 10 times and get 10 heads, this is extremely unlikely to happen by chance. Or if I flip a coin 10,000 times, with a result of 6,000 heads and 4,000 tails (essentially, repeating the 10-flip coin experiment 1,000 times), I can be quite confident that the coin is not a fair one. A common rule of thumb has been that if the probability of an outcome occurring by chance is 5% or less–in the jargon, has a p-value of 5% or less–then the result is statistically significant. However, it’s also pretty common to see studies that report a range of other p-values like 1% or 10%.

Given the omnipresence of “statistical significance” in pedagogy and the research literature, it was interesting last year when the American Statistical Association made an official statement “ASA Statement on Statistical Significance and P-Values” (discussed here) which includes comments like: “Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold. … A p-value, or statistical significance, does not measure the size of an effect or the importance of a result. … By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.”

Now, the ASA has followed up with a special supplemental issue of its journal The American Statistician on the theme “Statistical Inference in the 21st Century: A World Beyond p < 0.05” (January 2019). The issue has a useful overview essay, “Moving to a World Beyond ‘p < 0.05’,” by Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar. They write:

We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term “statistically significant” entirely. Nor should variants such as “significantly different,” “p < 0.05,” and “nonsignificant” survive, whether expressed in words, by asterisks in a table, or in some other way. Regardless of whether it was ever useful, a declaration of “statistical significance” has today become meaningless. … In sum, “statistically significant” — don’t say it and don’t use it.

. . .

So let’s accept that the “statistical significance” label has some severe problems, as Wasserstein, Schirm, and Lazar write:

[A] label of statistical significance does not mean or imply that an association or effect is highly probable, real, true, or important. Nor does a label of statistical nonsignificance lead to the association or effect being improbable, absent, false, or unimportant. Yet the dichotomization into “significant” and “not significant” is taken as an imprimatur of authority on these characteristics. In a world without bright lines, on the other hand, it becomes untenable to assert dramatic differences in interpretation from inconsequential differences in estimates. As Gelman and Stern (2006) famously observed, the difference between “significant” and “not significant” is not itself statistically significant.
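Taylor’s coin-flip arithmetic, quoted above, can be checked exactly. A quick sketch in Python, using nothing more than the binomial distribution for a fair coin:

```python
from math import comb, sqrt

def prob_at_least(n, k):
    # P(at least k heads in n flips of a fair coin)
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

# 6 heads in 10 flips: could easily happen by chance.
p_6_of_10 = prob_at_least(10, 6)
print(f"P(>= 6 heads in 10 flips) = {p_6_of_10:.3f}")  # about 0.377

# 10 heads in 10 flips: extremely unlikely by chance.
p_10_of_10 = prob_at_least(10, 10)
print(f"P(10 heads in 10 flips) = {p_10_of_10:.6f}")   # 1/1024

# 6,000 heads in 10,000 flips: the mean is 5,000 and the standard
# deviation is 50, so 6,000 heads lies 20 standard deviations from
# the mean, effectively impossible for a fair coin.
z = (6000 - 5000) / sqrt(10000 * 0.5 * 0.5)
print(f"z-score for 6,000 heads in 10,000 flips = {z:.0f}")
```

Note that nothing in the arithmetic marks 5 percent (or 1 percent, or 10 percent) as a special dividing line; that is a convention, not a property of the coin.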

In the middle of the post, Taylor quotes Edward Leamer’s 1983 article, “Taking the Con out of Econometrics” (American Economic Review, March 1983, pp. 31-43).

Leamer wrote:

The econometric art as it is practiced at the computer terminal involves fitting many, perhaps thousands, of statistical models. One or several that the researcher finds pleasing are selected for reporting purposes. This searching for a model is often well intentioned, but there can be no doubt that such a specification search invalidates the traditional theories of inference. … [I]n fact, all the concepts of traditional theory utterly lose their meaning by the time an applied researcher pulls from the bramble of computer output the one thorn of a model he likes best, the one he chooses to portray as a rose. The consuming public is hardly fooled by this chicanery. The econometrician’s shabby art is humorously and disparagingly labelled “data mining,” “fishing,” “grubbing,” “number crunching.” A joke evokes the Inquisition: “If you torture the data long enough, Nature will confess” … This is a sad and decidedly unscientific state of affairs we find ourselves in. Hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else’s data analyses seriously.

Economists and other social scientists have become much more aware of these issues over the decades, but Leamer was still writing in 2010 (“Tantalus on the Road to Asymptopia,” Journal of Economic Perspectives, 24: 2, pp. 31-46):

Since I wrote my “con in econometrics” challenge much progress has been made in economic theory and in econometric theory and in experimental design, but there has been little progress technically or procedurally on this subject of sensitivity analyses in econometrics. Most authors still support their conclusions with the results implied by several models, and they leave the rest of us wondering how hard they had to work to find their favorite outcomes … It’s like a court of law in which we hear only the experts on the plaintiff’s side, but are wise enough to know that there are abundant experts for the defense.

Taylor wisely adds this:

Taken together, these issues suggest that a lot of the findings in social science research shouldn’t be believed with too much firmness. The results might be true. They might be a result of a researcher pulling out “from the bramble of computer output the one thorn of a model he likes best, the one he chooses to portray as a rose.” And given the realities of real-world research, it seems goofy to say that a result with, say, only a 4.8% probability of happening by chance is “significant,” while if the result had a 5.2% probability of happening by chance it is “not significant.” Uncertainty is a continuum, not a black-and-white difference [emphasis added].

The italicized sentence expresses my long-held position.
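Leamer’s “bramble of computer output” is easy to reproduce. In the sketch below (pure illustration, standard library only), the outcome and all 100 candidate predictors are random noise, yet a handful of predictors clear the conventional 5-percent threshold anyway; a researcher who reports only those has manufactured “findings” from nothing:

```python
import random
import statistics

rng = random.Random(0)

def pearson_r(xs, ys):
    # Sample correlation coefficient between two equal-length lists.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

n_obs, n_predictors = 30, 100
R_CRIT = 0.361  # critical |r| for p < 0.05, two-tailed, n = 30 (standard tables)

# The outcome and every predictor are pure noise: no real relationships exist.
y = [rng.gauss(0, 1) for _ in range(n_obs)]
hits = sum(
    1
    for _ in range(n_predictors)
    if abs(pearson_r([rng.gauss(0, 1) for _ in range(n_obs)], y)) > R_CRIT
)
print(f"{hits} of {n_predictors} pure-noise predictors test 'significant'")
```

On average about 5 of the 100 noise predictors will test “significant”; that is exactly the false-positive rate that a 5-percent threshold promises when there is nothing to find.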

But there is a deeper issue here, to which I alluded above in my brief comments about the nature of mathematics. The deeper issue is the complete dependence of logical systems on the underlying axioms (assumptions) of those systems, which Kurt Gödel addressed in his incompleteness theorems:

Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic….

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.

This is very deep stuff. I own the book in which Gödel proves his theorems, and I admit that I have to take the proofs on faith. (Which simply means that I have been too lazy to work my way through the proofs.) But there seem to be no serious or fatal criticisms of the theorems, so my faith is justified (thus far).

There is also the view that the theorems aren’t applicable in fields outside of mathematical logic. But any quest for certainty about the physical world necessarily uses mathematical logic (which includes statistics).

This doesn’t mean that the results of computational exercises are useless. It simply means that they are only as good as the assumptions that underlie them; for example, assumptions about relationships between variables, assumptions about the values of the variables, and assumptions as to whether the correct variables have been chosen (and properly defined) in the first place.

There is nothing new in that, certainly nothing that requires Gödel’s theorems by way of proof. It has long been understood that a logical argument may be valid — the conclusion follows from the premises — but untrue if the premises (axioms) are untrue.

But it bears repeating and repeating — especially in the age of “climate change”. That CO2 is a dominant determinant of “global” temperatures is taken as axiomatic. Everything else flows from that assumption, including the downward revision of historical (actual) temperature readings, to ensure that the “observed” rise in temperatures agrees with — and therefore “proves” — the validity of climate models that take CO2 as the dominant variable. How circular can you get?

Check your assumptions at the door.

What about the “Gay Gene”?

UPDATED 09/24/19

An article in Science discusses the findings of a recent “genome-wide association study (GWAS) [by Andrea Ganna et al.], in which the genome is analyzed for statistically significant associations between single-nucleotide polymorphisms (SNPs) and a particular trait.” The upshot is that

genetics could eventually account for an upper limit of 8 to 25% of same-sex sexual behavior of the population. However, when all of the SNPs they identified from the GWAS are considered together in a combined score, they explain less than 1%. Thus, although they did find particular genetic loci associated with same-sex behavior, when they combine the effects of these loci together into one comprehensive score, the effects are so small (under 1%) that this genetic score cannot in any way be used to predict same-sex sexual behavior of an individual.

Further, “Ganna et al. did not find evidence of any specific cells and tissues related to the loci they identified.”

Is this a big change from what was thought previously about genes and homosexuality? Yes, according to an article in ScienceNews:

The new study is an advance over previous attempts to find “gay genes,” says J. Michael Bailey, a psychologist at Northwestern University in Evanston, Ill., who was not involved in the new work. The study’s size is its main advantage, Bailey says. “It’s huge. Huge.”…

Previous sexual orientation genetic studies, including some Bailey was involved in, may also have suffered from bias because they relied on volunteers. People who offer to participate in a study, without being randomly selected, may not reflect the general population, he says. This study includes both men and women and doesn’t rely on twins, as many previous studies have….

This is the first DNA difference ever linked to female sexual orientation, says Lisa Diamond, a psychologist at the University of Utah in Salt Lake City who studies the nature and development of same-sex sexuality. The results are consistent with previous studies suggesting genetics may play a bigger role in influencing male sexuality than female sexuality. It’s not unusual for one sex of a species to be more fluid in their sexuality, choosing partners of both sexes, Diamond says. For humans, male sexuality may be more [but not very] tightly linked to genes.

But that doesn’t mean that genes control sexual behavior or orientation. “Same-sex sexuality appears to be genetically influenced, but not genetically determined,” Diamond says. “This is not the only complex human phenomenon for which we see a genetic influence without a great understanding of how that influence works.” Other complex human behaviors, such as smoking, alcohol use, personality and even job satisfaction all have some genetic component [emphasis added].

In sum, as an article in The Telegraph about the Ganna study explains,

[g]enes play just a small role in whether a person is gay, scientists have found, after discovering that environment has a far bigger impact on homosexuality….

[G]enes are responsible for between eight to 25 per cent of the probability of a person being gay, meaning at least three quarters is down to environment.

What’s most interesting about the commentary that I’ve read, including portions of the articles just quoted above, is what “environment” means to those who are eager to preserve the illusion of homosexuality as a condition that’s almost unavoidable.

The article in The Telegraph quotes one of the Ganna study’s authors, who is hardly a disinterested party:

As a gay man I’ve experienced homophobia and I’ve felt both hurt and isolated by it. This study disproves the notion there is a so-called ‘gay gene’ and disproves sexual behaviour is a choice.

Genetics absolutely plays an important role, many genes are involved, and altogether they capture perhaps a quarter of same-sex sexual behaviour, which means genetics isn’t even half the story. The rest is likely environmental.

It’s both biology and environment working together in incredibly complicated ways.

How does the study disprove that sexual behavior is a choice? It does so only if one makes the valiant assumption that environmental influences somehow don’t operate on behavior, and aren’t in turn shaped by behavior.

Another scientist, quoted in The Telegraph article, acknowledges my point:

There is an unexplained environmental effect that one can never put a finger on exactly it’s such a complex interplay between environment, upbringing, and genetics [emphasis added].

The Associated Press, always ready to spin news leftward, ran a story with the same facts and interpretations as those quoted earlier, but with the headline: “Study finds new genetic links to same-sex sexuality”. Which hides the gist of the story: Homosexuality is mainly determined by environmental influences, which include and are shaped by behavioral choices.

What’s left unmentioned, of course, is that homosexuality is therefore predominantly a choice. What’s also left unmentioned are the environmental influences that are most likely to induce homosexual behavior. Here is my not-mutually-exclusive list of such influences:

  • shyness toward the opposite sex (caused by introversion, fear of rejection, self-assessment as unattractive)
  • proximity of potential sexual partners who are shy toward the opposite sex and/or open to experimentation, or who are seeking opportunities to seduce those who are shy and/or experimental
  • conditions conducive to experimentation (sleepovers, drunkenness, pornographic titillation)
  • encouraging or permissive milieu (parental encouragement/indifference, parental absence — as when a person is away at college, especially a “liberal” college populated by sexual experimenters/homosexuals)
  • general indifference or approbation (secularization of society, removal of criminal sanctions, legalization of sodomy, legalization of same-sex marriage, elite embrace and praise of non-traditional sexual behavior and roles: bisexuality, homosexuality, transgenderism).

All of those influences operate on the urge for sexual satisfaction, which is especially strong in adolescents and young adults. Given that urge, the incidence of introversion, the incidence of physical unattractiveness, the unconscionably large number of persons in college, and the social and legal trends of recent decades, it is almost shocking to learn that only about five percent of American adults think of themselves as LGBT. If you live in a “cosmopolitan” city of any size, you might believe that they account for a much larger fraction of the population. But that’s just due to the clustering effect — birds of a feather, and all that. Even if you don’t live in a “cosmopolitan” city, you may believe that far more than five percent of the population is LGBT because the producers of films and TV fare — “cosmopolitan” elites that they are — like to “celebrate diversity”. (Or, more aptly, shove it down your throat.)

In summary, I submit that most persons of the LGBT persuasion make a deliberate and often tragic choice about their “sexual identity”. (See, for example, my post, “The Transgender Fad and Its Consequences“, and Carlos D. Flores’s article, “The Absurdity of Transgenderism: A Stern But Necessary Critique“.)

I submit, further, that the study by Ganna et al. implies that conversion therapy could be effective, and that (politically correct) “scientific” opposition to it is based on the now-discredited view of homosexuality as a genetically immutable condition.

If the apologists for and promoters of LGBT “culture” were logically consistent in their insistence on homosexuality as a genetic condition, they would also acknowledge that intelligence is largely a matter of genetic inheritance. They don’t do that, generally, because they are usually leftists who subscribe to the blank-slate view of inherent equality.

If any trait is strongly genetic in origin, it is the abhorrence of homosexuality, which is a threat to the survival of the species.


There is some support for my hypothesis about environmental causes of homosexual behavior in Gregory Cochran’s post, “Gay Genes” at West Hunter, where he discusses the paper by Ganna et al.:

They found two SNPs [single-nucleotide polymorphisms] that influenced both male and female homosexuality, two that affected males only, one that affected females only. All had very small but statistically significant effects….

The fraction of the variance influenced by these few SNPs is low, less than 1%. The contribution of all common SNPs is larger, estimated to be between 8% and 25%. Still small compared to traits like height and IQ, but then we knew that the heritability of homosexuality is not terribly high, from twin studies and such – political views are more heritable.

So genes influence homosexuality, but then they influence everything. Does it look as if the key causal link (assuming that there is one) is genetic? No, but then we knew that already, from high discordance for homosexuality in MZ twins.

Most interesting to me were the genetic correlations between same-sex behavior and various other traits [displayed in a graph]….

The genes correlated with male homosexuality are also correlated (at a statistically significant level) with risk-taking, cannabis use, schizophrenia, ADHD, major depressive disorder, loneliness, and number of sex partners. For female homosexuals, risk-taking, smoking, cannabis use, subjective well-being (-), schizophrenia, bipolar disorder, ADHD, major depressive disorder, loneliness, openness to experience, and number of sex partners.

Generally, the traits genetically correlated with homosexuality are bad things. As far as I can see, they look like noise, rather than any kind of genetic strategy. Mostly, they accord with what we already knew about male and female homosexuals: both are significantly more likely to have psychiatric disorders, far more likely to use drugs. The mental-illness association maybe looks stronger in lesbians. The moderately-shared genetic architecture seems compatible with a noise model….

[The finding] that homosexuality was genetically correlated with various kinds of unpleasantness was apparently an issue in the preparation and publication of this paper. The authors were at some pains to avoid hurting the feelings of the gay community, since avoiding hurting feelings is the royal road to Truth, as shown by Galileo and Darwin.

If there are genes for bad behavior, it is the penchant for bad behavior that pushes some persons in the direction of homosexuality. But the choice is theirs. Many persons who are prone to bad behavior become serial philanderers, serial killers, sleazy politicians, etc., etc., etc. Homosexuality isn’t a necessary outcome of bad behavior, just a choice that many persons happen to make because of their particular circumstances.

P.S. I don’t mean to imply that bad behavior and homosexuality go hand-in-hand. As I suggest in the original post, there are other reasons for choosing homosexuality in those cases (the majority of them) where it isn’t a genetic predisposition.

(See also “The Myth That Same-Sex Marriage Causes No Harm“, “Two-Percent Tyranny“, and “Further Thoughts about Utilitarianism“.)

Analysis vs. Reality

In my days as a defense analyst I often encountered military officers who were skeptical about the ability of civilian analysts to draw valid conclusions from mathematical models about the merits of systems and tactics. It took me several years to understand and agree with their position. My growing doubts about the power of quantitative analysis of military matters culminated in a paper in which I wrote that

combat is not a mathematical process…. One may describe the outcome of combat mathematically, but it is difficult, even after the fact, to determine the variables that made a difference in the outcome.

Much as we would like to fold the many different parameters of a weapon, a force, or a strategy into a single number, we can not. An analyst’s notion of which variables matter and how they interact is no substitute for data. Such data as exist, of course, represent observations of discrete events — usually peacetime events. It remains for the analyst to calibrate the observations, but without a benchmark to go by. Calibration by past battles is a method of reconstruction — of cutting one of several coats to fit a single form — but not a method of validation.

Lacking pertinent data, an analyst is likely to resort to models of great complexity. Thus, if useful estimates of detection probabilities are unavailable, the detection process is modeled; if estimates of the outcomes of dogfights are unavailable, aerial combat is reduced to minutiae. Spurious accuracy replaces obvious inaccuracy; untestable hypotheses and unchecked calibrations multiply apace. Yet the analyst claims relative if not absolute accuracy, certifying that he has identified, measured, and properly linked, a priori, the parameters that differentiate weapons, forces, and strategies.

In the end, “reasonableness” is the only defense of warfare models of any stripe.

It is ironic that analysts must fall back upon the appeal to intuition that has been denied to military men — whose intuition at least flows from a life-or-death incentive to make good guesses when choosing weapons, forces, or strategies.

My colleagues were not amused, to say the least.

I was reminded of all this by a recent exchange with a high-school classmate who had enlisted my help in tracking down a woman who, according to a genealogy website, is her first cousin, twice removed. The success of the venture is as yet uncertain. But if it does succeed it will be because of the classmate’s intimate knowledge of her family, not my command of research tools. As I said to my classmate,

You know a lot more about your family than I know about mine. I have all of the names and dates in my genealogy data bank, but I really don’t know much about their lives. After I moved to Virginia … I was out of the loop on family gossip, and my parents didn’t relate it to me. For example, when I visited my parents … for their 50th anniversary I happened to see a newspaper clipping about the death of my father’s sister a year earlier. It was news to me. And I didn’t learn of the death of my mother’s youngest brother (leaving her as the last of 10 children) until my sister happened to mention it to me a few years after he had died. And she didn’t know that I didn’t know.

All of which means that there’s a lot more to life than bare facts — dates of birth, death, etc. That’s why military people (with good reason) don’t trust analysts who draw conclusions about military weapons and tactics based on mathematical models. Those analysts don’t have a “feel” for how weapons and tactics actually work in the heat of battle, which is what matters.

Climate modelers are even more in the dark than military analysts because, unlike military officers with relevant experience, there’s no “climate officer” who can set climate modelers straight — or (more wisely) ignore them.

(See also “Modeling Is Not Science“, “The McNamara Legacy: A Personal Perspective“, “Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry“, “Analytical and Scientific Arrogance“, “Why I Don’t Believe in ‘Climate Change’“, and “Predicting ‘Global’ Temperatures — An Analogy with Baseball“.)

Predicting “Global” Temperatures — An Analogy with Baseball

The following graph is a plot of the 12-month moving average of “global” mean temperature anomalies for 1979-2018 in the lower troposphere, as reported by the climate-research unit of the University of Alabama-Huntsville (UAH):

The UAH values, which are derived from satellite-borne sensors, are as close as one can come to an estimate of changes in “global” mean temperatures. The UAH values certainly are more complete and reliable than the values derived from the surface-thermometer record, which is biased toward observations over the land masses of the Northern Hemisphere (the U.S., in particular) — observations that are themselves notoriously fraught with siting problems, urban-heat-island biases, and “adjustments” that have been made to “homogenize” temperature data, that is, to make it agree with the warming predictions of global-climate models.
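The 12-month moving average behind such a plot is simple to compute; here is a minimal sketch with made-up anomaly values (not actual UAH data):

```python
# Trailing 12-month moving average, as used to smooth a monthly
# temperature-anomaly series. The anomaly values below are
# invented for illustration only.

def moving_average(series, window=12):
    """Return one averaged value per full trailing window."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# Hypothetical monthly anomalies (degrees C), not real UAH data.
anomalies = [0.10, 0.12, 0.08, 0.15, 0.11, 0.09,
             0.14, 0.13, 0.10, 0.12, 0.16, 0.12,
             0.18, 0.20]

smoothed = moving_average(anomalies)
print(smoothed[0])  # average of the first 12 months
```

Each plotted point is thus an average over the preceding year, which is why the smoothed series looks far less noisy than the raw monthly record.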

The next graph roughly resembles the first one, but it’s easier to describe. It represents the fraction of games won by the Oakland Athletics baseball team in the 1979-2018 seasons:

Unlike the “global” temperature record, the A’s W-L record is known with certainty. Every game played by the team (indeed, by every team in organized baseball) is diligently recorded, and in great detail. Those records yield a wealth of information not only about team records, but also about the accomplishments of the individual players whose combined performance determines whether and how often a team wins its games. Add to that much else about which statistics are or could be compiled: records of players in the years and games preceding a season or game; records of each team’s owners, general managers, and managers; orientations of the ballparks in which each team compiled its records; distances to the fences in those ballparks; times of day at which games were played; ambient temperatures; and on and on.

Despite all of that knowledge, there is much uncertainty about how to model the interactions among the quantifiable elements of the game, and about how much weight to give to the non-quantifiable elements (a manager’s leadership and tactical skills, team spirit, and on and on). Even the professional prognosticators at FiveThirtyEight, armed with a vast compilation of baseball statistics from which they have devised a complex predictive model of baseball outcomes, will admit that perfection (or anything close to it) eludes them. Like many other statisticians, they fall back on the excuse that “chance” or “luck” intrudes too often to allow their statistical methods to work their magic. What they won’t admit to themselves is that the results of simulations (such as those employed in the complex model devised by FiveThirtyEight),

reflect the assumptions underlying the authors’ model — not reality. A key assumption is that the model … accounts for all relevant variables….

As I have said, “luck” is mainly an excuse and rarely an explanation. Attributing outcomes to “luck” is an easy way of belittling success when it accrues to a rival.

It is also an easy way of dodging the fact that no model can accurately account for the outcomes of complex systems. “Luck” is the disappointed modeler’s excuse.
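A toy illustration of the point (with a hypothetical win probability, not FiveThirtyEight’s actual model): a simulated season can only echo the assumption that was put into it.

```python
import random

random.seed(0)  # reproducible toy example

def simulate_season(p_win, games=162):
    """Fraction of games 'won' when each game is an independent
    coin flip with an assumed win probability p_win."""
    return sum(random.random() < p_win for _ in range(games)) / games

# Assume (hypothetically) the model assigns a 0.55 per-game win
# probability. Averaged over many simulated seasons, the output
# converges to 0.55: the assumption, not a fact about reality.
seasons = [simulate_season(0.55) for _ in range(10_000)]
print(round(sum(seasons) / len(seasons), 2))
```

The output looks precise, but it is just the assumed 0.55 handed back; nothing in the simulation tests whether 0.55 was the right number in the first place.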

If the outcomes of baseball games and seasons could be modeled with great certainty, people wouldn’t bet on those outcomes. The existence of successful models would become general knowledge, and betting would cease, as the small gains that might accrue from betting on narrow odds would be wiped out by vigorish.

Returning now to “global” temperatures, I am unaware of any model that actually tries to account for the myriad factors that influence climate. The pseudo-science of “climate change” began with the assumption that “global” temperatures are driven by human activity, namely, the burning of fossil fuels that releases CO2 into the atmosphere. CO2 became the centerpiece of global climate models (GCMs), and everything else became an afterthought, or a non-thought. It is widely acknowledged that cloud formation and cloud cover — obviously important determinants of near-surface temperatures — are treated inadequately (when treated at all). The mechanism by which the oceans absorb heat and transmit it to the atmosphere also remains mysterious. The effect of solar activity on cosmic radiation reaching Earth (and thus on cloud formation) is often dismissed despite strong evidence of its importance. Other factors that seem to have little or no weight in GCMs (though they are sometimes estimated in isolation) include plate tectonics, magma flows, volcanic activity, and vegetation.

Despite all of that, builders of GCMs — and the doomsayers who worship them — believe that “global” temperatures will rise to catastrophic levels. The rising oceans will swamp coastal cities; the earth will be scorched, except where it is flooded by massive storms; crops will fail accordingly; tempers will flare and wars will break out more frequently.

There’s just one catch, and it’s a big one. Minute changes in the value of a dependent variable (“global” temperature, in this case) can’t be explained by a model in which key explanatory variables are unaccounted for, in which the values of the variables that are accounted for are highly uncertain, and in which the mechanisms by which the variables interact are themselves poorly understood. Even an impossibly complete model would be wildly inaccurate, given the uncertainty of the interactions among the variables and of the values of those variables (in the past as well as in the future).

I say “minute changes” because the first graph above is grossly misleading. An unbiased depiction of “global” temperatures looks like this:

There’s a much better chance of predicting the success or failure of the Oakland A’s, whose record looks like this on an absolute scale:

Just as no rational (unemotional) person should believe that predictions of “global” temperatures should dictate government spending and regulatory policies, no sane bettor is holding his breath in anticipation that the success or failure of the A’s (or any team) can be predicted with bankable certainty.

All of this illustrates a concept known as causal density, which Arnold Kling explains:

When there are many factors that have an impact on a system, statistical analysis yields unreliable results. Computer simulations give you exquisitely precise unreliable results. Those who run such simulations and call what they do “science” are deceiving themselves.

The folks at FiveThirtyEight are no more (and no less) delusional than the creators of GCMs.

Simple Economic Truths Worth Repeating

From “Keynesian Multiplier: Fiction vs. Fact“:

There are a few economic concepts that are widely cited (if not understood) by non-economists. Certainly, the “law” of supply and demand is one of them. The Keynesian (fiscal) multiplier is another; it is

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier….

To show why the math is phony, I’ll start with a derivation of the multiplier. The derivation begins with the accounting identity Y = C + I + G, which means that total output (Y) = consumption (C) + investment (I) + government spending (G)….

Now, let’s say that b = 0.8. This means that income-earners, on average, will spend 80 percent of their additional income on consumption goods (C), while holding back (saving, S) 20 percent of their additional income. With b = 0.8, k = 1/(1 – 0.8) = 1/0.2 = 5. That is, every $1 of additional spending — let us say additional government spending (∆G) rather than investment spending (∆I) — will yield ∆Y = $5. In short, ∆Y = k(∆G), as a theoretical maximum.


[The multiplier] isn’t a functional representation — a model — of the dynamics of the economy. Assigning a value to b (the marginal propensity to consume) — even if it’s an empirical value — doesn’t alter the fact that the derivation is nothing more than the manipulation of a non-functional relationship, that is, an accounting identity.

Consider, for example, the equation for converting temperature in degrees Celsius (C) to degrees Fahrenheit (F): F = 32 + 1.8C. It follows that an increase of 10 degrees C implies an increase of 18 degrees F. This could be expressed as ∆F/∆C = k*, where k* represents the “Celsius multiplier”. There is no mathematical difference between the derivation of the investment/government-spending multiplier (k) and the derivation of the Celsius multiplier (k*). And yet we know that the Celsius multiplier is nothing more than a tautology; it tells us nothing about how the temperature rises by 10 degrees C or 18 degrees F. It simply tells us that when the temperature rises by 10 degrees C, the equivalent rise in temperature F is 18 degrees. The rise of 10 degrees C doesn’t cause the rise of 18 degrees F.


[T]he Keynesian investment/government-spending multiplier simply tells us that if ∆Y = $5 trillion, and if b = 0.8, then it is a matter of mathematical necessity that ∆C = $4 trillion and ∆I + ∆G = $1 trillion. In other words, a rise in I + G of $1 trillion doesn’t cause a rise in Y of $5 trillion; rather, Y must rise by $5 trillion for C to rise by $4 trillion and I + G to rise by $1 trillion. If there’s a causal relationship between ∆G and ∆Y, the multiplier doesn’t portray it.

In sum, the fiscal multiplier puts the cart before the horse. It begins with a non-functional mathematical relationship, stipulates a hypothetical increase in GDP, and computes the increases in consumption (and other things) that would occur if that increase in GDP were realized.
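The algebra elided from the quoted derivation is the standard textbook manipulation; a minimal sketch, assuming the simple consumption function C = bY (my assumption here, not spelled out in the excerpt):

```python
# Sketch of the textbook multiplier algebra (assuming C = b*Y):
#
#   Y = C + I + G  and  C = b*Y
#   => Y - b*Y = I + G
#   => Y = (I + G) / (1 - b)
#   => k = dY/dG = 1 / (1 - b)

b = 0.8                   # marginal propensity to consume
k = 1 / (1 - b)           # multiplier: 1 / 0.2 = 5

delta_G = 1.0             # an extra $1 of government spending
delta_Y = k * delta_G     # the claimed $5 rise in output
delta_C = b * delta_Y     # the implied $4 rise in consumption

# The "multiplier effect" is the identity restated: delta_Y
# equals delta_C + delta_G by construction, not by causation.
print(delta_Y, delta_C, delta_G)
```

The final check adds nothing new, which is precisely the point of the excerpt: the identity that the multiplier “proves” was assumed at the start.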

As economist Steve Landsburg explains in “The Landsburg Multiplier: How to Make Everyone Rich”,

Murray Rothbard … observed that the really neat thing about this [fiscal stimulus] argument is that you can do exactly the same thing with any accounting identity. Let’s start with this one:

Y = L + E

Here Y is economy-wide income, L is Landsburg’s income, and E is everyone else’s income. No disputing that one.

Next we observe that everyone else’s share of the income tends to be about 99.999999% of the total. In symbols, we have:

E = .99999999 Y

Combine these two equations, do your algebra, and voila:

Y = 100,000,000 L

That 100,000,000 there is the soon-to-be-famous “Landsburg multiplier”. Our equation proves that if you send Landsburg a dollar, you’ll generate $100,000,000 worth of income for everyone else.

Send me your dollars, yearning to be free.
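Rothbard’s parody works because it is the same algebra with a different label on the terms; a quick check of the arithmetic (a sketch only):

```python
# The "Landsburg multiplier" is the same manipulation as the
# fiscal multiplier: Y = L + E and E = 0.99999999 * Y imply
# Y = L / (1 - 0.99999999) = 100,000,000 * L.

share = 0.99999999                      # everyone else's share of income
landsburg_multiplier = 1 / (1 - share)
print(landsburg_multiplier)             # ~100,000,000, up to float rounding
```

Exactly as with k = 1/(1 - b): pick a share close enough to 1 and the “multiplier” can be made as large as you please, which is the point of the parody.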

Tax cuts may stimulate economic activity, but not nearly to the extent suggested by the multiplier. Moreover, if government spending isn’t reduced at the same time that taxes are cut, and if there is something close to full employment of labor and capital, the main result of a tax cut will be inflation.

Government spending (as shown in “Keynesian Multiplier: Fiction vs. Fact” and “Economic Growth Since World War II“) doesn’t stimulate the economy, and usually has the effect of reducing private consumption and investment. That may be to the liking of big-government worshipers, but it’s bad for most of us.

Has Humanity Reached Peak Intelligence?

That’s the title of a post at BBC Future by David Robson, a journalist who has written a book called The Intelligence Trap: Why Smart People Make Dumb Mistakes. Inasmuch as “humanity” isn’t a collective to which “intelligence” can be attached, the title is more titillating than informative about the substance of the post, wherein Mr. Robson says some sensible things; for example:

When the researcher James Flynn looked at [IQ] scores over the past century, he discovered a steady increase – the equivalent of around three points a decade. Today, that has amounted to 30 points in some countries.

Although the cause of the Flynn effect is still a matter of debate, it must be due to multiple environmental factors rather than a genetic shift.

Perhaps the best comparison is our change in height: we are 11cm (around 5 inches) taller today than in the 19th Century, for instance – but that doesn’t mean our genes have changed; it just means our overall health has changed.

Indeed, some of the same factors may underlie both shifts. Improved medicine, reducing the prevalence of childhood infections, and more nutritious diets, should have helped our bodies to grow taller and our brains to grow smarter, for instance. Some have posited that the increase in IQ might also be due to a reduction of the lead in petrol, which may have stunted cognitive development in the past. The cleaner our fuels, the smarter we became.

This is unlikely to be the complete picture, however, since our societies have also seen enormous shifts in our intellectual environment, which may now train abstract thinking and reasoning from a young age. In education, for instance, most children are taught to think in terms of abstract categories (whether animals are mammals or reptiles, for instance). We also lean on increasingly abstract thinking to cope with modern technology. Just think about a computer and all the symbols you have to recognise and manipulate to do even the simplest task. Growing up immersed in this kind of thinking should allow everyone [hyperbole alert] to cultivate the skills needed to perform well in an IQ test….

[Psychologist Robert Sternberg] is not alone in questioning whether the Flynn effect really represented a profound improvement in our intellectual capacity, however. James Flynn himself has argued that it is probably confined to some specific reasoning skills. In the same way that different physical exercises may build different muscles – without increasing overall “fitness” – we have been exercising certain kinds of abstract thinking, but that hasn’t necessarily improved all cognitive skills equally. And some of those other, less well-cultivated, abilities could be essential for improving the world in the future.

Here comes the best part:

You might assume that the more intelligent you are, the more rational you are, but it’s not quite this simple. While a higher IQ correlates with skills such as numeracy, which is essential to understanding probabilities and weighing up risks, there are still many elements of rational decision making that cannot be accounted for by a lack of intelligence.

Consider the abundant literature on our cognitive biases. Something that is presented as “95% fat-free” sounds healthier than “5% fat”, for instance – a phenomenon known as the framing bias. It is now clear that a high IQ does little to help you avoid this kind of flaw, meaning that even the smartest people can be swayed by misleading messages.

People with high IQs are also just as susceptible to the confirmation bias – our tendency to only consider the information that supports our pre-existing opinions, while ignoring facts that might contradict our views. That’s a serious issue when we start talking about things like politics.

Nor can a high IQ protect you from the sunk cost bias – the tendency to throw more resources into a failing project, even if it would be better to cut your losses – a serious issue in any business. (This was, famously, the bias that led the British and French governments to continue funding Concorde planes, despite increasing evidence that it would be a commercial disaster.)

Highly intelligent people are also not much better at tests of “temporal discounting”, which require you to forgo short-term gains for greater long-term benefits. That’s essential, if you want to ensure your comfort for the future.

Besides a resistance to these kinds of biases, there are also more general critical thinking skills – such as the capacity to challenge your assumptions, identify missing information, and look for alternative explanations for events before drawing conclusions. These are crucial to good thinking, but they do not correlate very strongly with IQ, and do not necessarily come with higher education. One study in the USA found almost no improvement in critical thinking throughout many people’s degrees.

Given these looser correlations, it would make sense that the rise in IQs has not been accompanied by a similarly miraculous improvement in all kinds of decision making.

So much for the bright people who promote and pledge allegiance to socialism and its various manifestations (e.g., the Green New Deal, and Medicare for All). So much for the bright people who suppress speech with which they disagree because it threatens the groupthink that binds them.

Robson, still using “we” inappropriately, also discusses evidence of dysgenic effects in IQ:

Whatever the cause of the Flynn effect, there is evidence that we may have already reached the end of this era – with the rise in IQs stalling and even reversing. If you look at Finland, Norway and Denmark, for instance, the turning point appears to have occurred in the mid-90s, after which average IQs dropped by around 0.2 points a year. That would amount to a seven-point difference between generations.

Psychologist (and intelligence specialist) James Thompson has addressed dysgenic effects at his blog on the website of The Unz Review. In particular, he had a lot to say about the work of an intelligence researcher named Michael Woodley. Here’s a sample from a post by Thompson:

We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.

Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with say, your grandparents.

Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence, Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.

It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”. This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.

So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong?

From a later post:

The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

Woodley’s claim is based on a set of papers written since 2013, which have been recently reviewed by [Matthew] Sarraf.

The review is unusual, to say the least. It is rare to read so positive a judgment on a young researcher’s work, and it is extraordinary that one researcher has changed the debate about ability levels across generations, and all this in a few years since starting publishing in psychology.

The table in that review which summarizes the main findings is shown below. As you can see, the range of effects is very variable, so my rough estimate of 1 point per decade is a stab at calculating a median. It is certainly less than the Flynn Effect in the 20th Century, though it may now be part of the reason for the falling of that effect, now often referred to as a “negative Flynn effect”….

Here are the findings which I have arranged by generational decline (taken as 25 years).

  • Colour acuity, over 20 years (0.8 generation) 3.5 drop/decade.
  • 3D rotation ability, over 37 years (1.5 generations) 4.8 drop/decade.
  • Reaction times, females only, over 40 years (1.6 generations) 1.8 drop/decade.
  • Working memory, over 85 years (3.4 generations) 0.16 drop/decade.
  • Reaction times, over 120 years (4.8 generations) 0.57-1.21 drop/decade.
  • Fluctuating asymmetry, over 160 years (6.4 generations) 0.16 drop/decade.
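That “stab at a median” can be checked directly against the listed per-decade drops. (This is only a rough gauge: the figures are not all in the same units, and I take the midpoint of the 0.57–1.21 reaction-time range.)

```python
import statistics

# Per-decade drops from the list above; the 0.57-1.21 reaction-time
# range is represented by its midpoint, 0.89.
drops_per_decade = [3.5, 4.8, 1.8, 0.16, 0.89, 0.16]

median_drop = statistics.median(drops_per_decade)
print(median_drop)  # about 1.3 -- in the neighborhood of "1 point per decade"
```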

Either the measures are considerably different, and do not tap the same underlying loss of mental ability, or the drop is unlikely to be caused by dysgenic decrements from one generation to another. Bar massive dying out of populations, changes do not come about so fast from one generation to the next. The drops in ability are real, but the reasons for the falls are less clear. Gathering more data sets would probably clarify the picture, and there is certainly cause to argue that on various real measures there have been drops in ability. Whether this is dysgenics or some other insidious cause is not yet clear to me.

My view is that whereas formerly the debate was only about the apparent rise in ability, discussions are now about the co-occurrence of two trends: the slowing down of the environmental gains and the apparent loss of genetic quality. In the way that James Flynn identified an environmental/cultural effect, Michael Woodley has identified a possible genetic effect, and certainly shown that on some measures we are doing less well than our ancestors.

How will they be reconciled? Time will tell, but here is a prediction. I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.

Here’s my hypothesis, which I offer on the assumption that the test-takers are demographically representative of the whole populations of the countries in which they were tested: The less-intelligent portions of the populace are breeding faster than the more-intelligent portions. That phenomenon is magnified by the rapid growth of the Muslim component of Europe’s population and the rapid growth of the Latino component of America’s population.

(See also “The Learning Curve and the Flynn Effect“, “More about Intelligence“, “Selected Writings about Intelligence“, and especially “Intelligence“.)

Ad-Hoc Hypothesizing and Data Mining

An ad-hoc hypothesis is

a hypothesis added to a theory in order to save it from being falsified….

Scientists are often skeptical of theories that rely on frequent, unsupported adjustments to sustain them. This is because, if a theorist so chooses, there is no limit to the number of ad hoc hypotheses that they could add. Thus the theory becomes more and more complex, but is never falsified. This is often at a cost to the theory’s predictive power, however. Ad hoc hypotheses are often characteristic of pseudoscientific subjects.

An ad-hoc hypothesis can also be formed from an existing hypothesis (a proposition that hasn’t yet risen to the level of a theory) when the existing hypothesis has been falsified or is in danger of falsification. The (intellectually dishonest) proponents of the existing hypothesis seek to protect it from falsification by putting the burden of proof on the doubters rather than where it belongs, namely, on the proponents.

Data mining is “the process of discovering patterns in large data sets”. It isn’t hard to imagine the abuses that are endemic to data mining; for example, running regressions on the data until the “correct” equation is found, and excluding or adjusting portions of the data because their use leads to “counterintuitive” results.
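The regression abuse is easy to demonstrate. In this sketch (all numbers are illustrative), a purely random “target” is correlated against 1,000 purely random “predictors”; the best of them will look impressively strong even though there is nothing there:

```python
import random

random.seed(42)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

target = [random.gauss(0, 1) for _ in range(20)]  # 20 random "observations"

# Mine 1,000 noise predictors and keep the best-looking correlation.
best_r = max(
    abs(corr([random.gauss(0, 1) for _ in range(20)], target))
    for _ in range(1000)
)
print(best_r)
```

With only 20 observations, the best of 1,000 noise predictors will typically show a correlation well above what any single honest predictor of noise would: the “relationship” is an artifact of the search itself.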

Ad-hoc hypothesizing and data mining are two sides of the same coin: intellectual dishonesty. The former is overt; the latter is covert. (At least, it is covert until someone gets hold of the data and the analysis, which is why many “scientists” and “scientific” journals have taken to hiding the data and obscuring the analysis.) Both methods are justified (wrongly) as being consistent with the scientific method. But the ad-hoc theorizer is just trying to rescue a falsified hypothesis, and the data miner is just trying to conceal information that would falsify his hypothesis.

From what I have seen, the proponents of the human activity>CO2>”global warming” hypothesis have been guilty of both kinds of quackery: ad-hoc hypothesizing and data mining (with a lot of data manipulation thrown in for good measure).

The Learning Curve and the Flynn Effect

UPDATED 09/14/19

I first learned of the learning curve when I was a newly hired analyst at a defense think-tank. A learning curve

is a graphical representation of how an increase in learning (measured on the vertical axis) comes from greater experience (the horizontal axis); or how the more someone (or something) performs a task, the better they [sic] get at it.

In my line of work, the learning curve figured importantly in the estimation of aircraft procurement costs. There was a robust statistical relationship between the cost of making a particular model of aircraft and the cumulative number of such aircraft produced. Armed with the learning-curve equation and the initial production cost of an aircraft, it was easy to estimate the cost of producing any number of the same aircraft.
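The standard log-linear learning curve can be sketched as follows. (The 80-percent slope and the costs are illustrative, not figures from my old line of work.)

```python
import math

def unit_cost(first_unit_cost, n, slope=0.8):
    """Cost of the n-th unit: each doubling of cumulative output
    multiplies unit cost by `slope` (here, an "80 percent curve")."""
    b = math.log2(slope)  # negative exponent
    return first_unit_cost * n ** b

def program_cost(first_unit_cost, quantity, slope=0.8):
    """Total cost of producing `quantity` units."""
    return sum(unit_cost(first_unit_cost, n, slope)
               for n in range(1, quantity + 1))

# On an 80 percent curve, the 2nd unit costs ~80% of the 1st,
# and the 100th unit costs only ~23% of the 1st.
print(unit_cost(100.0, 2))    # ~80
print(unit_cost(100.0, 100))  # ~22.7
```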

The learning curve figures prominently in tests that purport to measure intelligence. Two factors that may explain the Flynn effect — a secular rise in average IQ scores — are aspects of learning: schooling and test familiarity, and a generally more stimulating environment in which one learns more. The Flynn effect doesn’t measure changes in intelligence; it measures changes in IQ scores resulting from learning. There is an essential difference between ignorance and stupidity. The Flynn effect is about the former, not the latter.

Here’s a personal example of the Flynn effect in action. I’ve been doing The New York Times crossword puzzle online since February 18 of this year. I have completed all 210 puzzles published by TNYT from that date through the puzzle for September 15, with generally increasing ease:

The difficulty of the puzzle varies from day to day, with Monday puzzles being the easiest and Sunday puzzles being the hardest (as measured by time to complete). For each day of the week, my best time is more recent than my worst time, and the trend of time to complete is downward for every day of the week (as reflected in the graph above). In fact, in the past week I tied my best time for a Monday puzzle and set new bests for the Thursday and Friday puzzles.

I know that I haven’t become more intelligent in the last 30 weeks. And being several decades past the peak of my intelligence, I am certain that it diminishes daily, though only fractionally so (I hope). I have simply become more practiced at doing the crossword puzzle because I have learned a lot about it. For example, certain clues recur with some frequency, and they always have the same answers. Clues often have double meanings, which were hard to decipher at first but have become easier to decipher with practice. There are other subtleties, all of which reflect the advantages of learning.

In a nutshell, I am no smarter than I was 30 weeks ago, but my ignorance of the TNYT crossword puzzle has diminished significantly.

(See also “More about Intelligence“, “Selected Writings about Intelligence“, and especially “Intelligence“, in which I quote experts about the Flynn Effect.)

Modeling Is Not Science: Another Demonstration

The title of this post is an allusion to an earlier one: “Modeling Is Not Science“. This post addresses a model that is the antithesis of science. It seems to have been extracted from the ether. It doesn’t prove what its authors claim for it. It proves nothing, in fact, but the ability of some people to dazzle other people with mathematics.

In this case, a writer for MIT Technology Review waxes enthusiastic about

the work of Alessandro Pluchino at the University of Catania in Italy and a couple of colleagues. These guys [sic] have created a computer model of human talent and the way people use it to exploit opportunities in life. The model allows the team to study the role of chance in this process.

The results are something of an eye-opener. Their simulations accurately reproduce the wealth distribution in the real world. But the wealthiest individuals are not the most talented (although they must have a certain level of talent). They are the luckiest. And this has significant implications for the way societies can optimize the returns they get for investments in everything from business to science.

Pluchino and co’s [sic] model is straightforward. It consists of N people, each with a certain level of talent (skill, intelligence, ability, and so on). This talent is distributed normally around some average level, with some standard deviation. So some people are more talented than average and some are less so, but nobody is orders of magnitude more talented than anybody else….

The computer model charts each individual through a working life of 40 years. During this time, the individuals experience lucky events that they can exploit to increase their wealth if they are talented enough.

However, they also experience unlucky events that reduce their wealth. These events occur at random.

At the end of the 40 years, Pluchino and co rank the individuals by wealth and study the characteristics of the most successful. They also calculate the wealth distribution. They then repeat the simulation many times to check the robustness of the outcome.

When the team rank individuals by wealth, the distribution is exactly like that seen in real-world societies. “The ‘80-20’ rule is respected, since 80 percent of the population owns only 20 percent of the total capital, while the remaining 20 percent owns 80 percent of the same capital,” report Pluchino and co.

That may not be surprising or unfair if the wealthiest 20 percent turn out to be the most talented. But that isn’t what happens. The wealthiest individuals are typically not the most talented or anywhere near it. “The maximum success never coincides with the maximum talent, and vice-versa,” say the researchers.

So if not talent, what other factor causes this skewed wealth distribution? “Our simulation clearly shows that such a factor is just pure luck,” say Pluchino and co.

The team shows this by ranking individuals according to the number of lucky and unlucky events they experience throughout their 40-year careers. “It is evident that the most successful individuals are also the luckiest ones,” they say. “And the less successful individuals are also the unluckiest ones.”

The writer, who is dazzled by pseudo-science, gives away his Obamanomic bias (“you didn’t build that“) by invoking fairness. Luck and fairness have nothing to do with each other. Luck is luck, and it doesn’t make the beneficiary any less deserving of the talent, or legally obtained income or wealth, that comes his way.

In any event, the model in question is junk. To call it junk science would be to imply that it’s just bad science. But it isn’t science; it’s a model pulled out of thin air. The modelers admit this in the article cited by the Technology Review writer, “Talent vs. Luck, the Role of Randomness in Success and Failure“:

In what follows we propose an agent-based model, called “Talent vs Luck” (TvL) model, which builds on a small set of very simple assumptions, aiming to describe the evolution of careers of a group of people influenced by lucky or unlucky random events.

We consider N individuals, with talent Ti (intelligence, skills, ability, etc.) normally distributed in the interval [0, 1] around a given mean mT with a standard deviation σT, randomly placed in fixed positions within a square world (see Figure 1) with periodic boundary conditions (i.e. with a toroidal topology) and surrounded by a certain number NE of “moving” events (indicated by dots), someone lucky, someone else unlucky (neutral events are not considered in the model, since they have not relevant effects on the individual life). In Figure 1 we report these events as colored points: lucky ones, in green and with relative percentage pL, and unlucky ones, in red and with percentage (100 − pL). The total number of event-points NE are uniformly distributed, but of course such a distribution would be perfectly uniform only for NE → ∞. In our simulations, typically will be NE ≈ N/2: thus, at the beginning of each simulation, there will be a greater random concentration of lucky or unlucky event-points in different areas of the world, while other areas will be more neutral. The further random movement of the points inside the square lattice, the world, does not change this fundamental feature of the model, which exposes different individuals to different amounts of lucky or unlucky events during their life, regardless of their own talent.

In other words, this is a simplistic, completely abstract model set in a simplistic, completely abstract world, using only the authors’ assumptions about the values of a small number of abstract variables and the effects of their interactions. Those variables are “talent” and two kinds of event: “lucky” and “unlucky”.
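To see just how little machinery is involved, here is a stripped-down sketch of that kind of simulation. (The spatial world is omitted, and every parameter value is my assumption, not the authors’.)

```python
import random

random.seed(1)

N, STEPS = 1000, 80              # 1,000 agents; 80 six-month steps = 40 years
P_LUCKY = P_UNLUCKY = 0.03       # per-step event probabilities (assumed)

# Talent: normally distributed, clipped to [0, 1], as in the quoted setup.
talent = [min(1.0, max(0.0, random.gauss(0.6, 0.1))) for _ in range(N)]
capital = [10.0] * N             # everyone starts with the same capital

for _ in range(STEPS):
    for i in range(N):
        r = random.random()
        if r < P_LUCKY and random.random() < talent[i]:
            capital[i] *= 2      # lucky event, exploited thanks to talent
        elif P_LUCKY <= r < P_LUCKY + P_UNLUCKY:
            capital[i] /= 2      # unlucky event hits regardless of talent

capital.sort(reverse=True)
top20_share = sum(capital[: N // 5]) / sum(capital)
print(top20_share)
```

Even this toy version produces a heavily skewed distribution, with the top fifth of agents holding well over half of the capital. That a model of this kind can mimic an 80-20 rule is unremarkable; any multiplicative random process will do it.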

What could be further from science — actual knowledge — than that? The authors effectively admit the model’s complete lack of realism when they describe “talent”:

[B]y the term “talent” we broadly mean intelligence, skill, smartness, stubbornness, determination, hard work, risk taking and so on.

Think of all of the ways that those various — and critical — attributes vary from person to person. “Talent”, in other words, subsumes an array of mostly unmeasured and unmeasurable attributes, without distinguishing among them or attempting to weight them. The authors might as well have called the variable “sex appeal” or “body odor”. For that matter, given the complete abstractness of the model, they might as well have called its three variables “body mass index”, “elevation”, and “race”.

It’s obvious that the model doesn’t account for the actual means by which wealth is acquired. In the model, wealth is just the mathematical result of simulated interactions among an arbitrarily named set of variables. It’s not even a multiple regression model based on statistics. (Although no set of statistics could capture the authors’ broad conception of “talent”.)

The modelers seem surprised that wealth isn’t normally distributed. But that wouldn’t be a surprise if they were to consider that wealth represents a compounding effect, which naturally favors those with higher incomes over those with lower incomes. But they don’t even try to model income.
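The compounding point is easy to illustrate. In this sketch (the savings rate and return assumptions are mine, for illustration only), savers who are identical in every respect except their random annual returns end up with a right-skewed wealth distribution:

```python
import random
import statistics

random.seed(0)

def final_wealth(years=40, saving=1.0, mean_r=0.05, sd_r=0.10):
    """Save the same amount every year; returns compound on the balance."""
    w = 0.0
    for _ in range(years):
        w = (w + saving) * (1.0 + random.gauss(mean_r, sd_r))
    return w

wealth = [final_wealth() for _ in range(10_000)]
mean_w = statistics.mean(wealth)
median_w = statistics.median(wealth)
print(mean_w > median_w)  # mean exceeds median: the signature of right skew
```

Symmetric return shocks, compounded over a working life, yield an asymmetric wealth distribution with no appeal to “luck” as a separate causal factor.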

So when wealth (as modeled) doesn’t align with “talent”, the discrepancy — according to the modelers — must be assigned to “luck”. But a model that lacks any nuance in its definition of variables, any empirical estimates of their values, and any explanation of the relationship between income and wealth cannot possibly tell us anything about the role of luck in the determination of wealth.

At any rate, it is meaningless to say that the model is valid because its results mimic the distribution of wealth in the real world. The model itself is meaningless, so any resemblance between its results and the real world is coincidental (“lucky”) or, more likely, contrived to resemble something like the distribution of wealth in the real world. On that score, the authors are suitably vague about the actual distribution, pointing instead to various estimates.

(See also “Modeling, Science, and Physics Envy” and “Modeling Revisited“.)


There is a post at Politico about the adventures of McKinsey & Company, a giant consulting firm, in the world of intelligence:

America’s vast spying apparatus was built around a Cold War world of dead drops and double agents. Today, that world has fractured and migrated online, with hackers and rogue terrorist cells, leaving intelligence operatives scrambling to keep up.

So intelligence agencies did what countless other government offices have done: They brought in a consultant. For the past four years, the powerhouse firm McKinsey and Co., has helped restructure the country’s spying bureaucracy, aiming to improve response time and smooth communication.

Instead, according to nearly a dozen current and former officials who either witnessed the restructuring firsthand or are familiar with the project, the multimillion dollar overhaul has left many within the country’s intelligence agencies demoralized and less effective.

These insiders said the efforts have hindered decision-making at key agencies — including the CIA, National Security Agency and the Office of the Director of National Intelligence.

They said McKinsey helped complicate a well-established linear chain of command, slowing down projects and turnaround time, and applied cookie-cutter solutions to agencies with unique cultures. In the process, numerous employees have become dismayed, saying the efforts have at best been a waste of money and, at worst, made their jobs more difficult. It’s unclear how much McKinsey was paid in that stretch, but according to news reports and people familiar with the effort, the total exceeded $10 million.

Consulting to U.S.-government agencies on a grand scale grew out of the perceived successes in World War II of civilian analysts who were embedded in military organizations. To the extent that the civilian analysts were actually helpful*, it was because they focused on specific operations, such as methods of searching for enemy submarines. In such cases, the government client can benefit from an outside look at the effectiveness of the operations, the identification of failure points, and suggestions for changes in weapons and tactics that are informed by first-hand observation of military operations.

Beyond that, however, outsiders are of little help, and may be a hindrance, as in the case cited above. Outsiders can’t really grasp the dynamics and unwritten rules of organizational cultures that embed decades of learning and adaptation.

The consulting game is now (and has been for decades) an invasive species. It is a perverse outgrowth of operations research as it was developed in World War II. Too much of a “good thing” is a bad thing — as I saw for myself many years ago.
* The success of the U.S. Navy’s antisubmarine warfare (ASW) operations had been for decades ascribed to the pioneering civilian organization known as the Antisubmarine Warfare Operations Research Group (ASWORG). However, with the publication of The Ultra Secret in 1974 (and subsequent revelations), it became known that code-breaking may have contributed greatly to the success of various operations against enemy forces, including ASW.

Beware of Outliers

An outlier, in the field of operations research, is an unusual event that can distract the observer from the normal run of events. Because an outlier is an unusual event, it is more memorable than events of the same kind that occur more frequently.

Take the case of the late Bill Buckner, who was a steady first baseman and good hitter for many years. What is Buckner remembered for? Not his many accomplishments in a long career. No, he is remembered for a fielding error that cost his team (the accursed Red Sox) game 6 of the 1986 World Series, a game that would have clinched the series for the Red Sox had they won it. But they lost it, and went on to lose the deciding 7th game.

Buckner’s bobble was an outlier that erased from the memories of most fans his prowess as a player and the many occasions on which he helped his team to victory. He is remembered, if at all, for the error — though he erred on less than 1/10 of 1 percent of more than 15,000 fielding plays during his career.

I am beginning to think of America’s decisive victory in World War II as an outlier.

To be continued.

The “Candle Problem” and Its Ilk

Among the many topics that I address in “The Balderdash Chronicles” is the management “science” fad; in particular, as described by Graham Morehead,

[t]he Candle Problem [which] was first presented by Karl Duncker. Published posthumously in 1945, “On problem solving” describes how Duncker provided subjects with a candle, some matches, and a box of tacks. He told each subject to affix the candle to a cork board wall in such a way that when lit, the candle won’t drip wax on the table below (see figure at right). Can you think of the answer?

The only answer that really works is this: 1. Dump the tacks out of the box. 2. Tack the box to the wall. 3. Light the candle and affix it atop the box as if it were a candle-holder. Incidentally, the problem was much easier to solve if the tacks weren’t in the box at the beginning. When the tacks were in the box the participant saw it only as a tack-box, not something they could use to solve the problem. This phenomenon is called “functional fixedness”.

The implication of which, according to Morehead, is (supposedly) this:

When your employees have to do something straightforward, like pressing a button or manning one stage in an assembly line, financial incentives work. It’s a small effect, but they do work. Simple jobs are like the simple candle problem.

However, if your people must do something that requires any creative or critical thinking, financial incentives hurt. The In-Box Candle Problem is the stereotypical problem that requires you to think “Out of the Box,” (you knew that was coming, didn’t you?). Whenever people must think out of the box, offering them a monetary carrot will keep them in that box.

A monetary reward will help your employees focus. That’s the point. When you’re focused you are less able to think laterally. You become dumber. This is not the kind of thing we want if we expect to solve the problems that face us in the 21st century.

My take (in part):

[T]he Candle Problem is unlike any work situation that I can think of. Tasks requiring creativity are not performed under deadlines of a few minutes; tasks requiring creativity are (usually) assigned to persons who have demonstrated a creative flair, not to randomly picked subjects; most work, even in this day, involves the routine application of protocols and tools that were designed to produce a uniform result of acceptable quality; it is the design of protocols and tools that requires creativity, and that kind of work is not done under the kind of artificial constraints found in the Candle Problem.

Now comes James Thompson, with this general conclusion about such exercises:

One important conclusion I draw from this entire paper [by Gerd Gigerenzer, here] is that the logical puzzles enjoyed by Kahneman, Tversky, Stanovich and others are rightly rejected by psychometricians as usually being poor indicators of real ability. They fail because they are designed to lead people up the garden path, and depend on idiosyncratic interpretations.

Told you so.

Is Race a Social Construct?

Of course it is. Science, generally, is a social construct. Everything that human beings do and “know” is a social construct, in that human behavior and “knowledge” are products of acculturation and the irrepressible urge to name and classify things.

Whence that urge? You might say that it’s genetically based. But our genetic inheritance is inextricably twined with social constructs — preferences for, say, muscular men and curvaceous women, and so on. What we are depends not only on our genes but also on the learned preferences that shape the gene pool. There’s no way to sort them out, despite claims (from the left) that human beings are blank slates and claims (from loony libertarians) that genes count for everything.

All of that, however true it may be (and I believe it to be true), is a recipe for solipsism, nay, for Humean chaos. The only way out of this morass, as I see it, is to admit that human beings (or most of them) possess a life-urge that requires them to make distinctions: friend vs. enemy, workable from non-workable ways of building things, etc.

Race is among those useful distinctions, for reasons that will be obvious to anyone who has actually observed the behaviors of groups that can be sorted along racial lines, instead of condescending to “tolerate” or “celebrate” differences (a luxury that is easily indulged in the safety of ivory towers and gated communities). Those lines may be somewhat arbitrary, for, as many have noted, there are more genetic differences within a racial classification than between racial classifications. Which is a fatuous observation, in that there are more genetic differences among, say, the apes than there are between what are called apes and what are called human beings.

In other words, the usual “scientific” objection to the concept of race is based on a false premise, namely, that all genetic differences are equal. If one believes that, one should be just as willing to live among apes as among human beings. But human beings do not choose to live among apes (though a few human beings do choose to observe them at close quarters). Similarly, human beings — for the most part — do not choose to live among people from whom they are racially distinct, and therefore (usually) socially distinct.

Why? Because under the skin we are not all alike. Under the skin there are social (cultural) differences that are causally correlated with genetic differences.

Race may be a social construct, but — like engineering — it is a useful one.

“Science Is Real”

Yes, it is. But the real part of science is the never-ending search for truth about the “natural” world. Scientific “knowledge” is always provisional.

The flower children — young and old — who display “science is real” posters have it exactly backwards. They believe that science consists of provisional knowledge, and when that “knowledge” matches their prejudices the search for truth is at an end.

Provisional knowledge is valuable in some instances — building bridges and airplanes, for example. But bridges and airplanes are (or should be) built by allowing for error, and a lot of it.

The Pretense of Knowledge

Anyone with more than a passing knowledge of science and disciplines that pretend to be scientific (e.g., economics) will appreciate the shallowness and inaccuracy of humans’ “knowledge” of nature and human nature — from the farthest galaxies to our own psyches. Anyone, that is, but a pretentious “scientist” or an over-educated ignoramus.

Not with a Bang

This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.

T. S. Eliot, “The Hollow Men”

It’s also the way that America is ending. Yes, there are verbal fireworks aplenty, but there will not be a “hot” civil war. The country that my parents and grandparents knew and loved — the country of my youth in the 1940s and 1950s — is just fading away.

This would not necessarily be a bad thing if the remaking of America were a gradual, voluntary process, leading to time-tested changes for the better. But that isn’t the case. The very soul of America has been and is being ripped out by the government that was meant to protect that soul, and by movements that government not only tolerates but fosters.

Before I go further, I should explain what I mean by America, which is not the same thing as the geopolitical entity known as the United States, though the two were tightly linked for a long time.

America was a relatively homogeneous cultural order that fostered mutual respect, mutual trust, and mutual forbearance — or far more of those things than one might expect in a nation as populous and far-flung as the United States. Those things — conjoined with a Constitution that has been under assault since the New Deal — made America a land of liberty. That is to say, they fostered real liberty, which isn’t an unattainable state of bliss but an actual (and imperfect) condition of peaceful, willing coexistence and its concomitant: beneficially cooperative behavior.

The attainment of this condition depends on social comity, which depends in turn on (a) genetic kinship and (b) the inculcation and enforcement of social norms, especially the norms that define harm.

All of that is going by the boards because the emerging cultural order is almost diametrically opposite that which prevailed in America. The new dispensation includes:

  • casual sex
  • serial cohabitation
  • subsidized illegitimacy
  • abortion on demand
  • easy divorce
  • legions of non-mothering mothers
  • concerted (and deluded) efforts to defeminize females and to neuter or feminize males
  • gender-confusion as a burgeoning norm
  • “alternative lifestyles” that foster disease, promiscuity, and familial instability
  • normalization of drug abuse
  • forced association (with accompanying destruction of property and employment rights)
  • suppression of religion
  • rampant obscenity
  • identity politics on steroids
  • illegal immigration as a “right”
  • “free stuff” from government (Social Security was meant to be self-supporting)
  • America as the enemy
  • all of this (and more) as gospel to influential elites whose own lives are modeled mostly on old America.

As the culture has rotted, so have the ties that bound America.

The rot has occurred to the accompaniment of cacophony. Cultural coarsening begets loud and inconsiderate vulgarity. Worse than that is the cluttering of the ether with the vehement and belligerent propaganda, most of it aimed at taking down America.

The advocates of the new dispensation haven’t quite finished the job of dismantling America. But that day isn’t far off. Complete victory for the enemies of America is only a few election cycles away. The squishy center of the electorate — as is its wont — will swing back toward the Democrat Party. With a Democrat in the White House, a Democrat-controlled Congress, and a few party switches in the Supreme Court (or the packing of it), the dogmas of the anti-American culture will become the law of the land; for example:

Billions and trillions of dollars will be wasted on various “green” projects, including but far from limited to the complete replacement of fossil fuels by “renewables”, with the resulting impoverishment of most Americans (except for comfortable elites who press such policies).

It will be illegal to criticize, even by implication, such things as abortion, illegal immigration, same-sex marriage, transgenderism, anthropogenic global warming, or the confiscation of firearms. These cherished beliefs will be mandated for school and college curricula, and enforced by huge fines and draconian prison sentences (sometimes in the guise of “re-education”).

Any hint of Christianity and Judaism will be barred from public discourse, and similarly punished. Islam will be held up as a model of unity and tolerance.

Reverse discrimination in favor of females, blacks, Hispanics, gender-confused persons, and other “protected” groups will be required and enforced with a vengeance. But “protections” will not apply to members of such groups who are suspected of harboring libertarian or conservative impulses.

Sexual misconduct (as defined by the “victim”) will become a crime, and any male person may be found guilty of it on the uncorroborated testimony of any female who claims to have been the victim of an unwanted glance, touch (even if accidental), innuendo (as perceived by the victim), etc.

There will be parallel treatment of the “crimes” of racism, anti-Islamism, nativism, and genderism.

All health care in the United States will be subject to review by a national, single-payer agency of the central government. Private care will be forbidden, though ready access to doctors, treatments, and medications will be provided for high officials and other favored persons. The resulting health-care catastrophe that befalls most of the populace (like that of the UK) will be shrugged off as a residual effect of “capitalist” health care.

The regulatory regime will rebound with a vengeance, contaminating every corner of American life and regimenting all businesses except those daring to operate in an underground economy. The quality and variety of products and services will decline as their real prices rise as a fraction of incomes.

The dire economic effects of single-payer health care and regulation will be compounded by massive increases in other kinds of government spending (defense excepted). The real rate of economic growth will approach zero.

The United States will maintain token armed forces, mainly for the purpose of suppressing domestic uprisings. Given its economically destructive independence from foreign oil and its depressed economy, it will become a simulacrum of the USSR and Mao’s China — and not a rival to the new superpowers, Russia and China, which will largely ignore it as long as it doesn’t interfere in their pillaging of their respective spheres of influence. A policy of non-interference (i.e., tacit collusion) will be the order of the era in Washington.

Though it would hardly be necessary to rig elections in favor of Democrats, given the flood of illegal immigrants who will pour into the country and enjoy voting rights, a way will be found to do just that. The most likely method will be election laws requiring candidates to pass ideological purity tests by swearing fealty to the “law of the land” (i.e., abortion, unfettered immigration, same-sex marriage, freedom of gender choice for children, etc., etc., etc.). Those who fail such a test will be barred from holding any kind of public office, no matter how insignificant.

Are my fears exaggerated? I don’t think so, given what has happened in recent decades and the cultural revolutionaries’ tightening grip on the Democrat Party. What I have sketched out can easily happen within a decade after Democrats seize total control of the central government.

Will the defenders of liberty rally to keep it from happening? Perhaps, but I fear that they will not have a lot of popular support, for three reasons:

First, there is the problem of asymmetrical ideological warfare, which favors the party that says “nice” things and promises “free” things.

Second, what has happened thus far — mainly since the 1960s — has happened slowly enough that it seems “natural” to too many Americans. They are like fish in water who cannot grasp the idea of life in a different medium.

Third, although change for the worse has accelerated in recent years, it has occurred mainly in forums that seem inconsequential to most Americans, for example, in academic fights about free speech, in the politically correct speeches of Hollywood stars, and in culture wars that are conducted mainly in the blogosphere. The unisex-bathroom issue seems to have faded as quickly as it arose, mainly because it really affects so few people. The latest gun-control mania may well subside — though it has reached new heights of hysteria — but it is only one battle in the broader war being waged by the left. And most Americans lack the political and historical knowledge to understand that there really is a civil war underway — just not a “hot” one.

Is a reversal possible? Possible, yes, but unlikely. The rot is too deeply entrenched. Public schools and universities are cesspools of anti-Americanism. The affluent elites of the information-entertainment-media-academic complex are in the saddle. Republican politicians, for the most part, are of no help because they are more interested in preserving their comfortable sinecures than in defending America or the Constitution.

On that note, I will take a break from blogging — perhaps forever. I urge you to read one of my early posts, “Reveries”, for a taste of what America means to me. As for my blogging legacy, please see “A Summing Up”, which links to dozens of posts and pages that amplify and support this post.

Il faut cultiver notre jardin. (“We must cultivate our garden.”)

Voltaire, Candide

Related reading:

Michael Anton, “What We Still Have to Lose”, American Greatness, February 10, 2019

Rod Dreher, “Benedict Option FAQ”, The American Conservative, October 6, 2015

Roger Kimball, “Shall We Defend Our Common History?”, Imprimis, February 2019

Joel Kotkin, “Today’s Cultural Engineers”, newgeography, January 26, 2019

Daniel Oliver, “Where Has All the Culture Gone?”, The Federalist, February 8, 2019

Malcolm Pollack, “On Civil War”, Motus Mentis, March 7, 2019

Fred Reed, “The White Man’s Burden: Reflections on the Custodial State”, Fred on Everything, January 17, 2019

Gilbert T. Sewall, “The Diminishing Authority of the Bourgeois Culture”, The American Conservative, February 4, 2019

Bob Unger, “Requiem for America”, The New American, January 24, 2019