More about Modeling and Science

This post is based on a paper that I wrote 38 years ago. The subject then was the bankruptcy of warfare models, which shows through in parts of this post. I am trying here to generalize the message to encompass all complex, synthetic models (defined below). For ease of future reference, I have created a page that includes links to this post and the many that are listed at the bottom.

THE METAPHYSICS OF MODELING

Alfred North Whitehead said in Science and the Modern World (1925) that “the certainty of mathematics depends on its complete abstract generality” (p. 25). The attraction of mathematical models is their apparent certainty. But a model is only a representation of reality, and its fidelity to reality must be tested rather than assumed. And even if a model seems faithful to reality, its predictive power is another thing altogether. We are living in an era when models that purport to reflect reality are given credence despite their lack of predictive power. Ironically, those who dare point this out are called anti-scientific and science-deniers.

To begin at the beginning, I am concerned here with what I will call complex, synthetic models of abstract variables like GDP and “global” temperature. These are open-ended, mathematical models that estimate changes in the variable of interest by attempting to account for many contributing factors (parameters) and describing mathematically the interactions between those factors. I call such models complex because they have many “moving parts” — dozens or hundreds of sub-models — each of which is a model in itself. I call them synthetic because the estimated changes in the variables of interest depend greatly on the selection of sub-models, the depictions of their interactions, and the values assigned to the constituent parameters of the sub-models. That is to say, compared with a model of the human circulatory system or an internal combustion engine, a synthetic model of GDP or “global” temperature rests on incomplete knowledge of the components of the systems in question and the interactions among those components.

Modelers seem ignorant of or unwilling to acknowledge what should be a basic tenet of scientific inquiry: the complete dependence of logical systems (such as mathematical models) on the underlying axioms (assumptions) of those systems. Kurt Gödel addressed this dependence in his incompleteness theorems:

Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic….

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.

There is the view that Gödel’s theorems aren’t applicable in fields outside of mathematical logic. But any quest for certainty about the physical world necessarily uses mathematical logic (which includes statistics).

This doesn’t mean that the results of computational exercises are useless. It simply means that they are only as good as the assumptions that underlie them; for example, assumptions about relationships between parameters, assumptions about the values of the parameters, and assumptions as to whether the correct parameters have been chosen (and properly defined) in the first place.

There is nothing new in that, certainly nothing that requires Gödel’s theorems by way of proof. It has long been understood that a logical argument may be valid — the conclusion follows from the premises — but untrue if the premises (axioms) are untrue. But it bears repeating — and repeating.

REAL MODELERS AT WORK

There have been mathematical models of one kind and another for centuries, but formal models weren’t used much outside the “hard sciences” until the development of microeconomic theory in the 19th century. Then came F.W. Lanchester, who during World War I devised what became known as Lanchester’s laws (or Lanchester’s equations), which are

mathematical formulae for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two [opponents’] strengths A and B as a function of time, with the function depending only on A and B.

Lanchester’s equations are nothing more than abstractions that must be given a semblance of reality by the user, who is required to make myriad assumptions (explicit and implicit) about the factors that determine the “strengths” of A and B, including but not limited to the relative killing power of various weapons, the effectiveness of opponents’ defenses, the importance of the speed and range of movement of various weapons, intelligence about the location of enemy forces, and commanders’ decisions about when, where, and how to engage the enemy. It should be evident that the predictive value of the equations, when thus fleshed out, is limited to small, discrete engagements, such as brief bouts of aerial combat between two (or a few) opposing aircraft. Alternatively — and in practice — the values are selected so as to yield results that mirror what actually happened (in the “replication” of a historical battle) or what “should” happen (given the preferences of the analyst’s client).
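By way of illustration, the square-law form of Lanchester's equations (dA/dt = −βB, dB/dt = −αA) can be integrated in a few lines. The killing rates and force sizes below are my own arbitrary assumptions, chosen only to show how the abstraction works, not to represent any real engagement:

```python
# A minimal sketch of Lanchester's square law: dA/dt = -beta*B, dB/dt = -alpha*A.
# The killing rates (alpha, beta) and force sizes are illustrative assumptions,
# not historical values.

def lanchester(a, b, alpha, beta, dt=0.01, steps=10_000):
    """Euler integration of the square law until one side is annihilated."""
    for _ in range(steps):
        if a <= 0 or b <= 0:
            break
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0.0), max(b, 0.0)

# Equal per-unit effectiveness, but B starts with twice the numbers. The square
# law predicts B wins, with sqrt(2000**2 - 1000**2) ~ 1732 units surviving.
a_final, b_final = lanchester(a=1000, b=2000, alpha=0.01, beta=0.01)
print(f"A: {a_final:.0f}, B: {b_final:.0f}")
```

Even this toy version shows the model's character: everything of interest is buried in the assumed values of α and β.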

More complex (and realistic) mathematical modeling (also known as operations research) had seen limited use in industry and government before World War II. Faith in the explanatory power of mathematical models was burnished by their use during the war, where such models seemed to be of aid in the design of more effective tactics and weapons.

But the foundation of that success wasn’t the mathematical character of the models. Rather, it was the fact that the models were tested against reality. Philip M. Morse and George E. Kimball put it well in Methods of Operations Research (1946):

Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. [Op cit., p. 10]

A mathematical model doesn’t represent scientific knowledge unless its predictions can be and have been tested. Even then, a valid model can represent only a narrow slice of reality. The expansion of a model beyond that narrow slice requires the addition of parameters whose interactions may not be well understood and whose values will be uncertain.

Morse and Kimball accordingly urged “hemibel thinking”:

Having obtained the constants of the operations under study … we compare the value of the constants obtained in actual operations with the optimum theoretical value, if this can be computed. If the actual value is within a hemibel ( … a factor of 3) of the theoretical value, then it is extremely unlikely that any improvement in the details of the operation will result in significant improvement. [When] there is a wide gap between the actual and theoretical results … a hint as to the possible means of improvement can usually be obtained by a crude sorting of the operational data to see whether changes in personnel, equipment, or tactics produce a significant change in the constants. [Op cit., p. 38]

Should we really attach little significance to differences of less than a hemibel? Consider a five-parameter model involving the conditional probabilities of detecting, shooting at, hitting, and killing an opponent — and surviving, in the first place, to do any of these things. Such a model can easily yield a cumulative error of a hemibel (or greater), given a twenty-five percent error in the value of each parameter. (Mathematically, 1.25^5 ≈ 3.05; alternatively, 0.75^5 ≈ 0.24, or about one-fourth.)
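The arithmetic can be checked directly; the five parameters and the twenty-five percent error are the figures given above:

```python
# Cumulative effect of a twenty-five percent error in each of five multiplied
# parameters (e.g., the conditional probabilities of surviving, detecting,
# shooting at, hitting, and killing).
n_params = 5
error = 0.25

high = (1 + error) ** n_params  # every error pushes the estimate upward
low = (1 - error) ** n_params   # every error pushes the estimate downward

# Morse and Kimball's hemibel: a factor of about 3.
print(f"high: {high:.2f}x, low: {low:.2f}x")
```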

ANTI-SCIENTIFIC MODELING

What does this say about complex, synthetic models such as those of economic activity or “climate change”? Any such model rests on the modeler’s assumptions as to the parameters that should be included, their values (and the degree of uncertainty surrounding them), and the interactions among them. The interactions must be modeled based on further assumptions. And so assumptions and uncertainties — and errors — multiply apace.

But the prideful modeler (I have yet to meet a humble one) will claim validity if his model has been fine-tuned to replicate the past (e.g., changes in GDP, “global” temperature anomalies). But the model is useless unless it predicts the future consistently and with great accuracy, where “great” means accurately enough to validly represent the effects of public-policy choices (e.g., setting the federal funds rate, investing in CO2 abatement technology).

Macroeconomic Modeling: A Case Study

In macroeconomics, for example, there is Professor Ray Fair, who teaches macroeconomic theory, econometrics, and macroeconometric modeling at Yale University. He has been plying his trade at prestigious universities since 1968, first at Princeton, then at MIT, and since 1974 at Yale. Professor Fair has since 1983 been forecasting changes in real GDP — not decades ahead, just four quarters (one year) ahead. He has made 141 such forecasts, the earliest of which covers the four quarters ending with the second quarter of 1984, and the most recent of which covers the four quarters ending with the second quarter of 2019. The forecasts are based on a model that Professor Fair has revised many times over the years; the current model is here, and his forecasting track record is here. How has he done? Here’s how:

1. The median absolute error of his forecasts is 31 percent.

2. The mean absolute error of his forecasts is 69 percent.

3. His forecasts are rather systematically biased: too high when real, four-quarter GDP growth is less than 3 percent; too low when real, four-quarter GDP growth is greater than 3 percent.

4. His forecasts have grown generally worse — not better — with time. Recent forecasts are better, but still far from the mark.
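For clarity about what these error statistics measure, here is how they would be computed from predicted and actual four-quarter growth rates. The values below are hypothetical stand-ins, not Prof. Fair's data:

```python
# How median and mean absolute errors of growth forecasts are computed.
# The predicted/actual values below are hypothetical stand-ins, NOT Prof.
# Fair's published figures.
from statistics import mean, median

predicted = [3.1, 2.4, 4.0, 1.8, 2.9]  # hypothetical forecasts of % growth
actual    = [2.0, 2.6, 2.5, 3.0, 2.8]  # hypothetical outcomes

# Absolute error expressed as a percentage of the actual value.
abs_pct_errors = [abs(p - a) / abs(a) * 100 for p, a in zip(predicted, actual)]

print(f"median absolute error: {median(abs_pct_errors):.0f}%")
print(f"mean absolute error:   {mean(abs_pct_errors):.1f}%")
```

Note how a few badly missed quarters pull the mean well away from the median — the same pattern visible in Fair's record.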

Thus:

[Graph omitted. This and the next two graphs were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing.]

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

[Graph omitted.]

It fails a crucial test, in that it doesn’t reflect the downward trend in economic growth:

[Graph omitted.]

General Circulation Models (GCMs) and “Climate Change”

As for climate models, Dr. Tim Ball writes about a

fascinating 2006 paper by Essex, McKitrick, and Andresen that asked, “Does a Global Temperature Exist?” Their introduction sets the scene:

It arises from projecting a sampling of the fluctuating temperature field of the Earth onto a single number (e.g. [3], [4]) at discrete monthly or annual intervals. Proponents claim that this statistic represents a measurement of the annual global temperature to an accuracy of ±0.05°C (see [5]). Moreover, they presume that small changes in it, up or down, have direct and unequivocal physical meaning.

The word “sampling” is important because, statistically, a sample has to be representative of a population. There is no way that a sampling of the “fluctuating temperature field of the Earth” is possible….

… The reality is we have fewer stations now than in 1960 as NASA GISS explain (Figure 1a, # of stations and 1b, Coverage)….

Not only that, but the accuracy is terrible. US stations are supposedly the best in the world, but as Anthony Watts’s project showed, only 7.9% of them achieve better than a 1°C accuracy. Look at the quote above. It says the temperature statistic is accurate to ±0.05°C. In fact, for most of the 406 years in which instrumental measures of temperature have been available (since 1612), they were incapable of yielding measurements better than 0.5°C.

The coverage numbers (1b) are meaningless because there are only weather stations for about 15% of the Earth’s surface. There are virtually no stations for

  • 70% of the world that is oceans,
  • 20% of the land surface that are mountains,
  • 20% of the land surface that is forest,
  • 19% of the land surface that is desert and,
  • 19% of the land surface that is grassland.

The result is we have inadequate measures in terms of the equipment and how it fits the historic record, combined with a wholly inadequate spatial sample. The inadequacies are acknowledged by the creation of the claim by NASA GISS and all promoters of anthropogenic global warming (AGW) that a station is representative of a 1200 km radius region.

I plotted an illustrative example on a map of North America (Figure 2).

clip_image006

Figure 2

Notice that the claim for the station in eastern North America includes the subarctic climate of southern James Bay and the subtropical climate of the Carolinas.

However, it doesn’t end there because this is only a meaningless temperature measured in a Stevenson Screen between 1.25 m and 2 m above the surface….

The Stevenson Screen data [are] inadequate for any meaningful analysis or as the basis of a mathematical computer model in this one sliver of the atmosphere, but there [are] even less [data] as you go down or up. The models create a surface grid that becomes cubes as you move up. The number of squares in the grid varies with the naïve belief that a smaller grid improves the models. It would if there [were] adequate data, but that doesn’t exist. The number of cubes is determined by the number of layers used. Again, theoretically, more layers would yield better results, but it doesn’t matter because there are virtually no spatial or temporal data….

So far, I have talked about the inadequacy of the temperature measurements in light of the two- and three-dimensional complexities of the atmosphere and oceans. However, one source identifies the most important variables for the models used as the basis for energy and environmental policies across the world.

Sophisticated models, like Coupled General Circulation Models, combine many processes to portray the entire climate system. The most important components of these models are the atmosphere (including air temperature, moisture and precipitation levels, and storms); the oceans (measurements such as ocean temperature, salinity levels, and circulation patterns); terrestrial processes (including carbon absorption, forests, and storage of soil moisture); and the cryosphere (both sea ice and glaciers on land). A successful climate model must not only accurately represent all of these individual components, but also show how they interact with each other.

The last line is critical and yet impossible. The temperature data [are] the best we have, and yet [they are] completely inadequate in every way. Pick any of the variables listed, and you find there [are] virtually no data. The answer to the question, “what are we really measuring,” is virtually nothing, and what we measure is not relevant to anything related to the dynamics of the atmosphere or oceans.

I am especially struck by Dr. Ball’s observation that the surface-temperature record applies to about 15 percent of Earth’s surface. Not only that, but as suggested by Dr. Ball’s figure 2, that 15 percent is poorly sampled.
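A back-of-the-envelope calculation shows what the 1200-km convention implies. The figures here are my own rough assumptions (a spherical Earth and a flat-disk station “footprint”), offered only to give a sense of scale:

```python
# What does the 1200-km-radius convention imply? Back-of-the-envelope,
# treating each station's "footprint" as a flat disk on a spherical Earth.
import math

earth_surface_km2 = 4 * math.pi * 6371**2  # ~510 million km^2
station_footprint_km2 = math.pi * 1200**2  # ~4.5 million km^2 per station

share = station_footprint_km2 / earth_surface_km2
stations_to_blanket = earth_surface_km2 / station_footprint_km2

print(f"one station 'represents' {share:.1%} of Earth's surface")
print(f"~{stations_to_blanket:.0f} perfectly spaced stations would cover the globe")
```

Each station is thus asked to stand in for nearly one percent of the planet's surface — an area spanning, as Dr. Ball's example shows, wholly different climates.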

And yet the proponents of CO2-forced “climate change” rely heavily on that flawed temperature record because it is the only one that goes back far enough to “prove” the modelers’ underlying assumption, namely, that it is anthropogenic CO2 emissions which have caused the rise in “global” temperatures. See, for example, Dr. Roy Spencer’s “The Faith Component of Global Warming Predictions”, wherein Dr. Spencer points out that the modelers

have only demonstrated what they assumed from the outset. It is circular reasoning. A tautology. Evidence that nature also causes global energy imbalances is abundant: e.g., the strong warming before the 1940s; the Little Ice Age; the Medieval Warm Period. This is why many climate scientists try to purge these events from the historical record, to make it look like only humans can cause climate change.

In fact the models deal in temperature anomalies, that is, departures from a 30-year average. The anomalies — which range from -1.41 to +1.68 degrees C — are so small relative to the errors and uncertainties inherent in the compilation, estimation, and model-driven adjustments of the temperature record, that they must fail Morse and Kimball’s hemibel test. (The model-driven adjustments are, as Dr. Spencer suggests, downward adjustments of historical temperature data for consistency with the models which “prove” that CO2 emissions induce a certain rate of warming. More circular reasoning.)

They also fail, and fail miserably, the acid test of predicting future temperatures with accuracy. This failure has been pointed out many times. Dr. John Christy, for example, has testified to that effect before Congress (e.g., this briefing). Defenders of the “climate change” faith have attacked Dr. Christy’s methods and findings, but the rebuttals to one such attack merely underscore the validity of Dr. Christy’s work.

This is from “Manufacturing Alarm: Dana Nuccitelli’s Critique of John Christy’s Climate Science Testimony”, by Marlo Lewis Jr.:

Christy’s testimony argues that the state-of-the-art models informing agency analyses of climate change “have a strong tendency to over-warm the atmosphere relative to actual observations.” To illustrate the point, Christy provides a chart comparing 102 climate model simulations of temperature change in the global mid-troposphere to observations from two independent satellite datasets and four independent weather balloon data sets….

To sum up, Christy presents an honest, apples-to-apples comparison of modeled and observed temperatures in the bulk atmosphere (0-50,000 feet). Climate models significantly overshoot observations in the lower troposphere, not just in the layer above it. Christy is not “manufacturing doubt” about the accuracy of climate models. Rather, Nuccitelli is manufacturing alarm by denying the models’ growing inconsistency with the real world.

And this is from Christopher Monckton of Brenchley’s “The Guardian’s Dana Nuccitelli Uses Pseudo-Science to Libel Dr. John Christy”:

One Dana Nuccitelli, a co-author of the 2013 paper that found 0.5% consensus to the effect that recent global warming was mostly manmade and reported it as 97.1%, leading Queensland police to inform a Brisbane citizen who had complained to them that a “deception” had been perpetrated, has published an article in the British newspaper The Guardian making numerous inaccurate assertions calculated to libel Dr John Christy of the University of Alabama in connection with his now-famous chart showing the ever-growing discrepancy between models’ wild predictions and the slow, harmless, unexciting rise in global temperature since 1979….

… In fact, as Mr Nuccitelli knows full well (for his own data file of 11,944 climate science papers shows it), the “consensus” is only 0.5%. But that is by the bye: the main point here is that it is the trends on the predictions compared with those on the observational data that matter, and, on all 73 models, the trends are higher than those on the real-world data….

[T]he temperature profile [of the oceans] at different strata shows little or no warming at the surface and an increasing warming rate with depth, raising the possibility that, contrary to Mr Nuccitelli’s theory that the atmosphere is warming the ocean, the ocean is instead being warmed from below, perhaps by some increase in the largely unmonitored magmatic intrusions into the abyssal strata from the 3.5 million subsea volcanoes and vents most of which Man has never visited or studied, particularly at the mid-ocean tectonic divergence boundaries, notably the highly active boundary in the eastern equatorial Pacific. [That possibility is among many which aren’t considered by GCMs.]

Mr Nuccitelli’s scientifically illiterate attempts to challenge Dr Christy’s graph are accordingly misconceived, inaccurate and misleading.

I have omitted the bulk of both pieces because this post is already longer than needed to make my point. I urge you to follow the links and read the pieces for yourself.

Finally, I must quote a brief but telling passage from a post by Pat Frank, “Why Roy Spencer’s Criticism is Wrong”:

[H]ere’s NASA on clouds and resolution: “A doubling in atmospheric carbon dioxide (CO2), predicted to take place in the next 50 to 100 years, is expected to change the radiation balance at the surface by only about 2 percent. … If a 2 percent change is that important, then a climate model to be useful must be accurate to something like 0.25%. Thus today’s models must be improved by about a hundredfold in accuracy, a very challenging task.”

Frank’s very long post substantiates what I say here about the errors and uncertainties in GCMs — and the multiplicative effect of those errors and uncertainties. I urge you to read it. It is telling that “climate skeptics” like Spencer and Frank will argue openly, whereas “true believers” work clandestinely to present a united front to the public. It’s science vs. anti-science.

CONCLUSION

In the end, complex, synthetic models can be defended only by resorting to the claim that they are “scientific”, which is a farcical claim when models consistently fail to yield accurate predictions. It is a claim based on a need to believe in the models — or, rather, what they purport to prove. It is, in other words, false certainty, which is the enemy of truth.

Newton said it best:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

Just as Newton’s self-doubt was not an attack on science, neither have I essayed an attack on science or modeling — only on the abuses of both that are too often found in the company of complex, synthetic models. It is too easily forgotten that the practice of science (of which modeling is a tool) is in fact an art, not a science. With this art we may portray vividly the few pebbles and shells of truth that we have grasped; we can but vaguely sketch the ocean of truth whose horizons are beyond our reach.


Related pages and posts:

Climate Change
Modeling and Science

Modeling Is Not Science
Modeling, Science, and Physics Envy
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
The Limits of Science (II)
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Rationalism, Empiricism, and Scientific Knowledge
Ty Cobb and the State of Science
Is Science Self-Correcting?
Mathematical Economics
Words Fail Us
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Hurricane Hysteria
Deduction, Induction, and Knowledge
A (Long) Footnote about Science
The Balderdash Chronicles
Analytical and Scientific Arrogance
The Pretence of Knowledge
Wildfires and “Climate Change”
Why I Don’t Believe in “Climate Change”
Modeling Is Not Science: Another Demonstration
Ad-Hoc Hypothesizing and Data Mining
Analysis vs. Reality

Understanding the “Resistance”: The Enemies Within

There have been, since the 1960s, significant changes in the culture of America. Those changes have been led by a complex consisting of the management of big corporations (especially but not exclusively Big Tech), the crypto-authoritarians of academia, the “news” and “entertainment” media, and affluent adherents of “hip” urban culture on the two Left Coasts. The changes include but are far from limited to the breakdown of long-standing, civilizing, and uniting norms. These are notably (but far from exclusively) traditional marriage and family formation, religious observance, self-reliance, gender identity, respect for the law (including immigration law), pride in America’s history, and adherence to the rules of politics even when you are on the losing end of an election.

Most of the changes haven’t occurred through cultural diffusion, trial and error, and general acceptance of what seems to be a change for the better. No, most of the changes have been foisted on the public at large through legislative, executive, and judicial “activism” by the disciples of radical guru Saul Alinsky (e.g., Barack Obama), who got their start in the anti-war riots of the 1960s and 1970s. They and their successors then cloaked themselves in respectability (e.g., by obtaining Ivy League degrees) to infiltrate and subvert the established order.

How were those disciples bred? Through the public-education system, the universities, and the mass media. The upside-down norms of the new order became gospel to the disciples. Thus the Constitution is bad, free markets are bad, freedom of association (for thee) is bad, self-defense (for thee) is bad, defense of the country must not get in the way of “social justice”, socialism and socialized medicine (for thee) are good, a long list of “victims” of “society” must be elevated, compensated, and celebrated regardless of their criminality and lack of ability.

And the disciples of the new dispensation must do whatever it takes to achieve their aims. Even if it means tearing up long-accepted rules, from those inculcated through religion to those written in the Constitution. Even if it means aggression beyond strident expression of views to the suppression of unwelcome views, “outing” those who don’t subscribe to those views, and assaulting perceived enemies — physically and verbally.

All of this is the product of a no-longer-stealthy revolution fomented by a vast, left-wing conspiracy. One aspect of this movement has been the unrelenting attempt to subvert the 2016 election and reverse its outcome. Thus the fraud known as Spygate (a.k.a. Russiagate) and the renewed drive to impeach Trump, engineered with the help of a former Biden staffer.

Why such a hysterical and persistent reaction to the outcome of the 2016 election? (The morally corrupt, all-out effort to block the confirmation of Justice Kavanaugh was a loud echo of that reaction.) Because the election of 2016 had promised to be the election to end all elections — the election that might have all but assured the ascendancy of the left in America, with the Supreme Court as a strategic high ground.

But Trump — through his budget priorities, deregulatory efforts, and selection of constitutionalist judges — has made a good start on undoing Obama’s great leap forward in the left’s century-long march toward its vision of Utopia. The left cannot allow this to continue, for if Trump succeeds (and a second term might cement his success), its vile work could be undone.

There has been, in short, a secession — not of States (though some of them act that way), but of a broad and powerful alliance, many of whose members serve in government. They constitute a foreign presence in the midst of “real Americans”.

They are barbarians inside the gate, and must be thought of as enemies.

Expressing Certainty (or Uncertainty)

I have waged war on the misuse of probability for a long time. As I say in the post at the link:

A probability is a statement about a very large number of like events, each of which has an unpredictable (random) outcome. Probability, properly understood, says nothing about the outcome of an individual event. It certainly says nothing about what will happen next.

From a later post:

It is a logical fallacy to ascribe a probability to a single event. A probability represents the observed or computed average value of a very large number of like events. A single event cannot possess that average value. A single event has a finite number of discrete and mutually exclusive outcomes. Those outcomes will not “average out” — only one of them will obtain, like Schrödinger’s cat.

To say that the outcomes will average out — which is what a probability implies — is tantamount to saying that Jack Sprat and his wife were neither skinny nor fat because their body-mass indices averaged to a normal value. It is tantamount to saying that one can’t drown by walking across a pond with an average depth of 1 foot, when that average conceals the existence of a 100-foot-deep hole.
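The pond analogy is easily made concrete:

```python
# An average conceals extremes: a pond whose mean depth is about one foot
# can still contain a 100-foot hole.
from statistics import mean

depths_ft = [0.5] * 199 + [100.0]  # 199 shallow soundings and one deep hole

print(f"average depth: {mean(depths_ft):.2f} ft")
print(f"maximum depth: {max(depths_ft):.0f} ft")
```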

But what about hedge words that imply “probability” without saying it: certain, uncertain, likely, unlikely, confident, not confident, sure, unsure, and the like? I admit to using such words, which are common in discussions about possible future events and the causes of past events. But what do I, and presumably others, mean by them?

Hedge words are statements about the validity of hypotheses about phenomena or causal relationships. There are two ways of looking at such hypotheses, frequentist and Bayesian:

While for the frequentist, a hypothesis is a proposition (which must be either true or false) so that the frequentist probability of a hypothesis is either 0 or 1, in Bayesian statistics, the probability that can be assigned to a hypothesis can also be in a range from 0 to 1 if the truth value is uncertain.

Further, as discussed above, there is no such thing as the probability of a single event. For example, the Mafia either did or didn’t have JFK killed, and that’s all there is to say about that. One might claim to be “certain” that the Mafia had JFK killed, but one can be certain only if one is in possession of incontrovertible evidence to that effect. But that certainty isn’t a probability, which can refer only to the frequency with which many events of the same kind have occurred and can be expected to occur.

A Bayesian view about the “probability” of the Mafia having JFK killed is nonsensical. Even if a Bayesian is certain, based on incontrovertible evidence, that the Mafia had JFK killed, there is no probability attached to the occurrence. It simply happened, and that’s that.

Lacking such evidence, a Bayesian (or an unwitting “man on the street”) might say “I believe there’s a 50-50 chance that the Mafia had JFK killed”. Does that mean (1) there’s some evidence to support the hypothesis, but it isn’t conclusive, or (2) that the speaker would bet X amount of money, at even odds, that if incontrovertible evidence ever surfaces it will prove that the Mafia had JFK killed? In the first case, attaching a 50-percent probability to the hypothesis is nonsensical; how does the existence of some evidence translate into a statement about the probability of a one-off event that either occurred or didn’t occur? In the second case, the speaker’s willingness to bet on the occurrence of an event at certain odds tells us something about the speaker’s preference for risk-taking but nothing at all about whether or not the event occurred.

What about the familiar use of “probability” (a.k.a., “chance”) in weather forecasts? Here’s my take:

[W]hen you read or hear a statement like “the probability of rain tomorrow is 80 percent”, you should mentally translate it into language like this:

X guesses that Y will (or will not) happen at time Z, and the “probability” that he attaches to his guess indicates his degree of confidence in it.

The guess may be well-informed by systematic observation of relevant events, but it remains a guess, as most Americans have learned and relearned over the years when rain has failed to materialize or has spoiled an outdoor event that was supposed to be rain-free.

Further, it is true that some things happen more often than other things but

only one thing will happen at a given time and place.

[A] clever analyst could concoct a probability of a person’s being shot by writing an equation that includes such variables as his size, the speed with which he walks, the number of shooters, their rate of fire, and the distance across the shooting range.

What would the probability estimate mean? It would mean that if a very large number of persons walked across the shooting range under identical conditions, approximately S percent of them would be shot. But the clever analyst cannot specify which of the walkers would be among the S percent.

Here’s another way to look at it. One person wearing head-to-toe bullet-proof armor could walk across the range a large number of times and expect to be hit by a bullet on S percent of his crossings. But the hardy soul wouldn’t know on which of the crossings he would be hit.

Suppose the hardy soul became a foolhardy one and made a bet that he could cross the range without being hit. Further, suppose that S is estimated to be 0.75; that is, 75 percent of a string of walkers would be hit, or a single (bullet-proof) walker would be hit on 75 percent of his crossings. Knowing the value of S, the foolhardy fellow offers to pay out $1 million (from himself or his estate) if he is shot, and to claim $4 million (his $1 million stake plus $3 million in winnings) if he crosses the range unscathed, one time. Given the odds, that’s a fair bet, isn’t it?

No it isn’t….

The bet should be understood for what it is: an either-or proposition. The foolhardy walker will either forfeit his $1 million stake or claim $4 million, for a net gain of $3 million. The bettor (or bettors) who take the other side of the bet will either win $1 million or pay out the $3 million balance of the claim.

As anyone with elementary reading and reasoning skills should be able to tell, those possible outcomes are not the same as the outcome that would obtain (approximately) if the foolhardy fellow could walk across the shooting range 1,000 times. If he could, he would come very close to breaking even, as would those who bet against him.
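The break-even claim is easy to check numerically. Here is a minimal simulation (a sketch added for illustration, not part of the original argument), assuming fair odds given S = 0.75: the walker forfeits his $1 million stake on a crossing in which he is shot, and nets $3 million on an unscathed crossing:

```python
import random

random.seed(42)  # make the run reproducible

S = 0.75             # estimated fraction of crossings that end in a hit
WIN = 3_000_000      # net gain on an unscathed crossing ($4 million claim less the $1 million stake)
LOSS = -1_000_000    # the stake, forfeited when the walker is shot
N = 1_000_000        # a great many crossings

total = 0
for _ in range(N):
    total += LOSS if random.random() < S else WIN

# Exact expected value per crossing: 0.25 * 3,000,000 - 0.75 * 1,000,000 = 0
print(f"average net result per crossing: ${total / N:,.0f}")
```

Over a million crossings the average net result hovers near zero, which is what “breaking even” means here. On any single crossing, though, the only possible outcomes are a loss of $1 million and a gain of $3 million; the zero never happens.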

I omitted from the preceding quotation a sentence in which I used “more likely”:

If a person walks across a shooting range where live ammunition is being used, he is more likely to be killed than if he walks across the same patch of ground when no one is shooting.

Inasmuch as “more likely” is a hedge word, I seem to have contradicted my own position about the probability of a single event, such as being shot while walking across a shooting range. In that context, however, “more likely” means that something could happen (getting shot) that wouldn’t happen in a different situation. That’s not really a probabilistic statement. It’s a statement about opportunity; thus:

  • Crossing a firing range generates many opportunities to be shot.
  • Going into a crime-ridden neighborhood certainly generates some opportunities to be shot, but their number and frequency depend on many variables: which neighborhood, where in the neighborhood, the time of day, who else is present, etc.
  • Sitting by oneself, unarmed, in a heavy-gauge steel enclosure generates no opportunities to be shot.

Being shot is, in those three situations and in that order, “more likely”, “likely”, and “unlikely” (or a similar ordinal pattern that uses “certain”, “confident”, “sure”, etc.). But the ordinal pattern, in any case, can never (logically) include statements like “completely certain”, “completely confident”, etc.

An ordinal pattern is logically valid only if it conveys the relative number of opportunities to attain a given kind of outcome — being shot, in the example under discussion.

Ordinal statements about different types of outcome are meaningless. Consider, for example, the claim that the probability that the Mafia had JFK killed is higher than (or lower than or the same as) the probability that the moon is made of green cheese. First, and to repeat myself for the nth time, the phenomena in question are one-of-a-kind and do not lend themselves to statements about their probability, nor even about the frequency of opportunities for the occurrence of the phenomena. Second, the use of “probability” is just a hifalutin way of saying that the Mafia could have had a hand in the killing of JFK, whereas it is known (based on ample scientific evidence, including eye-witness accounts) that the Moon isn’t made of green cheese. So the ordinal statement is just a cheap rhetorical trick that is meant to (somehow) support the subjective belief that the Mafia “must” have had a hand in the killing of JFK.

Similarly, it is meaningless to say that the “average person” is “more certain” of being killed in an auto accident than in a plane crash, even though one may have many opportunities to die in an auto accident or a plane crash. There is no “average person”; the incidence of auto travel and plane travel varies enormously from person to person; and the conditions that conduce to fatalities in auto travel and plane travel vary just as enormously.

Other examples abound. Be on the lookout for them, and avoid emulating them.

Socialism, Communism, and Three Paradoxes

According to Wikipedia, socialism

is a range of economic and social systems characterised by social ownership of the means of production and workers’ self-management, as well as the political theories and movements associated with them. Social ownership can be public, collective[,] or cooperative ownership, or citizen ownership of equity.

Communism

is the philosophical, social, political, and economic ideology and movement whose ultimate goal is the establishment of the communist society, which is a socioeconomic order structured upon the common ownership of the means of production and the absence of social classes, money, and the state.

The only substantive difference between socialism and communism, in theory, is that communism somehow manages to do away with the state. This, of course, never happens, except in real communes, most of which were and are tiny, short-lived arrangements. (In what follows, I therefore put communism in “sneer quotes”.)

The common thread of socialism and “communism” is collective ownership of “equity”, that is, the means of production. But that kind of ownership eliminates an important incentive to invest in the development and acquisition of capital improvements that yield more and better output and therefore raise the general standard of living. The incentive, of course, is the opportunity to reap a substantial reward for taking a substantial risk. Absent that incentive, as has been amply demonstrated by the tragic history of socialist and “communist” regimes, the general standard of living is low and economic growth is practically (if not actually) stagnant.*

So here’s the first paradox: Systems that, by magical thinking, are supposed to make people better off do just the opposite: They make people worse off than they would otherwise be.

All of this because of class envy. Misplaced class envy, at that. “Capitalism” (a smear word) is really the voluntary and relatively unfettered exchange of products and services, including labor. Its ascendancy in the West is just a happy accident of the movement toward the kind of liberalism exemplified in the Declaration of Independence and the Constitution. People were freed from traditional economic roles and allowed to put their talents to more productive uses, which included investing their time and money in capital that yielded more and better products and services.

Most “capitalists” in America were and still are workers who made risky investments to start and build businesses that employ other workers and offer things of value that consumers can take or leave, as they wish (unlike the typical socialist or “communist” system).

So here’s the second paradox: Socialism and “communism” actually suppress the very workers whom they are meant to benefit, in theory and rhetoric.

The third paradox is that socialist and “communist” regimes like to portray themselves as “democratic”, even though they are quite the opposite: ruled by party bosses who bestow favors on their protégés. Free markets are in fact truly democratic, in that their outcomes are determined directly by the participants in those markets.
__________
* If you believe that socialist and “communist” regimes can efficiently direct capital formation and make an economy more productive, see “Socialist Calculation and the Turing Test”, “Monopoly: Private Is Better Than Public”, and “The Rahn Curve in Action”, which quantifies the stultifying effects of government spending and regulation.

As for China, imagine what an economic powerhouse it would be if, long ago, its emperors (including its “communist” ones, like Mao) had allowed its intelligent populace to become capitalists. China’s recent emergence as an economic dynamo is built on the sand of state ownership and direction. China, in fact, ranks low in per-capita GDP among industrialized nations. Its progress is a testament to forced industrialization, and was bound to be better than what had come before. But it is worse than what could have been had China not suffered under autocratic rule for millennia.

Trump Re-election Watch Updated

Here. See also “Trump in the Polls”. Bottom line: Trump has a lot of work to do in the next year. But first, he must somehow stop the impeachment train, which at least a few GOP Congress-critters seem eager to join.

Keep your eye on the graph in the sidebar for updates.

In Defense of the Oxford Comma

The Oxford comma, also known as the serial comma, is the comma that precedes the last item in a list of three or more items (e.g., the red, white, and blue). Newspapers (among other sinners) eschew the serial comma for reasons too arcane to pursue here. Thoughtful counselors advise its use. (See, for example, Wilson Follett’s Modern American Usage at pp. 422-423.) Why? Because the serial comma, like the hyphen in a compound adjective, averts ambiguity. It isn’t always necessary, but if it is used consistently, ambiguity can be avoided.

Here’s a great example, from the Wikipedia article linked to in the first sentence of this paragraph: “To my parents, Ayn Rand and God”. The writer means, of course, “To my parents, Ayn Rand, and God”.

Kylee Zempel has much more to say in her essay, “Using the Oxford Comma Is a Sign of Grace and Clarity”. It is, indeed.

(For much more about writing, see my page “Writing: A Guide”.)

Regarding Napoleon Chagnon

Napoleon Alphonseau Chagnon (1938-2019) was a noted anthropologist to whom the label “controversial” was applied. Some of the story is told in this surprisingly objective New York Times article about Chagnon’s life and death. Matthew Blackwell gives a more complete account in “The Dangerous Life of an Anthropologist” (Quillette, October 5, 2019).

Chagnon’s sin was his finding that “nature” trumped “nurture”, as demonstrated by his decades-long ethnographic field work among the Yanomamö, indigenous Amazonians who live in the border area between Venezuela and Brazil. As Blackwell tells it,

Chagnon found that up to 30 percent of all Yanomamö males died a violent death. Warfare and violence were common, and duelling was a ritual practice, in which two men would take turns flogging each other over the head with a club, until one of the combatants succumbed. Chagnon was adamant that the primary causes of violence among the Yanomamö were revenge killings and women. The latter may not seem surprising to anyone aware of the ubiquity of ruthless male sexual competition in the animal kingdom, but anthropologists generally believed that human violence found its genesis in more immediate matters, such as disputes over resources. When Chagnon asked the Yanomamö shaman Dedeheiwa to explain the cause of violence, he replied, “Don’t ask such stupid questions! Women! Women! Women! Women! Women!” Such fights erupted over sexual jealousy, sexual impropriety, rape, and attempts at seduction, kidnap and failure to deliver a promised girl….

Chagnon would make more than 20 fieldwork visits to the Amazon, and in 1968 he published Yanomamö: The Fierce People, which became an instant international bestseller. The book immediately ignited controversy within the field of anthropology. Although it commanded immense respect and became the most commonly taught book in introductory anthropology courses, the very subtitle of the book annoyed those anthropologists, who preferred to give their monographs titles like The Gentle Tasaday, The Gentle People, The Harmless People, The Peaceful People, Never in Anger, and The Semai: A Nonviolent People of Malaya. The stubborn tendency within the discipline was to paint an unrealistic façade over such cultures—although 61 percent of Waorani men met a violent death, an anthropologist nevertheless described this Amazonian people as a “tribe where harmony rules,” on account of an “ethos that emphasized peacefulness.”…

These anthropologists were made more squeamish still by Chagnon’s discovery that the unokai of the Yanomamö—men who had killed and assumed a ceremonial title—had about three times more children than others, owing to having twice as many wives. Drawing on this observation in his 1988 Science article “Life Histories, Blood Revenge, and Warfare in a Tribal Population,” Chagnon suggested that men who had demonstrated success at a cultural phenomenon, the military prowess of revenge killings, were held in higher esteem and considered more attractive mates. In some quarters outside of anthropology, Chagnon’s theory came as no surprise, but its implication for anthropology could be profound. In The Better Angels of Our Nature, Steven Pinker points out that if violent men turn out to be more evolutionarily fit, “This arithmetic, if it persisted over many generations, would favour a genetic tendency to be willing and able to kill.”…

Chagnon considered his most formidable critic to be the eminent anthropologist Marvin Harris. Harris had been crowned the unofficial historian of the field following the publication of his all-encompassing work The Rise of Anthropological Theory. He was the founder of the highly influential materialist school of anthropology, and argued that ethnographers should first seek material explanations for human behavior before considering alternatives, as “human social life is a response to the practical problems of earthly existence.” Harris held that the structure and “superstructure” of a society are largely epiphenomena of its “infrastructure,” meaning that the economic and social organization, beliefs, values, ideology, and symbolism of a culture evolve as a result of changes in the material circumstances of a particular society, and that apparently quaint cultural practices tend to reflect man’s relationship to his environment. For instance, prohibition on beef consumption among Hindus in India is not primarily due to religious injunctions. These religious beliefs are themselves epiphenomena to the real reasons: that cows are more valuable for pulling plows and producing fertilizers and dung for burning. Cultural materialism places an emphasis on “-etic” over “-emic” explanations, ignoring the opinions of people within a society and trying to uncover the hidden reality behind those opinions.

Naturally, when the Yanomamö explained that warfare and fights were caused by women and blood feuds, Harris sought a material explanation that would draw upon immediate survival concerns. Chagnon’s data clearly confirmed that the larger a village, the more likely fighting, violence, and warfare were to occur. In his book Good to Eat: Riddles of Food and Culture Harris argued that fighting occurs more often in larger Yanomamö villages because these villages deplete the local game levels in the rainforest faster than smaller villages, leaving the men no option but to fight with each other or to attack outside groups for meat to fulfil their protein macronutrient needs. When Chagnon put Harris’s materialist theory to the Yanomamö they laughed and replied, “Even though we like meat, we like women a whole lot more.” Chagnon believed that smaller villages avoided violence because they were composed of tighter kin groups—those communities had just two or three extended families and had developed more stable systems of borrowing wives from each other.

There’s more:

Survival International … has long promoted the Rousseauian image of a traditional people who need to be preserved in all their natural wonder from the ravages of the modern world. Survival International does not welcome anthropological findings that complicate this harmonious picture, and Chagnon had wandered straight into their line of fire….

For years, Survival International’s Terence Turner had been assisting a self-described journalist, Patrick Tierney, as the latter investigated Chagnon for his book, Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon. In 2000, as Tierney’s book was being readied for publication, Turner and his colleague Leslie Sponsel wrote to the president of the American Anthropological Association (AAA) and informed her that an unprecedented crisis was about to engulf the field of anthropology. This, they warned, would be a scandal that, “in its scale, ramifications, and sheer criminality and corruption, is unparalleled in the history of Anthropology.” Tierney alleged that Chagnon and Neel had spread measles among the Yanomamö in 1968 by using compromised vaccines, and that Chagnon’s documentaries depicting Yanomamö violence were faked by using Yanomamö to act out dangerous scenes, in which further lives were lost. Chagnon was blamed, inter alia, for inciting violence among the Yanomamö, cooking his data, starting wars, and aiding corrupt politicians. Neel was also accused of withholding vaccines from certain populations of natives as part of an experiment. The media were not slow to pick up on Tierney’s allegations, and the Guardian ran an article under an inflammatory headline accusing Neel and Chagnon of eugenics: “Scientists ‘killed Amazon Indians to test race theory.’” Turner claimed that Neel believed in a gene for “leadership” and that the human genetic stock could be upgraded by wiping out mediocre people. “The political implication of this fascistic eugenics,” Turner told the Guardian, “is clearly that society should be reorganised into small breeding isolates in which genetically superior males could emerge into dominance, eliminating or subordinating the male losers.”

By the end of 2000, the American Anthropological Association announced a hearing on Tierney’s book. This was not entirely reassuring news to Chagnon, given their history with anthropologists who failed to toe the party line….

… Although the [AAA] taskforce [appointed to investigate Tierney’s accusations] was not an “investigation” concerned with any particular person, for all intents and purposes, it blamed Chagnon for portraying the Yanomamö in a way that was harmful and held him responsible for prioritizing his research over their interests.

Nonetheless, the most serious claims Tierney made in Darkness in El Dorado collapsed like a house of cards. Elected Yanomamö leaders issued a statement in 2000 stating that Chagnon had arrived after the measles epidemic and saved lives, “Dr. Chagnon—known to us as Shaki—came into our communities with some physicians and he vaccinated us against the epidemic disease which was killing us. Thanks to this, hundreds of us survived and we are very thankful to Dr. Chagnon and his collaborators for help.” Investigations by the American Society of Human Genetics and the International Genetic Epidemiology Society both found Tierney’s claims regarding the measles outbreak to be unfounded. The Society of Visual Anthropology reviewed the so-called faked documentaries, and determined that these allegations were also false. Then an independent preliminary report released by a team of anthropologists dissected Tierney’s book claim by claim, concluding that all of Tierney’s most important assertions were either deliberately fraudulent or, at the very least, misleading. The University of Michigan reached the same conclusion. “We are satisfied,” its Provost stated, “that Dr. Neel and Dr. Chagnon, both among the most distinguished scientists in their respective fields, acted with integrity in conducting their research… The serious factual errors we have found call into question the accuracy of the entire book [Darkness in El Dorado] as well as the interpretations of its author.” Academic journal articles began to proliferate, detailing the mis-inquiry and flawed conclusions of the 2002 taskforce. By 2005, only three years later, the American Anthropological Association voted to withdraw the 2002 taskforce report, re-exonerating Chagnon.

A 2000 statement by the leaders of the Yanomamö and their Ye’kwana neighbours called for Tierney’s head: “We demand that our national government investigate the false statements of Tierney, which taint the humanitarian mission carried out by Shaki [Chagnon] with much tenderness and respect for our communities.” The investigation never occurred, but Tierney’s public image lay in ruins and would suffer even more at the hands of historian of science Alice Dreger, who interviewed dozens of people involved in the controversy. Although Tierney had thanked a Venezuelan anthropologist for providing him with a dossier of information on Chagnon for his book, the anthropologist told Dreger that Tierney had actually written the dossier himself and then misrepresented it as an independent source of information.

A “dossier” and its use to smear an ideological opponent. Where else have we seen that?

Returning to Blackwell:

Scientific American has described the controversy as “Anthropology’s Darkest Hour,” and it raises troubling questions about the entire field. In 2013, Chagnon published his final book, Noble Savages: My Life Among Two Dangerous Tribes—The Yanomamö and the Anthropologists. Chagnon had long felt that anthropology was experiencing a schism more significant than any difference between research paradigms or schools of ethnography—a schism between those dedicated to the very science of mankind, anthropologists in the true sense of the word, and those opposed to science; either postmodernists vaguely defined, or activists disguised as scientists who seek to place indigenous advocacy above the pursuit of objective truth. Chagnon identified Nancy Scheper-Hughes as a leader in the activist faction of anthropologists, citing her statement that we “need not entail a philosophical commitment to Enlightenment notions of reason and truth.”

Whatever the rights and wrong of his debates with Marvin Harris across three decades, Harris’s materialist paradigm was a scientifically debatable hypothesis, which caused Chagnon to realize that he and his old rival shared more in common than they did with the activist forces emerging in the field: “Ironically, Harris and I both argued for a scientific view of human behavior at a time when increasing numbers of anthropologists were becoming skeptical of the scientific approach.”…

Both Chagnon and Harris agreed that anthropology’s move away from being a scientific enterprise was dangerous. And both believed that anthropologists, not to mention thinkers in other fields of social sciences, were disguising their increasingly anti-scientific activism as research by using obscurantist postmodern gibberish. Observers have remarked at how abstruse humanities research has become and even a world famous linguist like Noam Chomsky admits, “It seems to me to be some exercise by intellectuals who talk to each other in very obscure ways, and I can’t follow it, and I don’t think anybody else can.” Chagnon resigned his membership of the American Anthropological Association in the 1980s, stating that he no longer understood the “unintelligible mumbo jumbo of postmodern jargon” taught in the field. In his last book, Theories of Culture in Postmodern Times, Harris virtually agreed with Chagnon. “Postmodernists,” he wrote, “have achieved the ability to write about their thoughts in a uniquely impenetrable manner. Their neo-baroque prose style with its inner clauses, bracketed syllables, metaphors and metonyms, verbal pirouettes, curlicues and figures is not a mere epiphenomenon; rather, it is a mocking rejoinder to anyone who would try to write simple intelligible sentences in the modernist tradition.”…

The quest for knowledge of mankind has in many respects become unrecognizable in the field that now calls itself anthropology. According to Chagnon, we’ve entered a period of “darkness in cultural anthropology.” With his passing, anthropology has become darker still.

I recount all of this for three reasons. First, Chagnon’s findings testify to the immutable urge to violence that lurks within human beings, and to the dominance of “nature” over “nurture”. That dominance is evident not only in the urge to violence (pace Steven Pinker), but in the strong heritability of such traits as intelligence.

The second reason for recounting Chagnon’s saga is to underline the corruption of science in the service of left-wing causes. The underlying problem is always the same: When science — testable and tested hypotheses based on unbiased observations — challenges left-wing orthodoxy, left-wingers — many of them so-called scientists — go all out to discredit real scientists. And they do so by claiming, in good Orwellian fashion, to be “scientific”. (I have written many posts about this phenomenon.) Leftists are, in fact, delusional devotees of magical thinking.

The third reason for my interest in the story of Napoleon Chagnon is a familial connection of sorts. He was born in a village where his grandfather, also Napoleon Chagnon, was a doctor. My mother was one of ten children, most of them born and all of them raised in the same village. When the tenth child was born, he was given Napoleon as his middle name, in honor of Doc Chagnon.

Another Anniversary

I will be offline for a few days, so I’m reposting this item from a year ago.

Today is the 22nd anniversary of my retirement from full-time employment at a defense think-tank. (I later, and briefly, ventured into part-time employment for the intellectual fulfillment it offered. But it became too much like work, and so I retired in earnest.) If your idea of a think-tank is an outfit filled with hacks who spew glib, politically motivated “policy analysis”, you have the wrong idea about the think-tank where I worked. For most of its history, it was devoted to rigorous, quantitative analysis of military tactics, operations, and systems. Most of its analysts held advanced degrees in STEM fields and economics — about two-thirds of them held Ph.D.s.

I had accumulated 30 years of employment at the think-tank when I retired. (That was in addition to four years as a Pentagon “whiz kid” and owner-operator of a small business.) I spent my first 17 years at the think-tank in analytical pursuits, which included managing other analysts and reviewing their work. I spent the final 13 years on the think-tank’s business side, and served for 11 of those 13 years as chief financial and administrative officer.

I take special delight in observing the anniversary of my retirement because it capped a subtle campaign to arrange the end of my employment on favorable financial terms. The success of the campaign brought a profitable end to a bad relationship with a bad boss.

I liken the campaign to fly-fishing: I reeled in a big fish by accurately casting an irresistible lure and then playing the fish into my net. I have long wondered whether my boss ever grasped what I had done and how I had done it. The key was patience; more than a year passed between my casting of the lure and the netting of the fish (early retirement with a financial sweetener). Without going into the details of my “fishing expedition,” I can translate them into the elements of success in any major undertaking:

  • strategy — a broad and feasible outline of a campaign to attain a major objective
  • intelligence — knowledge of the opposition’s objectives, resources, and tactical repertoire, supplemented by timely reporting of his actual moves (especially unanticipated ones)
  • resources — the physical and intellectual wherewithal to accomplish the strategic objective while coping with unforeseen moves by the opposition and strokes of bad luck
  • tactical flexibility — a willingness and ability to adjust the outline of the campaign, to fill in the outline with maneuvers that take advantage of the opposition’s errors, and to compensate for one’s own mistakes and bad luck
  • and — as mentioned — a large measure of patience, especially when one is tempted either to quit or escalate blindly.

My patience was in the service of my felt need to quit the think-tank as it had become under the direction of my boss, the CEO. He had politicized an organization whose effectiveness depended upon its long-standing (and mostly deserved) reputation for independence and objectivity. That reputation rested largely on the organization’s emphasis on empirical research, as opposed to the speculative “policy analysis” that he favored. Further, he — as an avowed Democrat — was also in thrall to political correctness (e.g., a foolish and futile insistence on trying to give blacks a “fair share” of representation on the research staff, despite the paucity of blacks with the requisite qualifications). There are other matters that are best left unmentioned, despite the lapse of 21 years.

Because of a special project that I was leading, I could have stayed at the think-tank for at least another three years, had I the stomach for it. And in those three years my retirement fund and savings would have grown to make my retirement more comfortable. But the stress of working for a boss whom I disrespected was too great, so I took the money and ran. And despite occasional regrets, which are now well in the past, I am glad of it.

All of this is by way of prelude to some lessons that I gleaned from my years of work — lessons that may be of interest and value to readers.

If you are highly conscientious (as I am), your superiors will hold a higher opinion of your work than you do. You must constantly remind yourself that you are probably doing better than you think you are. In other words, you should be confident of your ability, because if you feel confident (not self-deluded or big-headed, just confident), you will be less fearful of making mistakes and more willing to venture into new territory. Your value to the company will be enhanced by your self-confidence and by your (justified) willingness to take on new challenges.

When you have established yourself as a valued contributor, you will be better able to stand up to a boss who is foolish, overbearing, or incompetent (singly or in combination). Rehearse your grievances carefully, confront the boss, and then go over his head if he shrugs off your complaints or retaliates against you. But go over his head only if you are confident of (a) your value to the company, (b) the validity of your complaints, and (c) the fair-mindedness of your boss’s boss. (I did this three times in my career. I succeeded in getting rid of a boss the first two times. I didn’t expect to succeed the third time, but it was worth a try because it positioned me for my cushioned exit.)

Patience, which I discussed earlier, is a key to successfully ridding yourself of a bad boss. Don’t push the boss’s boss. He has to admit (to himself) the mistake that he made in appointing your boss. And he has to find a graceful way to retract the mistake.

Patience is also a key to advancement. Never openly campaign for someone else’s job. I got my highest-ranking job simply by positioning myself for it. The big bosses took it from there and promoted me.

On the other hand, if you can invent a job at which you know you’ll succeed — and if that job is clearly of value to the company — go for it. I did it once, and my performance in the job that I invented led to my highest-ranking position.

Through all of that, be prepared to go it alone. Work “friendships” are usually transitory. Your colleagues are (rightly) concerned with their own preservation and advancement. Do not count on them when it comes to fighting battles — like getting rid of a bad boss. More generally, do not count on them. (See the first post listed below.)

Finally, having been a manager for more than half of my 30 years at the think-tank, I learned some things that are spelled out in the third post listed below. Read it if you are a manager, aspiring to be a manager, or simply intrigued by the “mystique” of management.


Related posts:

The Best Revenge
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
How to Manage
Not-So-Random Thoughts (V) (first entry)

Impeachment Tit for Tat

The immediate impetus for the drive to impeach Trump, which began on election night 2016, is the fact that Democrats fully expected Hillary Clinton to win. The underlying impetus is that Democrats have long since abandoned more than token allegiance to the Constitution, which prescribes the rules by which Trump was elected. The Democrat candidate should have won, because that’s the way Democrats think. And the Democrat candidate would have won were the total popular vote decisive instead of the irrelevant Constitution. So Trump is an illegitimate president — in their view.

There is a contributing factor: the impeachment of Bill Clinton. It was obvious to me at the time that the drive to impeach Clinton was fueled by the widely held view, among Republicans, that he was an illegitimate president, even though he was duly elected according to the same rules that put Trump in office. A good case can be made that G.H.W. Bush would have won re-election in 1992 but for the third-party candidacy of Ross Perot. Once installed as president, Clinton was able to win re-election in 1996, thanks to incumbency (and relatively moderate policies).

The desperation of the impeachment effort against Clinton is evident in the scope of the articles of impeachment, which are about the lying and obstruction of justice that flowed from his personal conduct, and not about his conduct as chief executive of the United States government.

I admit that, despite the shallowness of the charges against Clinton, I was all for his impeachment and conviction. Moderate as his policies were, in hindsight, he was nevertheless a mealy-mouthed statist who, among other things, tried to foist Hillarycare on the nation.

At any rate, the effort to remove Clinton undoubtedly still rankles Democrats, and must be a factor in their fervent determination to remove Trump. This is telling:

“This partisan coup d’etat will go down in infamy in the history of our nation,” the congressman said.

He was outraged and wanted the nation to know why.

“Mr. Speaker,” he said, “this is clearly a partisan railroad job.”

“We are losing sight of the distinction between sins, which ought to be between a person and his family and his God, and crimes which are the concern of the state and of society as a whole,” he said.

“Are we going to have a new test if someone wants to run for office: Are you now or have you ever been an adulterer?” he said.

The date was Dec. 19, 1998. The House was considering articles of impeachment that the Judiciary Committee had approved against then-President Bill Clinton. The outraged individual, speaking on the House floor, was Democratic Rep. Jerry Nadler of New York.

Nadler now serves as chairman of the Judiciary Committee.

What goes around comes around.

I Hate to Hear Millennials Speak

My wife and I have a favorite Thai restaurant in Austin. It’s not the best Thai restaurant in our experience. We’ve dined at much better ones in Washington, D.C., and Yorktown, Virginia. The best one, in our book, is in Arlington, Virginia.

At any rate, our favorite Thai restaurant in Austin is very good and accordingly popular. And because Thai food is relatively inexpensive, it draws a lot of twenty- and thirty-somethings.

Thus the air was filled (as usual) with “like”, “like”, “like”, “like”, and more “like”, ad nauseam. It makes me want to stand up and shout “Shut up, I can’t take it anymore!”

The fellow at the next table not only used “like” in every sentence, but had a raspy, penetrating vocal fry, which is another irritating speech pattern of millennials. He was seated so that he was facing in my direction. As a result, I had to turn down my hearing aids to soften the creak that ended his every sentence.

His date (a female, which is noteworthy in Austin) merely giggled at everything he said. It must have been a getting-to-know-you date. The relationship is doomed if she’s at all fussy about “like”. Though it may be that he doesn’t like giggly gals.

Harrumph!

That’s today’s gripe. For more gripes, see these posts:

Stuff White (Liberal Yuppie) People Like
Driving and Politics
I’ve Got a Little List
Driving and Politics (2)
Amazon and Austin
Driving Is an IQ Test
The Renaming Mania Hits a New Low
Let the Punishment Deter the Crime

Oh, The Irony

Who damaged America greatly with his economic, social, and defense policies and with his anti-business, race-baiting rhetoric? Obama, that’s who.

Who has undone much of Obama’s damage, but might be removed from office on a phony abuse-of-power charge — because Democrats (and some Republicans) can’t accept the outcome of the 2016 election? Trump, that’s who.

Do I smell the makings of a great upheaval if Democrats are successful? I think so.

“Will the Circle Be Unbroken?”

That’s the title of the sixth episode of Country Music, produced by Ken Burns et al. The episode ends with a segment about the production of Will the Circle Be Unbroken?, a three-LP album released in 1972, with Mother Maybelle Carter of the original Carter Family taking the lead. I have the album in my record collection. It sits proudly next to a two-LP album of recordings by Jimmie Rodgers, Jimmie Rodgers on Record: America’s Blue Yodeler.

The juxtaposition of the albums is fitting because, as Country Music‘s first episode makes clear, it was the 1927 recordings of Rodgers and the Carters that “made” country music. Country music had been recorded and broadcast live since 1922. But Rodgers and the Carters brought something new to the genre and it caught the fancy of a large segment of the populace.

In Rodgers’s case it was his original songs (mostly of heartbreak and rambling) and his unique delivery, which introduced yodeling to country music. In the Carters’ case it was the tight harmonies of Maybelle Addington Carter and her cousin and sister-in-law, Sara Dougherty Carter, applied to nostalgic ballads old and new (but old-sounding, even if new) compiled and composed mostly by Sara’s then-husband, A.P. Carter, who occasionally chimed in on the bass line. (“School House on the Hill” is a particular favorite of mine. The other songs at the link to “School House …” are great, too.)

Rodgers and the original Carters kept it simple. Rodgers accompanied himself on the guitar; Maybelle and Sara Carter accompanied themselves on guitar and autoharp. And that was it. No electrification or amplification, no backup players or singers, no aural tricks of any kind. What you hear is unadorned, and all the better for it. Only the Bluegrass sound introduced by Bill Monroe could equal it for a true “country” sound. Its fast pace and use of acoustic, stringed instruments harked back to the reels and jigs brought to this land (mainly from the British Isles) by the first “country” people — the settlers of Appalachia and the South.

As for the miniseries, I give it a B, or 7 out of 10. As at least one commentator has said, it’s a good crash course for those who are new to country music, but only a glib refresher course for those who know it well. At 16 hours in length, it is heavily padded with mostly (but not always) vapid commentary by interviewees who were and are, in some way, associated with country music; Burns’s typical and tedious social commentary about the treatment of blacks and women, as if no one knows about those things; and biographical information that really adds nothing to the music.

The biographical information suggests that to be a country singer you must be an orphan from a hardscrabble-poor, abusive home who survived the Great Depression or run-ins with the law. Well, you might think that until you reflect on the fact that little is said about the childhoods of the many country singers who weren’t of that ilk, especially the later ones whose lives were untouched by the Great Depression or World War II.

Based on what I’ve seen of the series thus far (six of eight episodes), what it takes to be a country singer — with the notable exception of the great Hank Snow (a native of Nova Scotia) — is (a) to have an accent that hails from the South, and (b) to sing in a way that emphasizes the accent. A nasal twang seems to be a sine qua non, even though many of the singers who are interviewees don’t speak like they sing. It’s mostly put on, in other words, and increasingly so as regional accents fade away.

The early greats, like Rodgers and the Carters, were authentic, but the genre is becoming increasingly phony. And the Nashville sound and its later variants are abominations.

So, the circle has been broken. And the only way to mend it is to listen to the sounds of yesteryear.

Up from Darkness

I’m in the midst of writing a post about conservatism that will put a capstone on the many posts that I’ve written on the subject. It occurred to me that it might be helpful to understand why some conservatives (or people who thought of themselves as conservatives) abandoned the faith. I found “Do conservatives ever become liberal?” at Quora. None of the more than 100 replies is a good argument for switching, in my view. Most of them are whining, posturing, and erroneous characterizations of conservatism.

But the first reply struck home because it describes how a so-called conservative became a “liberal” in a matter of minutes. What that means, of course, is that the convert’s conservatism was superficial. (More about that in the promised post.) But the tale struck home because it reminded me of my own conversion, in the opposite direction, which began with a kind of “eureka” moment.

Here’s the story, from my “About” page:

I was apolitical until I went to college. There, under the tutelage of economists of the Keynesian persuasion, I became convinced that government could and should intervene in economic affairs. My pro-interventionism spread to social affairs in my early post-college years, as I joined the “intellectuals” of the time in their support for the Civil Rights Act and the Great Society, which was about social engineering as much as anything.

The urban riots that followed the murder of Martin Luther King Jr. opened my eyes to the futility of LBJ’s social tinkering. I saw at once that plowing vast sums into a “war” on black poverty would be rewarded with a lack of progress, sullen resentment, and generations of dependency on big brother in Washington.

There’s a lot more after that about my long journey home to conservatism, if you’re interested.

Certainty about Uncertainty

Words fail us. Numbers, too, for they are only condensed words. Words and numbers are tools of communication and calculation. As tools, they cannot make certain that which is uncertain, though they often convey a false sense of certainty.

Yes, arithmetic seems certain: 2 + 2 = 4 is always and ever true (in base-10 notation). But that is only because the conventions of arithmetic require 2 + 2 to equal 4. Neither arithmetic nor any higher form of mathematics reveals the truth about the world around us, though mathematics (and statistics) can be used to find approximate truths — approximations that are useful in practical applications like building bridges, finding effective medicines, and sending rockets into space (though the practicality of that has always escaped me).

But such practical things are possible only because the uncertainty surrounding them (e.g., the stresses that may cause a bridge to fail) is hedged against by making things more robust than they would need to be under perfect conditions. And, even then, things sometimes fail: bridges collapse, medicines have unforeseen side effects, rockets blow up, etc.

I was reminded of uncertainty by a recent post by Timothy Taylor (Conversable Economist):

For the uninitiated, “statistical significance” is a way of summarizing whether a certain statistical result is likely to have happened by chance, or not. For example, if I flip a coin 10 times and get six heads and four tails, this could easily happen by chance even with a fair and evenly balanced coin. But if I flip a coin 10 times and get 10 heads, this is extremely unlikely to happen by chance. Or if I flip a coin 10,000 times, with a result of 6,000 heads and 4,000 tails (essentially, repeating the 10-flip coin experiment 1,000 times), I can be quite confident that the coin is not a fair one. A common rule of thumb has been that if the probability of an outcome occurring by chance is 5% or less–in the jargon, has a p-value of 5% or less–then the result is statistically significant. However, it’s also pretty common to see studies that report a range of other p-values like 1% or 10%.

Given the omnipresence of “statistical significance” in pedagogy and the research literature, it was interesting last year when the American Statistical Association made an official statement “ASA Statement on Statistical Significance and P-Values” (discussed here) which includes comments like: “Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold. … A p-value, or statistical significance, does not measure the size of an effect or the importance of a result. … By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.”

Now, the ASA has followed up with a special supplemental issue of its journal The American Statistician on the theme “Statistical Inference in the 21st Century: A World Beyond p < 0.05” (January 2019). The issue has a useful overview essay, “Moving to a World Beyond ‘p < 0.05’,” by Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar. They write:

We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term “statistically significant” entirely. Nor should variants such as “significantly different,” “p < 0.05,” and “nonsignificant” survive, whether expressed in words, by asterisks in a table, or in some other way. Regardless of whether it was ever useful, a declaration of “statistical significance” has today become meaningless. … In sum, `statistically significant’—don’t say it and don’t use it.

. . .

So let’s accept that the “statistical significance” label has some severe problems, as Wasserstein, Schirm, and Lazar write:

[A] label of statistical significance does not mean or imply that an association or effect is highly probable, real, true, or important. Nor does a label of statistical nonsignificance lead to the association or effect being improbable, absent, false, or unimportant. Yet the dichotomization into “significant” and “not significant” is taken as an imprimatur of authority on these characteristics. In a world without bright lines, on the other hand, it becomes untenable to assert dramatic differences in interpretation from inconsequential differences in estimates. As Gelman and Stern (2006) famously observed, the difference between “significant” and “not significant” is not itself statistically significant.

In the middle of the post, Taylor quotes Edward Leamer’s 1983 article, “Taking the Con out of Econometrics” (American Economic Review, March 1983, pp. 31-43).

Leamer wrote:

The econometric art as it is practiced at the computer terminal involves fitting many, perhaps thousands, of statistical models. One or several that the researcher finds pleasing are selected for reporting purposes. This searching for a model is often well intentioned, but there can be no doubt that such a specification search invalidates the traditional theories of inference. … [I]n fact, all the concepts of traditional theory utterly lose their meaning by the time an applied researcher pulls from the bramble of computer output the one thorn of a model he likes best, the one he chooses to portray as a rose. The consuming public is hardly fooled by this chicanery. The econometrician’s shabby art is humorously and disparagingly labelled “data mining,” “fishing,” “grubbing,” “number crunching.” A joke evokes the Inquisition: “If you torture the data long enough, Nature will confess” … This is a sad and decidedly unscientific state of affairs we find ourselves in. Hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else’s data analyses seriously.

Economists and other social scientists have become much more aware of these issues over the decades, but Leamer was still writing in 2010 (“Tantalus on the Road to Asymptopia,” Journal of Economic Perspectives, 24: 2, pp. 31-46):

Since I wrote my “con in econometrics” challenge much progress has been made in economic theory and in econometric theory and in experimental design, but there has been little progress technically or procedurally on this subject of sensitivity analyses in econometrics. Most authors still support their conclusions with the results implied by several models, and they leave the rest of us wondering how hard they had to work to find their favorite outcomes … It’s like a court of law in which we hear only the experts on the plaintiff’s side, but are wise enough to know that there are abundant experts for the defense.

Taylor wisely adds this:

Taken together, these issues suggest that a lot of the findings in social science research shouldn’t be believed with too much firmness. The results might be true. They might be a result of a researcher pulling out “from the bramble of computer output the one thorn of a model he likes best, the one he chooses to portray as a rose.” And given the realities of real-world research, it seems goofy to say that a result with, say, only a 4.8% probability of happening by chance is “significant,” while if the result had a 5.2% probability of happening by chance it is “not significant.” Uncertainty is a continuum, not a black-and-white difference [emphasis added].

The italicized sentence expresses my long-held position.
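Taylor’s coin-flip arithmetic is easy to verify. Here is a minimal sketch (mine, not Taylor’s) that computes the exact one-sided binomial tail probabilities for his examples, plus a pair of results straddling the 5% line:

```python
from math import comb

def binom_tail(n, k):
    """Exact P(X >= k) for X ~ Binomial(n, 1/2), via big-integer arithmetic."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Taylor's three examples:
print(binom_tail(10, 6))        # 0.376953125 -- 6+ heads in 10 flips is easily chance
print(binom_tail(10, 10))       # 0.0009765625 -- 10 heads in 10 flips is very unlikely
print(binom_tail(10000, 6000))  # ~1e-89 -- 6000+ heads in 10000: effectively impossible

# And the arbitrariness of the 5% bright line:
print(binom_tail(10, 8))        # 0.0546875 -- "not significant"
print(binom_tail(10, 9))        # 0.0107421875 -- "significant"
```

Nothing in the underlying arithmetic distinguishes 0.055 from 0.011 in kind; the continuum is chopped in two by convention, which is exactly the point.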

But there is a deeper issue here, to which I alluded above in my brief comments about the nature of mathematics. The deeper issue is the complete dependence of logical systems on the underlying axioms (assumptions) of those systems, which Kurt Gödel addressed in his incompleteness theorems:

Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic….

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.

This is very deep stuff. I own the book in which Gödel proves his theorems, and I admit that I have to take the proofs on faith. (Which simply means that I have been too lazy to work my way through the proofs.) But there seem to be no serious or fatal criticisms of the theorems, so my faith is justified (thus far).

There is also the view that the theorems aren’t applicable in fields outside of mathematical logic. But any quest for certainty about the physical world necessarily uses mathematical logic (which includes statistics).

This doesn’t mean that the results of computational exercises are useless. It simply means that they are only as good as the assumptions that underlie them; for example, assumptions about relationships between variables, assumptions about the values of the variables, and assumptions as to whether the correct variables have been chosen (and properly defined) in the first place.

There is nothing new in that, certainly nothing that requires Gödel’s theorems by way of proof. It has long been understood that a logical argument may be valid — the conclusion follows from the premises — but untrue if the premises (axioms) are untrue.

But it bears repeating and repeating — especially in the age of “climate change”. That CO2 is a dominant determinant of “global” temperatures is taken as axiomatic. Everything else flows from that assumption, including the downward revision of historical (actual) temperature readings, to ensure that the “observed” rise in temperatures agrees with — and therefore “proves” — the validity of climate models that take CO2 as the dominant variable. How circular can you get?

Check your assumptions at the door.

This Is Objectivism? Another Sequel

I see that The Objective Standard has posted a review of The Rediscovery of America: Essays by Harry V. Jaffa on the New Birth of Politics. The reviewer is ambivalent about the volume, which collects most of Jaffa‘s writings in the final two decades of his life (1918-2015):

Harry Jaffa was perhaps the most philosophically astute of all American conservatives. His books, though often flawed, were studded with thought-provoking insights….

At last, a new book, The Rediscovery of America, gathers his often dazzling, sometimes outrageous, valedictory writings.

What does the reviewer like about Jaffa’s valedictory writings? This:

Jaffa called himself a “gadfly” because he criticized his fellow conservatives, especially traditionalists such as Russell Kirk and Robert Bork, who, as Jaffa proved, actually surrendered the principles they purported to defend. His attacks on those he called “false prophets of American conservatism” often were harsh, because he wisely approached philosophical disputes with grave seriousness and because he believed they had embraced the same fatal thesis that modern liberals had: “there is no objective knowledge of, or rational ground for distinguishing good and bad, right and wrong, just and unjust” (101). This obliterated the only ground—reason—from which justice or liberty could be defended.

But:

Jaffa’s effort to defend reason and freedom … was handicapped by his defense of religion (which he vainly tried to portray as rational) and his homophobia—a word sometimes abused but appropriate for Jaffa, whose ferocity toward those he insisted on calling “sodomites” was grounded in an irrational fear that homosexuality represented the “repudiation” of “all morality”.

Nevertheless:

Despite these flaws, Rediscovery often is enlightening and instructive. Jaffa’s essays display an intellectual depth lamentably absent from today’s conservatism. And for all of his errors, his insistence that the truths of the Declaration are not historical artifacts but timeless principles worthy of defending will make his best work last forever.

I am struck by the reviewer’s totemic invocation of reason. It must be an Objectivist’s “thing”, because there is a similar invocation in the inaugural issue of The Objective Standard that was the subject of my earlier post, “This Is Objectivism?”:

We hold that reason—the faculty that operates by way of observation and logic—is man’s means of knowledge…. Reason is the means by which everyone learns about the world, himself, and his needs. Human knowledge—all human knowledge—is a product of perceptual observation and logical inference therefrom….

In short, man has a means of knowledge; it is reason—and reason alone. If people want to know what is true or good or right, they must observe reality and use logic.

Thus, to an Objectivist, reason — the application of logic to observations about the world — is the only source of knowledge, and Jaffa (usually) defended reason. Therefore, Jaffa was (mostly) correct in the views with which the reviewer agrees. An interesting mix of post hoc ergo propter hoc and circular reasoning.

Reason, of course, is subject to error — great error. Observations can be in error, or selected with the aim of reaching a particular (and erroneous) conclusion. The application of logic to observations usually means, in practice, the application of mathematical and statistical tools to understand the relationships between those observations, and to make falsifiable predictions based on those relationships. Even then, the “knowledge” that arises from scientific reason is always provisional, unlike the certitudes of Objectivists.

As I wrote in “Objectivism: Tautologies in Search of Reality” (a sequel to “This Is Objectivism?”):

Reason operates on perceptions and prejudices. To the extent that there are “real” facts, we filter and interpret them according to our prejudices. When it comes to that, Objectivists are no less prejudiced than anyone else….

Reason is an admirable and useful thing, but it does not ensure valid “knowledge,” right action, or survival. Some non-cognitive precepts — such as the “Golden Rule“, “praise the Lord and pass the ammunition”, and “talk softly but carry a big stick” — are indispensable guides to action which help to ensure the collective (joint) survival of those who observe them. Survival, in the real world (as opposed to the ideal world of Objectivism) depends very much on prejudice.

That is, human beings often rely on ingrained knowledge — instinct, if you will — which isn’t a product of “reason”.

That there is such knowledge seems to escape Objectivists. How can anyone possibly write with a straight face that “Human knowledge—all human knowledge—is a product of perceptual observation and logical inference therefrom”? It takes a rather strained view of logical inference to account for such things as the mating and suckling instincts (without which human life would end), or the squeamishness and disgust that helps people to avoid infectious diseases. But such things are human knowledge — essential human knowledge.

Objectivism is a cult. To be a member of the cult, one must not only invoke reason ritualistically, one must also profess atheism. The reviewer is an atheist, and it shows here:

Jaffa’s effort to defend reason and freedom … was handicapped by his defense of religion (which he vainly tried to portray as rational)….

An Objectivist will perform intellectual somersaults in the defense of atheism. This is from a post at the website of The Atlas Society, an Objectivist organization:

Objectivism holds that in order to obtain knowledge, man must use an objective process of thought. The essence of objective thought is, first, integration of perceptual data in accordance with logic and, second, a commitment to acknowledging all of the facts of reality, and only the facts. In other words, the only thoughts to consider when forming knowledge of reality are those logically derived from reality….

Agnosticism—as a general approach to knowledge—refuses to reject arbitrary propositions….

The primary problem for the agnostic is that he allows arbitrary claims to enter his cognitive context. The fully rational man, on the other hand, does not seek evidence to prove or disprove arbitrary claims, for he has no reason to believe that such claims are true in the first place….

[E]ven if the notion of God were formulated in a testable, coherent manner, the claim that God exists would be no less arbitrary and would be equally unworthy of evaluation. The proposition was formed not on the basis of evidence (i.e., perceptual data integrated by logic)—it could have been formed only on the basis of imagination.

Wow!

In fact, the existence of the physical universe is “perceptual data”. And there is a logically valid argument to explain existence as the creation of a being who stands apart from it.

Whether or not one accepts the argument isn’t a matter of reason but a matter of faith. The mandatory atheism of Objectivism is therefore a matter of faith, not a product of reason.

As I say, it’s a cult.

(See also Theodore Dalrymple’s In Praise of Prejudice: The Necessity of Preconceived Ideas, which I have discussed at some length here; “Social Norms and Liberty” and the many posts listed therein; “Words Fail Us“, “Through a Glass Darkly“, and “Libertarianism, the Autism Spectrum, and Ayn Rand“.)

Putting in Some Good Words for Monopoly

Long ago and far away, when I studied economics, one of the first things that was drummed into my head was the badness of monopoly, oligopoly, and other forms of imperfect competition. The ideal, of course, is perfect competition because it

provides both allocative efficiency and productive efficiency:

  • Such markets are allocatively efficient, as output will always occur where marginal cost is equal to average revenue i.e. price (MC = AR). In perfect competition, any profit-maximizing producer faces a market price equal to its marginal cost (P = MC). This implies that a factor’s price equals the factor’s marginal revenue product. It allows for derivation of the supply curve on which the neoclassical approach is based. This is also the reason why “a monopoly does not have a supply curve”. The abandonment of price taking creates considerable difficulties for the demonstration of a general equilibrium except under other, very specific conditions such as that of monopolistic competition.
  • In the short-run, perfectly competitive markets are not necessarily productively efficient as output will not always occur where marginal cost is equal to average cost (MC = AC). However, in long-run, productive efficiency occurs as new firms enter the industry. Competition reduces price and cost to the minimum of the long run average costs. At this point, price equals both the marginal cost and the average total cost for each good (P = MC = AC).

All of this assumes that a market for a particular product or service is amenable to perfect competition. Economists recognize that such isn’t always the case (e.g., natural monopoly), but most of them nevertheless preach about the evils of market concentration (i.e., monopoly and other forms of less-than-perfect competition).

Contrarian economist Robin Hanson attacks the general view about the badness of market concentration in a pair of recent posts at his blog Overcoming Bias (here and here):

Many have recently said 1) US industries have become more concentrated lately, 2) this is a bad thing, and 3) inadequate antitrust enforcement is in part to blame….

I’m teaching grad Industrial Organization again this fall, and in that class I go through many standard simple (game-theoretic) math models about firms competing within industries. And it occurs to me to mention that when these models allow “free entry”, i.e., when the number of firms is set by the constraint that they must all expect to make non-negative profits, then such models consistently predict that too many firms enter, not too few. These models suggest that we should worry more about insufficient, not excess, concentration.

*    *    *

My last post talked about how our standard economic models of firms competing in industries typically show industries having too many, not too few, firms. It is a suspicious and damning fact that economists and policy makers have allowed themselves and the public to gain the opposite impression, that our best theories support interventions to cut industry concentration.

My last post didn’t mention the most extreme example of this, the case where we have the strongest theoretical reason to expect insufficient concentration: [multi-monopoly]….

The coordination failure among these firms is severe. It produces a much lower quantity and welfare than would result if all these firms were merged into a single monopolist who sold a single merged product. So in this case the equilibrium industry concentration is far too low.
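Hanson’s free-entry point can be illustrated with a textbook Cournot entry model (a hypothetical numerical sketch of the standard excess-entry result, not Hanson’s own example; the demand and cost parameters are assumed): with linear demand P = a − Q, constant marginal cost c, and a fixed entry cost F per firm, each of n firms produces (a − c)/(n + 1), and firms enter until profit is driven to zero. Comparing the free-entry number of firms with the welfare-maximizing number shows that entry overshoots.

```python
def cournot_outcome(n, a=100.0, c=20.0, F=100.0):
    """Symmetric Cournot equilibrium with n firms, inverse demand P = a - Q,
    constant marginal cost c, and a fixed entry cost F per firm."""
    q = (a - c) / (n + 1)          # equilibrium output per firm
    Q = n * q                      # industry output
    price = a - Q
    profit = (price - c) * q - F   # per-firm profit net of entry cost
    cons_surplus = 0.5 * Q * Q     # triangle under demand, above price
    welfare = cons_surplus + n * profit
    return profit, welfare

# Free entry: the largest n at which every firm still breaks even.
n_free = max(n for n in range(1, 50) if cournot_outcome(n)[0] >= 0)

# Planner: the n that maximizes total welfare (surplus net of entry costs).
n_opt = max(range(1, 50), key=lambda n: cournot_outcome(n)[1])

print(n_free, n_opt)  # → 7 3: free entry admits more firms than a planner would
```

The intuition is the “business stealing” effect: each entrant’s profit comes partly from diverting sales from incumbents, a private gain with no social counterpart, so private entry incentives exceed the social value of entry.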

Hanson’s posts caught my eye because I am pleased that at least one practicing academic economist agrees with me. Some time ago, I put it this way (with light editing and block-quotation format omitted for ease of reading):

Regulators live in a dream world. They believe that they can emulate — and even improve on — the outcomes that would be produced by competitive markets. And that’s precisely where regulation fails: Bureaucratic rules cannot be devised to respond to consumers’ preferences and technological opportunities in the same ways that markets respond to those things. The main purpose of regulation (as even most regulators would admit) is to impose preferred outcomes, regardless of the immense (but mostly hidden) cost of regulation.

There should be a place of honor in regulatory hell for those who pursue “monopolists”, even though the only true monopolies are run by governments or exist with the connivance of governments (think of courts and cable franchises, for example). The opponents of “monopoly” really believe that success is bad. Those who agitate for antitrust actions against successful companies — branding them “monopolistic” — are stuck in a zero-sum view of the economic universe, in which “winners” must be balanced by “losers”. Antitrusters forget (if they ever knew) that (1) successful companies become successful by satisfying consumers; (2) consumers wouldn’t buy the damned stuff if they didn’t think it was worth the price; (3) “immense” profits invite competition (direct and indirect), which benefits consumers; and (4) the kind of innovation and risk-taking that (sometimes) leads to wealth for a few also benefits the many by fueling economic growth.

What about those “immense” profits? They don’t just disappear into thin air. Monopoly profits (“rent” in economists’ jargon) have to go somewhere, and so they do: into consumption, investment (which fuels economic growth), and taxes (which should make liberals happy). It’s just a question of who gets the money.

But isn’t output restricted, thus making people generally worse off? That may be what you learned in Econ 101, but that’s based on a static model which assumes that there’s a choice between monopoly and competition. In fact:

  • Monopoly (except when it’s gained by force, fraud, or government license) usually is a transitory state of affairs resulting from invention, innovation, and/or entrepreneurial skill.
  • Transitory? Why? Because monopoly profits invite competition — if not directly, then from substitutes.
  • Transitory monopolies arise as part of economic growth. Therefore, such monopolies exist as a “bonus” alongside competitive markets, not as alternatives to them.
  • The prospect of monopoly profits entices more invention, innovation, and entrepreneurship, which fuels more economic growth.

(See also “Socialist Calculation and the Turing Test“, “Monopoly: Private Is Better Than Public“, and “The Rahn Curve in Action“, which quantifies the stultifying effects of government spending and regulation.)

Intellectuals and Authoritarianism

In the preceding post I quoted the German political theorist Carl Schmitt (1888-1985). The quotation is from a book published in 1926, seven years before Schmitt joined the Nazi Party. But Schmitt’s attraction to authoritarianism long predates his party membership. In 1921, according to Wikipedia,

Schmitt became a professor at the University of Greifswald, where he published his essay Die Diktatur (on dictatorship), in which he discussed the foundations of the newly established Weimar Republic, emphasising the office of the Reichspräsident. In this essay, Schmitt compared and contrasted what he saw as the effective and ineffective elements of the new constitution of his country. He saw the office of the president as a comparatively effective element, because of the power granted to the president to declare a state of exception (Ausnahmezustand). This power, which Schmitt discussed and implicitly praised as dictatorial,[21] was more in line with the underlying mentality of executive power than the comparatively slow and ineffective processes of legislative power reached through parliamentary discussion and compromise.

Shades of Woodrow Wilson, the holder of an earned doctorate and erstwhile academician who had recently been succeeded as president of the United States by Warren G. Harding. Wilson

believed the Constitution had a “radical defect” because it did not establish a branch of government that could “decide at once and with conclusive authority what shall be done.”…

He also wrote that charity efforts should be removed from the private domain and “made the imperative legal duty of the whole,” a position which, according to historian Robert M. Saunders, seemed to indicate that Wilson “was laying the groundwork for the modern welfare state.”

Another renowned German academic, the philosopher Martin Heidegger (1889-1976), also became a Nazi in 1933. Whereas Schmitt never expressed regret or doubts about his membership in the party, Heidegger did, though perhaps not sincerely:

In his postwar thinking, Heidegger distanced himself from Nazism, but his critical comments about Nazism seem “scandalous” to some since they tend to equate the Nazi war atrocities with other inhumane practices related to rationalisation and industrialisation, including the treatment of animals by factory farming. For instance in a lecture delivered at Bremen in 1949, Heidegger said: “Agriculture is now a motorized food industry, the same thing in its essence as the production of corpses in the gas chambers and the extermination camps, the same thing as blockades and the reduction of countries to famine, the same thing as the manufacture of hydrogen bombs.”…

In [a 1966 interview for Der Spiegel], Heidegger defended his entanglement with National Socialism in two ways: first, he argued that there was no alternative, saying that he was trying to save the university (and science in general) from being politicized and thus had to compromise with the Nazi administration. Second, he admitted that he saw an “awakening” (Aufbruch) which might help to find a “new national and social approach,” but said that he changed his mind about this in 1934, largely prompted by the violence of the Night of the Long Knives.

In his interview Heidegger defended as double-speak his 1935 lecture describing the “inner truth and greatness of this movement.” He affirmed that Nazi informants who observed his lectures would understand that by “movement” he meant National Socialism. However, Heidegger asserted that his dedicated students would know this statement was no eulogy for the Nazi Party. Rather, he meant it as he expressed it in the parenthetical clarification later added to Introduction to Metaphysics (1953), namely, “the confrontation of planetary technology and modern humanity.”

The eyewitness account of Löwith from 1940 contradicts the account given in the Der Spiegel interview in two ways: that he did not make any decisive break with National Socialism in 1934, and that Heidegger was willing to entertain more profound relations between his philosophy and political involvement.

Schmitt and Heidegger were far from the only German intellectuals who were attracted to Nazism, whether out of philosophical conviction or expediency. More to the point, as presaged by my inclusion of Woodrow Wilson’s views, Schmitt and Heidegger were and are far from the only intellectual advocates of authoritarianism. Every academic, of any nation, who propounds government action that usurps the functions of private institutions is an authoritarian, whether or not he admits it to himself. Whether they are servants of an overtly totalitarian regime, like Schmitt and Heidegger, or of a formally libertarian one, like Wilson, they are all authoritarians under the skin.

Why? Because intellectualism is essentially rationalism. As Michael Oakeshott explains, a rationalist

never doubts the power of his ‘reason’ … to determine the worth of a thing, the truth of an opinion or the propriety of an action. Moreover, he is fortified by a belief in a ‘reason’ common to all mankind, a common power of rational consideration….

… And having cut himself off from the traditional knowledge of his society, and denied the value of any education more extensive than a training in a technique of analysis, he is apt to attribute to mankind a necessary inexperience in all the critical moments of life, and if he were more self-critical he might begin to wonder how the race had ever succeeded in surviving. [“Rationalism in Politics,” pp. 5-7, as republished in Rationalism in Politics and Other Essays]

If you have everything “figured out”, what is more natural than the desire to make it so? It takes a truly deep thinker to understand that everything can’t be “figured out”, and that rationalism is bunk. That is why intellectuals of the caliber of Oakeshott, Friedrich Hayek, and Thomas Sowell are found so rarely in academia, and why jackboot-lickers like Paul Krugman abound.

(See also “Academic Bias“, “Intellectuals and Capitalism“, “Intellectuals and Society: A Review“, and “Rationalism, Empiricism, and Scientific Knowledge“.)

A Paradox for Liberals

Libertarianism is liberalism in the classic meaning of the term, given here by one Zack Beauchamp:

[L]iberalism refers to a school of thought that takes freedom, consent, and autonomy as foundational moral values. Liberals agree that it is generally wrong to coerce people, to seize control of their bodies or force them to act against their will….

Beauchamp, in the next paragraph, highlights the paradox inherent in liberalism:

Given that people will always disagree about politics, liberalism’s core aim is to create a generally acceptable mechanism for settling political disputes without undue coercion — to give everyone a say in government through fair procedures, so that citizens consent to the state’s authority even when they disagree with its decisions.

Which is to say that liberalism does entail coercion. Thus the paradox. (What is now called “liberalism” in America is so rife with coercion that only a person who is ignorant of the meaning of liberalism can call it that with a straight face.)

There is nothing new about this paradox, as far as I’m concerned. I wrote about it 14 years ago in “A Paradox for Libertarians“:

Libertarians, by definition, believe in the superiority of liberty: the negative right to be left alone — in one’s person, pursuits, and property — as long as one leaves others alone. Libertarians therefore believe in the illegitimacy of state-enforced values (e.g., income redistribution, censorship, punishment of “victimless” crimes) because they are inimical to liberty.

Some libertarians (minarchists, such as I) nevertheless believe in the necessity of a state, as long as the state’s role is restricted to the protection of liberty. Other libertarians (anarcho-capitalists) argue that the state itself is illegitimate because the existence of a state necessarily compromises liberty. I have dealt elsewhere with the anarcho-capitalist position, and have found it wanting. (See “But Wouldn’t Warlords Take Over?” and the posts linked to at the bottom of that post.)

Let’s nevertheless imagine a pure anarcho-capitalist society whose members agree voluntarily to leave each other alone. All social and economic transactions are voluntary. Contracts and disputes are enforced through arbitration, to which all parties agree to submit and by the results of which all parties agree to abide. A private agency enforces contractual obligations and adherence to the outcomes of arbitration. (You know that this anarcho-capitalist society is pure fantasy because a private agency with such power is a de facto state. And competing private agencies, each of which may represent a party to a dispute are de facto warlords. But I digress.)

Now, for the members of this fantasyland to enjoy liberty implies, among other things, absolute freedom of speech, except for speech that amounts to harassment, slander, or libel (which are forms of aggression that deprive others of liberty). But what about speech that would sunder the society into libertarian and non-libertarian factions? Suppose that a persuasive orator were to convince a potentially dominant faction of the society of the following proposition: The older members of society should be supported by the younger members, all of whom must “contribute” to the support of the elders, like it or not. Suppose further that the potentially dominant faction heeds the persuasive orator and forces everyone to “contribute” to the support of elders.

Note that our little society’s prior agreement to let everyone live in peace wouldn’t survive persuasive oratory (just as America’s relatively libertarian economic order didn’t survive FDR, the Constitution notwithstanding). Perhaps our little society should therefore adopt this restraint on liberty: No one may advocate or conspire in the coercion of the populace, for any end other than defense of the society.

Why an exception for defense? Imagine the long-term consequences for our little society if it were to dither as a marauding band approached, or if too few members of the society were to volunteer the resources needed to defeat the marauding band. What’s the good of the society’s commitment to liberty if it leads to the society’s demise?

Now, the restraint on speech and the exception for defense couldn’t be self-enforcing. There would have to be a single agency empowered to enforce such things. That agency might as well be called the state.

Here, then, is the paradox for libertarians: Some aspects of liberty must be circumscribed in order to preserve most aspects of liberty.

The last word goes to Beauchamp, or rather to Carl Schmitt who is quoted by Beauchamp:

Even if Bolshevism is suppressed and Fascism held at bay, the crisis of contemporary parliamentarism would not be overcome in the least. For it has not appeared as a result of the appearance of those two opponents; it was there before them and will persist after them. Rather, the crisis springs from the consequences of modern mass democracy and in the final analysis from the contradiction of a liberal individualism burdened by moral pathos and a democratic sentiment governed essentially by political ideals. A century of historical alliance and common struggle against royal absolutism has obscured the awareness of this contradiction. But the crisis unfolds today ever more strikingly, and no cosmopolitan rhetoric can prevent or eliminate it. It is, in its depths, the inescapable contradiction of liberal individualism and democratic homogeneity [emphasis added].

The Crisis of Parliamentary Democracy (1926),
translated by Ellen Kennedy

(See also “Inventing ‘Liberalism‘”, which is about the advent of the modern abomination, and “Conservatism vs. ‘Libertarianism’ and Leftism on the Moral Dimension“, especially the footnote about “libertarianism”.)

Reflections on the “Feel Good” War

Prompted by my current reading — another novel about World War II — and the viewing of yet another film about Winston Churchill’s leadership during that war.

World War II was by no means a “feel good” war at the time it was fought. But it became one, eventually, as memories of a generation’s blood, toil, tears, and sweat faded away, to be replaced by the consoling fact of total victory. (That FDR set the stage for the USSR’s long dominance of Eastern Europe and status as a rival world power is usually overlooked.)

World War II is a “feel good” war in that it has been and continues to be depicted in countless novels, TV series, and movies as a valiant, sometimes romantic, and ultimately successful effort to defeat evil enemies: Nazi Germany and Imperial Japan. Most of the treatments, in fact, are about the war in Europe against Nazi Germany, because Hitler lingers in the general view as a personification of evil. Also, to the extent that the treatments are about stirring speeches, heroism, espionage, sabotage, and resistance, they are more readily depicted (and more commonly imagined) as the efforts of white Americans, Britons, and citizens of the various European nations that had been conquered by Nazi Germany.

World War II is also a “feel good” war — for millions of Americans, at least — because it is a reminder that the United States, once upon a time, united to fight and decisively won a great war against evil enemies. Remembering it in that way is a kind of antidote to the memories of later wars that left bitterness, divisiveness, and a sense of futility (if not failure) in their wake: Korea, Vietnam, Afghanistan, and Iraq.

That World War II was nothing like a “feel good” war while it was being fought should never be forgotten. Americans got off “lightly” by comparison with the citizens of enemy and Allied nations. But “lightly” means more than 400,000 combat deaths, almost 700,000 combat injuries (too many of them disabling and disfiguring), millions of lives disrupted, the reduction of Americans’ standard of living to near-Depression levels so that vast quantities of labor and materiel could be poured into the war effort, and — not the least of it — the dread that hung over Americans for several years before it became clear that the war would end in the defeat of Nazi Germany and Imperial Japan.

The generations that fought and lived through World War II deserved to look back on it as a “feel good” war, if that was their wont. But my impression — as a grandson, son, and nephew of members of those generations — is that they looked back on it as a part of their lives that they wouldn’t want to relive. They never spoke of it in my presence, and I was “all ears”, as they say.

But there was no choice. World War II had to be fought, and it had to be won. I only hope that if such a war comes along someday, Americans will support it and fight it as fiercely and tenaciously as did their ancestors in World War II. If Americans do fight it fiercely and tenaciously, it will be won. But I am not confident. The character of Americans has changed a lot — mostly for the worse — in the nearly 75 years since the end of World War II.

(See also “A Grand Strategy for the United States“, “Rating America’s Wars“, “The War on Terror As It Should Have Been Fought“, “1963: The Year Zero“, and “World War II As an Aberration“.)