Government Failure: An Example

John Goodman’s post about “Government Failure” is chock-full of wisdom. Among other things, Goodman nails the model of “market failure” used by some economists and many politicians:

When economists talk about “market failure” they begin with a model in which consumer welfare is maximized. “Market failure” arises when imperfections cause outcomes that fall short of the ideal.  If we were to do the same thing in politics, we would begin with a model in which the political system produced ideal outcomes and then consider factors that take us away from the ideal.

The model “in which consumer welfare is maximized” — perfect competition — is unattainable in most of the real world, given constant shifts in tastes, preferences, technologies, and the availability of factors of production. “Market failure” is nothing more than a label that a left-wing economist or politician pins on a market outcome of which he or his constituents (e.g., labor unions) happen to disapprove. (The long version of my case against “market failure” is here.)

Goodman continues:

[W]hereas in economics, “market failure” is considered an exception to the norm, in politics, “government failure” is the norm.  In general, there is no model of political decision making that can reliably produce ideal outcomes.

I offer an example of a not-unusual kind of government failure: the scam perpetrated by Dennis Montgomery on intelligence officials, and the subsequent effort to cover up the gullibility of those officials. This is from “Hiding Details of Dubious Deal, U.S. Invokes National Security” (The New York Times, February 19, 2011):

For eight years, government officials turned to Dennis Montgomery, a California computer programmer, for eye-popping technology that he said could catch terrorists. Now, federal officials want nothing to do with him and are going to extraordinary lengths to ensure that his dealings with Washington stay secret.

The Justice Department, which in the last few months has gotten protective orders from two federal judges keeping details of the technology out of court, says it is guarding state secrets that would threaten national security if disclosed. But others involved in the case say that what the government is trying to avoid is public embarrassment over evidence that Mr. Montgomery bamboozled federal officials….

Interviews with more than two dozen current and former officials and business associates and a review of documents show that Mr. Montgomery and his associates received more than $20 million in government contracts by claiming that software he had developed could help stop Al Qaeda’s next attack on the United States. But the technology appears to have been a hoax, and a series of government agencies, including the Central Intelligence Agency and the Air Force, repeatedly missed the warning signs, the records and interviews show.

Mr. Montgomery’s former lawyer, Michael Flynn — who now describes Mr. Montgomery as a “con man” — says he believes that the administration has been shutting off scrutiny of Mr. Montgomery’s business for fear of revealing that the government has been duped.

“The Justice Department is trying to cover this up,” Mr. Flynn said. “If this unravels, all of the evidence, all of the phony terror alerts and all the embarrassment comes up publicly, too. The government knew this technology was bogus, but these guys got paid millions for it.”

Similar cases abound in the unrecorded history of government contracting. Most of them don’t involve outright scams, but they do involve vain, gullible, and pressured government officials who tolerate — and even encourage — shoddy work on the part of contractors. Why? Because (a) they have money to spend, (b) they’re expected to spend it, and (c) there’s no bottom-line accountability.

If the flaws in government programs and systems are detected, it’s usually years or decades after their inception, by which time the responsible individuals have gone on to better jobs or cushy pensions. And when the flaws are detected, the usual response of the politicians, officials, and bureaucrats with a stake in a program is to throw more money at it. It’s not their money, so what do they care?

I offer an illustrative example from my long-ago days as a defense analyst. There was an ambitious rear admiral (as they all are) whose “shop” in the Pentagon was responsible for preparing the Navy’s long-range plan for the development and acquisition of new ships, aircraft, long-range detection systems, missiles, and so on.

The admiral — like many of his contemporaries in the officer corps of the armed forces — had been indoctrinated in the RAND-McNamara tradition of quantitative analysis. Which is to say that most of them were either naïve or opportunistic believers in the reductionism of cost-effectiveness analysis.

By that time (this was in the early 1980s) I had long outgrown my own naïveté about the power of quantification. (An account of my conversion is here.) But I was still naïve about admirals and their motivations. Having been asked by the admiral for a simple, quantitative model with which he could compare the effectiveness of alternative future weapon systems, I marched into his office with a presentation that was meant to convince him of his folly. (This post contains the essence of my presentation.)

For my pains, I was banished forever from the admiral’s presence and given a new assignment. (I was working for a non-profit advisory organization with fixed funding, so my employment wasn’t at stake.) The admiral wanted to know how to do what he had made up his mind to do, not why he had chosen to do something that couldn’t be done except by committing intellectual fraud.

Multiply this kind of government-contractor relationship by a million, throw in the usual kind of contractor who is willing to sell the client what the client wants — feasible or not — and you have a general picture of the kind of failure that pervades government contracting. Adapt that picture to inter-governmental relationships, where the primary job of each bureaucracy (and its political patrons) is to preserve its funding, without regard for the (questionable) value of its services to taxpayers, and you have a general picture of what drives government spending.

In sum, what drives government spending is not the welfare of the American public. It is cupidity, ego, power-lust, ignorance, stupidity, and — above all — lack of real accountability. Private enterprises pay for their mistakes because, in the end, they are held accountable by consumers. Governments, by contrast, hold consumers accountable (as taxpayers).

Perhaps — just perhaps — the era of governmental non-accountability is coming to an end. We shall see.

Modeling Is Not Science

The title of this post applies, inter alia, to econometric models — especially those that purport to forecast macroeconomic activity — and climate models — especially those that purport to forecast global temperatures. I have elsewhere essayed my assessments of macroeconomic and climate models. (See this and this, for example.) My purpose here is to offer a general warning about models that claim to depict and forecast the behavior of connected sets of phenomena (systems) that are large, complex, and dynamic. I draw, in part, on a paper that I wrote 28 years ago. That paper is about warfare models, but it has general applicability.

HEMIBEL THINKING

Philip M. Morse and George E. Kimball, pioneers in the field of military operations research — the analysis and modeling of military operations — wrote that the

successful application of operations research usually results in improvements by factors of 3 or 10 or more. . . . In our first study of any operation we are looking for these large factors of possible improvement. . . .

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. (Methods of Operations Research, 1946, p. 38)

This is science-speak for the following proposition: In large, complex, and dynamic systems (e.g., war, economy, climate) there is much uncertainty about the relevant parameters, about how to characterize their interactions mathematically, and about their numerical values.

Hemibel thinking assumes great importance in light of the imprecision inherent in models of large, complex, and dynamic systems. Consider, for example, a simple model with only 10 parameters. Even if such a model doesn’t omit crucial parameters or mischaracterize their interactions, its results must be taken with large doses of salt. Simple mathematics tells the cautionary tale: an error of about 12 percent in the value of each parameter can produce a result that is off by a factor of 3 (a hemibel); an error of about 25 percent in the value of each parameter can produce a result that is off by a factor of 10. (Remember, this is a model of a relatively small system.)
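The arithmetic behind those factors assumes that the per-parameter errors compound multiplicatively across the 10 parameters. A minimal sketch in Python, under that assumption:

```python
# Multiplicative compounding of per-parameter errors in a 10-parameter model.
# Assumption (mine, for illustration): the model's output behaves roughly like
# a product of its parameters, so relative errors multiply.

n_params = 10

for per_param_error in (0.12, 0.25):
    compounded = (1 + per_param_error) ** n_params
    print(f"{per_param_error:.0%} error in each of {n_params} parameters "
          f"-> result off by a factor of about {compounded:.1f}")

# Output:
# 12% error in each of 10 parameters -> result off by a factor of about 3.1
# 25% error in each of 10 parameters -> result off by a factor of about 9.3
```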

If you think that models and “data” about such things as macroeconomic activity and climatic conditions cannot be as inaccurate as that, you have no idea how such models are devised or how such data are collected and reported. It would be kind to say that such models are incomplete, inaccurate guesswork. It would be fair to say that all too many of them reflect their developers’ policy biases.

Of course, given a (miraculously) complete model, data errors might (miraculously) be offsetting, but don’t bet on it. It’s not that simple: Some errors will be large and some errors will be small (but which are which?), and the errors may lie in either direction (but in which direction?). In any event, no amount of luck can prevent a modeler from constructing a model whose estimates advance a favored agenda (e.g., massive, indiscriminate government spending; massive, futile, and costly efforts to cool the planet).
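To see why offsetting cannot be counted on, consider a hypothetical simulation in which each of the 10 parameters is off by as much as 25 percent in either direction. The uniform error distribution is invented purely for illustration; the point is that even with ample opportunity for cancellation, the spread of outcomes remains wide.

```python
# Hypothetical Monte Carlo sketch: even when per-parameter errors can offset
# one another, the compounded error in a 10-parameter multiplicative model
# is rarely negligible. The error distribution is illustrative only.
import random

random.seed(0)
n_params, n_trials = 10, 10_000

factors = []
for _ in range(n_trials):
    factor = 1.0
    for _ in range(n_params):
        factor *= 1 + random.uniform(-0.25, 0.25)  # error of unknown sign and size
    factors.append(factor)

factors.sort()
print("median factor:", round(factors[n_trials // 2], 2))
print("90% of trials fall between",
      round(factors[int(0.05 * n_trials)], 2), "and",
      round(factors[int(0.95 * n_trials)], 2))
```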

NO MODEL IS EVER PROVEN

The construction of a model is only one part of the scientific method. A model means nothing unless it can be tested repeatedly against facts (facts not already employed in the development of the model) and, through such tests, is found to be more accurate than alternative explanations of the same facts. As Morse and Kimball put it,

[t]o be valuable [operations research] must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. (Op. cit., p. 10)

Even after rigorous testing, a model is never proven. It is, at best, a plausible working hypothesis about relations between the phenomena that it encompasses.

A model is never proven for two reasons. First, new facts may be discovered that do not comport with the model. Second, the facts upon which a model is based may be open to a different interpretation, that is, they may support a new model that yields better predictions than its predecessor.

The fact that a model cannot be proven can be taken as an excuse for action: “We must act on the best information we have.”  That excuse — which justifies an entire industry, namely, government-funded analysis — does not fly, as I discuss below.

MODELS LIE WHEN LIARS MODEL

Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.
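By way of illustration of the period-selection ploy, here is a hypothetical sketch with synthetic data. The series and the chosen window are invented; the same trendless series yields a very different fitted slope when only a favorable window is examined.

```python
# Hypothetical sketch of "torturing" data by period selection: a synthetic
# series with a cycle but no underlying trend can be made to show a marked
# "trend" simply by fitting over a favorable window. All data are invented.
import math
import random

random.seed(1)

years = list(range(1950, 2011))
# Cyclical swing plus noise; nothing in the construction trends upward.
values = [10.0 + 2.0 * math.sin((y - 1950) / 9.7) + random.gauss(0.0, 0.5)
          for y in years]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

print("slope fitted to the full record (1950-2010):",
      round(ols_slope(years, values), 3))
print("slope fitted to a chosen window (1995-2010):",
      round(ols_slope(years[45:], values[45:]), 3))
```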

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed?

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a masterful demonstration of a model that proves what it was designed to prove by the assumptions built into it, see this.

IMPLICATIONS

Government policies can be dangerous and impoverishing things. Despite that, it is hard (if not impossible) to modify or reverse government policies. Consider, for example, the establishment of public schools more than a century ago, the establishment of Social Security more than 70 years ago, and the establishment of Medicare and Medicaid more than 40 years ago. There is plenty of evidence that all four institutions are monumentally expensive failures. But all four institutions have become so entrenched that to call for their abolition is to be thought of as an eccentric, if not an uncaring anti-government zealot. (For the latest about public schools, see this.)

The principal lesson to be drawn from the history of massive government programs is that those who were skeptical of those programs were entirely justified in their skepticism. Informed, articulate skepticism of the kind I counsel here is the best weapon — perhaps the only effective one — in the fight to defend what remains of liberty and property against the depredations of massive government programs.

Skepticism often is met with the claim that such-and-such a model is the “best available” on a subject. But the “best available” model — even if it is the best available one — may be terrible indeed. Relying on the “best available” model for the sake of government action is like sending an army into battle — and likely to its defeat — on the basis of rumors about the enemy’s position and strength.

With respect to the economy and the climate, there are too many rumor-mongers (“scientists” with an agenda), too many gullible and compliant generals (politicians), and far too many soldiers available as cannon-fodder (the paying public).

CLOSING THOUGHTS

The average person is so mystified and awed by “science” that he has little if any understanding of its limitations and pitfalls, some of which I have addressed here in the context of modeling. The average person’s mystification and awe are unjustified, given that many so-called scientists exploit the public’s mystification and awe in order to advance personal biases, gain the approval of other scientists (whence “consensus”), and garner funding for research that yields results congenial to its sponsors (e.g., global warming is an artifact of human activity).

Isaac Newton, who must be numbered among the greatest scientists in human history, was not a flawless scientist. (Has there ever been one?) But scientists and non-scientists alike should heed Newton on the subject of scientific humility:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me. (Quoted in Horace Freeland Judson, The Search for Solutions, 1980, p. 5.)


Related reading: Willis Eschenbach, “How Not to Model the Historical Temperature,” Watts Up With That?, March 25, 2018