
Modeling Revisited

Arnold Kling comments on a post by John Taylor, who writes about the Macroeconomic Modelling and Model Comparison Network (MMCN), which

is one part of a larger project called the Macroeconomic Model Comparison Initiative (MMCI)…. That initiative includes the Macroeconomic Model Data Base, which already has 82 models that have been developed by researchers at central banks, international institutions, and universities. Key activities of the initiative are comparing solution methods for speed and accuracy, performing robustness studies of policy evaluations, and providing more powerful and user-friendly tools for modelers.

Kling says: “Why limit the comparison to models? Why not compare models with verbal reasoning?” I say: a pox on economic models, whether they are mathematical or verbal.

That said, I do harbor special disdain for mathematical models, including statistical estimates of such models. Reality is nuanced. Verbal descriptions, being more nuanced than mathematics, can represent reality more closely than mathematics can.

Mathematical modelers are quick to point out that a mathematical model can express complex relationships which are difficult to express in words. True, but the words must always precede the mathematics. Long usage may enable a person to grasp the meaning of 2 + 2 = 4 without consciously putting it into words, but only because he has already done so and committed the formula to memory.

Do you remember word problems? As I remember them, the words came first:

John is twenty years younger than Amy, and in five years’ time he will be half her age. What is John’s age now?

Then came the math:

Solve for J [John’s age]:

J = A − 20
J + 5 = (A + 5) / 2

[where A = Amy’s age]
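
Substituting the first equation into the second and solving (a step the word problem leaves to the reader) yields:

A − 15 = (A + 5) / 2
2A − 30 = A + 5
A = 35, so J = 15 [John is 15; Amy is 35]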

What would be the point of presenting the math, then asking for the words?

Mathematics is a man-made tool. It probably started with counting. Sheep? Goats? Bananas? It doesn’t matter what it was. What matters is that the actual thing, which had a spoken name, came before the numbering convention that enabled people to refer to three sheep without having to draw or produce three actual sheep.

But … when it came to bartering sheep for loaves of bread, or whatever, those wily ancestors of ours knew that sheep come in many sizes, ages, fecundity, and states of health, and in two sexes. (Though I suppose that the LGBTQ movement has by now “discovered” homosexual and transgender sheep, and transsexual sheep may be in the offing.) Anyway, there are so many possible combinations of sizes, ages, fecundity, and states of health that it was (and is) impractical to reduce them to numbers. A quick, verbal approximation would have to do in the absence of the real thing. And the real thing would have to be produced before Grog and Grok actually exchanged X sheep for Y loaves of bread, unless they absolutely trusted each other’s honesty and descriptive ability.

Things are somewhat different in this age of mass production and commodification. But even if it’s possible to add sheep that have been bred for near-uniformity or nearly identical loaves of bread or Paper Mate Mirado Woodcase Pencils, HB 2, Yellow Barrel, it’s not possible to add those pencils to the sheep and the loaves of bread. The best that one could do is to list the components of such a conglomeration by name and number, with the caveat that there’s a lot of variability in the sheep, goats, bananas, and bread.

An economist would say that it is possible to add a collection of disparate things: Just take the sales price of each one, multiply it by the quantity sold, and if you do that for every product and service produced in the U.S. during a year you have an estimate of GDP. (I’m being a bit loose with the definition of GDP, but it’s good enough for the point I wish to make.) Further, some economists will tout this or that model which estimates changes in the value of GDP as a function of such things as interest rates, the rate of government spending, and estimates of projected consumer spending.
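
To make the economist’s arithmetic concrete, here is a minimal sketch of that tally; the products, prices, and quantities are invented for illustration and stand in for the billions of real transactions:

# Minimal sketch of the GDP arithmetic described above:
# sum, over every final good and service, of price times quantity sold.
# All figures below are invented for illustration.
transactions = {
    "sheep":   {"price": 150.00, "quantity": 2_000},
    "bread":   {"price": 3.50,   "quantity": 500_000},
    "pencils": {"price": 0.25,   "quantity": 1_000_000},
}

nominal_gdp = sum(t["price"] * t["quantity"] for t in transactions.values())
print(f"Estimated nominal GDP: ${nominal_gdp:,.2f}")

Note that the sum is a sum of dollars, not of sheep, loaves, and pencils, which is precisely the aggregation at issue below.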

I don’t disagree that GDP can be computed or that economic models can be concocted. But I do say that such computations and models, aside from being notoriously inaccurate (even though they deal in dollars, not in quantities of various products and services), are essentially meaningless. Aside from the errors that are inevitable in the use of sampling to estimate the dollar value of billions of transactions, there is the essential meaninglessness of the dollar value itself. Every transaction represented in an estimate of GDP (or any lesser aggregation) has a different real value to each participant in the transaction. Further, those real values, even if they could be measured and expressed in “utils”, can’t be summed because “utils” are incommensurate — there is no such thing as a social-welfare function.

Quantitative aggregations are not only meaningless, but their existence simply encourages destructive government interference in economic affairs. Mathematical modeling of “aggregate economic activity” (there is no such thing) may serve as an amusing and even lucrative pastime, but it does nothing to advance the lives and fortunes of the vast majority of Americans. In fact, it serves to retard their lives and fortunes.

All of that because pointy-headed academics, power-lusting politicians, and bamboozled bureaucrats believe that economic aggregates and quantitative economic models are meaningful. If they spent more than a few minutes thinking about what those models are supposed to represent — and don’t and can’t represent — they would at least use them with a slight pang of conscience. (I hold little hope that they would abandon them. The allure of power and the urge to “do something” are just too strong.)

Economic aggregates and models gain meaning and precision only as their compass shrinks to discrete markets for closely similar products and services. But even in the quantification of such markets there will always be some kind of misrepresentation by aggregation, if only because tastes, preferences, materials, processes, and relative prices change constantly. Only a fool believes that a quantitative economic model (of any kind) is more than a rough approximation of past reality — an approximation that will fade quickly as time marches on.

Economist Tony Lawson puts it this way:

Given the modern emphasis on mathematical modelling it is important to determine the conditions in which such tools are appropriate or useful. In other words we need to uncover the ontological presuppositions involved in the insistence that mathematical methods of a certain sort be everywhere employed. The first thing to note is that all these mathematical methods that economists use presuppose event regularities or correlations. This makes modern economics a form of deductivism. A closed system in this context just means any situation in which an event regularity occurs. Deductivism is a form of explanation that requires event regularities. Now event regularities can just be assumed to hold, even if they cannot be theorised, and some econometricians do just that and dedicate their time to trying to uncover them. But most economists want to theorise in economic terms as well. But clearly they must do so in terms that guarantee event regularity results. The way to do this is to formulate theories in terms of isolated atoms. By an atom I just mean a factor that has the same independent effect whatever the context. Typically human individuals are portrayed as the atoms in question, though there is nothing essential about this. Notice too that most debates about the nature of rationality are beside the point. Mainstream modellers just need to fix the actions of the individual of their analyses to render them atomistic, i.e., to fix their responses to given conditions. It is this implausible fixing of actions that tends to be expressed though, or is the task of, any rationality axiom. But in truth any old specification will do, including fixed rule or algorithm following as in, say, agent based modelling; the precise assumption used to achieve this matters little. Once some such axiom or assumption-fixing behaviour is made economists can predict/deduce what the factor in question will do if stimulated. Finally the specification in this way of what any such atom does in given conditions allows the prediction activities of economists ONLY if nothing is allowed to counteract the actions of the atoms of analysis. Hence these atoms must additionally be assumed to act in isolation. It is easy to show that this ontology of closed systems of isolated atoms characterises all of the substantive theorising of mainstream economists.

It is also easy enough to show that the real world, the social reality in which we actually live, is of a nature that is anything but a set of closed systems of isolated atoms (see Lawson, [Economics and Reality, London and New York: Routledge] 1997, [Reorienting Economics, London and New York: Routledge] 2003).

Mathematical-statistical descriptions of economic phenomena are either faithful (if selective) depictions of one-off events (which are unlikely to recur) or highly stylized renditions of complex chains of events (which almost certainly won’t recur). As Arnold Kling says in his review of Richard Bookstaber’s The End of Theory,

people are assumed to know, now and for the indefinite future, the entire range of possibilities, and the likelihood of each. The alternative assumption, that the future has aspects that are not foreseeable today, goes by the name of “radical uncertainty.” But we might just call it the human condition. Bookstaber writes that radical uncertainty “leads the world to go in directions we had never imagined…. The world could be changing right now in ways that will blindside you down the road.”

I’m picking on economics because it’s an easy target. But the “hard sciences” have their problems, too. See, for example, my work in progress about Einstein’s special theory of relativity.


Related reading:

John Cochrane, “Mallaby, the Fed, and Technocratic Illusions”, The Grumpy Economist, July 5, 2017

Vincent Randall, “The Uncertainty Monster: Lessons from Non-Orthodox Economics”, Climate Etc., July 5, 2017

Related posts:

Modeling Is Not Science
Microeconomics and Macroeconomics
Why the “Stimulus” Failed to Stimulate
Baseball Statistics and the Consumer Price Index
The Keynesian Multiplier: Phony Math
Further Thoughts about the Keynesian Multiplier
The Wages of Simplistic Economics
The Essence of Economics
Economics and Science
Economists As Scientists
Mathematical Economics
Economic Modeling: A Case of Unrewarded Complexity
Economics from the Bottom Up
Unorthodox Economics: 1. What Is Economics?
Unorthodox Economics: 2. Pitfalls
Unorthodox Economics: 3. What Is Scientific about Economics?
Unorthodox Economics 4: A Parable of Political Economy

Modeling, Science, and Physics Envy

Climate Skeptic notes the similarity of climate models and macroeconometric models:

The climate modeling approach is so similar to that used by the CEA to score the stimulus that there is even a climate equivalent to the multiplier found in macro-economic models. In climate models, small amounts of warming from man-made CO2 are multiplied many-fold to catastrophic levels by hypothetical positive feedbacks, in the same way that the first-order effects of government spending are multiplied in Keynesian economic models. In both cases, while these multipliers are the single most important drivers of the models’ results, they also tend to be the most controversial assumptions. In an odd parallel, you can find both stimulus and climate debates arguing whether their multiplier is above or below one.
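
The passage quotes no formulas, but the textbook forms of the two multipliers make the parallel concrete (the numerical values below are purely illustrative, not taken from any particular model):

# Keynesian spending multiplier: k = 1 / (1 - MPC),
# where MPC is the marginal propensity to consume.
mpc = 0.5
keynesian_multiplier = 1 / (1 - mpc)      # 2.0 with this illustrative MPC

# Climate feedback gain: G = 1 / (1 - f),
# where f is the net feedback fraction applied to the no-feedback warming.
f = 0.67
feedback_gain = 1 / (1 - f)               # about 3.0 with this illustrative f

print(f"Keynesian multiplier: {keynesian_multiplier:.2f}")
print(f"Climate feedback gain: {feedback_gain:.2f}")

In both cases the assumed value of a single feedback term drives the headline result, which is the point of the comparison.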

Here is my take, from “Modeling Is Not Science”:

The principal lesson to be drawn from the history of massive government programs is that those who were skeptical of those programs were entirely justified in their skepticism. Informed, articulate skepticism of the kind I counsel here is the best weapon — perhaps the only effective one — in the fight to defend what remains of liberty and property against the depredations of massive government programs.

Skepticism often is met with the claim that such-and-such a model is the “best available” on a subject. But the “best available” model — even if it is the best available one — may be terrible indeed. Relying on the “best available” model for the sake of government action is like sending an army into battle — and likely to defeat — on the basis of rumors about the enemy’s position and strength.

With respect to the economy and the climate, there are too many rumor-mongers (“scientists” with an agenda), too many gullible and compliant generals (politicians), and far too many soldiers available as cannon-fodder (the paying public).

Scientists and politicians who stand by models of unfathomably complex processes are guilty of physics envy, at best, and fraud, at worst.

Modeling Is Not Science

The title of this post applies, inter alia, to econometric models — especially those that purport to forecast macroeconomic activity — and climate models — especially those that purport to forecast global temperatures. I have elsewhere essayed my assessments of macroeconomic and climate models. (See this and this, for example.) My purpose here is to offer a general warning about models that claim to depict and forecast the behavior of connected sets of phenomena (systems) that are large, complex, and dynamic. I draw, in part, on a paper that I wrote 28 years ago. That paper is about warfare models, but it has general applicability.

HEMIBEL THINKING

Philip M. Morse and George E. Kimball, pioneers in the field of military operations research — the analysis and modeling of military operations — wrote that the

successful application of operations research usually results in improvements by factors of 3 or 10 or more. . . . In our first study of any operation we are looking for these large factors of possible improvement. . . .

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. (Methods of Operations Research, 1946, p. 38)

This is science-speak for the following proposition: In large, complex, and dynamic systems (e.g., war, economy, climate) there is much uncertainty about the relevant parameters, about how to characterize their interactions mathematically, and about their numerical values.

Hemibel thinking assumes great importance in light of the imprecision inherent in models of large, complex, and dynamic systems. Consider, for example, a simple model with only 10 parameters. Even if such a model doesn’t omit crucial parameters or mischaracterize their interactions, its results must be taken with large doses of salt. Simple mathematics tells the cautionary tale: An error of about 12 percent in the value of each parameter can produce a result that is off by a factor of 3 (a hemibel); an error of about 25 percent in the value of each parameter can produce a result that is off by a factor of 10. (Remember, this is a model of a relatively small system.)
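
A back-of-the-envelope check of that arithmetic, assuming (as the cautionary tale implicitly does) that the ten parameters enter multiplicatively and that their errors all push in the same direction:

# Compounding of per-parameter errors in a toy model whose result is the
# product of 10 parameters, each off by the same fraction in the same direction.
n_params = 10

for per_param_error in (0.12, 0.25):
    factor = (1 + per_param_error) ** n_params
    print(f"{per_param_error:.0%} error per parameter -> "
          f"result off by a factor of {factor:.1f}")

# Output: 12% -> about 3.1 (a hemibel); 25% -> about 9.3 (nearly a bel).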

If you think that models and “data” about such things as macroeconomic activity and climatic conditions cannot be as inaccurate as that, you have no idea how such models are devised or how such data are collected and reported. It would be kind to say that such models are incomplete, inaccurate guesswork. It would be fair to say that all too many of them reflect their developers’ policy biases.

Of course, given a (miraculously) complete model, data errors might (miraculously) be offsetting, but don’t bet on it. It’s not that simple: Some errors will be large and some errors will be small (but which are which?), and the errors may lie in either direction (but in which direction?). In any event, no amount of luck can prevent a modeler from constructing a model whose estimates advance a favored agenda (e.g., massive, indiscriminate government spending; massive, futile, and costly efforts to cool the planet).

NO MODEL IS EVER PROVEN

The construction of a model is only one part of the scientific method. A model means nothing unless it can be tested repeatedly against facts (facts not already employed in the development of the model) and, through such tests, is found to be more accurate than alternative explanations of the same facts. As Morse and Kimball put it,

[t]o be valuable [operations research] must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. (Op. cit., p. 10)

Even after rigorous testing, a model is never proven. It is, at best, a plausible working hypothesis about relations between the phenomena that it encompasses.

A model is never proven for two reasons. First, new facts may be discovered that do not comport with the model. Second, the facts upon which a model is based may be open to a different interpretation, that is, they may support a new model that yields better predictions than its predecessor.

The fact that a model cannot be proven can be taken as an excuse for action: “We must act on the best information we have.” That excuse — which justifies an entire industry, namely, government-funded analysis — does not fly, as I discuss below.

MODELS LIE WHEN LIARS MODEL

Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.
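
As a toy illustration of the second trick, here is a sketch with a made-up series (the slope helper and all numbers are hypothetical, not drawn from any real data): fitting a trend to a hand-picked window gives a “result” several times steeper than the trend over the full record.

# Toy illustration of "mining" a series by choosing the sample window.
# The values are invented; the full record drifts only mildly, but the
# hand-picked 2010-2015 window shows a steep run-up.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

years = list(range(2000, 2020))
values = [10, 10, 9, 11, 10, 9, 10, 11, 10, 9,
          9, 10, 11, 12, 13, 14, 13, 12, 11, 10]

full_trend = ols_slope(years, values)                  # about +0.14 per year
picked_trend = ols_slope(years[10:16], values[10:16])  # 2010-2015: +1.00 per year

print(f"Trend over the full record:        {full_trend:+.2f} per year")
print(f"Trend over the hand-picked window: {picked_trend:+.2f} per year")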

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed?

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a masterful demonstration of a model that proves what it was designed to prove by the assumptions built into it, see this.

IMPLICATIONS

Government policies can be dangerous and impoverishing things. Despite that, it is hard (if not impossible) to modify and reverse government policies. Consider, for example, the establishment of public schools more than a century ago, the establishment of Social Security more than 70 years ago, and the establishment of Medicare and Medicaid more than 40 years ago. There is plenty of evidence that all four institutions are monumentally expensive failures. But all four institutions have become so entrenched that to call for their abolition is to be thought of as an eccentric, if not an uncaring anti-government zealot. (For the latest about public schools, see this.)

The principal lesson to be drawn from the history of massive government programs is that those who were skeptical of those programs were entirely justified in their skepticism. Informed, articulate skepticism of the kind I counsel here is the best weapon — perhaps the only effective one — in the fight to defend what remains of liberty and property against the depredations of massive government programs.

Skepticism often is met with the claim that such-and-such a model is the “best available” on a subject. But the “best available” model — even if it is the best available one — may be terrible indeed. Relying on the “best available” model for the sake of government action is like sending an army into battle — and likely to defeat — on the basis of rumors about the enemy’s position and strength.

With respect to the economy and the climate, there are too many rumor-mongers (“scientists” with an agenda), too many gullible and compliant generals (politicians), and far too many soldiers available as cannon-fodder (the paying public).

CLOSING THOUGHTS

The average person is so mystified and awed by “science” that he has little if any understanding of its limitations and pitfalls, some of which I have addressed here in the context of modeling. The average person’s mystification and awe are unjustified, given that many so-called scientists exploit the public’s mystification and awe in order to advance personal biases, gain the approval of other scientists (whence “consensus”), and garner funding for research that yields results congenial to its sponsors (e.g., global warming is an artifact of human activity).

Isaac Newton, who must be numbered among the greatest scientists in human history, was not a flawless scientist. (Has there ever been one?) But scientists and non-scientists alike should heed Newton on the subject of scientific humility:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me. (Quoted in Horace Freeland Judson, The Search for Solutions, 1980, p. 5.)