Intuition vs. Rationality

To quote myself:

[I]ntuition [is] a manifestation of intelligence, not a cause of it. To put it another way, intuition is not an emotion; it is the opposite of emotion.

Intuition is reasoning at high speed. For example, a skilled athlete knows where and when to make a move (e.g., whether and where to swing at a pitched ball) because he subconsciously makes the necessary calculations, which he could not make consciously in the split-second that is available to him once the pitcher releases the ball.

Intuition is an aspect of reasoning (rationality) that is missing from “reason” — the cornerstone of the Enlightenment. The Enlightenment’s proponents and defenders are always going on about the power of logic applied to facts, and how that power brought mankind (or mankind in the West, at least) out of the benighted Middle Ages (via the Renaissance) and into the light of Modernity.

But “reason” of the kind associated with the Enlightenment is of the plodding variety, whereby “truth” is revealed at the conclusion of deliberate, conscious processes (e.g., the scientific method). Yet those processes, as the following paragraphs explain, are susceptible to error because they rest on errors and assumptions that are hidden from view — often wittingly, as in the case of “climate change”.

Science, for all of its value to mankind, requires abstraction from reality. That is to say, it is reductionist. A good example is the arbitrary division of continuous social and scientific processes into discrete eras (the Middle Ages, the Renaissance, the Enlightenment, etc.). This ought to be a warning that mere abstractions are often, and mistakenly, taken as “facts”.

Reductionism makes it possible to “prove” almost anything by hiding errors and assumptions (wittingly or not) behind labels. Thus: x + y = z only when x and y are strictly defined and commensurate. Otherwise, x and y cannot be summed, or their summation can result in many correct values other than z. Further, as in the notable case of “climate change”, it is easy to assume (from bias or error) that z is determined only by x and y, when there are good reasons to believe that it is also determined by other factors: known knowns, known unknowns, and unknown unknowns.
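
A toy illustration of the point, in a few lines of Python: the quantities and units below are my invention, chosen only to make visible the definitions that the bare equation hides.

```python
# A toy illustration (the numbers and units are invented): whether "x + y = z"
# means anything at all depends on definitions that the bare equation hides.

x = 3.0   # suppose x is 3 metres
y = 2.0   # suppose y is 2 feet

naive_z = x + y                  # 5.0 -- five of what? Meaningless without commensurate units.
z_in_metres = x + y * 0.3048     # about 3.61, if y is first converted to metres
z_in_feet = x / 0.3048 + y       # about 11.84, another "correct" value of z

print(naive_z, z_in_metres, z_in_feet)
```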

Such things happen because human beings are ineluctably emotional and biased creatures, and usually unaware of their emotions and biases. The Enlightenment’s proponents and defenders are no more immune from emotion and bias than the “lesser” beings whom they presume to lecture about rationality.

The plodding search for “answers” is, furthermore, inherently circumscribed because it dismisses or minimizes the vital role played by unconscious deliberation — to coin a phrase. How many times have you found the answer to a question, a problem, or a puzzle by putting aside your deliberate, conscious search for the answer, only to have it come to you in a “Eureka!” moment sometime later (perhaps after a nap or a good night’s sleep)? That’s your brain at work in ways that aren’t well understood.

This process (to put too fine a word on it) is known as combinatorial play. Its importance has been acknowledged by many creative persons. Combinatorial play can be thought of as slow-motion intuition, where the brain takes some time to assemble (unconsciously) existing knowledge into an answer to a question, a problem, or a puzzle.

There is also fast-motion intuition, an example of which I invoked in the quotation at the top of this post: the ability of a batter to calculate in a split-second where a pitch will be when it reaches him. Other examples abound, including such vital ones as the ability of drivers to maneuver lethal objects in infinitely varied and often treacherous conditions. Much is made of the number of fatal highway accidents; too little is made of their relative infrequency given the billions of daily opportunities for their occurrence. Imagine the carnage if drivers relied on plodding “reason” instead of fast-motion intuition.

The plodding version of “reason” that has been celebrated since the Enlightenment is therefore just one leg of a triad: thinking quickly and unconsciously, thinking somewhat less quickly and unconsciously, and thinking slowly and consciously.

Wasn’t it ever thus? Of course it was. Which means that the Enlightenment and its sequel unto the present day have merely fetishized one mode of dealing with the world and its myriad uncertainties. I would have said “arriving at the truth”, but it is well known (except by ignorant science-idolaters) that scientific “knowledge” is provisional and ever-changing. (Just think of the many things that were supposed to be bad for you but are now supposed to be good for you, and conversely.)

I am not a science-denier by any means. But scientific “knowledge” must be taken with copious quantities of salt because it is usually inadequate in the face of messy reality. A theoretical bridge, for example, may hold up under theoretical conditions, but it is likely to collapse when built in the real world, where there is much uncertainty about present and future conditions (e.g., the integrity of materials, adherence to best construction practices, soil conditions, the cumulative effects of traffic). An over-built bridge — the best kind — is one that allows wide margins of error for such uncertainties. The same is true of planes, trains, automobiles, buildings, and much else that our lives depend on. All such things fail less frequently than in the past not only because of the advance of knowledge but also because greater material affluence enables the use of designs and materials that afford wider margins of error.

In any event, too little credit is given to the other legs of reason’s triad: fast-motion and slow-motion intuition. Any good athlete, musician, or warrior will attest to the value of the former. I leave it to Albert Einstein to attest to the value of the latter:

combinatory [sic] play seems to be the essential feature in productive thought — before there is any connection with logical construction in words or other kinds of signs which can be communicated to others….

[F]ull consciousness is a limit case which can never be fully accomplished. This seems to me connected with the fact called the narrowness of consciousness.


Related page and category:

Modeling and Science
Science and Understanding

A (Long) Footnote about Science

In “Deduction, Induction, and Knowledge” I make a case that knowledge (as opposed to belief) can only be inductive, that is, limited to specific facts about particular phenomena. It’s true that a hypothesis or theory about a general pattern of relationships (e.g., the general theory of relativity) can be useful, and even necessary. As I say at the end of “Deduction…”, the fact that a general theory can’t be proven

doesn’t — and shouldn’t — stand in the way of acting as if we possess general knowledge. We must act as if we possess general knowledge. To do otherwise would result in stasis, or analysis-paralysis.

Which doesn’t mean that a general theory should be accepted just because it seems plausible. Some general theories — such as global climate models (or GCMs) — are easily falsified. They persist only because pseudo-scientists and true believers refuse to abandon them. (There is no such thing as “settled science”.)

Neil Lock, writing at Watts Up With That?, offers this perspective on inductive vs. deductive thinking:

Bottom up thinking is like the way we build a house. Starting from the ground, we work upwards, using what we’ve done already as support for what we’re working on at the moment. Top down thinking, on the other hand, starts out from an idea that is a given. It then works downwards, seeking evidence for the idea, or to add detail to it, or to put it into practice….

The bottom up thinker seeks to build, using his senses and his mind, a picture of the reality of which he is a part. He examines, critically, the evidence of his senses. He assembles this evidence into percepts, things he perceives as true. Then he pulls them together and generalizes them into concepts. He uses logic and reason to seek understanding, and he often stops to check that he is still on the right lines. And if he finds he has made an error, he tries to correct it.

The top down thinker, on the other hand, has far less concern for logic or reason, or for correcting errors. He tends to accept new ideas only if they fit his pre-existing beliefs. And so, he finds it hard to go beyond the limitations of what he already knows or believes. [“‘Bottom Up’ versus ‘Top Down’ Thinking — On Just about Everything“, October 22, 2017]

(I urge you to read the whole thing, in which Lock applies the top down-bottom up dichotomy to a broad range of issues.)

Lock overstates the distinction between the two modes of thought. A lot of “bottom up” thinkers derive general hypotheses from their observations about particular events. But — and this is a big “but” — they are also amenable to revising their hypotheses when they encounter facts that contradict them. The best scientists are bottom-up and top-down thinkers whose beliefs are based on bottom-up thinking.

General hypotheses are indispensable guides to “everyday” living. Some of them (e.g., fire burns, gravity causes objects to fall) are such reliable guides that it’s foolish to assume their falsity. Nor does it take much research to learn, for example, that there are areas within a big city where violent crime is rampant. A prudent person — even a “liberal” one — will therefore avoid those areas.

There are also general patterns — now politically incorrect to mention — with respect to differences in physical, psychological, and intellectual traits and abilities between men and women and among races. (See this, this, and this, for example.) These patterns explain disparities in achievement, but they are ignored by true believers who would wish away the underlying causes and penalize those who are more able (in a relevant dimension) for the sake of ersatz equality. The point is that a good many people — perhaps most people — studiously ignore facts of some kind in order to preserve their cherished beliefs about themselves and the world around them.

Which brings me back to science and scientists. Scientists, for the most part, are human beings with a particular aptitude for pattern-seeking and the manipulation of abstract ideas. They can easily get lost in such pursuits and fail to notice that their abstractions have taken them a long way from reality (e.g., Einstein’s special theory of relativity).

This is certainly the case in physics, where scientists admit that the standard model of sub-atomic physics “proves” that the universe shouldn’t exist. (See Andrew Griffin, “The Universe Shouldn’t Exist, Scientists Say after Finding Bizarre Behaviour of Anti-Matter“, The Independent, October 23, 2017.) It is most certainly the case in climatology, where many pseudo-scientists have deployed hopelessly flawed models in the service of policies that would unnecessarily cripple the economy of the United States.

As I say here,

scientists are human and fallible. It is in the best tradition of science to distrust their claims and to dismiss their non-scientific utterances.

Non-scientific utterances are not only those which have nothing to do with a scientist’s field of specialization, but also include those that are based on theories which derive from preconceptions more than facts. It is scientific to admit lack of certainty. It is unscientific — anti-scientific, really — to proclaim certainty about something that is so little understood as the origin of the universe or Earth’s climate.


Related posts:
Hemibel Thinking
The Limits of Science
The Thing about Science
Science in Politics, Politics in Science
Global Warming and the Liberal Agenda
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
Science’s Anti-Scientific Bent
“Warmism”: The Myth of Anthropogenic Global Warming
Modeling Is Not Science
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
Pinker Commits Scientism
AGW: The Death Knell
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II)
Is Science Self-Correcting?
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
Bayesian Irrationality
Mettenheim on Einstein’s Relativity
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable

A True Scientist Speaks

I am reading, with great delight, Old Physics for New: A Worldview Alternative to Einstein’s Relativity Theory, by Thomas E. Phipps Jr. (1925-2016). Dr. Phipps was a physicist who happened to have been a member of a World War II operations research unit that evolved into the think-tank where I worked for 30 years.

Phipps challenged the basic tenets of Einstein’s special theory of relativity (STR) in Old Physics for New, in an earlier book (Heretical Verities: Mathematical Themes in Physical Description), and in many of his scholarly articles. I have drawn on Old Physics for New in two of my posts about STR (this and this), and will do so in future posts on the subject. But aside from STR, about which Phipps is refreshingly skeptical, I admire his honesty and clear-minded view of science.

Regarding Phipps’s honesty, I turn to his preface to the second edition of Old Physics for New:

[I]n the first edition I wrongly claimed awareness of two “crucial” experiments that would decide between Einstein’s special relativity theory and my proposed alternative. These two were (1) an accurate assessment of stellar aberration and (2) a measurement of light speed in orbit. Only the first of these is valid. The other was an error on my part, which I am obligated and privileged to correct here. [pp. xi-xii]

Phipps’s clear-minded view of science is evident throughout the book. In the preface, he scores a direct hit on pseudo-scientific faddism:

The attitude of the traditional scientist toward lies and errors has always been that it is his job to tell the truth and to eradicate mistakes. Lately, scientists, with climate science in the van, have begun openly to espouse an opposite view, a different paradigm, which marches under the black banner of “post-normal science.”

According to this new perception, before the scientist goes into his laboratory it is his duty, for the sake of mankind, to study the worldwide political situation and to decide what errors need promulgating and what lies need telling. Then he goes into his laboratory, interrogates his computer, fiddles his theory, fabricates or massages his data, etc., and produces the results required to support those predetermined lies and errors. Finally he emerges into the light of publicity and writes reports acceptable to like-minded bureaucrats in such government agencies as the National Science Foundation, offers interviews to reporters working for like-minded bosses in the media, testifies before Congress, etc., all in such a way as to suppress traditional science and ultimately to make it impossible….

In this way post-normal science wages pre-emptive war on what Thomas Kuhn famously called “normal science,” because the latter fails to promote with adequate zeal those political and social goals that the post-normal scientist happens to recognize as deserving promotion…. Post-normal behavior seamlessly blends the implacable arrogance of the up-to-date terrorist with the technique of The Big Lie, pioneered by Hitler and Goebbels…. [pp. xii-xiii]

I regret deeply that I never met or corresponded with Dr. Phipps.

Economists As Scientists

This is the third entry in a series of loosely connected posts on economics. The first entry is here and the second entry is here. (Related posts by me are noted parenthetically throughout this one.)

Science is something that some people “do” some of the time. There are full-time human beings and part-time scientists. And the part-timers are truly scientists only when they think and act in accordance with the scientific method.*

Acting in accordance with the scientific method is a matter of attitude and application. The proper attitude is one of indifference about the correctness of a hypothesis or theory. The proper application rejects a hypothesis if it can’t be tested, and rejects a theory if it’s refuted (falsified) by relevant and reliable observations.

Regarding attitude, I turn to the most famous person who was sometimes a scientist: Albert Einstein. This is from the Wikipedia article about the Bohr-Einstein debate:

The quantum revolution of the mid-1920s occurred under the direction of both Einstein and [Niels] Bohr, and their post-revolutionary debates were about making sense of the change. The shocks for Einstein began in 1925 when Werner Heisenberg introduced matrix equations that removed the Newtonian elements of space and time from any underlying reality. The next shock came in 1926 when Max Born proposed that mechanics were to be understood as a probability without any causal explanation.

Einstein rejected this interpretation. In a 1926 letter to Max Born, Einstein wrote: “I, at any rate, am convinced that He [God] does not throw dice.” [Apparently, Einstein also used the line in Bohr’s presence, and Bohr replied, “Einstein, stop telling God what to do.” — TEA]

At the Fifth Solvay Conference held in October 1927 Heisenberg and Born concluded that the revolution was over and nothing further was needed. It was at that last stage that Einstein’s skepticism turned to dismay. He believed that much had been accomplished, but the reasons for the mechanics still needed to be understood.

Einstein’s refusal to accept the revolution as complete reflected his desire to see developed a model for the underlying causes from which these apparent random statistical methods resulted. He did not reject the idea that positions in space-time could never be completely known but did not want to allow the uncertainty principle to necessitate a seemingly random, non-deterministic mechanism by which the laws of physics operated.

It’s true that quantum mechanics was inchoate in the mid-1920s, and that it took a couple of decades to mature into quantum field theory. But there’s more than a trace of “attitude” in Einstein’s refusal to accept quantum mechanics or to stay abreast of developments in the theory, and in his quixotic search for his own theory of everything, which he hoped would obviate the need for a non-deterministic explanation of quantum phenomena.

Improper application of the scientific method is rife. See, for example, the Wikipedia article about the replication crisis and John Ioannidis’s article, “Why Most Published Research Findings Are False.” (See also “Ty Cobb and the State of Science” and “Is Science Self-Correcting?”) For a thorough analysis of the roots of the crisis, read Michael Hart’s book, Hubris: The Troubling Science, Economics, and Politics of Climate Change.

A bad attitude and improper application are both found among the so-called scientists who declare that the “science” of global warming is “settled,” and that human-generated CO2 emissions are the primary cause of the apparent rise in global temperatures during the last quarter of the 20th century. The bad attitude is the declaration of “settled science.” In “The Science Is Never Settled” I give many prominent examples of the folly of declaring it to be “settled.”

The improper application of the scientific method with respect to global warming began with the hypothesis that the “culprit” is CO2 emissions generated by the activities of human beings — thus anthropogenic global warming (AGW). There’s no end of evidence to the contrary, some of which is summarized in these posts and many of the links found therein. There’s enough evidence, in my view, to have rejected the CO2 hypothesis many times over. But there’s a great deal of money and peer-approval at stake, so the rush to judgment became a stampede. And attitude rears its ugly head when pro-AGW “scientists” shun the real scientists who are properly skeptical about the CO2 hypothesis, or at least about the degree to which CO2 supposedly influences temperatures. (For a depressingly thorough account of the AGW scam, read Michael Hart’s Hubris: The Troubling Science, Economics, and Politics of Climate Change.)

I turn now to economists, as I have come to know them in more than fifty years of being taught by them, working with them, and reading their works. Scratch an economist and you’re likely to find a moralist or reformer just beneath a thin veneer of rationality. Economists like to believe that they’re objective. But they aren’t; no one is. Everyone brings to the table a large serving of biases that are incubated in temperament, upbringing, education, and culture.

Economists bring to the table a heaping helping of tunnel vision. “Hard scientists” do, too, but their tunnel vision is generally a good thing, because it’s actually aimed at a deeper understanding of the inanimate and subhuman world rather than the advancement of a social or economic agenda. (I make a large exception for “hard scientists” who contribute to global-warming hysteria, as discussed above.)

Some economists, especially behavioralists, view the world through the lens of wealth-and-utility-maximization. Their great crusade is to force everyone to make rational decisions (by their lights), through “nudging.” It almost goes without saying that government should be the nudger-in-chief. (See “The Perpetual Nudger” and the many posts linked to therein.)

Other economists — though far fewer than in the past — have a thing about monopoly and oligopoly (the domination of a market by one or a few sellers). They’re heirs to the trust-busting of the late 1800s and early 1900s, a movement led by non-economists who sought to blame the woes of working-class Americans on the “plutocrats” (Rockefeller, Carnegie, Ford, etc.) who had merely made life better and more affordable for Americans, while also creating jobs for millions of them and reaping rewards for the great financial risks that they took. (See “Monopoly and the General Welfare” and “Monopoly: Private Is Better than Public.”) As it turns out, the biggest and most destructive monopoly of all is the federal government, so beloved and trusted by trust-busters — and too many others. (See “The Rahn Curve Revisited.”)

Nowadays, a lot of economists are preoccupied by income inequality, as if it were something evil and not mainly an artifact of differences in intelligence, ambition, education, and the like. And inequality — the prospect of earning rather grand sums of money — is what drives a lot of economic endeavor, to the good of workers and consumers. (See “Mass (Economic) Hysteria: Income Inequality and Related Themes” and the many posts linked to therein.) Remove inequality and what do you get? The Soviet Union and Communist China, in which everyone is equal except party operatives and their families, friends, and favorites.

When the inequality-preoccupied economists are confronted by the facts of life, they usually turn their attention from inequality as a general problem to the (inescapable) fact that an income distribution has a top one percent and a top one-tenth of one percent — as if there were something especially loathsome about people in those categories. (Paul Krugman shifted his focus to the top one-tenth of one percent when he realized that he’s in the top one percent, so perhaps he knows that he’s loathsome and wishes to deny it to himself.)

Crony capitalism is trotted out as a major cause of very high incomes. But that’s hardly a universal cause, given that a lot of very high incomes are earned by athletes and film stars beside whom most investment bankers and CEOs are making peanuts. Moreover, as I’ve said on several occasions, crony capitalists are bright and driven enough to be in the stratosphere of any income distribution. Further, the fertile soil of crony capitalism is the regulatory power of government that makes it possible.

Many economists became such, it would seem, in order to promote big government and its supposed good works — income redistribution being one of them. Joseph Stiglitz and Paul Krugman are two leading exemplars of what I call the New Deal school of economic thought, which amounts to throwing government and taxpayers’ money at every perceived problem, that is, every economic outcome that is deemed unacceptable by accountants of the soul. (See “Accountants of the Soul.”)

Stiglitz and Krugman — both Nobel laureates in economics — are typical “public intellectuals” whose intelligence breeds in them a kind of arrogance. (See “Intellectuals and Society: A Review.”) It’s the kind of arrogance that I mentioned in the preceding post in this series: a penchant for deciding what’s best for others.

New Deal economists like Stiglitz and Krugman carry it a few steps further. They ascribe to government an impeccable character, an intelligence to match their own, and a monolithic will. They then assume that this infallible and wise automaton can and will do precisely what they would do: Create the best of all possible worlds. (See the many posts in which I discuss the nirvana fallacy.)

New Deal economists, in other words, live their intellectual lives in a dream-world populated by the likes of Jiminy Cricket (“When You Wish Upon a Star”), Dorothy (“Somewhere Over the Rainbow”), and Mary Jane of a long-forgotten comic book (“First I shut my eyes real tight, then I wish with all my might! Magic words of poof, poof, piffles, make me just as small as [my mouse] Sniffles!”).

I could go on, but you should by now have grasped the point: What too many economists want to do is change human nature, channel it in directions deemed “good” (by the economist), or simply impose their view of “good” on everyone. To do such things, they must rely on government.

It’s true that government can order people about, but it can’t change human nature, which has an uncanny knack for thwarting Utopian schemes. (Obamacare, whose chief architect was economist Jonathan Gruber, is exhibit A this year.) And government (inconveniently for Utopians) really consists of fallible, often unwise, contentious human beings. So government is likely to march off in a direction unsought by Utopian economists.

Nevertheless, it’s hard to thwart the tax collector. The regulator can and does make things so hard for businesses that if one gets off the ground, it can’t create as much prosperity and as many jobs as it would in the absence of regulation. And the redistributor only makes things worse by penalizing success. Tax, regulate, and redistribute should have been the mantra of the New Deal and most presidential “deals” since.

I hold economists of the New Deal stripe partly responsible for the swamp of stagnation into which the nation’s economy has descended. (See “Economic Growth Since World War II.”) Largely responsible, of course, are opportunistic if not economically illiterate politicians who pander to rent-seeking, economically illiterate constituencies. (Yes, I’m thinking of old folks and the various “disadvantaged” groups with which they have struck up an alliance of convenience.)

The distinction between normative economics and positive economics is of no particular use in sorting economists between advocates and scientists. A lot of normative economics masquerades as positive economics. The work of Thomas Piketty and his comrades-in-arms comes to mind, for example. (See “McCloskey on Piketty.”) Almost everything done to quantify and defend the Keynesian multiplier counts as normative economics, inasmuch as the work is intended (wittingly or not) to defend an intellectual scam of 80 years’ standing. (See “The Keynesian Multiplier: Phony Math,” “The True Multiplier,” and “Further Thoughts about the Keynesian Multiplier.”)

Enough said. If you want to see scientific economics in action, read Regulation. Not every article in it exemplifies scientific inquiry, but a good many of them do. It’s replete with articles about microeconomics in which the authors use real-world statistics to validate and quantify the many axioms of economics.

A final thought is sparked by Arnold Kling’s post, “Ed Glaeser on Science and Economics.” Kling writes:

I think that the public has a sort of binary classification. If it’s “science,” then an expert knows more than the average Joe. If it’s not a science, then anyone’s opinion is as good as anyone else’s. I strongly favor an in-between category, called a discipline. Think of economics as a discipline, where it is possible for avid students to know more than ordinary individuals, but without the full use of the scientific method.

On this rare occasion I disagree with Kling. The accumulation of knowledge about economic variables, or pseudo-knowledge such as estimates of GDP (see “Macroeconomics and Microeconomics“), either leads to well-tested, verified, and reproducible theories of economic behavior or it leads to conjectures, of which there are so many opposing ones that it’s “take your pick.” If that’s what makes a discipline, give me the binary choice between science and story-telling. Most of economics seems to be story-telling. “Discipline” is just a fancy word for it.

Collecting baseball cards and memorizing the statistics printed on them is a discipline. Most of economics is less useful than collecting baseball cards — and a lot more destructive.

Here’s my hypothesis about economists: There are proportionally as many of them who act like scientists as there are baseball players who have career batting averages of at least .300.
__________
* Richard Feynman, a physicist and real scientist, had a different view of the scientific method than Karl Popper’s standard taxonomy. I see Feynman’s view as complementary to Popper’s, not at odds with it. What is “constructive skepticism” (Feynman’s term) but a gentler way of saying that a hypothesis or theory might be falsified and that the act of falsification may point to a better hypothesis or theory?

Modeling Is Not Science

The title of this post applies, inter alia, to econometric models — especially those that purport to forecast macroeconomic activity — and climate models — especially those that purport to forecast global temperatures. I have elsewhere essayed my assessments of macroeconomic and climate models. (See this and this, for example.) My purpose here is to offer a general warning about models that claim to depict and forecast the behavior of connected sets of phenomena (systems) that are large, complex, and dynamic. I draw, in part, on a paper that I wrote 28 years ago. That paper is about warfare models, but it has general applicability.

HEMIBEL THINKING

Philip M. Morse and George E. Kimball, pioneers in the field of military operations research — the analysis and modeling of military operations — wrote that the

successful application of operations research usually results in improvements by factors of 3 or 10 or more. . . . In our first study of any operation we are looking for these large factors of possible improvement. . . .

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. (Methods of Operations Research, 1946, p. 38)

This is science-speak for the following proposition: In large, complex, and dynamic systems (e.g., war, economy, climate) there is much uncertainty about the relevant parameters, about how to characterize their interactions mathematically, and about their numerical values.

Hemibel thinking assumes great importance in light of the imprecision inherent in models of large, complex, and dynamic systems. Consider, for example, a simple model with only 10 parameters. Even if such a model doesn’t omit crucial parameters or mischaracterize their interactions, its results must be taken with large doses of salt. Simple mathematics tells the cautionary tale: an error of about 12 percent in the value of each parameter can produce a result that is off by a factor of 3 (a hemibel); an error of about 25 percent in the value of each parameter can produce a result that is off by a factor of 10. (Remember, this is a model of a relatively small system.)
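
The arithmetic behind those figures is easy to reproduce. Here is a minimal sketch in Python; it assumes, purely for illustration, that the model’s output is roughly proportional to the product of its parameters, so that per-parameter errors compound multiplicatively.

```python
# A minimal sketch of the compounding-error arithmetic behind hemibel thinking.
# Assumption (for illustration only): the output scales roughly as the product of
# the parameters, so a uniform fractional error in each parameter compounds.

N_PARAMS = 10

def compounded_factor(per_param_error, n=N_PARAMS):
    """Factor by which the result is off if every parameter errs by the same fraction."""
    return (1 + per_param_error) ** n

print(compounded_factor(0.12))    # ~3.1  -- about a hemibel (10**0.5 ~ 3.16)
print(compounded_factor(0.25))    # ~9.3  -- about a bel (a factor of 10)

# Inverting: the per-parameter error needed to reach a given overall factor
print(3 ** (1 / N_PARAMS) - 1)    # ~0.116 -> "about 12 percent"
print(10 ** (1 / N_PARAMS) - 1)   # ~0.259 -> "about 25 percent"
```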

If you think that models and “data” about such things as macroeconomic activity and climatic conditions cannot be as inaccurate as that, you have no idea how such models are devised or how such data are collected and reported. It would be kind to say that such models are incomplete, inaccurate guesswork. It would be fair to say that all too many of them reflect their developers’ policy biases.

Of course, given a (miraculously) complete model, data errors might (miraculously) be offsetting, but don’t bet on it. It’s not that simple: Some errors will be large and some errors will be small (but which are which?), and the errors may lie in either direction (but in which direction?). In any event, no amount of luck can prevent a modeler from constructing a model whose estimates advance a favored agenda (e.g., massive, indiscriminate government spending; massive, futile, and costly efforts to cool the planet).

NO MODEL IS EVER PROVEN

The construction of a model is only one part of the scientific method. A model means nothing unless it can be tested repeatedly against facts (facts not already employed in the development of the model) and, through such tests, is found to be more accurate than alternative explanations of the same facts. As Morse and Kimball put it:

[t]o be valuable [operations research] must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. (Op. cit., p. 10)

Even after rigorous testing, a model is never proven. It is, at best, a plausible working hypothesis about relations between the phenomena that it encompasses.

A model is never proven for two reasons. First, new facts may be discovered that do not comport with the model. Second, the facts upon which a model is based may be open to a different interpretation, that is, they may support a new model that yields better predictions than its predecessor.

The fact that a model cannot be proven can be taken as an excuse for action: “We must act on the best information we have.” That excuse — which justifies an entire industry, namely, government-funded analysis — does not fly, as I discuss below.

MODELS LIE WHEN LIARS MODEL

Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.
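
One of those tricks, fitting a trend to a hand-picked slice of the record, is easy to demonstrate. The sketch below uses invented data (a cyclical series with no long-run trend); the series and the window are fabricated purely for illustration.

```python
# A made-up illustration of one trick named above: fitting a trend to a
# hand-picked sub-period of a record that has no long-run trend at all.
import numpy as np

years = np.arange(1970, 2020)                                 # 50 "years" of invented data
series = np.cos(np.linspace(0.0, 2.0 * np.pi, years.size))    # one full cycle: no net trend

def trend_per_decade(yrs, vals):
    """Least-squares slope of vals against yrs, expressed per decade."""
    return 10.0 * np.polyfit(yrs, vals, 1)[0]

print(trend_per_decade(years, series))                   # ~0: the full record shows no trend

window = (years >= 1980) & (years <= 2000)                # a cherry-picked slice of the record
print(trend_per_decade(years[window], series[window]))    # a seemingly pronounced downward trend
```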

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed?
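
The last of those questions, whether dropping or adding a single year’s worth of data moves the estimate, lends itself to a simple check. Here is a minimal sketch of such a sensitivity test; the data and the function are my own construction, offered only as an illustration.

```python
# A minimal sketch of the sensitivity check suggested in the last question above:
# re-fit the trend with each year omitted in turn and see how much the estimate moves.
import numpy as np

def leave_one_year_out_slopes(years, values):
    """Linear-trend slope re-estimated with each observation dropped in turn."""
    slopes = []
    for i in range(years.size):
        keep = np.arange(years.size) != i
        slopes.append(np.polyfit(years[keep], values[keep], 1)[0])
    return np.array(slopes)

# Invented data: a weak trend buried in noise.
rng = np.random.default_rng(1)
years = np.arange(2000, 2020)
values = 0.01 * (years - 2000) + rng.normal(0.0, 0.1, years.size)

slopes = leave_one_year_out_slopes(years, values)
print(slopes.min(), slopes.max())  # if the spread is wide relative to the slope itself, the result is fragile
```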

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a masterful demonstration of a model that proves what it was designed to prove by the assumptions built into it, see this.

IMPLICATIONS

Government policies can be dangerous and impoverishing things. Despite that, it is hard (if not impossible) to modify and reverse government policies. Consider, for example, the establishment of public schools more than a century ago, the establishment of Social Security more than 70 years ago, and the establishment of Medicare and Medicaid more than 40 years ago. There is plenty of evidence that all four institutions are monumentally expensive failures. But all four institutions have become so entrenched that to call for their abolition is to be thought of as an eccentric, if not an uncaring anti-government zealot. (For the latest about public schools, see this.)

The principal lesson to be drawn from the history of massive government programs is that those who were skeptical of those programs were entirely justified in their skepticism. Informed, articulate skepticism of the kind I counsel here is the best weapon — perhaps the only effective one — in the fight to defend what remains of liberty and property against the depredations of massive government programs.

Skepticism often is met with the claim that such-and-such a model is the “best available” on a subject. But the “best available” model — even if it is the best available one — may be terrible indeed. Relying on the “best available” model for the sake of government action is like sending an army into battle — and likely to defeat — on the basis of rumors about the enemy’s position and strength.

With respect to the economy and the climate, there are too many rumor-mongers (“scientists” with an agenda), too many gullible and compliant generals (politicians), and far too many soldiers available as cannon-fodder (the paying public).

CLOSING THOUGHTS

The average person is so mystified and awed by “science” that he has little if any understanding of its limitations and pitfalls, some of which I have addressed here in the context of modeling. The average person’s mystification and awe are unjustified, given that many so-called scientists exploit the public’s mystification and awe in order to advance personal biases, gain the approval of other scientists (whence “consensus”), and garner funding for research that yields results congenial to its sponsors (e.g., global warming is an artifact of human activity).

Isaac Newton, who must be numbered among the greatest scientists in human history, was not a flawless scientist. (Has there ever been one?) But scientists and non-scientists alike should heed Newton on the subject of scientific humility:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me. (Quoted in Horace Freeland Judson, The Search for Solutions, 1980, p. 5.)


Related reading: Willis Eschenbach, “How Not to Model the Historical Temperature“, Watts Up With That?, March 25, 2018