Analytical and Scientific Arrogance

It is customary in democratic countries to deplore expenditures on armaments as conflicting with the requirements of the social services. There is a tendency to forget that the most important social service that a government can do for its people is to keep them alive and free.

Marshal of the Royal Air Force Sir John Slessor, Strategy for the West

I’m returning to the past to make a timeless point: Analysis is a tool of decision-making, not a substitute for it.

That’s a point to which every analyst will subscribe, just as every judicial candidate will claim to revere the Constitution. But analysts past and present have tended to read their policy preferences into their analytical work, just as too many judges read their political preferences into the Constitution.

What is an analyst? Someone whose occupation requires him to gather facts bearing on an issue, discern robust relationships among the facts, and draw conclusions from those relationships.

Many professionals — from economists to physicists to so-called climate scientists — are more or less analytical in the practice of their professions. That is, they are not just seeking knowledge, but seeking to influence policies which depend on that knowledge.

There is also in this country (and in the West, generally) a kind of person who is an analyst first and a disciplinary specialist second (if at all). Such a person brings his pattern-seeking skills to the problems facing decision-makers in government and industry. Depending on the kinds of issues he addresses or the kinds of techniques that he deploys, he may be called a policy analyst, operations research analyst, management consultant, or something of that kind.

It is one thing to say, as a scientist or analyst, that a certain option (a policy, a system, a tactic) is probably better than the alternatives, when judged against a specific criterion (most effective for a given cost, most effective against a certain kind of enemy force). It is quite another thing to say that the option is the one that the decision-maker should adopt. The scientist or analyst is looking at a small slice of the world; the decision-maker has to take into account things that the scientist or analyst did not (and often could not) take into account (economic consequences, political feasibility, compatibility with other existing systems and policies).

It is (or should be) unconscionable for a scientist or analyst to state or imply that he has the “right” answer. But the clever arguer avoids coming straight out with the “right” answer; instead, he slants his presentation in a way that makes the “right” answer seem right.

A classic case in point is the hysteria surrounding the increase in “global” temperature in the latter part of the 20th century, and the coincidence of that increase with the rise in CO2. I have had much to say about the hysteria and the pseudo-science upon which it is based. (See links at the end of this post.) Here, I will take as a case study an event to which I was somewhat close: the treatment of the Navy’s proposal, made in the early 1980s, for an expansion to what was conveniently characterized as the 600-ship Navy. (The expansion would have involved personnel, logistics systems, ancillary war-fighting systems, stockpiles of parts and ammunition, and aircraft of many kinds — all in addition to a 25-percent increase in the number of ships in active service.)

The usual suspects, of an ilk I profiled here, wasted no time in making the 600-ship Navy seem like a bad idea. Of the many studies and memos on the subject, two by the Congressional Budget Office stand out as exemplars of slanted analysis by innuendo: “Building a 600-Ship Navy: Costs, Timing, and Alternative Approaches” (March 1982), and “Future Budget Requirements for the 600-Ship Navy: Preliminary Analysis” (April 1985). What did the “whiz kids” at CBO have to say about the 600-ship Navy? Here are excerpts of the concluding sections:

The Administration’s five-year shipbuilding plan, containing 133 new construction ships and estimated to cost over $80 billion in fiscal year 1983 dollars, is more ambitious than previous programs submitted to the Congress in the past few years. It does not, however, contain enough ships to realize the Navy’s announced force level goals for an expanded Navy. In addition, this plan—as has been the case with so many previous plans—has most of its ships programmed in the later out-years. Over half of the 133 new construction ships are programmed for the last two years of the five-year plan. Achievement of the Navy’s expanded force level goals would require adhering to the out-year building plans and continued high levels of construction in the years beyond fiscal year 1987. [1982 report, pp. 71-72]

Even the budget increases estimated here would be difficult to achieve if history is a guide. Since the end of World War II, the Navy has never sustained real increases in its budget for more than five consecutive years. The sustained 15-year expansion required to achieve and sustain the Navy’s present plans would result in a historic change in budget trends. [1985 report, p. 26]

The bias against the 600-ship Navy drips from the pages. The “argument” goes like this: If it hasn’t been done, it can’t be done and, therefore, shouldn’t be attempted. Why not? Because the analysts at CBO were a breed of cat that emerged in the 1960s, when Robert Strange McNamara and his minions used simplistic analysis (“tablesmanship”) to play “gotcha” with the military services:

We [I was one of the minions] did it because we were encouraged to do it, though not in so many words. And we got away with it, not because we were better analysts — most of our work was simplistic stuff — but because we usually had the last word. (Only an impassioned personal intercession by a service chief might persuade McNamara to go against SA [the Systems Analysis office run by Alain Enthoven] — and the key word is “might.”) The irony of the whole process was that McNamara, in effect, substituted “civilian judgment” for oft-scorned “military judgment.” McNamara revealed his preference for “civilian judgment” by elevating Enthoven and SA a level in the hierarchy in 1965, even though (or perhaps because) the services and JCS had been open in their disdain of SA and its snotty young civilians.

In the case of the 600-ship Navy, civilian analysts did their best to derail it by sending the barely disguised message that it was “unaffordable”. I was reminded of this “insight” by a colleague of long standing who recently proclaimed that “any half-decent cost model would show a 600-ship Navy was unsustainable into this century.” How could a cost model show such a thing when the sustainability (affordability) of defense is a matter of political will, not arithmetic?

Defense spending fluctuates as a function of perceived necessity. Consider, for example, this graph (misleadingly labeled “Recent Defense Spending”) from usgovernmentspending.com, which shows defense spending as a percentage of GDP for fiscal year (FY) 1792 to FY 2017:

What was “unaffordable” before World War II suddenly became affordable. And so it has gone throughout the history of the republic. Affordability (or sustainability) is a political issue, not a line drawn in the sand by a smart-ass analyst who gives no thought to the consequences of spending too little on defense.

I will now zoom in on the era of interest.

CBO’s “Building a 600-Ship Navy: Costs, Timing, and Alternative Approaches”, which crystallized opposition to the 600-ship Navy, estimates the long-run annual obligational authority required to sustain a 600-ship Navy (of the Navy’s design) to be about 20 percent higher in constant dollars than the FY 1982 Navy budget. (See Options I and II in Figure 2, p. 50.) The long run would have begun around FY 1994, following several years of higher spending associated with the buildup of forces. I don’t have a historical breakdown of the Department of Defense (DoD) budget by service, but I found values for all-DoD spending on military programs in the Office of Management and Budget’s Historical Tables. Drawing on Tables 5.2 and 10.1, I constructed a constant-dollar index of DoD’s obligational authority (FY 1982 = 1):

FY Index
1983 1.08
1984 1.13
1985 1.21
1986 1.17
1987 1.13
1988 1.11
1989 1.10
1990 1.07
1991 0.97
1992 0.97
1993 0.90
1994 0.82
1995 0.82
1996 0.80
1997 0.80
1998 0.79
1999 0.84
2000 0.86
2001 0.92
2002 0.98
2003 1.23
2004 1.29
2005 1.28
2006 1.36
2007 1.50
2008 1.65
2009 1.61
2010 1.66
2011 1.62
2012 1.51
2013 1.32
2014 1.32
2015 1.25
2016 1.29
2017 1.34
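The index construction is simple arithmetic: divide each year’s obligational authority by a price deflator, then normalize to the FY 1982 value. Here is a minimal sketch; the budget and deflator figures in it are illustrative placeholders, not the actual values from OMB’s Tables 5.2 and 10.1.

```python
# Sketch of a constant-dollar index (FY 1982 = 1). The nominal amounts and
# deflators below are hypothetical placeholders, not OMB's actual figures.
def constant_dollar_index(nominal, deflator, base_fy=1982):
    """Deflate each year's nominal amount, then normalize to the base year."""
    base_real = nominal[base_fy] / deflator[base_fy]
    return {fy: round((amt / deflator[fy]) / base_real, 2)
            for fy, amt in nominal.items()}

nominal = {1982: 200.0, 1983: 230.0, 1984: 255.0}   # hypothetical $ billions
deflator = {1982: 1.000, 1983: 1.065, 1984: 1.120}  # hypothetical price index

print(constant_dollar_index(nominal, deflator))
# → {1982: 1.0, 1983: 1.08, 1984: 1.14}
```

The same two-step computation (deflate, then normalize) yields the table above when applied to the OMB series.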

There was no inherent reason that defense spending couldn’t have remained on the trajectory of the middle 1980s. The slowdown of the late 1980s was a reflection of improved relations between the U.S. and USSR. Those improved relations had much to do with the Reagan defense buildup, of which the goal of attaining a 600-ship Navy was an integral part.

The Reagan buildup helped to convince Soviet leaders (Gorbachev in particular) that trying to keep pace with the U.S. was futile and (actually) unaffordable. The rest — the end of the Cold War and the dissolution of the USSR — is history. The buildup, in other words, sowed the seeds of its own demise. But that couldn’t have been predicted with certainty in the early-to-middle 1980s, when CBO and others were doing their best to undermine political support for more defense spending. Had CBO and the other nay-sayers succeeded in their aims, the Cold War and the USSR might still be with us.

The defense drawdown of the mid-1990s was a deliberate response to the end of the Cold War and lack of other serious threats, not a historical necessity. It was certainly not on the table in the early 1980s, when the 600-ship Navy was being pushed. Had the Cold War not thawed and ended, there is no reason that U.S. defense spending couldn’t have continued at the pace of the middle 1980s, or higher. As is evident in the index values for recent years, even after drastic force reductions in Iraq, defense spending is now about one-third higher than it was in FY 1982.

John Lehman, Secretary of the Navy from 1981 to 1987, was rightly incensed that analysts — some of them on his payroll as civilian employees and contractors — were, in effect, undermining a deliberate strategy of pressing against a key Soviet weakness — the unsustainability of its defense strategy. There was much lamentation at the time about Lehman’s “war” on the offending parties, one of which was the think-tank for which I then worked. I can now admit openly that I was sympathetic to Lehman and offended by the arrogance of analysts who believed that it was their job to suggest that spending more on defense was “unaffordable”.

When I was a young analyst I was handed a pile of required reading material. One of the items was Methods of Operations Research, by Philip M. Morse and George E. Kimball. Morse, in the early months of America’s involvement in World War II, founded the civilian operations-research organization from which my think-tank evolved. Kimball was a leading member of that organization. Their book is notable not just as a compendium of analytical methods that were applied, with much success, to the war effort. It is also introspective — and properly humble — about the power and role of analysis.

Two passages, in particular, have stuck with me for the more than 50 years since I first read the book. Here is one of them:

[S]uccessful application of operations research usually results in improvements by factors of 3 or 10 or more…. In our first study of any operation we are looking for these large factors of possible improvement…. They can be discovered if the [variables] are given only one significant figure,…any greater accuracy simply adds unessential detail.

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. [p. 38]
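Hemibel thinking is easy to make concrete. A short sketch (the function name is mine, not Morse and Kimball’s):

```python
import math

def hemibels(ratio):
    """Express an improvement ratio in hemibels.
    A bel is log10 of the ratio; a hemibel is half a bel,
    i.e. a factor of sqrt(10), roughly 3.16."""
    return math.log10(ratio) / 0.5

print(round(hemibels(3), 2))    # a factor of 3 is about one hemibel -> 0.95
print(round(hemibels(10), 2))   # a factor of 10 is exactly two hemibels -> 2.0
```

At this resolution, rounding the inputs to one significant figure changes the answer by far less than a hemibel, which is exactly Morse and Kimball’s point: extra decimal places add unessential detail.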

Morse and Kimball — two brilliant scientists and analysts, who had worked with actual data (pardon the redundancy) about combat operations — counseled against making too much of quantitative estimates given the uncertainties inherent in combat. But, as I have seen over the years, analysts eager to “prove” something nevertheless make a huge deal out of minuscule differences in quantitative estimates — estimates based not on actual combat operations but on theoretical values derived from models of systems and operations yet to see the light of day. (I also saw, and still see, too much “analysis” about soft subjects, such as domestic politics and international relations. The amount of snake oil emitted by “analysts” — sometimes called scholars, journalists, pundits, and commentators — would fill the Great Lakes. Their perceptions of reality have an uncanny way of supporting their unabashed decrees about policy.)

The second memorable passage from Methods of Operations Research goes directly to the point of this post:

Operations research done separately from an administrator in charge of operations becomes an empty exercise. [p. 10]

In the case of CBO and other opponents of the 600-ship Navy, substitute “cost estimate” for “operations research”, “responsible defense official” for “administrator in charge”, and “strategy” for “operations”. The principle is the same: The CBO and its ilk knew the price of the 600-ship Navy, but had no inkling of its value.

Too many scientists and analysts want to make policy. On the evidence of my close association with scientists and analysts over the years — including a stint as an unsparing reviewer of their products — I would say that they should learn to think clearly before they inflict their views on others. But too many of them — even those with Ph.D.s in STEM disciplines — are incapable of thinking clearly, and more than capable of slanting their work to support their biases. Exhibit A: Michael Mann, James Hansen (more), and their co-conspirators in the catastrophic-anthropogenic-global-warming scam.


Related posts:
The Limits of Science
How to View Defense Spending
Modeling Is Not Science
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
The McNamara Legacy: A Personal Perspective
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
Verbal Regression Analysis, the “End of History,” and Think-Tanks
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
AGW in Austin? (II)
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Words Fail Us
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Babe Ruth and the Hot-Hand Hypothesis
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable
A (Long) Footnote about Science
Further Thoughts about Probability
Climate Scare Tactics: The Mythical Ever-Rising 30-Year Average
A Grand Strategy for the United States

Pattern-Seeking

UPDATED 09/04/17

Scientists and analysts are reluctant to accept the “stuff happens” explanation for similar but disconnected events. The blessing and curse of the scientific-analytic mind is that it always seeks patterns, even where there are none to be found.

UPDATE 1

The version of this post that appears at Ricochet includes the following comments and replies:

Comment — Cool stuff, but are you thinking of any particular pattern/maybe-not-pattern in particular?

My reply — The example that leaps readily to mind is “climate change”, the gospel of which is based on the fleeting (25-year) coincidence of rising temperatures and rising CO2 emissions. That, in turn, leads to the usual kind of hysteria about “climate change” when something like Harvey occurs.

Comment — It’s not a coincidence when the numbers are fudged.

My reply — The temperature numbers have been fudged to some extent, but even qualified skeptics accept the late 20th century temperature rise and the long-term rise in CO2. What’s really at issue is the cause of the temperature rise. The true believers seized on CO2 to the near-exclusion of other factors. How else could they justify their puritanical desire to control the lives of others, or (if not that) their underlying anti-scientific mindset, which seeks patterns instead of truths?

Another example, which applies to non-scientists and (some) scientists, is the identification of random arrangements of stars as “constellations”, simply because they “look” like something. Yet another example is the penchant for invoking conspiracy theories to explain (or rationalize) notorious events.

Returning to science, it is pattern-seeking which drives scientists to develop explanations that are later discarded and even discredited as wildly wrong. I list a succession of such explanations in my post “The Science Is Settled”.

UPDATE 2

Political pundits, sports writers, and sports commentators are notorious for making predictions that rely on tenuous historical parallels. I herewith offer an example, drawn from this very blog.

Here is the complete text of “A Baseball Note: The 2017 Astros vs. the 1951 Dodgers”, which I posted on the 14th of last month:

If you were following baseball in 1951 (as I was), you’ll remember how that season’s Brooklyn Dodgers blew a big lead, wound up tied with the New York Giants at the end of the regular season, and lost a 3-game playoff to the Giants on Bobby Thomson’s “shot heard ’round the world” in the bottom of the 9th inning of the final playoff game.

On August 11, 1951, the Dodgers took a doubleheader from the Boston Braves and gained their largest lead over the Giants — 13 games. The Dodgers at that point had a W-L record of 70-36 (.660), and would top out at .667 two games later. But their W-L record for the rest of the regular season was only .522. So the Giants caught them and went on to win what is arguably the most dramatic playoff in the history of professional sports.

The 2017 Astros peaked earlier than the 1951 Dodgers, attaining a season-high W-L record of .682 on July 5, and leading the second-place team in the AL West by 18 games on July 28. The Astros’ lead has dropped to 12 games, and the team’s W-L record since the July 5 peak is only .438.

The Los Angeles Angels might be this year’s version of the 1951 Giants. The Angels have come from 19 games behind the Astros on July 28, to trail by 12. In that span, the Angels have gone 11-4 (.733).

Hold onto your hats.

Since I wrote that, the Angels have gone 10-9, while the Astros have gone 12-8 and increased their lead over the Angels to 13.5 games. It’s still possible that the Astros will collapse and the Angels will surge. But the contest between the two teams no longer resembles the Dodgers-Giants duel of 1951, when the Giants had closed to 5.5 games behind the Dodgers at this point in the season.
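The bookkeeping behind those comparisons is the standard baseball arithmetic: winning percentage is wins divided by decisions, and “games behind” is half the sum of the win and loss differentials. A sketch, using the 1951 Dodgers’ August 11 record from above (the leader/trailer figures in the second example are hypothetical):

```python
def win_pct(w, l):
    """Winning percentage: wins divided by total decisions."""
    return w / (w + l)

def games_behind(leader_w, leader_l, trailer_w, trailer_l):
    """Games behind: half the sum of the win and loss differentials."""
    return ((leader_w - trailer_w) + (trailer_l - leader_l)) / 2

# The 1951 Dodgers on August 11: 70-36.
print(f"{win_pct(70, 36):.3f}")   # 0.660
# A hypothetical leader at 10-5 versus a trailer at 7-8:
print(games_behind(10, 5, 7, 8))  # 3.0
```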

My “model” of the 2017 contest between the Astros and Angels was on a par with the disastrously wrong models that “prove” the inexorability of catastrophic anthropogenic global warming. The models are disastrously wrong because they are being used to push government policy in counterproductive directions: wasting money on “green energy” while shutting down efficient sources of energy at the cost of real jobs and economic growth.


Related posts:
Hemibel Thinking
The Limits of Science
The Thing about Science
Words of Caution for Scientific Dogmatists
What’s Wrong with Game Theory
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
Science’s Anti-Scientific Bent
Mathematical Economics
Modeling Is Not Science
Beware the Rare Event
Physics Envy
What Is Truth?
The Improbability of Us
We, the Children of the Enlightenment
In Defense of Subjectivism
The Atheism of the Gaps
The Ideal as a False and Dangerous Standard
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Luck and Baseball, One More Time
Are the Natural Numbers Supernatural?
The Candle Problem: Balderdash Masquerading as Science
More about Luck and Baseball
Combinatorial Play
Pseudoscience, “Moneyball,” and Luck
The Fallacy of Human Progress
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
Verbal Regression Analysis, the “End of History,” and Think-Tanks
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
The “Marketplace” of Ideas
Time and Reality
My War on the Misuse of Probability
Ty Cobb and the State of Science
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
Is Science Self-Correcting?
Taleb’s Ruinous Rhetoric
Words Fail Us
Fine-Tuning in a Wacky Wrapper
Tricky Reasoning
Modeling Revisited
Bayesian Irrationality
The Fragility of Knowledge

Institutional Bias

Arnold Kling:

On the question of whether Federal workers are overpaid relative to private sector workers, [Justin Fox] writes,

The Federal Salary Council, a government advisory body composed of labor experts and government-employee representatives, regularly finds that federal employees make about a third less than people doing similar work in the private sector. The conservative American Enterprise Institute and Heritage Foundation, on the other hand, have estimated that federal employees make 14 percent and 22 percent more, respectively, than comparable private-sector workers….

… Could you have predicted ahead of time which organization’s “research” would find a result favorable to Federal workers and which organization would find unfavorable results? Of course you could. So how do you sustain the belief that normative economics and positive economics are distinct from one another, that economic research cleanly separates facts from values?

I saw institutional bias at work many times in my career as an analyst at a tax-funded think-tank. My first experience with it came in the first project to which I was assigned. The issue at hand was a hot one in those days: whether the defense budget should be altered to increase the size of the Air Force’s land-based tactical air (tacair) forces while reducing the size of the Navy’s carrier-based counterpart. The Air Force’s think-tank had issued a report favorable to land-based tacair (surprise!), so the Navy turned to its think-tank (where I worked). Our report favored carrier-based tacair (surprise!).

How could two supposedly objective institutions study the same issue and come to opposite conclusions? Analytical fraud abetted by overt bias? No, that would be too obvious to the “neutral” referees in the Office of the Secretary of Defense. (Why “neutral”? Read this.)

Subtle bias is easily introduced when the issue is complex, as the tacair issue was. Where would tacair forces be required? What payloads would fighters and bombers carry? How easy would it be to set up land bases? How vulnerable would they be to an enemy’s land and air forces? How vulnerable would carriers be to enemy submarines and long-range bombers? How close to shore could carriers approach? How much would new aircraft, bases, and carriers cost to buy and maintain? What kinds of logistical support would they need, and how much would it cost? And on and on.

Hundreds, if not thousands, of assumptions underlay the results of the studies. Analysts at the Air Force’s think-tank chose those assumptions that favored the Air Force; analysts at the Navy’s think-tank chose those assumptions that favored the Navy.

Why? Not because analysts’ jobs were at stake; they weren’t. Not because the Air Force and Navy directed the outcomes of the studies; they didn’t. They didn’t have to because “objective” analysts are human beings who want “their side” to win. When you work for an institution you tend to identify with it; its success becomes your success, and its failure becomes your failure.

The same was true of the “neutral” analysts in the Office of the Secretary of Defense. They knew which way Mr. McNamara leaned on any issue, and they found themselves drawn to the assumptions that would justify his biases.

And so it goes. Bias is a rampant and ineradicable aspect of human striving. It’s ever-present in the political arena. The current state of affairs in Washington, D.C., is just the tip of the proverbial iceberg.

The prevalence and influence of bias in matters that affect hundreds of millions of Americans is yet another good reason to limit the power of government.