
A True Scientist Speaks

I am reading, with great delight, Old Physics for New: A Worldview Alternative to Einstein’s Relativity Theory, by Thomas E. Phipps Jr. (1925-2016). Dr. Phipps was a physicist who happened to have been a member of a World War II operations research unit that evolved into the think-tank where I worked for 30 years.

Phipps challenged the basic tenets of Einstein’s special theory of relativity (STR) in Old Physics for New, an earlier book (Heretical Verities: Mathematical Themes in Physical Description), and many of his scholarly articles. I have drawn on Old Physics for New in two of my posts about STR (this and this), and will do so in future posts on the subject. But aside from STR, about which Phipps is refreshingly skeptical, I admire his honesty and clear-minded view of science.

Regarding Phipps’s honesty, I turn to his preface to the second edition of Old Physics for New:

[I]n the first edition I wrongly claimed awareness of two “crucial” experiments that would decide between Einstein’s special relativity theory and my proposed alternative. These two were (1) an accurate assessment of stellar aberration and (2) a measurement of light speed in orbit. Only the first of these is valid. The other was an error on my part, which I am obligated and privileged to correct here. [pp. xi-xii]

Phipps’s clear-minded view of science is evident throughout the book. In the preface, he scores a direct hit on pseudo-scientific faddism:

The attitude of the traditional scientist toward lies and errors has always been that it is his job to tell the truth and to eradicate mistakes. Lately, scientists, with climate science in the van, have begun openly to espouse an opposite view, a different paradigm, which marches under the black banner of “post-normal science.”

According to this new perception, before the scientist goes into his laboratory it is his duty, for the sake of mankind, to study the worldwide political situation and to decide what errors need promulgating and what lies need telling. Then he goes into his laboratory, interrogates his computer, fiddles his theory, fabricates or massages his data, etc., and produces the results required to support those predetermined lies and errors. Finally he emerges into the light of publicity and writes reports acceptable to like-minded bureaucrats in such government agencies as the National Science Foundation, offers interviews to reporters working for like-minded bosses in the media, testifies before Congress, etc., all in such a way as to suppress traditional science and ultimately to make it impossible….

In this way post-normal science wages pre-emptive war on what Thomas Kuhn famously called “normal science,” because the latter fails to promote with adequate zeal those political and social goals that the post-normal scientist happens to recognize as deserving promotion…. Post-normal behavior seamlessly blends the implacable arrogance of the up-to-date terrorist with the technique of The Big Lie, pioneered by Hitler and Goebbels…. [pp. xii-xiii]

I regret deeply that I never met or corresponded with Dr. Phipps.

Economists As Scientists

This is the third entry in a series of loosely connected posts on economics. The first entry is here and the second entry is here. (Related posts by me are noted parenthetically throughout this one.)

Science is something that some people “do” some of the time. There are full-time human beings and part-time scientists. And the part-timers are truly scientists only when they think and act in accordance with the scientific method.*

Acting in accordance with the scientific method is a matter of attitude and application. The proper attitude is one of indifference about the correctness of a hypothesis or theory. The proper application rejects a hypothesis if it can’t be tested, and rejects a theory if it’s refuted (falsified) by relevant and reliable observations.

Regarding attitude, I turn to the most famous person who was sometimes a scientist: Albert Einstein. This is from the Wikipedia article about the Bohr-Einstein debate:

The quantum revolution of the mid-1920s occurred under the direction of both Einstein and [Niels] Bohr, and their post-revolutionary debates were about making sense of the change. The shocks for Einstein began in 1925 when Werner Heisenberg introduced matrix equations that removed the Newtonian elements of space and time from any underlying reality. The next shock came in 1926 when Max Born proposed that mechanics were to be understood as a probability without any causal explanation.

Einstein rejected this interpretation. In a 1926 letter to Max Born, Einstein wrote: “I, at any rate, am convinced that He [God] does not throw dice.” [Apparently, Einstein also used the line in Bohr’s presence, and Bohr replied, “Einstein, stop telling God what to do.” — TEA]

At the Fifth Solvay Conference held in October 1927 Heisenberg and Born concluded that the revolution was over and nothing further was needed. It was at that last stage that Einstein’s skepticism turned to dismay. He believed that much had been accomplished, but the reasons for the mechanics still needed to be understood.

Einstein’s refusal to accept the revolution as complete reflected his desire to see developed a model for the underlying causes from which these apparent random statistical methods resulted. He did not reject the idea that positions in space-time could never be completely known but did not want to allow the uncertainty principle to necessitate a seemingly random, non-deterministic mechanism by which the laws of physics operated.

It’s true that quantum mechanics was inchoate in the mid-1920s, and that it took a couple of decades to mature into quantum field theory. But there’s more than a trace of “attitude” in Einstein’s refusal to accept quantum mechanics, to stay abreast of developments in the theory, and to search quixotically for his own theory of everything, which he hoped would obviate the need for a non-deterministic explanation of quantum phenomena.

Improper application of the scientific method is rife. See, for example, the Wikipedia article about the replication crisis and John Ioannidis’s article, “Why Most Published Research Findings Are False.” (See also “Ty Cobb and the State of Science” and “Is Science Self-Correcting?“) For a thorough analysis of the roots of the crisis, read Michael Hart’s book, Hubris: The Troubling Science, Economics, and Politics of Climate Change.

A bad attitude and improper application are both found among the so-called scientists who declare that the “science” of global warming is “settled,” and that human-generated CO2 emissions are the primary cause of the apparent rise in global temperatures during the last quarter of the 20th century. The bad attitude is the declaration of “settled science.” In “The Science Is Never Settled” I give many prominent examples of the folly of declaring it to be “settled.”

The improper application of the scientific method with respect to global warming began with the hypothesis that the “culprit” is CO2 emissions generated by the activities of human beings — thus anthropogenic global warming (AGW). There’s no end of evidence to the contrary, some of which is summarized in these posts and many of the links found therein. There’s enough evidence, in my view, to have rejected the CO2 hypothesis many times over. But there’s a great deal of money and peer-approval at stake, so the rush to judgment became a stampede. And attitude rears its ugly head when pro-AGW “scientists” shun the real scientists who are properly skeptical about the CO2 hypothesis, or at least about the degree to which CO2 supposedly influences temperatures. (For a depressingly thorough account of the AGW scam, read Michael Hart’s Hubris: The Troubling Science, Economics, and Politics of Climate Change.)

I turn now to economists, as I have come to know them in more than fifty years of being taught by them, working with them, and reading their works. Scratch an economist and you’re likely to find a moralist or reformer just beneath a thin veneer of rationality. Economists like to believe that they’re objective. But they aren’t; no one is. Everyone brings to the table a large serving of biases that are incubated in temperament, upbringing, education, and culture.

Economists bring to the table a heaping helping of tunnel vision. “Hard scientists” do, too, but their tunnel vision is generally a good thing, because it’s actually aimed at a deeper understanding of the inanimate and subhuman world rather than the advancement of a social or economic agenda. (I make a large exception for “hard scientists” who contribute to global-warming hysteria, as discussed above.)

Some economists, especially behavioralists, view the world through the lens of wealth-and-utility-maximization. Their great crusade is to force everyone to make rational decisions (by their lights), through “nudging.” It almost goes without saying that government should be the nudger-in-chief. (See “The Perpetual Nudger” and the many posts linked to therein.)

Other economists — though far fewer than in the past — have a thing about monopoly and oligopoly (the domination of a market by one or a few sellers). They’re heirs to the trust-busting of the late 1800s and early 1900s, a movement led by non-economists who sought to blame the woes of working-class Americans on the “plutocrats” (Rockefeller, Carnegie, Ford, etc.) who had merely made life better and more affordable for Americans, while also creating jobs for millions of them and reaping rewards for the great financial risks that they took. (See “Monopoly and the General Welfare” and “Monopoly: Private Is Better than Public.”) As it turns out, the biggest and most destructive monopoly of all is the federal government, so beloved and trusted by trust-busters — and too many others. (See “The Rahn Curve Revisited.”)

Nowadays, a lot of economists are preoccupied by income inequality, as if it were something evil and not mainly an artifact of differences in intelligence, ambition, education, and the like. And inequality — the prospect of earning rather grand sums of money — is what drives a lot of economic endeavor, to the good of workers and consumers. (See “Mass (Economic) Hysteria: Income Inequality and Related Themes” and the many posts linked to therein.) Remove inequality and what do you get? The Soviet Union and Communist China, in which everyone is equal except party operatives and their families, friends, and favorites.

When the inequality-preoccupied economists are confronted by the facts of life, they usually turn their attention from inequality as a general problem to the (inescapable) fact that an income distribution has a top one percent and a top one-tenth of one percent — as if there were something especially loathsome about people in those categories. (Paul Krugman shifted his focus to the top one-tenth of one percent when he realized that he’s in the top one percent, so perhaps he knows that he’s loathsome and wishes to deny it, to himself.)

Crony capitalism is trotted out as a major cause of very high incomes. But that’s hardly a universal cause, given that a lot of very high incomes are earned by athletes and film stars beside whom most investment bankers and CEOs are making peanuts. Moreover, as I’ve said on several occasions, crony capitalists are bright and driven enough to be in the stratosphere of any income distribution. Further, the fertile soil of crony capitalism is the regulatory power of government.

Many economists became such, it would seem, in order to promote big government and its supposed good works — income redistribution being one of them. Joseph Stiglitz and Paul Krugman are two leading exemplars of what I call the New Deal school of economic thought, which amounts to throwing government and taxpayers’ money at every perceived problem, that is, every economic outcome that is deemed unacceptable by accountants of the soul. (See “Accountants of the Soul.”)

Stiglitz and Krugman — both Nobel laureates in economics — are typical “public intellectuals” whose intelligence breeds in them a kind of arrogance. (See “Intellectuals and Society: A Review.”) It’s the kind of arrogance that I mentioned in the preceding post in this series: a penchant for deciding what’s best for others.

New Deal economists like Stiglitz and Krugman carry it a few steps further. They ascribe to government an impeccable character, an intelligence to match their own, and a monolithic will. They then assume that this infallible and wise automaton can and will do precisely what they would do: Create the best of all possible worlds. (See the many posts in which I discuss the nirvana fallacy.)

New Deal economists, in other words, live their intellectual lives in a dream-world populated by the likes of Jiminy Cricket (“When You Wish Upon a Star”), Dorothy (“Somewhere Over the Rainbow”), and Mary Jane of a long-forgotten comic book (“First I shut my eyes real tight, then I wish with all my might! Magic words of poof, poof, piffles, make me just as small as [my mouse] Sniffles!”).

I could go on, but you should by now have grasped the point: What too many economists want to do is change human nature, channel it in directions deemed “good” (by the economist), or simply impose their view of “good” on everyone. To do such things, they must rely on government.

It’s true that government can order people about, but it can’t change human nature, which has an uncanny knack for thwarting Utopian schemes. (Obamacare, whose chief architect was economist Jonathan Gruber, is exhibit A this year.) And government (inconveniently for Utopians) really consists of fallible, often unwise, contentious human beings. So government is likely to march off in a direction unsought by Utopian economists.

Nevertheless, it’s hard to thwart the tax collector. The regulator can and does make things so hard for business that a firm which does get off the ground can’t create as much prosperity and as many jobs as it would in the absence of regulation. And the redistributor only makes things worse by penalizing success. Tax, regulate, and redistribute should have been the mantra of the New Deal and most presidential “deals” since.

I hold economists of the New Deal stripe partly responsible for the swamp of stagnation into which the nation’s economy has descended. (See “Economic Growth Since World War II.”) Largely responsible, of course, are opportunistic if not economically illiterate politicians who pander to rent-seeking, economically illiterate constituencies. (Yes, I’m thinking of old folks and the various “disadvantaged” groups with which they have struck up an alliance of convenience.)

The distinction between normative economics and positive economics is of no particular use in sorting economists between advocates and scientists. A lot of normative economics masquerades as positive economics. The work of Thomas Piketty and his comrades-in-arms comes to mind, for example. (See “McCloskey on Piketty.”) Almost everything done to quantify and defend the Keynesian multiplier counts as normative economics, inasmuch as the work is intended (wittingly or not) to defend an intellectual scam of 80 years’ standing. (See “The Keynesian Multiplier: Phony Math,” “The True Multiplier,” and “Further Thoughts about the Keynesian Multiplier.”)

Enough said. If you want to see scientific economics in action, read Regulation. Not every article in it exemplifies scientific inquiry, but a good many of them do. It’s replete with articles about microeconomics, in which the authors use real-world statistics to validate and quantify the many axioms of economics.

A final thought is sparked by Arnold Kling’s post, “Ed Glaeser on Science and Economics.” Kling writes:

I think that the public has a sort of binary classification. If it’s “science,” then an expert knows more than the average Joe. If it’s not a science, then anyone’s opinion is as good as anyone else’s. I strongly favor an in-between category, called a discipline. Think of economics as a discipline, where it is possible for avid students to know more than ordinary individuals, but without the full use of the scientific method.

On this rare occasion I disagree with Kling. The accumulation of knowledge about economic variables, or pseudo-knowledge such as estimates of GDP (see “Macroeconomics and Microeconomics“), either leads to well-tested, verified, and reproducible theories of economic behavior or it leads to conjectures, of which there are so many opposing ones that it’s “take your pick.” If that’s what makes a discipline, give me the binary choice between science and story-telling. Most of economics seems to be story-telling. “Discipline” is just a fancy word for it.

Collecting baseball cards and memorizing the statistics printed on them is a discipline. Most of economics is less useful than collecting baseball cards — and a lot more destructive.

Here’s my hypothesis about economists: There are proportionally as many of them who act like scientists as there are baseball players who have career batting averages of at least .300.

* Richard Feynman, a physicist and real scientist, had a view of the scientific method different from Karl Popper’s standard taxonomy. I see Feynman’s view as complementary to Popper’s, not at odds with it. What is “constructive skepticism” (Feynman’s term) but a gentler way of saying that a hypothesis or theory might be falsified, and that the act of falsification may point to a better hypothesis or theory?

Modeling Is Not Science

The title of this post applies, inter alia, to econometric models — especially those that purport to forecast macroeconomic activity — and climate models — especially those that purport to forecast global temperatures. I have elsewhere essayed my assessments of macroeconomic and climate models. (See this and this, for example.) My purpose here is to offer a general warning about models that claim to depict and forecast the behavior of connected sets of phenomena (systems) that are large, complex, and dynamic. I draw, in part, on a paper that I wrote 28 years ago. That paper is about warfare models, but it has general applicability.


Philip M. Morse and George E. Kimball, pioneers in the field of military operations research — the analysis and modeling of military operations — wrote that the

successful application of operations research usually results in improvements by factors of 3 or 10 or more. . . . In our first study of any operation we are looking for these large factors of possible improvement. . . .

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. (Methods of Operations Research, 1946, p. 38)

This is science-speak for the following proposition: In large, complex, and dynamic systems (e.g., war, economy, climate) there is much uncertainty about the relevant parameters, about how to characterize their interactions mathematically, and about their numerical values.

Hemibel thinking assumes great importance in light of the imprecision inherent in models of large, complex, and dynamic systems. Consider, for example, a simple model with only 10 parameters. Even if such a model doesn’t omit crucial parameters or mischaracterize their interactions, its results must be taken with large doses of salt. Simple mathematics tells the cautionary tale: an error of about 12 percent in the value of each parameter can produce a result that is off by a factor of 3 (a hemibel); an error of about 25 percent in the value of each parameter can produce a result that is off by a factor of 10. (Remember, this is a model of a relatively small system.)
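The arithmetic above is easy to verify. As a sketch, assume the simplest case: a model whose output is a product of its 10 parameters, with every parameter off by the same fraction in the same direction (the model form is my illustrative assumption; the 12 percent and 25 percent figures are from the text):

```python
# Back-of-the-envelope check of the hemibel arithmetic: in a
# multiplicative model, per-parameter errors compound geometrically.

def compounded_error(per_param_error: float, n_params: int = 10) -> float:
    """Worst-case multiplicative error in the result when every one of
    n_params parameters is off by the same fraction, same direction."""
    return (1.0 + per_param_error) ** n_params

print(compounded_error(0.12))  # roughly 3 -- off by a hemibel
print(compounded_error(0.25))  # roughly 9-10 -- off by about a bel
```

The same point holds, less dramatically, when errors vary in size and direction; the product form merely makes the compounding transparent.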

If you think that models and “data” about such things as macroeconomic activity and climatic conditions cannot be as inaccurate as that, you have no idea how such models are devised or how such data are collected and reported. It would be kind to say that such models are incomplete, inaccurate guesswork. It would be fair to say that all too many of them reflect their developers’ policy biases.

Of course, given a (miraculously) complete model, data errors might (miraculously) be offsetting, but don’t bet on it. It’s not that simple: Some errors will be large and some errors will be small (but which are which?), and the errors may lie in either direction (but in which direction?). In any event, no amount of luck can prevent a modeler from constructing a model whose estimates advance a favored agenda (e.g., massive, indiscriminate government spending; massive, futile, and costly efforts to cool the planet).


The construction of a model is only one part of the scientific method. A model means nothing unless it can be tested repeatedly against facts (facts not already employed in the development of the model) and, through such tests, is found to be more accurate than alternative explanations of the same facts. As Morse and Kimball put it,

[t]o be valuable [operations research] must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. (Op. cit., p. 10)

Even after rigorous testing, a model is never proven. It is, at best, a plausible working hypothesis about relations between the phenomena that it encompasses.

A model is never proven for two reasons. First, new facts may be discovered that do not comport with the model. Second, the facts upon which a model is based may be open to a different interpretation, that is, they may support a new model that yields better predictions than its predecessor.

The fact that a model cannot be proven can be taken as an excuse for action: “We must act on the best information we have.” That excuse — which justifies an entire industry, namely, government-funded analysis — does not fly, as I discuss below.


Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.
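One of the tactics named above — focusing on data for a selected period of time — can be demonstrated in a few lines. The sketch below uses entirely synthetic data of my own construction: a noisy, mostly flat series with a modest rise near the end. Fitting a trend only to the hand-picked late window yields a far steeper slope than the full record supports:

```python
# Illustrative sketch of one data-torturing tactic: cherry-picking a
# time window before fitting a trend. Data are synthetic.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2000)
series = rng.normal(0.0, 0.2, size=years.size)  # flat series plus noise
series[75:] += np.linspace(0.0, 0.5, 25)        # modest rise after 1975

def trend_per_century(x, y):
    """Ordinary-least-squares slope, scaled to change per 100 years."""
    slope = np.polyfit(x, y, deg=1)[0]
    return 100.0 * slope

print(trend_per_century(years, series))            # full record: modest
print(trend_per_century(years[75:], series[75:]))  # late window: much steeper
```

The honest check — testing the result against all the data, not just the favored window — is exactly what the tactic is designed to avoid.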

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed?

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a masterful demonstration of a model that proves what it was designed to prove by the assumptions built into it, see this.


Government policies can be dangerous and impoverishing things. Despite that, it is hard (if not impossible) to modify and reverse government policies. Consider, for example, the establishment of public schools more than a century ago, the establishment of Social Security more than 70 years ago, and the establishment of Medicare and Medicaid more than 40 years ago. There is plenty of evidence that all four institutions are monumentally expensive failures. But all four institutions have become so entrenched that to call for their abolition is to be thought of as an eccentric, if not an uncaring anti-government zealot. (For the latest about public schools, see this.)

The principal lesson to be drawn from the history of massive government programs is that those who were skeptical of those programs were entirely justified in their skepticism. Informed, articulate skepticism of the kind I counsel here is the best weapon — perhaps the only effective one — in the fight to defend what remains of liberty and property against the depredations of massive government programs.

Skepticism often is met with the claim that such-and-such a model is the “best available” on a subject. But the “best available” model — even if it is the best available one — may be terrible indeed. Relying on the “best available” model for the sake of government action is like sending an army into battle — and likely to defeat — on the basis of rumors about the enemy’s position and strength.

With respect to the economy and the climate, there are too many rumor-mongers (“scientists” with an agenda), too many gullible and compliant generals (politicians), and far too many soldiers available as cannon-fodder (the paying public).


The average person is so mystified and awed by “science” that he has little if any understanding of its limitations and pitfalls, some of which I have addressed here in the context of modeling. The average person’s mystification and awe are unjustified, given that many so-called scientists exploit the public’s mystification and awe in order to advance personal biases, gain the approval of other scientists (whence “consensus”), and garner funding for research that yields results congenial to its sponsors (e.g., global warming is an artifact of human activity).

Isaac Newton, who must be numbered among the greatest scientists in human history, was not a flawless scientist. (Has there ever been one?) But scientists and non-scientists alike should heed Newton on the subject of scientific humility:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me. (Quoted in Horace Freeland Judson, The Search for Solutions, 1980, p. 5.)