Old News Incites a Rant

A correspondent sent me a link to a video about Greenland ice core records. He called the video an eye-opener, which is rather surprising to me because the man is a trained scientist and an experienced analyst of quantitative data. The video wasn’t at all an eye-opener for me. Here is my reply to the correspondent:

I began to look seriously at global warming ~2005, and used to write extensively about it. The info provided in the video is consistent with other observations, including ice-core measurements taken at Vostok, Antarctica. Here’s a related post, which includes the Vostok readings and much more: http://libertycorner.blogspot.com/2007/08/more-climate-stuff.html. Some of the other evidence that I have accrued is summarized here: https://politicsandprosperity.com/climate-change/.

Findings like those presented in the video seem to have no effect on the politics of “climate change”. It is a chimera, concocted by “scientists” who manipulate complex models (which have almost no predictive power) and, on the basis of those models, constantly adjust historical temperature readings to comport with what should have happened according to the models. (Thus “proving” the correctness of the models.) This kind of manipulation is widely known and well documented, as is the predictive failure of the models. But there is a “climate change” industry — a government-academic-media complex if you will — that has a life of its own, and it has transformed what should be a scientific issue into a secular religion. Due in no small part to the leftist leanings of public-school and university educators, tens of millions of American children and young adults have been brainwashed into believing that Earth is headed for a fiery denouement if “evil” things like fossil fuels aren’t banned. Being impressionable — not to mention scientifically and economically illiterate — they don’t question the pseudo-science that underlies “climate change” or consider the economic consequences of drastic anti-warming measures, which would yield (at best) a lowering of Earth’s average temperature by ~0.1 degree by 2100 in exchange for a return to the horse-and-buggy age.

Here’s what I left unsaid:

The only possible way to defeat the “climate change” industry is to elect politicians who firmly reject its “intellectual” foundations and its draconian prescriptions. There was one such politician who managed to claw his way to the top in the U.S., but he was turned out of office, due in large part to the efforts of a powerful cabal (https://time.com/5936036/secret-2020-election-campaign/), which is heavily invested in an all-powerful central government that can shape the U.S. to its liking. It didn’t help that the politician was rude and crude, which turned off fastidious voters (like you) who didn’t think about or care about the consequences of a Democrat return to power.

End of rant.

Modeling and Science Revisited

I have written a lot about modeling and science. (See the long list of posts at “Modeling, Science, and ‘Reason’“.) I have said, more than once, that modeling isn’t science. What I should have said — though it was always implied — is that a model isn’t scientific if it is merely synthetic.

What do I mean by that? Here is an example by way of contrast. The famous equation E = mc² is a synthetic model in that it is derived from Einstein’s special theory of relativity (and other physical equations). But it is also an empirical model in that the relationship between mass (m) and energy (E) can be confirmed by observation (given suitable instruments).

On the other hand, a complex model of the U.S. economy, a model of Earth’s “average” temperature (called misleadingly a climate model), or a model of combat (to give a few examples) is only synthetic.

Why do I say that a complex model (of the kind mentioned above) is only synthetic? Such a model consists of a large number of modules, each of which is a mathematical formulation of some aspect of the larger phenomenon being modeled. Here’s a simple example: an encounter between a submarine and a surface ship, where the outcome is expressed as the probability that the submarine will sink the surface ship. The outcome could be expressed in this way:

S = D x F x H x K x C, where S (the probability that the submarine sinks the surface ship) is the product of:

D = probability that submarine detects surface ship within torpedo range

F = probability that, given detection, submarine is able to “fix” the target and fire a torpedo (or salvo of them)

H = probability that, given the firing of a torpedo (or salvo), the surface ship is hit

K = probability that, given a hit (or hits), the surface ship is sunk

C = probability that the submarine survives efforts to find and nullify it before it can detect a surface ship
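
For concreteness, here is a minimal sketch of the computation in Python. The module probabilities are invented for illustration and come from no actual analysis:

    # Single-encounter model: S is the product of the module probabilities.
    # All of the input values below are hypothetical.
    D = 0.6   # detection within torpedo range
    F = 0.8   # fix and fire, given detection
    H = 0.5   # hit, given firing
    K = 0.7   # sink, given a hit (or hits)
    C = 0.9   # submarine survives counter-efforts

    S = D * F * H * K * C
    print(f"P(submarine sinks surface ship) = {S:.3f}")  # 0.151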

This is a simple model by comparison with a model of the U.S. economy, a global climate model, or a model of a battle involving large numbers of various kinds of weapons. In fact, it is a simplistic model of combat. Each of the modules could be decomposed into many sub-modules; for example, the module for D could consist of sub-modules for sonar accuracy, sonar operator acuity, acoustic conditions in the area of operation, countermeasures deployed by the target, etc. In any event, the module for D will consist of a mathematical relationship, based perhaps on some statistics collected from tests or exercises (i.e., not actual combat). The mathematical relationship will encompass many assumptions (mainly implicit ones) about sonar accuracy, sonar operator acuity, etc. The same goes for the other modules — C, in particular, which encompasses all of the effects of D, F, H, and K — at a minimum.

In sum, the number of unknowns completely swamps the number of knowns. There is nothing close to certainty about the model — or any model of its kind. (In the case of the model of S, for example, relatively small errors — say, 25 percent from the actual value of each variable — can yield an estimate of S that is three times greater than, or one-third as much as, the actual value of S.) The mathematical operations involved do nothing to resolve the uncertainty; they merely multiply it. But the mathematical operations nevertheless convey the appearance of certainty because they yield numbers. The numbers merely represent a lot of guesses, but they seem authoritative because numbers mesmerize most people — even scientists, who should always be skeptical of them.
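
The compounding of errors is easy to demonstrate. A sketch, using the 25-percent figure from the parenthetical above (0.80 being the reciprocal of 1.25):

    # If every one of the five module estimates is off by the same factor,
    # the error in S compounds multiplicatively.
    print(f"all modules high by 25 percent: S off by a factor of {1.25 ** 5:.2f}")  # about 3.05
    print(f"all modules low by the same factor: S off by a factor of {0.80 ** 5:.2f}")  # about 0.33, i.e., one-third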

Despite all of that, analysts have for many decades been producing — and decision-makers have been consuming — the results of such models as the basis for choosing defense systems. Models of similar complexity have been and are being used in making decisions about a broad range of policies affecting the economy, health care, transportation, education, the environment, the climate (i.e., “global warming”), and on into the night.

The unfounded confidence that modelers have in their models, because the models produce numbers, captivates most decision-makers, who simply want answers. And so, modelers will go to ridiculous extremes. One not-untypical example that I recall from my days as an in-house critic of analysts’ work is the model that purported to compare competing weapons (one of which was still in development) based on their relative contribution to the outcome of a hypothetical battle. The specific measure was the movement of the forward edge of the battle area (FEBA), estimated to within a yard.

Global climate models are like that warfare model: Their creators pretend that they can estimate the change in the average temperature of the globe to within less than a tenth of a degree. If you believe that, I have a bridge to sell you.


Related reading: Robert L. Bradley Jr., “Climate Models: Worse than Nothing?“, Watts Up With That?, June 23, 2021. (Yes. See below.)

Related pages:

Climate Change

Modeling, Science, and “Reason”

Analytical and Scientific Arrogance

It is customary in democratic countries to deplore expenditures on armaments as conflicting with the requirements of the social services. There is a tendency to forget that the most important social service that a government can do for its people is to keep them alive and free.

Marshal of the Royal Air Force Sir John Slessor, Strategy for the West

I’m returning to the past to make a timeless point: Analysis is a tool of decision-making, not a substitute for it.

That’s a point to which every analyst will subscribe, just as every judicial candidate will claim to revere the Constitution. But analysts past and present have tended to read their policy preferences into their analytical work, just as too many judges read their political preferences into the Constitution.

What is an analyst? Someone whose occupation requires him to gather facts bearing on an issue, discern robust relationships among the facts, and draw conclusions from those relationships.

Many professionals — from economists to physicists to so-called climate scientists — are more or less analytical in the practice of their professions. That is, they are not just seeking knowledge, but seeking to influence policies which depend on that knowledge.

There is also in this country (and in the West, generally) a kind of person who is an analyst first and a disciplinary specialist second (if at all). Such a person brings his pattern-seeking skills to the problems facing decision-makers in government and industry. Depending on the kinds of issues he addresses or the kinds of techniques that he deploys, he may be called a policy analyst, operations research analyst, management consultant, or something of that kind.

It is one thing to say, as a scientist or analyst, that a certain option (a policy, a system, a tactic) is probably better than the alternatives, when judged against a specific criterion (most effective for a given cost, most effective against a certain kind of enemy force). It is quite another thing to say that the option is the one that the decision-maker should adopt. The scientist or analyst is looking at a small slice of the world; the decision-maker has to take into account things that the scientist or analyst did not (and often could not) take into account (economic consequences, political feasibility, compatibility with other existing systems and policies).

It is (or should be) unconscionable for a scientist or analyst to state or imply that he has the “right” answer. But the clever arguer avoids coming straight out with the “right” answer; instead, he slants his presentation in a way that makes the “right” answer seem right.

A classic case in point is the hysteria surrounding the increase in “global” temperature in the latter part of the 20th century, and the coincidence of that increase with the rise in CO2. I have had much to say about the hysteria and the pseudo-science upon which it is based. (See links at the end of this post.) Here, I will take as a case study an event to which I was somewhat close: the treatment of the Navy’s proposal, made in the early 1980s, for an expansion to what was conveniently characterized as the 600-ship Navy. (The expansion would have involved personnel, logistics systems, ancillary war-fighting systems, stockpiles of parts and ammunition, and aircraft of many kinds — all in addition to a 25-percent increase in the number of ships in active service.)

The usual suspects, of an ilk I profiled here, wasted no time in making the 600-ship Navy seem like a bad idea. Of the many studies and memos on the subject, two by the Congressional Budget Office stand out as exemplars of slanted analysis by innuendo: “Building a 600-Ship Navy: Costs, Timing, and Alternative Approaches” (March 1982), and “Future Budget Requirements for the 600-Ship Navy: Preliminary Analysis” (April 1985). What did the “whiz kids” at CBO have to say about the 600-ship Navy? Here are excerpts of the concluding sections:

The Administration’s five-year shipbuilding plan, containing 133 new construction ships and estimated to cost over $80 billion in fiscal year 1983 dollars, is more ambitious than previous programs submitted to the Congress in the past few years. It does not, however, contain enough ships to realize the Navy’s announced force level goals for an expanded Navy. In addition, this plan—as has been the case with so many previous plans—has most of its ships programmed in the later out-years. Over half of the 133 new construction ships are programmed for the last two years of the five-year plan. Achievement of the Navy’s expanded force level goals would require adhering to the out-year building plans and continued high levels of construction in the years beyond fiscal year 1987. [1982 report, pp. 71-72]

Even the budget increases estimated here would be difficult to achieve if history is a guide. Since the end of World War II, the Navy has never sustained real increases in its budget for more than five consecutive years. The sustained 15-year expansion required to achieve and sustain the Navy’s present plans would result in a historic change in budget trends. [1985 report, p. 26]

The bias against the 600-ship Navy drips from the pages. The “argument” goes like this: If it hasn’t been done, it can’t be done and, therefore, shouldn’t be attempted. Why not? Because the analysts at CBO were a breed of cat that emerged in the 1960s, when Robert Strange McNamara and his minions used simplistic analysis (“tablesmanship”) to play “gotcha” with the military services:

We [I was one of the minions] did it because we were encouraged to do it, though not in so many words. And we got away with it, not because we were better analysts — most of our work was simplistic stuff — but because we usually had the last word. (Only an impassioned personal intercession by a service chief might persuade McNamara to go against SA [the Systems Analysis office run by Alain Enthoven] — and the key word is “might.”) The irony of the whole process was that McNamara, in effect, substituted “civilian judgment” for oft-scorned “military judgment.” McNamara revealed his preference for “civilian judgment” by elevating Enthoven and SA a level in the hierarchy in 1965, even though (or perhaps because) the services and JCS had been open in their disdain of SA and its snotty young civilians.

In the case of the 600-ship Navy, civilian analysts did their best to derail it by sending the barely disguised message that it was “unaffordable”. I was reminded of this “insight” by a colleague of long standing who recently proclaimed that “any half-decent cost model would show a 600-ship Navy was unsustainable into this century.” How could a cost model show such a thing when the sustainability (affordability) of defense is a matter of political will, not arithmetic?

Defense spending fluctuates as a function of perceived necessity. Consider, for example, this graph (misleadingly labeled “Recent Defense Spending”) from usgovernmentspending.com, which shows defense spending as a percentage of GDP for fiscal year (FY) 1792 to FY 2017:

[Figure: Defense spending as a percentage of GDP, FY 1792 to FY 2017.]

What was “unaffordable” before World War II suddenly became affordable. And so it has gone throughout the history of the republic. Affordability (or sustainability) is a political issue, not a line drawn in the sand by a smart-ass analyst who gives no thought to the consequences of spending too little on defense.

I will now zoom in on the era of interest.

CBO’s “Building a 600-Ship Navy: Costs, Timing, and Alternative Approaches“, which crystallized opposition to the 600-ship Navy, estimates the long-run annual obligational authority required to sustain a 600-ship Navy (of the Navy’s design) to be about 20 percent higher in constant dollars than the FY 1982 Navy budget. (See Options I and II in Figure 2, p. 50.) The long run would have begun around FY 1994, following several years of higher spending associated with the buildup of forces. I don’t have a historical breakdown of the Department of Defense (DoD) budget by service, but I found values for all-DoD spending on military programs in the Office of Management and Budget’s Historical Tables. Drawing on Tables 5.2 and 10.1, I constructed a constant-dollar index of DoD’s obligational authority (FY 1982 = 1.00); a sketch of the arithmetic follows the table:

FY Index
1983 1.08
1984 1.13
1985 1.21
1986 1.17
1987 1.13
1988 1.11
1989 1.10
1990 1.07
1991 0.97
1992 0.97
1993 0.90
1994 0.82
1995 0.82
1996 0.80
1997 0.80
1998 0.79
1999 0.84
2000 0.86
2001 0.92
2002 0.98
2003 1.23
2004 1.29
2005 1.28
2006 1.36
2007 1.50
2008 1.65
2009 1.61
2010 1.66
2011 1.62
2012 1.51
2013 1.32
2014 1.32
2015 1.25
2016 1.29
2017 1.34
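
For what it’s worth, the construction of the index is simple arithmetic: deflate each year’s obligational authority and divide by the deflated FY 1982 value. A minimal sketch in Python, with placeholder numbers standing in for the actual values from OMB’s Tables 5.2 and 10.1:

    # Constant-dollar index of DoD obligational authority, FY 1982 = 1.00.
    # The dollar figures and deflators below are placeholders, not OMB's values.
    obligational_authority = {1982: 213.0, 1983: 239.0, 1984: 258.0}  # $ billions, current
    deflator               = {1982: 1.000, 1983: 1.040, 1984: 1.080}  # FY 1982 = 1.000

    base = obligational_authority[1982] / deflator[1982]
    index = {fy: (obligational_authority[fy] / deflator[fy]) / base
             for fy in obligational_authority}
    for fy in sorted(index):
        print(fy, f"{index[fy]:.2f}")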

There was no inherent reason that defense spending couldn’t have remained on the trajectory of the middle 1980s. The slowdown of the late 1980s was a reflection of improved relations between the U.S. and USSR. Those improved relations had much to do with the Reagan defense buildup, of which the goal of attaining a 600-ship Navy was an integral part.

The Reagan buildup helped to convince Soviet leaders (Gorbachev in particular) that trying to keep pace with the U.S. was futile and (actually) unaffordable. The rest — the end of the Cold War and the dissolution of the USSR — is history. The buildup, in other words, sowed the seeds of its own demise. But that couldn’t have been predicted with certainty in the early-to-middle 1980s, when CBO and others were doing their best to undermine political support for more defense spending. Had CBO and the other nay-sayers succeeded in their aims, the Cold War and the USSR might still be with us.

The defense drawdown of the mid-1990s was a deliberate response to the end of the Cold War and lack of other serious threats, not a historical necessity. It was certainly not on the table in the early 1980s, when the 600-ship Navy was being pushed. Had the Cold War not thawed and ended, there is no reason that U.S. defense spending couldn’t have continued at the pace of the middle 1980s, or higher. As is evident in the index values for recent years, even after drastic force reductions in Iraq, defense spending is now about one-third higher than it was in FY 1982.

John Lehman, Secretary of the Navy from 1981 to 1987, was rightly incensed that analysts — some of them on his payroll as civilian employees and contractors — were, in effect, undermining a deliberate strategy of pressing against a key Soviet weakness — the unsustainability of its defense strategy. There was much lamentation at the time about Lehman’s “war” on the offending parties, one of which was the think-tank for which I then worked. I can now admit openly that I was sympathetic to Lehman and offended by the arrogance of analysts who believed that it was their job to suggest that spending more on defense was “unaffordable”.

When I was a young analyst I was handed a pile of required reading material. One of the items was Methods of Operations Research, by Philip M. Morse and George E. Kimball. Morse, in the early months of America’s involvement in World War II, founded the civilian operations-research organization from which my think-tank evolved. Kimball was a leading member of that organization. Their book is notable not just as a compendium of analytical methods that were applied, with much success, to the war effort. It is also introspective — and properly humble — about the power and role of analysis.

Two passages, in particular, have stuck with me for the more than 50 years since I first read the book. Here is one of them:

[S]uccessful application of operations research usually results in improvements by factors of 3 or 10 or more…. In our first study of any operation we are looking for these large factors of possible improvement…. They can be discovered if the [variables] are given only one significant figure,…any greater accuracy simply adds unessential detail.

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. [p. 38]
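
Hemibel thinking is easy to put into practice. Here is a sketch (mine, not Morse and Kimball’s) that rounds any positive estimate to the nearest hemibel:

    import math

    def round_to_hemibel(x):
        """Round a positive quantity to the nearest half-power of 10
        (..., 0.32, 1, 3.2, 10, 32, ...)."""
        hemibels = round(2 * math.log10(x))  # number of hemibels
        return 10 ** (hemibels / 2)

    for x in (0.2, 0.9, 2.4, 7.0, 45.0):
        print(f"{x} -> {round_to_hemibel(x):.3g}")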

Morse and Kimball — two brilliant scientists and analysts, who had worked with actual data (pardon the redundancy) about combat operations — counseled against making too much of quantitative estimates given the uncertainties inherent in combat. But, as I have seen over the years, analysts eager to “prove” something nevertheless make a huge deal out of minuscule differences in quantitative estimates — estimates based not on actual combat operations but on theoretical values derived from models of systems and operations yet to see the light of day. (I also saw, and still see, too much “analysis” about soft subjects, such as domestic politics and international relations. The amount of snake oil emitted by “analysts” — sometimes called scholars, journalists, pundits, and commentators — would fill the Great Lakes. Their perceptions of reality have an uncanny way of supporting their unabashed decrees about policy.)

The second memorable passage from Methods of Operations Research goes directly to the point of this post:

Operations research done separately from an administrator in charge of operations becomes an empty exercise. [p. 10]

In the case of CBO and other opponents of the 600-ship Navy, substitute “cost estimate” for “operations research”, “responsible defense official” for “administrator in charge”, and “strategy” for “operations”. The principle is the same: The CBO and its ilk knew the price of the 600-ship Navy, but had no inkling of its value.

Too many scientists and analysts want to make policy. On the evidence of my close association with scientists and analysts over the years — including a stint as an unsparing reviewer of their products — I would say that they should learn to think clearly before they inflict their views on others. But too many of them — even those with Ph.D.s in STEM disciplines — are incapable of thinking clearly, and more than capable of slanting their work to support their biases. Exhibit A: Michael Mann, James Hansen (more), and their co-conspirators in the catastrophic-anthropogenic-global-warming scam.


Related posts:
The Limits of Science
How to View Defense Spending
Modeling Is Not Science
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
The McNamara Legacy: A Personal Perspective
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
Verbal Regression Analysis, the “End of History,” and Think-Tanks
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
AGW in Austin? (II)
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Words Fail Us
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Babe Ruth and the Hot-Hand Hypothesis
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable
A (Long) Footnote about Science
Further Thoughts about Probability
Climate Scare Tactics: The Mythical Ever-Rising 30-Year Average
A Grand Strategy for the United States

The Balderdash Chronicles

Balderdash is nonsense, to put it succinctly. Less succinctly, balderdash is stupid or illogical talk; senseless rubbish. Rather thoroughly, it is

balls, bull, rubbish, shit, rot, crap, garbage, trash, bunk, bullshit, hot air, tosh, waffle, pap, cobblers, bilge, drivel, twaddle, tripe, gibberish, guff, moonshine, claptrap, hogwash, hokum, piffle, poppycock, bosh, eyewash, tommyrot, horsefeathers, or buncombe.

I have encountered innumerable examples of balderdash in my 35 years of full-time work, 14 subsequent years of blogging, and many overlapping years as an observer of the political scene. This essay documents some of the worst balderdash that I have come across.

THE LIMITS OF SCIENCE

Science (or what too often passes for it) generates an inordinate amount of balderdash. Consider an article in The Christian Science Monitor: “Why the Universe Isn’t Supposed to Exist”, which reads in part:

The universe shouldn’t exist — at least according to a new theory.

Modeling of conditions soon after the Big Bang suggests the universe should have collapsed just microseconds after its explosive birth, the new study suggests.

“During the early universe, we expected cosmic inflation — this is a rapid expansion of the universe right after the Big Bang,” said study co-author Robert Hogan, a doctoral candidate in physics at King’s College in London. “This expansion causes lots of stuff to shake around, and if we shake it too much, we could go into this new energy space, which could cause the universe to collapse.”

Physicists draw that conclusion from a model that accounts for the properties of the newly discovered Higgs boson particle, which is thought to explain how other particles get their mass; faint traces of gravitational waves formed at the universe’s origin also inform the conclusion.

Of course, there must be something missing from these calculations.

“We are here talking about it,” Hogan told Live Science. “That means we have to extend our theories to explain why this didn’t happen.”

No kidding!

Though there’s much more to come, this example should tell you all that you need to know about the fallibility of scientists. If you need more examples, consider these.

MODELS LIE WHEN LIARS MODEL

Not that there’s anything wrong with being wrong, but there’s a great deal wrong with seizing on a transitory coincidence between two variables (CO2 emissions and “global” temperatures in the late 1900s) and spurring a massively wrong-headed “scientific” mania — the mania of anthropogenic global warming.

What it comes down to is modeling, which is simply a way of baking one’s assumptions into a pseudo-scientific mathematical concoction. Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.
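
The window-selection trick, in particular, is easy to demonstrate. A toy sketch: the series below is a random walk, so it has no true trend by construction, yet different windows can show “trends” of different signs:

    import random
    random.seed(2)

    # A random walk: by construction, there is no underlying trend.
    series = [0.0]
    for _ in range(99):
        series.append(series[-1] + random.gauss(0, 1))

    def ols_slope(ys):
        """Ordinary least-squares slope of ys against time."""
        n = len(ys)
        xbar, ybar = (n - 1) / 2, sum(ys) / n
        num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
        den = sum((i - xbar) ** 2 for i in range(n))
        return num / den

    print(f"slope, full series:   {ols_slope(series):+.3f}")
    print(f"slope, periods 0-29:  {ols_slope(series[:30]):+.3f}")
    print(f"slope, periods 70-99: {ols_slope(series[70:]):+.3f}")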

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed?

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a general explanation of the sham, see this.

SCIENCE VS. SCIENTISM: STEVEN PINKER’S BALDERDASH

The examples that I’ve adduced thus far (and most of those that follow) demonstrate a mode of thought known as scientism: the application of the tools and language of science to create a pretense of knowledge.

No less a personage than Steven Pinker defends scientism in “Science Is Not Your Enemy”. Actually, Pinker doesn’t overtly defend scientism, which is indefensible; he just redefines it to mean science:

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.

After that slippery performance, it’s all smooth sailing — or so Pinker thinks — because all he has to do is point out all the good things about science. And if scientism=science, then scientism is good, right?

Wrong. Scientism remains indefensible, and there’s a lot of scientism in what passes for science. Pinker says this, for example:

The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation.

There is nothing new in this, as Pinker admits by adverting to Madison. Nor was the understanding of human nature “submerged” except in the writings of scientistic social “scientists”. We ordinary mortals were never fooled. Moreover, Pinker’s idea of scientific political science seems to be data-dredging:

With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively.

As explained here, data-dredging is about as scientistic as it gets:

When enough hypotheses are tested, it is virtually certain that some falsely appear statistically significant, since every data set with any degree of randomness contains some spurious correlations. Researchers using data mining techniques if they are not careful can be easily misled by these apparently significant results, even though they are mere artifacts of random variation.
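
A simulation makes the point concrete. The sketch below (mine, not the quoted source’s) tests 200 pure-noise variables against a pure-noise target; by chance alone, roughly 5 percent of them will clear the conventional significance bar:

    import random
    random.seed(0)

    def corr(xs, ys):
        """Pearson correlation coefficient."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    n = 30
    target = [random.gauss(0, 1) for _ in range(n)]  # pure noise
    hits = sum(
        1 for _ in range(200)
        if abs(corr([random.gauss(0, 1) for _ in range(n)], target)) > 0.361
    )  # 0.361 is the critical |r| for p < .05, two-tailed, n = 30
    print(f"{hits} of 200 noise variables appear 'statistically significant'")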

Turning to the humanities, Pinker writes:

[T]here can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate [sic] and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of a progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

What on earth is Pinker talking about? This is over-the-top bafflegab worthy of Professor Irwin Corey. But because it comes from the keyboard of a noted (self-promoting) academic, we are meant to take it seriously.

Yes, art, culture, and society are products of human brains. So what? Poker is, too, and it’s a lot more amenable to explication by the mathematical tools of science. But the successful application of those tools depends on traits that are more art than science (e.g., bluffing, spotting “tells”, and avoiding “tells”).

More “explanatory depth” in the humanities means a deeper pile of B.S. Great art, literature, and music aren’t concocted formulaically. If they could be, modernism and postmodernism wouldn’t have yielded mountains of trash.

Oh, I know: It will be different next time. As if the tools of science are immune to misuse by obscurantists, relativists, and practitioners of political correctness. Tell it to those climatologists who dare to challenge the conventional wisdom about anthropogenic global warming. Tell it to the “sub-human” victims of the Third Reich’s medical experiments and gas chambers.

Pinker anticipates this kind of objection:

At a 2011 conference, [a] colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

But the Tuskegee study was only a one-time failure in the sense that it was the only Tuskegee study. As a type of failure — the misuse of science (witting and unwitting) — it goes hand-in-hand with the advance of scientific knowledge. Should science be abandoned because of that? Of course not. But the hard fact is that science, qua science, is powerless against human nature.

Pinker plods on by describing ways in which science can contribute to the visual arts, music, and literary scholarship:

The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

I wonder how Rembrandt and the Impressionists (among other pre-moderns) managed to create visual art of such evident excellence without relying on the kinds of scientific mechanisms invoked by Pinker. I wonder what music scholars would learn about excellence in composition that isn’t already evident in the general loathing of audiences for most “serious” modern and contemporary music.

As for literature, great writers know instinctively and through self-criticism how to tell stories that realistically depict character, social psychology, culture, conflict, and all the rest. Scholars (and critics), at best, can acknowledge what rings true and has dramatic or comedic merit. Scientistic pretensions in scholarship (and criticism) may result in promotions and raises for the pretentious, but they do not add to the sum of human enjoyment — which is the real test of literature.

Pinker inveighs against critics of scientism (science, in Pinker’s vocabulary) who cry “reductionism” and “simplification”. With respect to the former, Pinker writes:

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

It is reductionist to explain a complex happening in terms of a deeper principle when that principle fails to account for the complex happening. Pinker obscures that essential point by offering a silly and irrelevant example about World War I. This bit of misdirection is unsurprising, given Pinker’s foray into reductionism, The Better Angels of Our Nature: Why Violence Has Declined, discussed later.

As for simplification, Pinker says:

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic.

Pinker again dodges the issue. Simplification is simplistic when the “general principles” fail to account adequately for the phenomenon in question.

Much of the problem arises because of a simple fact that is too often overlooked: Scientists, for the most part, are human beings with a particular aptitude for pattern-seeking and the manipulation of abstract ideas. They can easily get lost in such pursuits and fail to notice that their abstractions have taken them a long way from reality (e.g., Einstein’s special theory of relativity).

In sum, scientists are human and fallible. It is in the best tradition of science to distrust their scientific claims and to dismiss their non-scientific utterances.

ECONOMICS: PHYSICS ENVY AT WORK

Economics is rife with balderdash cloaked in mathematics. Economists who rely heavily on mathematics like to say (and perhaps even believe) that mathematical expression is more precise than mere words. But, as Arnold Kling points out in “An Important Emerging Economic Paradigm”, mathematical economics is a language of faux precision, which is useful only when applied to well-defined, narrow problems. It can’t address the big issues — such as economic growth — which depend on variables such as the rule of law and social norms, which defy mathematical expression and quantification.

I would go a step further and argue that mathematical economics borders on obscurantism. It’s a cult whose followers speak an arcane language not only to communicate among themselves but to obscure the essentially bankrupt nature of their craft from others. Mathematical expression actually hides the assumptions that underlie it. It’s far easier to identify and challenge the assumptions of “literary” economics than it is to identify and challenge the assumptions of mathematical economics.

I daresay that this is true even for persons who are conversant in mathematics. They may be able to manipulate the equations of mathematical economics with ease, but they do so without grasping the deeper meanings — the assumptions and complexities — hidden by those equations. In fact, the ease of manipulating the equations gives them a false sense of mastery of the underlying, real concepts.

Much of the economics profession is nevertheless dedicated to the protection and preservation of the essential incompetence of mathematical economists. This is from “An Important Emerging Economic Paradigm”:

One of the best incumbent-protection rackets going today is for mathematical theorists in economics departments. The top departments will not certify someone as being qualified to have an advanced degree without first subjecting the student to the most rigorous mathematical economic theory. The rationale for this is reminiscent of fraternity hazing. “We went through it, so should they.”

Mathematical hazing persists even though there are signs that the prestige of math is on the decline within the profession. The important Clark Medal, awarded to the most accomplished American economist under the age of 40, has not gone to a mathematical theorist since 1989.

These hazing rituals can have real consequences. In medicine, the controversial tradition of long work hours for medical residents has come under scrutiny over the last few years. In economics, mathematical hazing is not causing immediate harm to medical patients. But it probably is working to the long-term detriment of the profession.

The hazing ritual in economics has at least two real and damaging consequences. First, it discourages entry into the economics profession by persons who, like Kling, can discuss economic behavior without resorting to the sterile language of mathematics. Second, it leads to economics that’s irrelevant to the real world — and dead wrong.

How wrong? Economists are notoriously bad at constructing models that adequately predict near-term changes in GDP. That task should be easier than sorting out the microeconomic complexities of the labor market.

Take Professor Ray Fair, for example. Professor Fair teaches macroeconomic theory, econometrics, and macroeconometric models at Yale University. He has been plying his trade since 1968, first at Princeton, then at M.I.T., and (since 1974) at Yale. Those are big-name schools, so I assume that Prof. Fair is a big name in his field.

Well, since 1983, Prof. Fair has been forecasting changes in real GDP over the next four quarters. He has made 80 such forecasts based on a model that he has undoubtedly tweaked over the years. The current model is here. His forecasting track record is here. How has he done? Here’s how:

1. The median absolute error of his forecasts is 30 percent.

2. The mean absolute error of his forecasts is 70 percent.

3. His forecasts are rather systematically biased: too high when real four-quarter GDP growth is less than 4 percent; too low when real four-quarter GDP growth is greater than 4 percent.

4. His forecasts have grown generally worse — not better — with time.
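
Those error statistics are straightforward to compute from the predicted and actual values in Fair’s Table 4. A sketch, with placeholder numbers rather than Fair’s:

    from statistics import mean, median

    # Predicted and actual four-quarter real GDP growth, percent.
    # Placeholder values; the real ones are in Fair's Table 4.
    predicted = [3.1, 2.5, 4.0, 1.8, 2.9]
    actual    = [2.0, 3.0, 2.8, 2.5, 0.9]

    abs_pct_errors = [abs(p - a) / abs(a) * 100 for p, a in zip(predicted, actual)]
    print(f"median absolute error: {median(abs_pct_errors):.0f} percent")
    print(f"mean absolute error:   {mean(abs_pct_errors):.0f} percent")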

Prof. Fair is still at it. And his forecasts continue to grow worse with time:

[Figure: Fair-model forecasting errors vs. time.]
This and later graphs pertaining to Prof. Fair’s forecasts were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing; 8 percent of the errors exceed 200 percent.

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

[Figure: Fair-model estimated vs. actual growth rate.]

It fails the most important test; that is, it doesn’t reflect the downward trend in economic growth:

[Figure: Fair-model year-over-year growth, estimated and actual.]

THE INVISIBLE ELEPHANT IN THE ROOM

Professor Fair and his prognosticating ilk are pikers compared with John Maynard Keynes and his disciples. The Keynesian multiplier is the fraud of all frauds, not just in economics but in politics, where it is too often invoked as an excuse for taking money from productive uses and pouring it down the rathole of government spending.

The Keynesian (fiscal) multiplier is defined as

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier. Please go to “The Keynesian Multiplier: Fiction vs. Fact” for a detailed explanation of the phony math and a derivation of the true multiplier, which is decidedly negative. Here’s the short version:

  • The phony math involves the use of an accounting identity that can be manipulated in many ways, to “prove” many things. But the accounting identity doesn’t express an operational (or empirical) relationship between a change in government spending and a change in GDP.
  • The true value of the multiplier isn’t 5 (a common mathematical estimate), 1.5 (a common but mistaken empirical estimate used for government purposes), or any positive number. The true value represents the negative relationship between the change in government spending (including transfer payments) as a fraction of GDP and the change in the rate of real GDP growth. Specifically, where F represents government spending as a fraction of GDP,

a rise in F from 0.24 to 0.33 (the actual change from 1947 to 2007) would reduce the real rate of economic growth by 3.1 percentage points. The real rate of growth from 1947 to 1957 was 4 percent. Other things being the same, the rate of growth would have dropped to 0.9 percent in the period 2008-2017. It actually dropped to 1.4 percent, which is within the standard error of the estimate.

  • That kind of drop makes a huge difference in the incomes of Americans. In 10 years, GDP rises by almost 50 percent when the rate of growth is 4 percent, but by only 15 percent when the rate of growth is 1.4 percent (the sketch below checks the arithmetic). Think of the tens of millions of people who would be living in comfort rather than squalor were it not for Keynesian balderdash, which turns reality on its head in order to promote big government.
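
The arithmetic in that last point is easy to verify (a sketch):

    # Cumulative effect of 10 years of compound growth.
    for rate in (0.04, 0.014):
        gain = (1 + rate) ** 10 - 1
        print(f"{rate:.1%} annual growth -> GDP up {gain:.0%} after 10 years")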

MANAGEMENT “SCIENCE”

A hot new item in management “science” a few years ago was the Candle Problem. Graham Morehead describes the problem and discusses its broader, “scientifically” supported conclusions:

The Candle Problem was first presented by Karl Duncker. Published posthumously in 1945, “On problem solving” describes how Duncker provided subjects with a candle, some matches, and a box of tacks. He told each subject to affix the candle to a cork board wall in such a way that when lit, the candle won’t drip wax on the table below (see figure at right). Can you think of the answer?

The only answer that really works is this: 1. Dump the tacks out of the box, 2. Tack the box to the wall, 3. Light the candle and affix it atop the box as if it were a candle-holder. Incidentally, the problem was much easier to solve if the tacks weren’t in the box at the beginning. When the tacks were in the box the participant saw it only as a tack-box, not something they could use to solve the problem. This phenomenon is called “functional fixedness.”

Sam Glucksberg added a fascinating twist to this finding in his 1962 paper, “Influence of strength of drive on functional fixedness and perceptual recognition” (Journal of Experimental Psychology, 1962, Vol. 63, No. 1, 36-41). He studied the effect of financial incentives on solving the candle problem. To one group he offered no money. To the other group he offered an amount of money for solving the problem fast.

Remember, there are two candle problems. Let the “Simple Candle Problem” be the one where the tacks are outside the box — no functional fixedness. The solution is straightforward. Here are the results for those who solved it:

Simple Candle Problem Mean Times :

  • WITHOUT a financial incentive : 4.99 min
  • WITH a financial incentive : 3.67 min

Nothing unexpected here. This is a classical incentivization effect anybody would intuitively expect.

Now, let “In-Box Candle Problem” refer to the original description where the tacks start off in the box.

In-Box Candle Problem Mean Times :

  • WITHOUT a financial incentive : 7.41 min
  • WITH a financial incentive : 11.08 min

How could this be? The financial incentive made people slower? It gets worse — the slowness increases with the incentive. The higher the monetary reward, the worse the performance! This result has been repeated many times since the original experiment.

Glucksberg and others have shown this result to be highly robust. Daniel Pink calls it a legally provable “fact.” How should we interpret the above results?

When your employees have to do something straightforward, like pressing a button or manning one stage in an assembly line, financial incentives work. It’s a small effect, but they do work. Simple jobs are like the simple candle problem.

However, if your people must do something that requires any creative or critical thinking, financial incentives hurt. The In-Box Candle Problem is the stereotypical problem that requires you to think “Out of the Box,” (you knew that was coming, didn’t you?). Whenever people must think out of the box, offering them a monetary carrot will keep them in that box.

A monetary reward will help your employees focus. That’s the point. When you’re focused you are less able to think laterally. You become dumber. This is not the kind of thing we want if we expect to solve the problems that face us in the 21st century.

All of this is found in a video (to which Morehead links), wherein Daniel Pink (an author and journalist whose actual knowledge of science and business appears to be close to zero) expounds the lessons of the Candle Problem. Pink displays his (no-doubt-profitable) conviction that the Candle Problem and related “science” reveals (a) the utter bankruptcy of capitalism and (b) the need to replace managers with touchy-feely gurus (like himself, I suppose). That Pink has worked for two of the country’s leading anti-capitalist airheads — Al Gore and Robert Reich — should tell you all that you need to know about Pink’s real agenda.

Here are my reasons for sneering at Pink and his ilk:

1. I have been there and done that. That is to say, as a manager, I lived through (and briefly bought into) the touchy-feely fads of the ’80s and ’90s. Think In Search of Excellence, The One Minute Manager, The Seven Habits of Highly Effective People, and so on. What did anyone really learn from those books and the lectures and workshops based on them? A perceptive person would have learned that it is easy to make up plausible stories about the elements of success, and having done so, it is possible to make a lot of money peddling those stories. But the stories are flawed because (a) they are based on exceptional cases; (b) they attribute success to qualitative assessments of behaviors that seem to be present in those exceptional cases; and (c) they do not properly account for the surrounding (and critical) circumstances that really led to success, among which are luck and rare combinations of personal qualities (e.g., high intelligence, perseverance, people-reading skills). In short, Pink and his predecessors are guilty of reductionism and the post hoc ergo propter hoc fallacy.

2. Also at work is an undue generalization about the implications of the Candle Problem. It may be true that workers will perform better — at certain kinds of tasks (very loosely specified) — if they are not distracted by incentives that are related to the performance of those specific tasks. But what does that have to do with incentives in general? Not much, because the Candle Problem is unlike any work situation that I can think of. Tasks requiring creativity are not performed under deadlines of a few minutes; tasks requiring creativity are (usually) assigned to persons who have demonstrated a creative flair, not to randomly picked subjects; most work, even in this day, involves the routine application of protocols and tools that were designed to produce a uniform result of acceptable quality; it is the design of protocols and tools that requires creativity, and that kind of work is not done under the kind of artificial constraints found in the Candle Problem.

3. The Candle Problem, with its anti-incentive “lesson”, is therefore inapplicable to the real world, where incentives play a crucial and positive role:

  • The profit incentive leads firms to invest resources in the development and/or production of things that consumers are willing to buy because those things satisfy wants at the right price.
  • Firms acquire resources to develop and produce things by bidding for those resources, that is, by offering monetary incentives to attract the resources required to make the things that consumers are willing to buy.
  • The incentives (compensation) offered to workers of various kinds (from scientists with doctorates to burger-flippers) are generally commensurate with the contributions made by those workers to the production of things of value to consumers, and to the value placed on those things by consumers.
  • Workers agree to the terms and conditions of employment (including compensation) before taking a job. The incentive for most workers is to keep a job by performing adequately over a sustained period — not by demonstrating creativity in a few minutes. Some workers (but not a large fraction of them) are striving for performance-based commissions, bonuses, and profit-sharing distributions. But those distributions are based on performance over a sustained period, during which the striving workers have plenty of time to think about how they can perform better.
  • Truly creative work is done, for the most part, by persons who are hired for such work on the basis of their credentials (education, prior employment, test results). Their compensation is based on their credentials, initially, and then on their performance over a sustained period. If they are creative, they have plenty of psychological space in which to exercise and demonstrate their creativity.
  • On-the-job creativity — the improvement of protocols and tools by workers using them — does not occur under conditions of the kind assumed in the Candle Problem. Rather, on-the-job creativity flows from actual work and insights about how to do the work better. It happens when it happens, and has nothing to do with artificial time constraints and monetary incentives to be “creative” within those constraints.
  • Pink’s essential pitch is that incentives can be replaced by offering jobs that yield autonomy (self-direction), mastery (the satisfaction of doing difficult things well), and purpose (the satisfaction of contributing to the accomplishment of something important). Well, good luck with that, but I (and millions of other consumers) want what we want, and if workers want to make a living they will just have to provide what we want, not what turns them on. Yes, there is a lot to be said for autonomy, mastery, and purpose, but there is also a lot to be said for getting a paycheck. And, contrary to Pink’s implication, getting a paycheck does not rule out autonomy, mastery, and purpose — where those happen to go with the job.

Pink and company’s “insights” about incentives and creativity are 180 degrees off-target. McDonald’s could use the Candle Problem to select creative burger-flippers who will perform well under tight deadlines because their compensation is unrelated to the creativity of their burger-flipping. McDonald’s customers should be glad that McDonald’s has taken creativity out of the picture by reducing burger-flipping to the routine application of protocols and tools.

In summary:

  • The Candle Problem is an interesting experiment, and probably valid with respect to the performance of specific tasks against tight deadlines. I think the results apply whether the stakes are money or any other kind of prize. The experiment illustrates the “choke” factor, and nothing more profound than that.
  • I question whether the experiment applies to the usual kind of incentive (e.g., a commission or bonus), where the “incentee” has ample time (months, years) for reflection and research that will enable him to improve his performance and attain a bigger commission or bonus (which usually isn’t an all-or-nothing arrangement).
  • There’s also the dissimilarity of the Candle Problem — which involves more-or-less randomly chosen subjects, working against an artificial deadline — and actual creative thinking — usually involving persons who are experts (even if the expertise is as mundane as ditch-digging), working against looser deadlines or none at all.

PARTISAN POLITICS IN THE GUISE OF PSEUDO-SCIENCE

There’s plenty of it to go around, but this one is a whopper. Peter Singer outdoes his usual tendentious self in this review of Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. In the course of the review, Singer writes:

Pinker argues that enhanced powers of reasoning give us the ability to detach ourselves from our immediate experience and from our personal or parochial perspective, and frame our ideas in more abstract, universal terms. This in turn leads to better moral commitments, including avoiding violence. It is just this kind of reasoning ability that has improved during the 20th century. He therefore suggests that the 20th century has seen a “moral Flynn effect, in which an accelerating escalator of reason carried us away from impulses that lead to violence” and that this lies behind the long peace, the new peace, and the rights revolution. Among the wide range of evidence he produces in support of that argument is the tidbit that since 1946, there has been a negative correlation between an American president’s I.Q. and the number of battle deaths in wars involving the United States.

Singer does not give the source of the IQ estimates on which Pinker relies, but the supposed correlation points to a discredited piece of historiometry by Dean Keith Simonton, who jumps through various hoops to assess the IQ of every president from Washington to Bush II — to one decimal place. That is a feat on a par with reconstructing the final thoughts of Abel, ere Cain slew him.

Before I explain the discrediting of Simonton’s obviously discreditable “research”, there is some fun to be had with the Pinker-Singer story of presidential IQ (Simonton-style) for battle deaths. First, of course, there is the convenient cutoff point of 1946. Why 1946? Well, it enables Pinker-Singer to avoid the inconvenient fact that the Civil War, World War I, and World War II happened while the presidency was held by three men who (in Simonton’s estimation) had high IQs: Lincoln, Wilson, and FDR.

The next several graphs depict best-fit relationships between Simonton’s estimates of presidential IQ and the U.S. battle deaths that occurred during each president’s term of office.* The presidents, in order of their appearance in the titles of the graphs, are Harry S Truman (HST), George W. Bush (GWB), Franklin Delano Roosevelt (FDR), (Thomas) Woodrow Wilson (WW), Abraham Lincoln (AL), and George Washington (GW). The number of battle deaths is rounded to the nearest thousand, so that the prevailing value is 0, even in the case of the Spanish-American War (385 U.S. combat deaths) and George H.W. Bush’s Gulf War (147 U.S. combat deaths).
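For readers who want to see the mechanics of this kind of curve-fitting, here is a minimal sketch in Python. The (“IQ”, deaths) pairs are placeholders of my own invention, not Simonton’s estimates or the infoplease.com counts; the point is only to show how a linear fit and a “tighter” polynomial fit are produced and compared.

```python
# A minimal sketch of the curve-fitting described above. The data points
# are hypothetical placeholders, NOT Simonton's estimates or actual
# battle-death counts.
import numpy as np

iq     = np.array([128.0, 139.0, 140.0, 152.0, 132.0, 125.0])  # hypothetical "IQ" values
deaths = np.array([34.0, 0.0, 0.0, 17.0, 0.0, 5.0])            # battle deaths (thousands)

linear    = np.polyfit(iq, deaths, 1)  # slope and intercept of a linear fit
quadratic = np.polyfit(iq, deaths, 2)  # a degree-2 polynomial ("tighter") fit

for label, coeffs in (("linear", linear), ("quadratic", quadratic)):
    fitted = np.polyval(coeffs, iq)
    ss_res = np.sum((deaths - fitted) ** 2)
    ss_tot = np.sum((deaths - deaths.mean()) ** 2)
    print(label, "R^2 =", round(1.0 - ss_res / ss_tot, 3))
```

A higher R² for the polynomial is guaranteed by construction (adding a term can only improve the in-sample fit), which is one reason a “tighter” polynomial fit proves nothing by itself.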

This is probably the relationship referred to by Singer, though Pinker may show a linear fit, rather than the tighter polynomial fit used here:

It looks bad for the low “IQ” presidents — if you believe Simonton’s estimates of IQ, which you shouldn’t, and if you believe that battle deaths are a bad thing per se, which they aren’t. I will come back to those points. For now, just suspend your well-justified disbelief.

If the relationship for the HST-GWB era were statistically meaningful, it would not change much with the introduction of additional statistics about “IQ” and battle deaths, but it does:




If you buy the brand of snake oil being peddled by Pinker-Singer, you must believe that the “dumbest” and “smartest” presidents are unlikely to get the U.S. into wars that result in a lot of battle deaths, whereas some (but, mysteriously, not all) of the “medium-smart” presidents (Lincoln, Wilson, FDR) are likely to do so.

In any event, if you believe in Pinker-Singer’s snake oil, you must accept the consistent “humpback” relationship that is depicted in the preceding four graphs, rather than the highly selective, one-shot negative relationship of the HST-GWB graph.

More seriously, the relationship in the HST-GWB graph is an evident ploy to discredit certain presidents (especially GWB, I suspect), which is why it covers only the period since WWII. Why not just say that you think GWB is a chimp-like, war-mongering moron and be done with it? Pseudo-statistics of the kind offered up by Pinker-Singer are nothing more than talking points for those already convinced that Bush=Hitler.

But as long as this silly game is in progress, let us continue it, with a new rule. Let us advance from one explanatory variable to two. The second explanatory variable that strongly suggests itself is political party. And because it is not good practice to omit relevant variables (a favorite gambit of liars), I estimated an equation that relates battle deaths to “IQ” and party for the 27 men who served as president from the first Republican presidency (Lincoln’s) through the presidency of GWB. The equation looks like this:

U.S. battle deaths (000) “owned” by a president =
–80.6 + 0.841 x “IQ” – 31.3 x party (where 0 = Dem, 1 = GOP)

In other words, battle deaths rise at the rate of 841 per IQ point (so much for Pinker-Singer). But there will be fewer deaths with a Republican in the White House (so much for Pinker-Singer’s implied swipe at GWB).
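Here, for the curious, is a minimal sketch of how such a two-variable least-squares estimate is produced. The five rows of data are placeholders (hypothetical “IQ”s, made-up party codes, and rounded death counts), not the actual 27-president data set.

```python
# Minimal sketch of a two-variable ordinary-least-squares fit of the form
#   deaths = b0 + b1 * iq + b2 * party   (party: 0 = Dem, 1 = GOP)
# The rows below are hypothetical placeholders, NOT the actual data set.
import numpy as np

iq     = np.array([150.0, 155.0, 140.0, 128.0, 139.0])  # hypothetical "IQ" estimates
party  = np.array([0.0,   0.0,   1.0,   0.0,   1.0])    # 0 = Democrat, 1 = Republican
deaths = np.array([53.0, 292.0, 140.0, 34.0,  0.0])     # battle deaths (thousands)

X = np.column_stack([np.ones_like(iq), iq, party])      # design matrix with intercept
b, *_ = np.linalg.lstsq(X, deaths, rcond=None)          # OLS coefficients
print("intercept %.1f, IQ coefficient %.3f, party coefficient %.1f" % tuple(b))
```

With only two regressors and a handful of wildly dispersed observations, the coefficients are as fragile as the one-variable fit they replace, which is the point of the exercise.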

All of this is nonsense, of course, for two reasons: Simonton’s estimates of IQ are hogwash, and the number of U.S. battle deaths is a meaningless number, taken by itself.

With regard to the hogwash, Simonton’s estimates of presidents’ IQs put every one of them — including the “dumbest,” U.S. Grant — in the top 2.3 percent of the population. And the mean of Simonton’s estimates puts the average president in the top 0.1 percent (one-tenth of one percent) of the population. That is literally incredible. Good evidence of the unreliability of Simonton’s estimates is found in an entry by Thomas C. Reeves at George Mason University’s History News Network. Reeves is the author of A Question of Character: A Life of John F. Kennedy, the negative reviews of which are evidently the work of JFK idolaters who refuse to be disillusioned by facts. Anyway, here is Reeves:

I’m a biographer of two of the top nine presidents on Simonton’s list and am highly familiar with the histories of the other seven. In my judgment, this study has little if any value. Let’s take JFK and Chester A. Arthur as examples.

Kennedy was actually given an IQ test before entering Choate. His score was 119…. There is no evidence to support the claim that his score should have been more than 40 points higher [i.e., the IQ of 160 attributed to Kennedy by Simonton]. As I described in detail in A Question Of Character [link added], Kennedy’s academic achievements were modest and respectable, his published writing and speeches were largely done by others (no study of Kennedy is worthwhile that downplays the role of Ted Sorensen)….

Chester Alan Arthur was largely unknown before my Gentleman Boss was published in 1975. The discovery of many valuable primary sources gave us a clear look at the president for the first time. Among the most interesting facts that emerged involved his service during the Civil War, his direct involvement in the spoils system, and the bizarre way in which he was elevated to the GOP presidential ticket in 1880. His concealed and fatal illness while in the White House also came to light.

While Arthur was a college graduate, and was widely considered to be a gentleman, there is no evidence whatsoever to suggest that his IQ was extraordinary. That a psychologist can rank his intelligence 2.3 points ahead of Lincoln’s suggests access to a treasure of primary sources from and about Arthur that does not exist.

This historian thinks it impossible to assign IQ numbers to historical figures. If there is sufficient evidence (as there usually is in the case of American presidents), we can call people from the past extremely intelligent. Adams, Wilson, TR, Jefferson, and Lincoln were clearly well above average intellectually. But let us not pretend that we can rank them by tenths of a percentage point or declare that a man in one era stands well above another from a different time and place.

My educated guess is that this recent study was designed in part to denigrate the intelligence of the current occupant of the White House….

That is an excellent guess.

The meaninglessness of battle deaths as a measure of anything — but battle deaths — should be evident. But in case it is not evident, here goes:

  • Wars are sometimes necessary, sometimes not. (I give my views about the wisdom of America’s various wars at this post.) Necessary or not, presidents usually act in accordance with popular and elite opinion about the desirability of a particular war. Imagine, for example, the reaction if FDR had not gone to Congress on December 8, 1941, to ask for a declaration of war against Japan, or if GWB had not sought the approval of Congress for action in Afghanistan.
  • Presidents may have a lot to do with the decision to enter a war, but they have little to do with the external forces that help to shape that decision. GHWB, for example, had nothing to do with Saddam’s decision to invade Kuwait and thereby threaten vital U.S. interests in the Middle East. GWB, to take another example, was not a party to the choices of earlier presidents (GHWB and Clinton) that enabled Saddam to stay in power and encouraged Osama bin Laden to believe that America could be brought to its knees by a catastrophic attack.
  • The number of battle deaths in a war depends on many things outside the control of a particular president; for example, the size and capabilities of enemy forces, the size and capabilities of U.S. forces (which have a lot to do with the decisions of earlier administrations and Congresses), and the scope and scale of a war (again, largely dependent on the enemy).
  • Battle deaths represent personal tragedies, but — in and of themselves — are not a measure of a president’s wisdom or acumen. Whether the deaths were in vain is a separate issue that depends on the aforementioned considerations. To use battle deaths as a single, negative measure of a president’s ability is rank cynicism — the rankness of which is revealed in Pinker’s decision to ignore Lincoln and FDR and their “good” but deadly wars.

To put the last point another way, if the number of battle deaths is a bad thing, Lincoln and FDR should be rotting in hell for the wars that brought an end to slavery and Hitler.
__________
* The numbers of U.S. battle deaths, by war, are available at infoplease.com, “America’s Wars: U.S. Casualties and Veterans”. The deaths are “assigned” to presidents as follows (numbers in parentheses indicate thousands of deaths):

All of the deaths (2) in the War of 1812 occurred on Madison’s watch.

All of the deaths (2) in the Mexican-American War occurred on Polk’s watch.

I count only Union battle deaths (140) during the Civil War; all are “Lincoln’s.” Let the Confederate dead be on the head of Jefferson Davis. This is a gift, of sorts, to Pinker-Singer because if Confederate dead were counted as Lincoln’s, with his high “IQ,” it would make Pinker-Singer’s hypothesis even more ludicrous than it is.

WW is the sole “owner” of WWI battle deaths (53).

Some of the U.S. battle deaths in WWII (292) occurred while HST was president, but Truman was merely presiding over the final months of a war that was almost won when FDR died. Truman’s main role was to hasten the end of the war in the Pacific by electing to drop the A-bombs on Hiroshima and Nagasaki. So FDR gets “credit” for all WWII battle deaths.

The Korean War did not end until after Eisenhower succeeded Truman, but it was “Truman’s war,” so he gets “credit” for all Korean War battle deaths (34). This is another “gift” to Pinker-Singer because Ike’s “IQ” is higher than Truman’s.

Vietnam was “LBJ’s war,” but I’m sure that Singer would not want Nixon to go without “credit” for the battle deaths that occurred during his administration. Moreover, LBJ had effectively lost the Vietnam war through his gradualism, but Nixon chose nevertheless to prolong the agony. So I have shared the “credit” for Vietnam War battle deaths between LBJ (deaths in 1965-68: 29) and RMN (deaths in 1969-73: 17). To do that, I apportioned total Vietnam War battle deaths, as given by infoplease.com, according to the total number of U.S. deaths in each year of the war, 1965-1973.
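The apportionment is simple proportional arithmetic. Here is a sketch; the yearly figures are placeholders standing in for infoplease.com’s table, which I have not reproduced.

```python
# Sketch of the apportionment rule described above: split total Vietnam War
# battle deaths between LBJ (1965-68) and RMN (1969-73) in proportion to
# total U.S. deaths in each year. All figures below are placeholders.
total_battle_deaths = 46.0  # thousands (illustrative, not the actual total)
deaths_by_year = {1965: 2.0, 1966: 6.0, 1967: 11.0, 1968: 17.0,            # LBJ years
                  1969: 12.0, 1970: 6.0, 1971: 2.4, 1972: 0.6, 1973: 0.3}  # RMN years

lbj_raw = sum(v for y, v in deaths_by_year.items() if y <= 1968)
rmn_raw = sum(v for y, v in deaths_by_year.items() if y >= 1969)
scale = total_battle_deaths / (lbj_raw + rmn_raw)
print("LBJ: %.0f  RMN: %.0f  (thousands)" % (lbj_raw * scale, rmn_raw * scale))
```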

The wars in Afghanistan and Iraq are “GWB’s wars,” even though Obama has continued them. So I have “credited” GWB with all the battle deaths in those wars, as of May 27, 2011 (5).

The relative paucity of U.S. combat deaths in other post-WWII actions (e.g., Lebanon, Somalia, Persian Gulf) is attested to by “Post-Vietnam Combat Casualties”, at infoplease.com.

A THIRD APPEARANCE BY PINKER

Steven Pinker, whose ignominious outpourings I have addressed twice here, deserves a third strike (which he shall duly be awarded). Pinker’s The Better Angels of Our Nature is cited gleefully by leftists and cockeyed optimists as evidence that human beings, on the whole, are becoming kinder and gentler because of:

  • The Leviathan – The rise of the modern nation-state and judiciary “with a monopoly on the legitimate use of force,” which “can defuse the [individual] temptation of exploitative attack, inhibit the impulse for revenge, and circumvent…self-serving biases.”
  • Commerce – The rise of “technological progress [allowing] the exchange of goods and services over longer distances and larger groups of trading partners,” so that “other people become more valuable alive than dead” and “are less likely to become targets of demonization and dehumanization.”
  • Feminization – Increasing respect for “the interests and values of women.”
  • Cosmopolitanism – The rise of forces such as literacy, mobility, and mass media, which “can prompt people to take the perspectives of people unlike themselves and to expand their circle of sympathy to embrace them.”
  • The Escalator of Reason – An “intensifying application of knowledge and rationality to human affairs,” which “can force people to recognize the futility of cycles of violence, to ramp down the privileging of their own interests over others’, and to reframe violence as a problem to be solved rather than a contest to be won.”

I can tell you that Pinker’s book is hogwash because two very bright leftists — Peter Singer and Will Wilkinson — have strongly and wrongly endorsed some of its key findings. I dispatched Singer earlier. As for Wilkinson, he praises statistics adduced by Pinker that show a decline in the use of capital punishment:

In the face of such a decisive trend in moral culture, we can say a couple different things. We can say that this is just change and says nothing in particular about what is really right or wrong, good or bad. Or we can say this is evidence of moral progress, that we have actually become better. I prefer the latter interpretation for basically the same reasons most of us see the abolition of slavery and the trend toward greater equality between races and sexes as progress and not mere morally indifferent change. We can talk about the nature of moral progress later. It’s tricky. For now, I want you to entertain the possibility that convergence toward the idea that execution is wrong counts as evidence that it is wrong.

I would count convergence toward the idea that execution is wrong as evidence that it is wrong, if that idea were (a) increasingly held by individuals who (b) had arrived at their “enlightenment” uninfluenced by operatives of the state (legislatures and judges), who take it upon themselves to flout popular support of the death penalty. What we have, in the case of the death penalty, is moral regress, not moral progress.

Moral regress because the abandonment of the death penalty puts innocent lives at risk. Capital punishment sends a message, and the message is effective when it is delivered: it deters homicide. And even if it didn’t, it would at least remove killers from our midst, permanently. By what standard of morality can one claim that it is better to spare killers than to protect innocents? For that matter, by what standard of morality is it better to kill innocents in the womb than to spare killers? Proponents of abortion (like Singer and Wilkinson) — who by and large oppose capital punishment — are completely lacking in moral authority.

Returning to Pinker’s thesis that violence has declined, I quote a review at Foseti:

Pinker’s basic problem is that he essentially defines “violence” in such a way that his thesis that violence is declining becomes self-fulfilling. “Violence” to Pinker is fundamentally synonymous with behaviors of older civilizations. On the other hand, modern practices are defined to be less violent than older practices.

A while back, I linked to a story about a guy in my neighborhood who’s been arrested over 60 times for breaking into cars. A couple hundred years ago, this guy would have been killed for this sort of vandalism after he got caught the first time. Now, we feed him and shelter him for a while and then we let him back out to do this again. Pinker defines the new practice as a decline in violence – we don’t kill the guy anymore! Someone from a couple hundred years ago would be appalled that we let the guy continue destroying other peoples’ property without consequence. In the mind of those long dead, “violence” has in fact increased. Instead of a decline in violence, this practice seems to me like a decline in justice – nothing more or less.

Here’s another example: Pinker uses creative definitions to show that the conflicts of the 20th Century pale in comparison to previous conflicts. For example, all the Mongol Conquests are considered one event, even though they cover 125 years. If you lump all these various conquests together and you split up WWI, WWII, Mao’s takeover in China, the Bolshevik takeover of Russia, the Russian Civil War, and the Chinese Civil War (yes, he actually considers this a separate event from Mao), you unsurprisingly discover that the events of the 20th Century weren’t all that violent compared to events in the past! Pinker’s third most violent event is the “Mideast Slave Trade” which he says took place between the 7th and 19th Centuries. Seriously. By this standard, all the conflicts of the 20th Century are related. Is the Russian Revolution or the rise of Mao possible without WWII? Is WWII possible without WWI? By this consistent standard, the 20th Century wars of Communism would have seen the worst conflict by far. Of course, if you fiddle with the numbers, you can make any point you like.

There’s much more to the review, including some telling criticisms of Pinker’s five reasons for the (purported) decline in violence. That the reviewer somehow still wants to believe in the rightness of Pinker’s thesis says more about the reviewer’s optimism than it does about the validity of Pinker’s thesis.

That thesis is fundamentally flawed, as Robert Epstein points out in a review at Scientific American:

[T]he wealth of data [Pinker] presents cannot be ignored—unless, that is, you take the same liberties as he sometimes does in his book. In two lengthy chapters, Pinker describes psychological processes that make us either violent or peaceful, respectively. Our dark side is driven by an evolution-based propensity toward predation and dominance. On the angelic side, we have, or at least can learn, some degree of self-control, which allows us to inhibit dark tendencies.

There is, however, another psychological process—confirmation bias—that Pinker sometimes succumbs to in his book. People pay more attention to facts that match their beliefs than those that undermine them. Pinker wants peace, and he also believes in his hypothesis; it is no surprise that he focuses more on facts that support his views than on those that do not. The SIPRI arms data are problematic, and a reader can also cherry-pick facts from Pinker’s own book that are inconsistent with his position. He notes, for example, that during the 20th century homicide rates failed to decline in both the U.S. and England. He also describes in graphic and disturbing detail the savage way in which chimpanzees—our closest genetic relatives in the animal world—torture and kill their own kind.

Of greater concern is the assumption on which Pinker’s entire case rests: that we look at relative numbers instead of absolute numbers in assessing human violence. But why should we be content with only a relative decrease? By this logic, when we reach a world population of nine billion in 2050, Pinker will conceivably be satisfied if a mere two million people are killed in war that year.

The biggest problem with the book, though, is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Pinker’s belief that violence is on the decline reminds me of “it’s different this time”, a phrase that was on the lips of hopeful stock-pushers, stock-buyers, and pundits during the stock-market bubble of the late 1990s. That bubble ended, of course, in the spectacular crash of 2000.

Predictions about the future of humankind are better left in the hands of writers who see human nature whole, and who are not out to prove that it can be shaped or contained by the kinds of “liberal” institutions that Pinker so obviously favors.

Consider this, from an article by Robert J. Samuelson at The Washington Post:

[T]he Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar. Amid the controversy over leaks from the National Security Agency, this looms as an even bigger downside.

By cyberwarfare, I mean the capacity of groups — whether nations or not — to attack, disrupt and possibly destroy the institutions and networks that underpin everyday life. These would be power grids, pipelines, communication and financial systems, business record-keeping and supply-chain operations, railroads and airlines, databases of all types (from hospitals to government agencies). The list runs on. So much depends on the Internet that its vulnerability to sabotage invites doomsday visions of the breakdown of order and trust.

In a report, the Defense Science Board, an advisory group to the Pentagon, acknowledged “staggering losses” of information involving weapons design and combat methods to hackers (not identified, but probably Chinese). In the future, hackers might disarm military units. “U.S. guns, missiles and bombs may not fire, or may be directed against our own troops,” the report said. It also painted a specter of social chaos from a full-scale cyberassault. There would be “no electricity, money, communications, TV, radio or fuel (electrically pumped). In a short time, food and medicine distribution systems would be ineffective.”

But Pinker wouldn’t count the resulting chaos as violence, as long as human beings were merely starving and dying of various diseases. That violence would ensue, of course, is another story, which is told by John Gray in The Silence of Animals: On Progress and Other Modern Myths. Gray’s book — published 18 months after Better Angels — could be read as a refutation of Pinker’s book, though Gray doesn’t mention Pinker or his book.

The gist of Gray’s argument is faithfully recounted in a review of Gray’s book by Robert W. Merry at The National Interest:

The noted British historian J. B. Bury (1861–1927) … wrote, “This doctrine of the possibility of indefinitely moulding the characters of men by laws and institutions . . . laid a foundation on which the theory of the perfectibility of humanity could be raised. It marked, therefore, an important stage in the development of the doctrine of Progress.”

We must pause here over this doctrine of progress. It may be the most powerful idea ever conceived in Western thought—emphasizing Western thought because the idea has had little resonance in other cultures or civilizations. It is the thesis that mankind has advanced slowly but inexorably over the centuries from a state of cultural backwardness, blindness and folly to ever more elevated stages of enlightenment and civilization—and that this human progression will continue indefinitely into the future…. The U.S. historian Charles A. Beard once wrote that the emergence of the progress idea constituted “a discovery as important as the human mind has ever made, with implications for mankind that almost transcend imagination.” And Bury, who wrote a book on the subject, called it “the great transforming conception, which enables history to define her scope.”

Gray rejects it utterly. In doing so, he rejects all of modern liberal humanism. “The evidence of science and history,” he writes, “is that humans are only ever partly and intermittently rational, but for modern humanists the solution is simple: human beings must in future be more reasonable. These enthusiasts for reason have not noticed that the idea that humans may one day be more rational requires a greater leap of faith than anything in religion.” In an earlier work, Straw Dogs: Thoughts on Humans and Other Animals, he was more blunt: “Outside of science, progress is simply a myth.”

… Gray has produced more than twenty books demonstrating an expansive intellectual range, a penchant for controversy, acuity of analysis and a certain political clairvoyance.

He rejected, for example, Francis Fukuyama’s heralded “End of History” thesis—that Western liberal democracy represents the final form of human governance—when it appeared in this magazine in 1989. History, it turned out, lingered long enough to prove Gray right and Fukuyama wrong….

Though for decades his reputation was confined largely to intellectual circles, Gray’s public profile rose significantly with the 2002 publication of Straw Dogs, which sold impressively and brought him much wider acclaim than he had known before. The book was a concerted and extensive assault on the idea of progress and its philosophical offspring, secular humanism. The Silence of Animals is in many ways a sequel, plowing much the same philosophical ground but expanding the cultivation into contiguous territory mostly related to how mankind—and individual humans—might successfully grapple with the loss of both metaphysical religion of yesteryear and today’s secular humanism. The fundamentals of Gray’s critique of progress are firmly established in both books and can be enumerated in summary.

First, the idea of progress is merely a secular religion, and not a particularly meaningful one at that. “Today,” writes Gray in Straw Dogs, “liberal humanism has the pervasive power that was once possessed by revealed religion. Humanists like to think they have a rational view of the world; but their core belief in progress is a superstition, further from the truth about the human animal than any of the world’s religions.”

Second, the underlying problem with this humanist impulse is that it is based upon an entirely false view of human nature—which, contrary to the humanist insistence that it is malleable, is immutable and impervious to environmental forces. Indeed, it is the only constant in politics and history. Of course, progress in scientific inquiry and in resulting human comfort is a fact of life, worth recognition and applause. But it does not change the nature of man, any more than it changes the nature of dogs or birds. “Technical progress,” writes Gray, again in Straw Dogs, “leaves only one problem unsolved: the frailty of human nature. Unfortunately that problem is insoluble.”

That’s because, third, the underlying nature of humans is bred into the species, just as the traits of all other animals are. The most basic trait is the instinct for survival, which is placed on hold when humans are able to live under a veneer of civilization. But it is never far from the surface. In The Silence of Animals, Gray discusses the writings of Curzio Malaparte, a man of letters and action who found himself in Naples in 1944, shortly after the liberation. There he witnessed a struggle for life that was gruesome and searing. “It is a humiliating, horrible thing, a shameful necessity, a fight for life,” wrote Malaparte. “Only for life. Only to save one’s skin.” Gray elaborates:

Observing the struggle for life in the city, Malaparte watched as civilization gave way. The people the inhabitants had imagined themselves to be—shaped, however imperfectly, by ideas of right and wrong—disappeared. What were left were hungry animals, ready to do anything to go on living; but not animals of the kind that innocently kill and die in forests and jungles. Lacking a self-image of the sort humans cherish, other animals are content to be what they are. For human beings the struggle for survival is a struggle against themselves.

When civilization is stripped away, the raw animal emerges. “Darwin showed that humans are like other animals,” writes Gray in Straw Dogs, expressing in this instance only a partial truth. Humans are different in a crucial respect, captured by Gray himself when he notes that Homo sapiens inevitably struggle with themselves when forced to fight for survival. No other species does that, just as no other species has such a range of spirit, from nobility to degradation, or such a need to ponder the moral implications as it fluctuates from one to the other. But, whatever human nature is—with all of its capacity for folly, capriciousness and evil as well as virtue, magnanimity and high-mindedness—it is embedded in the species through evolution and not subject to manipulation by man-made institutions.

Fourth, the power of the progress idea stems in part from the fact that it derives from a fundamental Christian doctrine—the idea of providence, of redemption….

“By creating the expectation of a radical alteration in human affairs,” writes Gray, “Christianity . . . founded the modern world.” But the modern world retained a powerful philosophical outlook from the classical world—the Socratic faith in reason, the idea that truth will make us free; or, as Gray puts it, the “myth that human beings can use their minds to lift themselves out of the natural world.” Thus did a fundamental change emerge in what was hoped of the future. And, as the power of Christian faith ebbed, along with its idea of providence, the idea of progress, tied to the Socratic myth, emerged to fill the gap. “Many transmutations were needed before the Christian story could renew itself as the myth of progress,” Gray explains. “But from being a succession of cycles like the seasons, history came to be seen as a story of redemption and salvation, and in modern times salvation became identified with the increase of knowledge and power.”

Thus, it isn’t surprising that today’s Western man should cling so tenaciously to his faith in progress as a secular version of redemption. As Gray writes, “Among contemporary atheists, disbelief in progress is a type of blasphemy. Pointing to the flaws of the human animal has become an act of sacrilege.” In one of his more brutal passages, he adds:

Humanists believe that humanity improves along with the growth of knowledge, but the belief that the increase of knowledge goes with advances in civilization is an act of faith. They see the realization of human potential as the goal of history, when rational inquiry shows history to have no goal. They exalt nature, while insisting that humankind—an accident of nature—can overcome the natural limits that shape the lives of other animals. Plainly absurd, this nonsense gives meaning to the lives of people who believe they have left all myths behind.

In The Silence of Animals, Gray explores all this through the works of various writers and thinkers. In the process, he employs history and literature to puncture the conceits of those who cling to the progress idea and the humanist view of human nature. Those conceits, it turns out, are easily punctured when subjected to Gray’s withering scrutiny….

And yet the myth of progress is so powerful in part because it gives meaning to modern Westerners struggling, in an irreligious era, to place themselves in a philosophical framework larger than just themselves….

Much of the human folly catalogued by Gray in The Silence of Animals makes a mockery of the earnest idealism of those who later shaped and molded and proselytized humanist thinking into today’s predominant Western civic philosophy.

RACE AS A SOCIAL CONSTRUCT

David Reich‘s hot new book, Who We Are and How We Got Here, is causing a stir in genetic-research circles. Reich, who takes great pains to assure everyone that he isn’t a racist, and who deplores racism, is nevertheless candid about race:

I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”

Groundbreaking advances in DNA sequencing technology have been made over the last two decades. These advances enable us to measure with exquisite accuracy what fraction of an individual’s genetic ancestry traces back to, say, West Africa 500 years ago — before the mixing in the Americas of the West African and European gene pools that were almost completely isolated for the last 70,000 years. With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real….

Self-identified African-Americans turn out to derive, on average, about 80 percent of their genetic ancestry from enslaved Africans brought to America between the 16th and 19th centuries. My colleagues and I searched, in 1,597 African-American men with prostate cancer, for locations in the genome where the fraction of genes contributed by West African ancestors was larger than it was elsewhere in the genome. In 2006, we found exactly what we were looking for: a location in the genome with about 2.8 percent more African ancestry than the average.

When we looked in more detail, we found that this region contained at least seven independent risk factors for prostate cancer, all more common in West Africans. Our findings could fully account for the higher rate of prostate cancer in African-Americans than in European-Americans. We could conclude this because African-Americans who happen to have entirely European ancestry in this small section of their genomes had about the same risk for prostate cancer as random Europeans.

Did this research rely on terms like “African-American” and “European-American” that are socially constructed, and did it label segments of the genome as being probably “West African” or “European” in origin? Yes. Did this research identify real risk factors for disease that differ in frequency across those populations, leading to discoveries with the potential to improve health and save lives? Yes.

While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another.

But whether we like it or not, that line has already been crossed. A recent study led by the economist Daniel Benjamin compiled information on the number of years of education from more than 400,000 people, almost all of whom were of European ancestry. After controlling for differences in socioeconomic background, he and his colleagues identified 74 genetic variations that are over-represented in genes known to be important in neurological development, each of which is incontrovertibly more common in Europeans with more years of education than in Europeans with fewer years of education.

It is not yet clear how these genetic variations operate. A follow-up study of Icelanders led by the geneticist Augustine Kong showed that these genetic variations also nudge people who carry them to delay having children. So these variations may be explaining longer times at school by affecting a behavior that has nothing to do with intelligence.

This study has been joined by others finding genetic predictors of behavior. One of these, led by the geneticist Danielle Posthuma, studied more than 70,000 people and found genetic variations in more than 20 genes that were predictive of performance on intelligence tests.

Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly. And since all traits influenced by genetics are expected to differ across populations (because the frequencies of genetic variations are rarely exactly the same across populations), the genetic influences on behavior and cognition will differ across populations, too.

You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work. Indeed, the study led by Dr. Kong showed that in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century….

So how should we prepare for the likelihood that in the coming years, genetic studies will show that many traits are influenced by genetic variations, and that these traits will differ on average across human populations? It will be impossible — indeed, anti-scientific, foolish and absurd — to deny those differences. [“How Genetics Is Changing Our Understanding of ‘Race’“, The New York Times, March 23, 2018]

Reich engages in a lot of non-scientific wishful thinking about racial differences and how they should be treated by “society” — none of which is in his purview as a scientist. Reich’s forays into psychobabble have been addressed at length by Steve Sailer (here and here) and Gregory Cochran (here, here, here, here, and here). Suffice it to say that Reich is trying in vain to minimize the scientific fact of racial differences that show up crucially in intelligence and rates of violent crime.

The lesson here is that it’s all right to show that race isn’t a social construct as long as you proclaim that it is a social construct. This is known as talking out of both sides of one’s mouth — another manifestation of balderdash.

DIVERSITY IS GOOD, EXCEPT WHEN IT ISN’T

I now invoke Robert Putnam, a political scientist known mainly for his book Bowling Alone: The Collapse and Revival of American Community (2000), in which he

makes a distinction between two kinds of social capital: bonding capital and bridging capital. Bonding occurs when you are socializing with people who are like you: same age, same race, same religion, and so on. But in order to create peaceful societies in a diverse multi-ethnic country, one needs to have a second kind of social capital: bridging. Bridging is what you do when you make friends with people who are not like you, like supporters of another football team. Putnam argues that those two kinds of social capital, bonding and bridging, do strengthen each other. Consequently, with the decline of the bonding capital mentioned above inevitably comes the decline of the bridging capital leading to greater ethnic tensions.

In later work on diversity and trust within communities, Putnam concludes that

other things being equal, more diversity in a community is associated with less trust both between and within ethnic groups….

Even when controlling for income inequality and crime rates, two factors which conflict theory states should be the prime causal factors in declining inter-ethnic group trust, more diversity is still associated with less communal trust.

Lowered trust in areas with high diversity is also associated with:

  • Lower confidence in local government, local leaders and the local news media.
  • Lower political efficacy – that is, confidence in one’s own influence.
  • Lower frequency of registering to vote, but more interest and knowledge about politics and more participation in protest marches and social reform groups.
  • Higher political advocacy, but lower expectations that it will bring about a desirable result.
  • Less expectation that others will cooperate to solve dilemmas of collective action (e.g., voluntary conservation to ease a water or energy shortage).
  • Less likelihood of working on a community project.
  • Less likelihood of giving to charity or volunteering.
  • Fewer close friends and confidants.
  • Less happiness and lower perceived quality of life.
  • More time spent watching television and more agreement that “television is my most important form of entertainment”.

It’s not as if Putnam is a social conservative who is eager to impart such news. To the contrary, as Michael Jonas writes in “The Downside of Diversity“, Putnam’s

findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis … , he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

“Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk,” wrote conservative commentator Ilana Mercer….

After releasing the initial results in 2001, Putnam says he spent time “kicking the tires really hard” to be sure the study had it right. Putnam realized, for instance, that more diverse communities tended to be larger, have greater income ranges, higher crime rates, and more mobility among their residents — all factors that could depress social capital independent of any impact ethnic diversity might have.

“People would say, ‘I bet you forgot about X,’” Putnam says of the string of suggestions from colleagues. “There were 20 or 30 X’s.”

But even after statistically taking them all into account, the connection remained strong: Higher diversity meant lower social capital. In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.”

“People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes….

In a recent study, [Harvard economist Edward] Glaeser and colleague Alberto Alesina demonstrated that roughly half the difference in social welfare spending between the US and Europe — Europe spends far more — can be attributed to the greater ethnic diversity of the US population. Glaeser says lower national social welfare spending in the US is a “macro” version of the decreased civic engagement Putnam found in more diverse communities within the country.

Economists Matthew Kahn of UCLA and Dora Costa of MIT reviewed 15 recent studies in a 2003 paper, all of which linked diversity with lower levels of social capital. Greater ethnic diversity was linked, for example, to lower school funding, census response rates, and trust in others. Kahn and Costa’s own research documented higher desertion rates in the Civil War among Union Army soldiers serving in companies whose soldiers varied more by age, occupation, and birthplace.

Birds of different feathers may sometimes flock together, but they are also less likely to look out for one another. “Everyone is a little self-conscious that this is not politically correct stuff,” says Kahn….

In his paper, Putnam cites the work done by Page and others, and uses it to help frame his conclusion that increasing diversity in America is not only inevitable, but ultimately valuable and enriching. As for smoothing over the divisions that hinder civic engagement, Putnam argues that Americans can help that process along through targeted efforts. He suggests expanding support for English-language instruction and investing in community centers and other places that allow for “meaningful interaction across ethnic lines.”

Some critics have found his prescriptions underwhelming. And in offering ideas for mitigating his findings, Putnam has drawn scorn for stepping out of the role of dispassionate researcher. “You’re just supposed to tell your peers what you found,” says John Leo, senior fellow at the Manhattan Institute, a conservative think tank. [Michael Jonas, “The downside of diversity,” The Boston Globe (boston.com), August 5, 2007]

What is it about academics like Reich and Putnam that makes them unable to face the very facts that they have uncovered? The magic word is “academics”. They are denizens of a milieu in which the facts of life about race, guns, sex, and many other things are habitually suppressed in favor of “hope and change”, and the facts be damned.

ONE MORE BIT OF RACE-RELATED BALDERDASH

I was unaware of the Implicit Association Test (IAT) until a few years ago, when I took a test at YourMorals.Org that purported to measure my implicit racial preferences. The IAT has since been exposed as junk, as John J. Ray explains:

Psychologists are well aware that people often do not say what they really think.  It is therefore something of a holy grail among them to find ways that WILL detect what people really think. A very popular example of that is the Implicit Associations test (IAT).  It supposedly measures racist thoughts whether you are aware of them or not.  It sometimes shows people who think they are anti-racist to be in fact secretly racist.

I dismissed it as a heap of junk long ago (here and here) but it has remained very popular and is widely accepted as revealing truth.  I am therefore pleased that a very long and thorough article has just appeared which comes to the same conclusion that I did.

The article in question (which has the same title as Ray’s post) is by Jesse Singal. It appeared at Science of Us on January 11, 2017. Here are some excerpts:

Perhaps no new concept from the world of academic psychology has taken hold of the public imagination more quickly and profoundly in the 21st century than implicit bias — that is, forms of bias which operate beyond the conscious awareness of individuals. That’s in large part due to the blockbuster success of the so-called implicit association test, which purports to offer a quick, easy way to measure how implicitly biased individual people are….

Since the IAT was first introduced almost 20 years ago, its architects, as well as the countless researchers and commentators who have enthusiastically embraced it, have offered it as a way to reveal to test-takers what amounts to a deep, dark secret about who they are: They may not feel racist, but in fact, the test shows that in a variety of intergroup settings, they will act racist….

[The] co-creators are Mahzarin Banaji, currently the chair of Harvard University’s psychology department, and Anthony Greenwald, a highly regarded social psychology researcher at the University of Washington. The duo introduced the test to the world at a 1998 press conference in Seattle — the accompanying press release noted that they had collected data suggesting that 90–95 percent of Americans harbored the “roots of unconscious prejudice.” The public immediately took notice: Since then, the IAT has been mostly treated as a revolutionary, revelatory piece of technology, garnering overwhelmingly positive media coverage….

Maybe the biggest driver of the IAT’s popularity and visibility, though, is the fact that anyone can take the test on the Project Implicit website, which launched shortly after the test was unveiled and which is hosted by Harvard University. The test’s architects reported that, by October 2015, more than 17 million individual test sessions had been completed on the website. As will become clear, learning one’s IAT results is, for many people, a very big deal that changes how they view themselves and their place in the world.

Given all this excitement, it might feel safe to assume that the IAT really does measure people’s propensity to commit real-world acts of implicit bias against marginalized groups, and that it does so in a dependable, clearly understood way….

Unfortunately, none of that is true. A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior — even the test’s creators have now admitted as such.

How does the IAT work? Singal summarizes:

You sit down at a computer where you are shown a series of images and/or words. First, you’re instructed to hit ‘i’ when you see a “good” term like pleasant, or to hit ‘e’ when you see a “bad” one like tragedy. Then, hit ‘i’ when you see a black face, and hit ‘e’ when you see a white one. Easy enough, but soon things get slightly more complex: Hit ‘i’ when you see a good word or an image of a black person, and ‘e’ when you see a bad word or an image of a white person. Then the categories flip to black/bad and white/good. As you peck away at the keyboard, the computer measures your reaction times, which it plugs into an algorithm. That algorithm, in turn, generates your score.

If you were quicker to associate good words with white faces than good words with black faces, and/or slower to associate bad words with white faces than bad words with black ones, then the test will report that you have a slight, moderate, or strong “preference for white faces over black faces,” or some similar language. You might also find you have an anti-white bias, though that is significantly less common. By the normal scoring conventions of the test, positive scores indicate bias against the out-group, while negative ones indicate bias against the in-group.

The rough idea is that, as humans, we have an easier time connecting concepts that are already tightly linked in our brains, and a tougher time connecting concepts that aren’t. The longer it takes to connect “black” and “good” relative to “white” and “good,” the thinking goes, the more your unconscious biases favor white people over black people.
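The scoring logic that Singal describes can be illustrated with a toy computation. This is emphatically not Project Implicit’s actual scoring algorithm; it is only a sketch of the basic idea that a reaction-time gap, scaled by its spread, becomes a “bias” score.

```python
# Toy illustration of IAT-style scoring, NOT Project Implicit's algorithm.
# Slower responses on the "incompatible" pairing (e.g., black+good) than on
# the "compatible" pairing (e.g., white+good) yield a positive score.
from statistics import mean, stdev

compatible   = [612, 587, 650, 598, 571]   # reaction times in ms (made up)
incompatible = [701, 689, 745, 662, 698]   # reaction times in ms (made up)

pooled_sd = stdev(compatible + incompatible)
score = (mean(incompatible) - mean(compatible)) / pooled_sd
print("toy IAT score: %.2f" % score)  # positive => "preference" for the in-group
```

Note what the score actually measures: a difference of a few hundred milliseconds, which is why, as I suggest below, fast reflexes alone can swamp whatever the test purports to detect.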

Singal continues (at great length) to pile up the mountain of evidence against IAT, and to caution against reading anything into the results it yields.

Having become aware of the debunking of the IAT, I went to the website of Project Implicit. When I reached this page, I was surprised to learn that I could not only find out whether I’m a closet racist but also whether I prefer dark or light skin tones, Asians or non-Asians, Trump or a previous president, and several other things or their opposites. I chose to discover my true feelings about Trump vs. a previous president, and was faced with a choice between Trump and Clinton.

What was the result of my several minutes of tapping “e” and “i” on the keyboard of my PC? This:

Your data suggest a moderate automatic preference for Bill Clinton over Donald Trump.

Balderdash! Though Trump is obviously not of better character than Clinton, he’s obviously not of worse character. And insofar as policy goes, the difference between Trump and Clinton is somewhat like the difference between a non-silent Calvin Coolidge and an FDR without the patriotism. (With apologies to the memory of Coolidge, my favorite president.)

What did I learn from the IAT? I must have very good reflexes. A person who processes information rapidly and then almost instantly translates it into a physical response should be able to “beat” the IAT. And that’s probably what I did in the Trump vs. Clinton test.

Perhaps the IAT for racism could be used to screen candidates for fighter-pilot training. Only “non-racists” would be admitted. Anyone who isn’t quick enough to avoid the “racist” label isn’t quick enough to win a dogfight.

OTHER “LIBERAL” DELUSIONS

There are plenty of them under the heading of balderdash. It’s also known as magical thinking, in which “ought” becomes “is” and the forces of nature and human nature can be held in abeyance by edict. The following examples revisit some ground already covered here:

  • Men are unnecessary.
  • Women can do everything that men can do, but it doesn’t work the other way … just because.
  • Mothers can work outside the home without damage to their children.
  • Race is a “social construct”; there is no such thing as intelligence; women and men are mentally and physically equal in all respects; and the under-representation of women and blacks in certain fields is therefore due to rank discrimination (but it’s all right if blacks dominate certain sports and women now far outnumber men on college campuses).
  • A minimum wage can be imposed without an increase in unemployment, a “fact” which can be “proven” only by concocting special cases of limited applicability.
  • Taxes can be raised without discouraging investment and therefore reducing the rate of economic growth.
  • Regulation doesn’t reduce the rate of economic growth or foster “crony capitalism”. There can be “free lunches” all around.
  • Health insurance premiums will go down while the number of mandates is increased.
  • The economy can be stimulated through the action of the Keynesian multiplier, which is nothing but phony math (the textbook arithmetic is spelled out just after this list).
  • “Green” programs create jobs (but only because they are inefficient).
  • Every “right” under the sun can be granted without cost (e.g., affirmative action racial-hiring quotas, which penalize blameless whites; the Social Security Ponzi scheme, which burdens today’s workers and cuts into growth-inducing saving).
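On the multiplier item above: here is the textbook arithmetic, spelled out only to make plain exactly what is being called phony (it is not an endorsement). Each increment of government spending ΔG is assumed to be respent, round after round, at the marginal propensity to consume c, so that

$$\Delta Y = \Delta G\,(1 + c + c^2 + \cdots) = \frac{\Delta G}{1 - c}.$$

With c = 0.8, the claimed multiplier is 1/(1 − 0.8) = 5. The geometric series is unimpeachable as algebra; the criticism is aimed at the respending assumption behind it, not the arithmetic.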

There’s much more in a different vein here.

BALDERDASH AS EUPHEMISTIC THINKING

Balderdash, as I have sampled it here, isn’t just nonsense — it’s nonsense in the service of an agenda. The agenda is too often the expansion of government power. Those who favor the expansion of government power don’t like to think that it hurts people. (“We’re from the government and we’re here to help.”) This is a refusal to face facts, which is amply if not exhaustively illustrated in the preceding entries.

But there’s a lot more where that comes from; for example:

  • Crippled became handicapped, which became disabled and then differently abled or something-challenged.
  • Stupid became learning disabled, which became special needs (a euphemistic category that houses more than the stupid).
  • Poor became underprivileged, which became economically disadvantaged, which became (though isn’t overtly called) entitled (as in entitled to other people’s money).
  • Colored persons became Negroes, who became blacks, then African-Americans, and now (often) persons of color.

Why do lefties — lovers of big government — persist in varnishing the truth? They are — they insist — strong supporters of science, which is (ideally) the pursuit of truth. Well, that’s because they aren’t really supporters of science (witness their devotion to the “unsettled” science of AGW, among many fabrications). Nor do they really want the truth. They simply want to portray the world as they would like it to be, or to lie about it so that they can strive to reshape it to their liking.

BALDERDASH IN THE SERVICE OF SLAVERY, MODERN STYLE

I will end with this one, which is less conclusive than what has gone before, but which further illustrates the left’s penchant for evading reality in the service of growing government.

Thomas Nagel writes:

Some would describe taxation as a form of theft and conscription as a form of slavery — in fact some would prefer to describe taxation as slavery too, or at least as forced labor. Much might be said against these descriptions, but that is beside the point. For within proper limits, such practices when engaged in by governments are acceptable, whatever they are called. If someone with an income of $2,000 a year trains a gun on someone with an income of $100,000 a year and makes him hand over his wallet, that is robbery. If the federal government withholds a portion of the second person’s salary (enforcing the laws against tax evasion with threats of imprisonment under armed guard) and gives some of it to the first person in the form of welfare payments, food stamps, or free health care, that is taxation. In the first case it is (in my opinion) an impermissible use of coercive means to achieve a worthwhile end. In the second case the means are legitimate, because they are impersonally imposed by an institution designed to promote certain results. Such general methods of distribution are preferable to theft as a form of private initiative and also to individual charity. This is true not only for reasons of fairness and efficiency, but also because both theft and charity are disturbances of the relations (or lack of them) between individuals and involve their individual wills in a way that an automatic, officially imposed system of taxation does not. [Mortal Questions, “Ruthlessness in Public Life,” pp. 87-88]

How many logical and epistemic errors can a supposedly brilliant philosopher make in one (long) paragraph? Too many:

  • “For within proper limits” means that Nagel is about to beg the question by shaping an answer that fits his idea of proper limits.
  • Nagel then asserts that the use by government of coercive means to achieve the same end as robbery is “legitimate, because [those means] are impersonally imposed by an institution designed to promote certain results.” Balderdash! Nagel’s vision of government as some kind of omniscient, benevolent arbiter is completely at odds with reality.  The “certain results” (redistribution of income) are achieved by functionaries, armed or backed with the force of arms, who themselves share in the spoils of coercive redistribution. Those functionaries act under the authority of bare majorities of elected representatives, who are chosen by bare majorities of voters. And those bare majorities are themselves coalitions of interested parties — hopeful beneficiaries of redistributionist policies, government employees, government contractors, and arrogant statists — who believe, without justification, that forced redistribution is a proper function of government.
  • On the last point, Nagel ignores the sordid history of the unconstitutional expansion of the powers of government. Without justification, he aligns himself with proponents of the “living Constitution.”
  • Nagel’s moral obtuseness is fully revealed when he equates coercive redistribution with “fairness and efficiency,” as if property rights and liberty were of no account.
  • The idea that coercive redistribution fosters efficiency is laughable. It does quite the opposite because it removes resources from productive uses — including job-creating investments. The poor are harmed by coercive redistribution because it drastically curtails economic growth, from which they would benefit as job-holders and (where necessary) recipients of private charity (the resources for which would be vastly greater in the absence of coercive redistribution).
  • Finally (though not exhaustively), Nagel’s characterization of private charity as a “disturbance of the relations … among individuals” is so wrong-headed that it leaves me dumbstruck. Private charity arises from real relations among individuals — from a sense of community and feelings of empathy. It is the “automatic, officially imposed system of taxation” that distorts and thwarts (“disturbs”) the social fabric.

In any event, taxation for the purpose of redistribution is slavery: the subjection of one person to others, namely, agents of the government and the recipients of the taxes extracted from the person who pays them under threat of punishment. It’s slavery without whips and chains, but slavery nevertheless.

A (Long) Footnote about Science

In “Deduction, Induction, and Knowledge” I make a case that knowledge (as opposed to belief) can only be inductive, that is, limited to specific facts about particular phenomena. It’s true that a hypothesis or theory about a general pattern of relationships (e.g., the general theory of relativity) can be useful, and even necessary. As I say at the end of “Deduction…”, the fact that a general theory can’t be proven

doesn’t — and shouldn’t — stand in the way of acting as if we possess general knowledge. We must act as if we possess general knowledge. To do otherwise would result in stasis, or analysis-paralysis.

Which doesn’t mean that a general theory should be accepted just because it seems plausible. Some general theories — such as global climate models (or GCMs) — are easily falsified. They persist only because pseudo-scientists and true believers refuse to abandon them. (There is no such thing as “settled science”.)

Neil Lock, writing at Watts Up With That?, offers this perspective on inductive vs. deductive thinking:

Bottom up thinking is like the way we build a house. Starting from the ground, we work upwards, using what we’ve done already as support for what we’re working on at the moment. Top down thinking, on the other hand, starts out from an idea that is a given. It then works downwards, seeking evidence for the idea, or to add detail to it, or to put it into practice….

The bottom up thinker seeks to build, using his senses and his mind, a picture of the reality of which he is a part. He examines, critically, the evidence of his senses. He assembles this evidence into percepts, things he perceives as true. Then he pulls them together and generalizes them into concepts. He uses logic and reason to seek understanding, and he often stops to check that he is still on the right lines. And if he finds he has made an error, he tries to correct it.

The top down thinker, on the other hand, has far less concern for logic or reason, or for correcting errors. He tends to accept new ideas only if they fit his pre-existing beliefs. And so, he finds it hard to go beyond the limitations of what he already knows or believes. [“‘Bottom Up’ versus ‘Top Down’ Thinking — On Just about Everything“, October 22, 2017]

(I urge you to read the whole thing, in which Lock applies the top down-bottom up dichotomy to a broad range of issues.)

Lock overstates the distinction between the two modes of thought. A lot of “bottom up” thinkers derive general hypotheses from their observations about particular events. But — and this is a big “but” — they are also amenable to revising their hypotheses when they encounter facts that contradict them. The best scientists are both bottom-up and top-down thinkers, and their top-down beliefs are grounded in bottom-up thinking.

General hypotheses are indispensable guides to “everyday” living. Some of them (e.g., fire burns, gravity causes objects to fall) are such reliable guides that it’s foolish to assume their falsity. Nor does it take much research to learn, for example, that there are areas within a big city where violent crime is rampant. A prudent person — even a “liberal” one — will therefore avoid those areas.

There are also general patterns — now politically incorrect to mention — with respect to differences in physical, psychological, and intellectual traits and abilities between men and women and among races. (See this, this, and this, for example.) These patterns explain disparities in achievement, but they are ignored by true believers who would wish away the underlying causes and penalize those who are more able (in a relevant dimension) for the sake of ersatz equality. The point is that a good many people — perhaps most people — studiously ignore facts of some kind in order to preserve their cherished beliefs about themselves and the world around them.

Which brings me back to science and scientists. Scientists, for the most part, are human beings with a particular aptitude for pattern-seeking and the manipulation of abstract ideas. They can easily get lost in such pursuits and fail to notice that their abstractions have taken them a long way from reality (e.g., Einstein’s special theory of relativity).

This is certainly the case in physics, where scientists admit that the standard model of sub-atomic physics “proves” that the universe shouldn’t exist. (See Andrew Griffin, “The Universe Shouldn’t Exist, Scientists Say after Finding Bizarre Behaviour of Anti-Matter“, The Independent, October 23, 2017.) It is most certainly the case in climatology, where many pseudo-scientists have deployed hopelessly flawed models in the service of policies that would unnecessarily cripple the economy of the United States.

As I say here,

scientists are human and fallible. It is in the best tradition of science to distrust their claims and to dismiss their non-scientific utterances.

Non-scientific utterances are not only those that have nothing to do with a scientist’s field of specialization, but also those that are based on theories which derive from preconceptions more than from facts. It is scientific to admit lack of certainty. It is unscientific — anti-scientific, really — to proclaim certainty about something as little understood as the origin of the universe or Earth’s climate.


Related posts:
Hemibel Thinking
The Limits of Science
The Thing about Science
Science in Politics, Politics in Science
Global Warming and the Liberal Agenda
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
Science’s Anti-Scientific Bent
“Warmism”: The Myth of Anthropogenic Global Warming
Modeling Is Not Science
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
Pinker Commits Scientism
AGW: The Death Knell
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II)
Is Science Self-Correcting?
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
Bayesian Irrationality
Mettenheim on Einstein’s Relativity
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable

Not-So-Random Thoughts (XV)

Links to the other posts in this occasional series may be found at “Favorite Posts,” just below the list of topics.

*     *     *

Victor Davis Hanson writes:

This descent into the Dark Ages will not end well. It never has in the past. [“Building the New Dark-Age Mind,” Works and Days, June 8, 2015]

Hanson’s chronicle of political correctness and doublespeak echoes one theme of my post, “1963: The Year Zero.”

*     *     *

Timothy Taylor does the two-handed economist act:

It may be that the question of “does inequality slow down economic growth” is too broad and diffuse to be useful. Instead, those of us who care about both the rise in inequality and the slowdown in economic growth should be looking for policies to address both goals, without presuming that substantial overlap will always occur between them. [“Does Inequality Reduce Economic Growth: A Skeptical View,” The Conversible Economist, May 29, 2015]

The short answer to the question “Does inequality reduce growth?” is no. See my post “Income Inequality and Economic Growth.” Further, even if inequality does reduce growth, the idea of reducing inequality (through income redistribution, say) to foster growth is utilitarian and therefore morally egregious. (See “Utilitarianism vs. Liberty.”)

*     *     *

In “Diminishing Marginal Utility and the Redistributive Urge” I write:

[L]eftists who deign to offer an economic justification for redistribution usually fall back on the assumption of the diminishing marginal utility (DMU) of income and wealth. In doing so, they commit (at least) four errors.

The first error is the fallacy of misplaced concreteness which is found in the notion of utility. Have you ever been able to measure your own state of happiness? I mean measure it, not just say that you’re feeling happier today than you were when your pet dog died. It’s an impossible task, isn’t it? If you can’t measure your own happiness, how can you (or anyone) presume to measure and aggregate the happiness of millions or billions of individual human beings? It can’t be done.

Which brings me to the second error, which is an error of arrogance. Given the impossibility of measuring one person’s happiness, and the consequent impossibility of measuring and comparing the happiness of many persons, it is pure arrogance to insist that “society” would be better off if X amount of income or wealth were transferred from Group A to Group B….

The third error lies in the implicit assumption embedded in the idea of DMU. The assumption is that as one’s income or wealth rises one continues to consume the same goods and services, but more of them….

All of that notwithstanding, the committed believer in DMU will shrug and say that at some point DMU must set in. Which leads me to the fourth error, which is an error of introspection….  [If over the years] your real income has risen by a factor of two or three or more — and if you haven’t messed up your personal life (which is another matter) — you’re probably incalculably happier than when you were just able to pay your bills. And you’re especially happy if you put aside a good chunk of money for your retirement, the anticipation and enjoyment of which adds a degree of utility (such a prosaic word) that was probably beyond imagining when you were in your twenties, thirties, and forties.

Robert Murphy agrees:

[T]he problem comes in when people sometimes try to use the concept of DMU to justify government income redistribution. Specifically, the argument is that (say) the billionth dollar to Bill Gates has hardly any marginal utility, while the 10th dollar to a homeless man carries enormous marginal utility. So clearly–the argument goes–taking a dollar from Bill Gates and giving it to a homeless man raises “total social utility.”

There are several serious problems with this type of claim. Most obvious, even if we thought it made sense to attribute units of utility to individuals, there is no reason to suppose we could compare them across individuals. For example, even if we thought a rich man had units of utility–akin to the units of his body temperature–and that the units declined with more money, and likewise for a poor person, nonetheless we have no way of placing the two types of units on the same scale….

In any event, this is all a moot point regarding the original question of interpersonal utility comparisons. Even if we thought individuals had cardinal utilities, it wouldn’t follow that redistribution would raise total social utility.

Even if we retreat to the everyday usage of terms, it still doesn’t follow as a general rule that rich people get less happiness from a marginal dollar than a poor person. There are many people, especially in the financial sector, whose self-esteem is directly tied to their earnings. And as the photo indicates, Scrooge McDuck really seems to enjoy money. Taking gold coins from Scrooge and giving them to a poor monk would not necessarily increase happiness, even in the everyday psychological sense. [“Can We Compare People’s Utilities?,” Mises Canada, May 22, 2015]

See also David Henderson’s “Murphy on Interpersonal Utility Comparisons” (EconLog, May 22, 2015) and Henderson’s earlier posts on the subject, to which he links. Finally, see my comment on an earlier post by Henderson, in which he touches on the related issue of cost-benefit analysis.

*     *     *

Here’s a slice of what Robert Tracinski has to say about “reform conservatism”:

The key premise of this non-reforming “reform conservatism” is the idea that it’s impossible to really touch the welfare state. We might be able to alter its incentives and improve its clanking machinery, but only if we loudly assure everyone that we love it and want to keep it forever.

And there’s the problem. Not only is this defeatist at its core, abandoning the cause of small government at the outset, but it fails to address the most important problem facing the country.

“Reform conservatism” is an answer to the question: how can we promote the goal of freedom and small government—without posing any outright challenge to the welfare state? The answer: you can’t. All you can do is tinker around the edges of Leviathan. And ultimately, it won’t make much difference, because it will all be overwhelmed in the coming disaster. [“Reform Conservatism Is an Answer to the Wrong Question,” The Federalist, May 22, 2015]

Further, as I observe in “How to Eradicate the Welfare State, and How Not to Do It,” the offerings of “reform conservatives”

may seem like reasonable compromises with the left’s radical positions. But they are reasonable compromises only if you believe that the left wouldn’t strive vigorously to undo them and continue the nation’s march toward full-blown state socialism. That’s the way leftists work. They take what they’re given and then come back for more, lying and worse all the way.

See also Arnold Kling’s “Reason Roundtable on Reform Conservatism” (askblog, May 22, 2015) and follow the links therein.

*     *     *

I’ll end this installment with a look at science and the anti-scientific belief in catastrophic anthropogenic global warming.

Here’s Philip Ball in “The Trouble With Scientists“:

It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions. “Seeing the reproducibility rates in psychology and other empirical science, we can safely say that something is not working out the way it should,” says Susann Fiedler, a behavioral economist at the Max Planck Institute for Research on Collective Goods in Bonn, Germany. “Cognitive biases might be one reason for that.”

Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea. Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already made the decision about what to do or to think, and our “explanation” of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway. Science is of course meant to be more objective and skeptical than everyday thought—but how much is it, really?

Whereas the falsification model of the scientific method championed by philosopher Karl Popper posits that the scientist looks for ways to test and falsify her theories—to ask “How am I wrong?”—Nosek says that scientists usually ask instead “How am I right?” (or equally, to ask “How are you wrong?”). When facts come up that suggest we might, in fact, not be right after all, we are inclined to dismiss them as irrelevant, if not indeed mistaken….

Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. “I was aware of biases in humans at large,” says [Chris] Hartgerink [of Tilburg University in the Netherlands], “but when I first ‘learned’ that they also apply to scientists, I was somewhat amazed, even though it is so obvious.”…

One of the reasons the science literature gets skewed is that journals are much more likely to publish positive than negative results: It’s easier to say something is true than to say it’s wrong. Journal referees might be inclined to reject negative results as too boring, and researchers currently get little credit or status, from funders or departments, from such findings. “If you do 20 experiments, one of them is likely to have a publishable result,” [Ivan] Oransky and [Adam] Marcus [who run the service Retraction Watch] write. “But only publishing that result doesn’t make your findings valid. In fact it’s quite the opposite.” [Nautilus, May 14, 2015]
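Oransky and Marcus’s one-in-twenty figure is easy to verify. Assuming 20 independent experiments, each testing a true null hypothesis at the conventional 5-percent significance threshold, the probability that at least one of them crosses the threshold by chance alone is

$$1 - (1 - 0.05)^{20} = 1 - 0.95^{20} \approx 0.64,$$

better-than-even odds of a “publishable result” from pure noise.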

Zoom to AGW. Robert Tracinski assesses the most recent bit of confirmation bias:

A lot of us have been pointing out one of the big problems with the global warming theory: a long plateau in global temperatures since about 1998. Most significantly, this leveling off was not predicted by the theory, and observed temperatures have been below the lowest end of the range predicted by all of the computerized climate models….

Why, change the data, of course!

Hence a blockbuster new report: a new analysis of temperature data since 1998 “adjusts” the numbers and magically finds that there was no plateau after all. The warming just continued….

How convenient.

It’s so convenient that they’re signaling for everyone else to get on board….

This is going to be the new party line. “Hiatus”? What hiatus? Who are you going to believe, our adjustments or your lying thermometers?…

The new adjustments are suspiciously convenient, of course. Anyone who is touting a theory that isn’t being borne out by the evidence and suddenly tells you he’s analyzed the data and by golly, what do you know, suddenly it does support his theory—well, he should be met with more than a little skepticism.

If we look, we find some big problems. The most important data adjustments by far are in ocean temperature measurements. But anyone who has been following this debate will notice something about the time period for which the adjustments were made. This is a time in which the measurement of ocean temperatures has vastly improved in coverage and accuracy as a whole new set of scientific buoys has come online. So why would this data need such drastic “correcting”?

As climatologist Judith Curry puts it:

The greatest changes in the new NOAA surface temperature analysis is to the ocean temperatures since 1998. This seems rather ironic, since this is the period where there is the greatest coverage of data with the highest quality of measurements–ARGO buoys and satellites don’t show a warming trend. Nevertheless, the NOAA team finds a substantial increase in the ocean surface temperature anomaly trend since 1998.

….

I realize the warmists are desperate, but they might not have thought through the overall effect of this new “adjustment” push. We’ve been told to take very, very seriously the objective data showing global warming is real and is happening—and then they announce that the data has been totally changed post hoc. This is meant to shore up the theory, but it actually calls the data into question….

All of this fits into a wider pattern: the global warming theory has been awful at making predictions about the data ahead of time. But it has been great at going backward, retroactively reinterpreting the data and retrofitting the theory to mesh with it. A line I saw from one commenter, I can’t remember where, has been rattling around in my head: “once again, the theory that predicts nothing explains everything.” [“Global Warming: The Theory That Predicts Nothing and Explains Everything,” The Federalist, June 8, 2015]

Howard Hyde also weighs in with “Climate Change: Where Is the Science?” (American Thinker, June 11, 2015).

Bill Nye, the so-called Science Guy, seems to epitomize the influence of ideology on “scientific knowledge.”  I defer to John Derbyshire:

Bill Nye the Science Guy gave a commencement speech at Rutgers on Sunday. Reading the speech left me thinking that if this is America’s designated Science Guy, I can be the nation’s designated swimsuit model….

What did the Science Guy have to say to the Rutgers graduates? Well, he warned them of the horrors of climate change, which he linked to global inequality.

We’re going to find a means to enable poor people to advance in their societies in countries around the world. Otherwise, the imbalance of wealth will lead to conflict and inefficiency in energy production, which will lead to more carbon pollution and a no-way-out overheated globe.

Uh, given that advanced countries use far more energy per capita than backward ones—the U.S.A. figure is thirty-four times Bangladesh’s—wouldn’t a better strategy be to keep poor countries poor? We could, for example, encourage all their smartest and most entrepreneurial people to emigrate to the First World … Oh, wait: we already do that.

The whole climate change business is now a zone of hysteria, generating far more noise—mostly of a shrieking kind—than its importance justifies. Opinions about climate change are, as Greg Cochran said, “a mark of tribal membership.” It is also the case, as Greg also said, that “the world is never going to do much about in any event, regardless of the facts.”…

When Ma Nature means business, stuff happens on a stupendously colossal scale.  And Bill Nye the Science Guy wants Rutgers graduates to worry about a 0.4ºC warming over thirty years? Feugh.

The Science Guy then passed on from the dubiously alarmist to the batshit barmy.

There really is no such thing as race. We are one species … We all come from Africa.

Where does one start with that? Perhaps by asserting that: “There is no such thing as states. We are one country.”

The climatological equivalent of saying there is no such thing as race would be saying that there is no such thing as weather. Of course there is such a thing as race. We can perceive race with at least three of our five senses, and read it off from the genome. We tick boxes for it on government forms: I ticked such a box for the ATF just this morning when buying a gun.

This is the Science Guy? The foundational text of modern biology bears the title On the Origin of Species by Means of Natural Selection, or the Preservation of Favored Races in the Struggle for Life. Is biology not a science?

Darwin said that populations of a species long separated from each other will diverge in their biological characteristics, forming races. If the separation goes on long enough, any surviving races will diverge all the way to separate species. Was Ol’ Chuck wrong about that, Mr. Science Guy?

“We are one species”? Rottweilers and toy poodles are races within one species, a species much newer than ours; yet they differ mightily, not only in appearance but also—gasp!—in behavior, intelligence, and personality. [“Nye Lied, I Sighed,” Taki’s Magazine, May 21, 2015]

This has gone on long enough. Instead of quoting myself, I merely refer you to several related posts:

Demystifying Science
AGW: The Death Knell
Evolution and Race
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?


“Settled Science” and the Monty Hall Problem

The so-called 97-percent consensus among climate scientists about anthropogenic global warming (AGW) isn’t evidence of anything but the fact that scientists are only human. Even if there were such a consensus, it certainly wouldn’t prove the inchoate theory of AGW, any more than the early consensus against Einstein’s special theory of relativity disproved that theory.

Actually, in the case of AGW, the so-called consensus is far from a consensus about the extent of warming, its causes, and its implications. (See, for example, this post and this one.) But it’s undeniable that a lot of climate scientists believe in a “strong” version of AGW, and in its supposedly dire consequences for humanity.

Why is that? Well, in a field as inchoate as climate science, it’s easy to let one’s prejudices drive one’s research agenda and findings, even if only subconsciously. And isn’t it more comfortable and financially rewarding to be with the crowd and where the money is than to stand athwart the conventional wisdom? (Lennart Bengtsson certainly found that to be the case.) Moreover, there was, in the temperature records of the late 20th century, a circumstantial case for AGW, which led to the development of theories and models that purport to describe a strong relationship between temperature and CO2. That the theories and models are deeply flawed and lacking in predictive value seems not to matter to the 97 percent (or whatever the number is).

In other words, a lot of climate scientists have abandoned the scientific method, which demands skepticism, in order to be on the “winning” side of the AGW issue. How did it come to be thought of as the “winning” side? Credit vocal so-called scientists who were and are (at least) guilty of making up models to fit their preconceptions, and ignoring evidence that human-generated CO2 is a minor determinant of atmospheric temperature. Credit influential non-scientists (e.g., Al Gore) and various branches of the federal government that have spread the gospel of AGW and bestowed grants on those who can furnish evidence of it. Above all, credit the media, which for the past two decades has pumped out volumes of biased, half-baked stories about AGW, in the service of the “liberal” agenda: greater control of the lives and livelihoods of Americans.

Does this mean that the scientists who are on the AGW bandwagon don’t believe in the correctness of AGW theory? I’m sure that most of them do believe in it — to some degree. They believe it at least to the same extent as a religious convert who zealously proclaims his new religion to prove (mainly to himself) his deep commitment to that religion.

What does all of this have to do with the Monty Hall problem? This:

Making progress in the sciences requires that we reach agreement about answers to questions, and then move on. Endless debate (think of global warming) is fruitless debate. In the Monty Hall case, this social process has actually worked quite well. A consensus has indeed been reached; the mathematical community at large has made up its mind and considers the matter settled. But consensus is not the same as unanimity, and dissenters should not be stifled. The fact is, when it comes to matters like Monty Hall, I’m not sufficiently skeptical. I know what answer I’m supposed to get, and I allow that to bias my thinking. It should be welcome news that a few others are willing to think for themselves and challenge the received doctrine. Even though they’re wrong. (Brian Hayes, “Monty Hall Redux” (a book review), American Scientist, September-October 2008)

The admirable part of Hayes’s statement is its candor: Hayes admits that he may have adopted the “consensus” answer because he wants to go with the crowd.

The dismaying part of Hayes’s statement is his smug admonition to accept “consensus” and move on. As it turns out, the “consensus” about the Monty Hall problem isn’t what it’s cracked up to be. A lot of very bright people have solved a tricky probability puzzle, but not the Monty Hall problem. (For the details, see my post, “The Compleat Monty Hall Problem.”)
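For readers who want to check the textbook answer for themselves, here is a minimal simulation of the puzzle as the “consensus” states it — assuming a host who always opens an unchosen door hiding a goat and always offers the switch. Those assumptions, not the arithmetic, are what is at issue in the post linked above:

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the textbook puzzle: the car is placed at random,
    the host always opens a goat door the player didn't pick, and
    the player either stays or switches."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the pick nor the car.
        host = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != host)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")  # ~0.333
print(f"switch: {monty_hall(switch=True):.3f}")   # ~0.667
```

Under those assumptions, switching wins about two-thirds of the time. Whether those assumptions describe the game as Monty Hall actually played it is another question.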

And the “consensus” about AGW is very far from being the last word, despite the claims of true believers. (See, for example, the relatively short list of recent articles, posts, and presentations given at the end of this post.)

Going with the crowd isn’t the way to do science. It’s certainly not the way to ascertain the contribution of human-generated CO2 to atmospheric warming, or to determine whether the effects of any such warming are dire or beneficial. And it’s most certainly not the way to decide whether AGW theory implies the adoption of policies that would stifle economic growth and hamper the economic betterment of millions of Americans and billions of other human beings — most of whom would love to live as well as the poorest of Americans.

Given the dismal track record of global climate models, with their evident overstatement of the effects of CO2 on temperatures, there should be a lot of doubt as to the causes of rising temperatures in the last quarter of the 20th century, and as to the implications for government action. And even if it could be shown conclusively that human activity will cause temperatures to resume the rising trend of the late 1900s, several important questions remain:

  • To what extent would the temperature rise be harmful and to what extent would it be beneficial?
  • To what extent would mitigation of the harmful effects negate the beneficial effects?
  • What would be the costs of mitigation, and who would bear those costs, both directly and indirectly (e.g., the effects of slower economic growth on the poorer citizens of the world)?
  • If warming does resume gradually, as before, why should government dictate precipitous actions — and perhaps technologically dubious and economically damaging actions — instead of letting households and businesses adapt over time by taking advantage of new technologies that are unavailable today?

Those are not issues to be decided by scientists, politicians, and media outlets that have jumped on the AGW bandwagon because it represents a “consensus.” Those are issues to be decided by free, self-reliant, responsible persons acting cooperatively for their mutual benefit through the mechanism of free markets.

*     *     *

Recent Related Reading:
Roy Spencer, “95% of Climate Models Agree: The Observations Must Be Wrong,” Roy Spencer, Ph.D., February 7, 2014
Roy Spencer, “Top Ten Good Skeptical Arguments,” Roy Spencer, Ph.D., May 1, 2014
Ross McKittrick, “The ‘Pause’ in Global Warming: Climate Policy Implications,” presentation to the Friends of Science, May 13, 2014 (video here)
Patrick Brennan, “Abuse from Climate Scientists Forces One of Their Own to Resign from Skeptic Group after Week: ‘Reminds Me of McCarthy’,” National Review Online, May 14, 2014
Anthony Watts, “In Climate Science, the More Things Change, the More They Stay the Same,” Watts Up With That?, May 17, 2014
Christopher Monckton of Brenchley, “Pseudoscientists’ Eight Climate Claims Debunked,” Watts Up With That?, May 17, 2014
John Hinderaker, “Why Global Warming Alarmism Isn’t Science,” PowerLine, May 17, 2014
Tom Sheahan, “The Specialized Meaning of Words in the ‘Antarctic Ice Shelf Collapse’ and Other Climate Alarm Stories,” Watts Up With That?, May 21, 2014
Anthony Watts, “Unsettled Science: New Study Challenges the Consensus on CO2 Regulation — Modeled CO2 Projections Exaggerated,” Watts Up With That?, May 22, 2014
Daniel B. Botkin, “Written Testimony to the House Subcommittee on Science, Space, and Technology,” May 29, 2014

Related posts:
The Limits of Science
The Thing about Science
Debunking “Scientific Objectivity”
Modeling Is Not Science
The Left and Its Delusions
Demystifying Science
AGW: The Death Knell
Modern Liberalism as Wishful Thinking
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”

Pinker Commits Scientism

Steven Pinker, who seems determined to outdo Bryan Caplan in wrongheadedness, devotes “Science Is Not Your Enemy” (The New Republic, August 6, 2013) to the defense of scientism. Actually, Pinker doesn’t overtly defend scientism, which is indefensible; he just redefines it to mean science:

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.

After that slippery performance, it’s all smooth sailing — or so Pinker thinks — because all he has to do is point out all the good things about science. And if scientism=science, then scientism is good, right?

Wrong. Scientism remains indefensible, and there’s a lot of scientism in what passes for science. You don’t need to take my word for it; Pinker’s own words tell the tale.

But, first, let’s get clear about the meaning and fallaciousness of scientism. The various writers cited by Pinker describe it well, but Hayek probably offers the most thorough indictment of it; for example:

[W]e shall, wherever we are concerned … with slavish imitation of the method and language of Science, speak of “scientism” or the “scientistic” prejudice…. It should be noted that, in the sense in which we shall use these terms, they describe, of course, an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed. The scientistic as distinguished from the scientific view is not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it…..

The blind transfer of the striving for quantitative measurements to a field in which the specific conditions are not present which give it its basic importance in the natural sciences, is the result of an entirely unfounded prejudice. It is probably responsible for the worst aberrations and absurdities produced by scientism in the social sciences. It not only leads frequently to the selection for study of the most irrelevant aspects of the phenomena because they happen to be measurable, but also to “measurements” and assignments of numerical values which are absolutely meaningless. What a distinguished philosopher recently wrote about psychology is at least equally true of the social sciences, namely that it is only too easy “to rush off to measure something without considering what it is we are measuring, or what measurement means. In this respect some recent measurements are of the same logical type as Plato’s determination that a just ruler is 729 times as happy as an unjust one.”…

Closely connected with the “objectivism” of the scientistic approach is its methodological collectivism, its tendency to treat “wholes” like “society” or the “economy,” “capitalism” (as a given historical “phase”) or a particular “industry” or “class” or “country” as definitely given objects about which we can discover laws by observing their behavior as wholes. While the specific subjectivist approach of the social sciences starts … from our knowledge of the inside of these social complexes, the knowledge of the individual attitudes which form the elements of their structure, the objectivism of the natural sciences tries to view them from the outside; it treats social phenomena not as something of which the human mind is a part and the principles of whose organization we can reconstruct from the familiar parts, but as if they were objects directly perceived by us as wholes….

The belief that human history, which is the result of the interaction of innumerable human minds, must yet be subject to simple laws accessible to human minds is now so widely held that few people are at all aware what an astonishing claim it really implies. Instead of working patiently at the humble task of rebuilding from the directly known elements the complex and unique structures which we find in the world, and of tracing from the changes in the relations between the elements the changes in the wholes, the authors of these pseudo-theories of history pretend to be able to arrive by a kind of mental short cut at a direct insight into the laws of succession of the immediately apprehended wholes. However doubtful their status, these theories of development have achieved a hold on public imagination much greater than any of the results of genuine systematic study. “Philosophies” or “theories” of history (or “historical theories”) have indeed become the characteristic feature, the “darling vice” of the 19th century. From Hegel and Comte, and particularly Marx, down to Sombart and Spengler these spurious theories came to be regarded as representative results of social science; and through the belief that one kind of “system” must as a matter of historical necessity be superseded by a new and different “system,” they have even exercised a profound influence on social evolution. This they achieved mainly because they looked like the kind of laws which the natural sciences produced; and in an age when these sciences set the standard by which all intellectual effort was measured, the claim of these theories of history to be able to predict future developments was regarded as evidence of their pre-eminently scientific character. Though merely one among many characteristic 19th century products of this kind, Marxism more than any of the others has become the vehicle through which this result of scientism has gained so wide an influence that many of the opponents of Marxism equally with its adherents are thinking in its terms. (Friedrich A. Hayek, The Counter Revolution Of Science [Kindle Locations 120-1180], The Free Press.)

After a barrage like that (and this), what’s a defender of scientism to do? Pinker’s tactic is to stop using “scientism” and start using “science.” This makes it seem as if he really isn’t defending scientism, but rather trying to show how science can shed light onto subjects that are usually not in the province of science. In reality, Pinker preaches scientism by calling it science.

For example:

The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation.

There is nothing new in this, as Pinker admits by adverting to Madison. Nor was the understanding of human nature “submerged” except in the writings of scientistic social “scientists.” We ordinary mortals were never fooled. Moreover, Pinker’s idea of scientific political science seems to be data-dredging:

With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively.

As explained here, data-dredging is about as scientistic as it gets:

When enough hypotheses are tested, it is virtually certain that some falsely appear statistically significant, since every data set with any degree of randomness contains some spurious correlations. Researchers using data mining techniques if they are not careful can be easily misled by these apparently significant results, even though they are mere artifacts of random variation.
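The point is easy to demonstrate with nothing but noise. In the sketch below (a toy illustration, not anyone’s actual study), a random “outcome” is tested against 100 random “predictors”; with 30 observations, a correlation of roughly |r| > 0.361 corresponds to the conventional two-tailed 5-percent threshold, so about five of the hundred pure-noise predictors should look “significant”:

```python
import random
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n, n_predictors = 30, 100
outcome = [random.gauss(0, 1) for _ in range(n)]

# For n = 30, |r| > 0.361 is roughly the two-tailed 5% cutoff.
hits = sum(
    1
    for _ in range(n_predictors)
    if abs(pearson_r([random.gauss(0, 1) for _ in range(n)], outcome)) > 0.361
)

print(f"{hits} of {n_predictors} pure-noise predictors look 'significant'")
# Expect ~5 spurious "findings" from data that contains no signal at all.
```

Dredge enough variables and the “signals” appear on schedule, whether or not anything real is there.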

Turning to the humanities, Pinker writes:

[T]here can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate [sic] and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of a progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

What on earth is Pinker talking about? This is over-the-top bafflegab worthy of Professor Irwin Corey. But because it comes from the keyboard of a noted (self-promoting) academic, we are meant to take it seriously.

Yes, art, culture, and society are products of human brains. So what? Poker is, too, and it’s a lot more amenable to explication by the mathematical tools of science. But the successful application of those tools depends on traits that are more art than science (bluffing, spotting “tells,” avoiding “tells,” for example).

More “explanatory depth” in the humanities means a deeper pile of B.S. Great art, literature, and music aren’t concocted formulaically. If they could be, modernism and postmodernism wouldn’t have yielded mountains of trash.

Oh, I know: It will be different next time. As if the tools of science are immune to misuse by obscurantists, relativists, and practitioners of political correctness. Tell it to those climatologists who dare to challenge the conventional wisdom about anthropogenic global warming. Tell it to the “sub-human” victims of the Third Reich’s medical experiments and gas chambers.

Pinker anticipates this kind of objection:

At a 2011 conference, [a] colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

But the Tuskegee study was only a one-time failure in the sense that it was the only Tuskegee study. As a type of failure — the misuse of science (witting and unwitting) — it goes hand-in-hand with the advance of scientific knowledge. Should science be abandoned because of that? Of course not. But the hard fact is that science, qua science, is powerless against human nature, which defies scientific control.

Pinker plods on by describing ways in which science can contribute to the visual arts, music, and literary scholarship:

The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

I wonder how Rembrandt and the Impressionists (among other pre-moderns) managed to create visual art of such evident excellence without relying on the kinds of scientific mechanisms invoked by Pinker. I wonder what music scholars would learn about excellence in composition that isn’t already evident in the general loathing of audiences for most “serious” modern and contemporary music.

As for literature, great writers know instinctively and through self-criticism how to tell stories that realistically depict character, social psychology, culture, conflict, and all the rest. Scholars (and critics), at best, can acknowledge what rings true and has dramatic or comedic merit. Scientistic pretensions in scholarship (and criticism) may result in promotions and raises for the pretentious, but they do not add to the sum of human enjoyment — which is the real aim of literature.

Pinker inveighs against critics of scientism (science, in Pinker’s vocabulary) who cry “reductionism” and “simplification.” With respect to the former, Pinker writes:

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

It is reductionist to explain a complex happening in terms of a deeper principle when that principle fails to account for the complex happening. Pinker obscures that essential point by offering a silly and irrelevant example about World War I. This bit of misdirection is unsurprising, given Pinker’s foray into reductionism, The Better Angels of Our Nature: Why Violence Has Declined, which I examine here.

As for simplification, Pinker says:

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic.

Pinker again dodges the issue. Simplification is simplistic when the “general principles” fail to account adequately for the phenomenon in question.

If Pinker is right about anything, it is when he says that “the intrusion of science into the territories of the humanities has been deeply resented.” The resentment, though some of it may be wrongly motivated, is fully justified.

Related reading (added 08/10/13 and 09/06/13):
Bill Vallicella, “Steven Pinker on Scientism, Part One,” Maverick Philosopher, August 10, 2013
Leon Wieseltier, “Crimes Against Humanities,” The New Republic, September 3, 2013 (gated)

Related posts about Pinker:
Nonsense about Presidents, IQ, and War
The Fallacy of Human Progress

Related posts about modernism:
Speaking of Modern Art
Making Sense about Classical Music
An Addendum about Classical Music
My Views on Classical Music, Vindicated
But It’s Not Music
A Quick Note about Music
Modernism in the Arts and Politics
Taste and Art
Modernism and the Arts

Related posts about science:
Science’s Anti-Scientific Bent
Modeling Is Not Science
Physics Envy
We, the Children of the Enlightenment
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
Scientism, Evolution, and the Meaning of Life
The Candle Problem: Balderdash Masquerading as Science
Mysteries: Sacred and Profane
The Glory of the Human Mind

Pseudoscience, “Moneyball,” and Luck

Orin Kerr of The Volokh Conspiracy endorses the following claptrap, uttered by Michael Lewis (author of Liar’s Poker and Moneyball) in the course of a commencement speech at Princeton University:

A few years ago, just a few blocks from my home, a pair of researchers in the Cal psychology department staged an experiment. They began by grabbing students, as lab rats. Then they broke the students into teams, segregated by sex. Three men, or three women, per team. Then they put these teams of three into a room, and arbitrarily assigned one of the three to act as leader. Then they gave them some complicated moral problem to solve: say what should be done about academic cheating, or how to regulate drinking on campus.

Exactly 30 minutes into the problem-solving the researchers interrupted each group. They entered the room bearing a plate of cookies. Four cookies. The team consisted of three people, but there were these four cookies. Every team member obviously got one cookie, but that left a fourth cookie, just sitting there. It should have been awkward. But it wasn’t. With incredible consistency the person arbitrarily appointed leader of the group grabbed the fourth cookie, and ate it. Not only ate it, but ate it with gusto: lips smacking, mouth open, drool at the corners of their mouths. In the end all that was left of the extra cookie were crumbs on the leader’s shirt.

This leader had performed no special task. He had no special virtue. He’d been chosen at random, 30 minutes earlier. His status was nothing but luck. But it still left him with the sense that the cookie should be his.

So far, sort of okay. But then:

This experiment helps to explain Wall Street bonuses and CEO pay, and I’m sure lots of other human behavior. But it also is relevant to new graduates of Princeton University. In a general sort of way you have been appointed the leader of the group. Your appointment may not be entirely arbitrary. But you must sense its arbitrary aspect: you are the lucky few. Lucky in your parents, lucky in your country, lucky that a place like Princeton exists that can take in lucky people, introduce them to other lucky people, and increase their chances of becoming even luckier. Lucky that you live in the richest society the world has ever seen, in a time when no one actually expects you to sacrifice your interests to anything.

All of you have been faced with the extra cookie. All of you will be faced with many more of them. In time you will find it easy to assume that you deserve the extra cookie. For all I know, you may. But you’ll be happier, and the world will be better off, if you at least pretend that you don’t.

Never forget: In the nation’s service. In the service of all nations.

Thank you.

And good luck.

I am unsurprised by Kerr’s endorsement of Lewis’s loose logic, given Kerr’s rather lackadaisical attitude toward the Constitution (e.g., this post).

Well, what could be wrong with the experiment or Lewis’s interpretation of it? The cookie experiment does not mean what Lewis thinks it means. It is like the Candle Problem in that Lewis draws conclusions that are unwarranted by the particular conditions of the experiment. And those conditions are so artificial as to be inapplicable to real situations. Thus:

1. The teams and their leaders were chosen randomly. Businesses, governments, universities, and other voluntary organizations do not operate that way. Members choose themselves. Leaders (in business, at least) are either self-chosen (if they are owners) or chosen by higher-ups on the basis of past performance and what it says (imperfectly) about future performance.

2. Because managers of businesses are not arbitrarily chosen, there is no analogy to the team leaders in the experiment, who were arbitrarily chosen and who arbitrarily consumed the fourth cookie. For one thing, if a manager reaps a greater reward than his employees, that is because the higher-ups value the manager’s contributions more than those of his employees. That is an unsurprising relationship, when you think about it, but it bears no resemblance to the case of a randomly chosen team with a randomly chosen leader.

3. Being the beneficiary of some amount of luck in one’s genetic and environmental inheritance does not negate the fact that one must do something with that luck to reap material rewards. The “extra cookie,” as I have said, is generally produced and earned, not simply put on a plate to be gobbled. If a person earns more cookies because he is more productive, and if he is more productive (in part) because of his genetic and environmental inheritance, that person’s greater earning power (over the long haul) is based on the value of what he produces. He does not take from others (as Lewis implies), nor does he owe to others a share of what he earns (as Lewis implies).

Just to drive home the point about Lewis’s cluelessness, I will address his book Moneyball, from which a popular film of the same name was derived. This is Amazon.com‘s review of the book:

Billy Beane, general manager of MLB’s Oakland A’s and protagonist of Michael Lewis’s Moneyball, had a problem: how to win in the Major Leagues with a budget that’s smaller than that of nearly every other team. Conventional wisdom long held that big name, highly athletic hitters and young pitchers with rocket arms were the ticket to success. But Beane and his staff, buoyed by massive amounts of carefully interpreted statistical data, believed that wins could be had by more affordable methods such as hitters with high on-base percentage and pitchers who get lots of ground outs. Given this information and a tight budget, Beane defied tradition and his own scouting department to build winning teams of young affordable players and inexpensive castoff veterans.

Lewis was in the room with the A’s top management as they spent the summer of 2002 adding and subtracting players and he provides outstanding play-by-play…. Lewis, one of the top nonfiction writers of his era (Liar’s Poker, The New New Thing), offers highly accessible explanations of baseball stats and his roadmap of Beane’s economic approach makes Moneyball an appealing reading experience for business people and sports fans alike.

The only problems with Moneyball are (a) its essential inaccuracy and (b) its incompleteness as an analysis of success in baseball.

On the first point, “moneyball” did not start with Billy Beane and the Oakland A’s, and it is not what it is made out to be. Enter Eric Walker, the subject and author of “The Forgotten Man of Moneyball, Part 1,” and “The Forgotten Man of Moneyball, Part 2,” published October 7, 2009, on a site at deadspin.com. (On the site’s home page, the title bar displays the following: Deadspin, Sports News without Access, Favor, or Discretion.) Walker’s recollections merit extensive quotation:

…[W]ho am I, and why would I be considered some sort of expert on moneyball? Perhaps you recognized my name; more likely, though, you didn’t. Though it is hard to say this without an appearance of personal petulance, I find it sad that the popular history of what can only be called a revolution in the game leaves out quite a few of the people, the outsiders, who actually drove that revolution.

Anyway, the short-form answer to the question is that I am the fellow who first taught Billy Beane the principles that Lewis later dubbed “moneyball.” For the long-form answer, we ripple-dissolve back in time …

. . . to San Francisco in 1975, where the news media are reporting, often and at length, on the supposed near-certainty that the Giants will be sold and moved. There sit I, a man no longer young but not yet middle-aged, a man who has not been to a baseball game — or followed the sport — for probably over two decades….

With my lady, also a baseball fan of old, I go to a game. We have a great time; we go to more games, have more great times. I am becoming enthused. But I am considering and wondering — wondering about the mechanisms of run scoring, things like the relative value of average versus power…. I go to the San Francisco main library, looking for books that in some way actually analyze baseball. I find one. One. But what a one.

If this were instead Reader’s Digest, my opening of that book would be “The Moment That Changed My Life!” The book was Percentage Baseball, by one Earnshaw Cook, a Johns Hopkins professor who had consulted on the development of the atomic bomb….

…Bill James and some others, who were in high school when Cook was conceiving the many sorts of formulae they would later get famous publicizing in their own works, have had harsh things to say about Cook and his work. James, for example, wrote in 1981, “Cook knew everything about statistics and nothing at all about baseball — and for that reason, all of his answers are wrong, all of his methods useless.” That is breathtakingly wrong, and arrogant. Bill James has done an awful lot for analysis, both in promoting the concepts and in original work (most notably a methodology for converting minor-league stats to major-league equivalents). But, as Chili Davis once remarked about Nolan Ryan, “He ain’t God, man.” A modicum of humility and respect is in order…. Cook’s further work, using computer simulations of games to test theory (recorded in his second book, Percentage Baseball and the Computer), was ground-breaking, and it came long before anyone thought to describe what Cook was up to as “sabermetrics” and longer still before anyone emulated it.

…I wanted to get a lot closer to the game than box seats. I had, some years before, been a radio newscaster and telephone-talk host, and I decided to trade on that background. But in a market like the Bay Area, one does not just walk into a major radio station and ask for a job if it has been years since one’s last position; so, I walked into a minor radio station, a little off-the-wall FM outfit, and instantly became their “sports reporter”; unsalaried, but eligible for press credentials from the Giants….

Meanwhile, however, I was constantly working on expanding Cook’s work in various ways, trying to develop more-practical methods of applying his, and in time my, ideas….

When I felt I had my principles in a practical, usable condition, I started nagging the Giants about their using the techniques. At first, it was a very tough slog; in those days — this would be 1979 or so, well before Bill James’ Abstracts were more than a few hundred mimeographed copies — even the basic concepts were unknown, and, to old baseball men, they were very, very weird ideas….

In early 1981, as a demonstration, I gave the Giants an extensive analysis of their organization; taking a great risk, I included predictions for the coming season. I have that very document beside me now as I type…. I was, despite the relative crudeness of the methodology in those days, a winner: 440 runs projected, 427 scored; ERA projected, 3.35, ERA achieved, 3.28; errors projected, 103, actual errors committed, 102; and, bottom line, projected wins, 57, actual wins 56….

By this time, I had taken a big step up as a broadcaster, moving from that inconsequential little station to KQED, the NPR outlet in San Francisco, whence I would eventually be syndicated by satellite to 20 NPR affiliates across the country, about half in major markets.

As a first consequence of that move, a book editor who had heard the daily module while driving to work and thought it interesting approached me with a proposal that I write a book in the general style of my broadcasts. I began work in the fall of 1981, and the book, The Sinister First Baseman and Other Observations, was published in 1982, to excellent reviews and nearly no sales. Frank Robinson, then the Giants’ manager and a man I had come to know tolerably well, was kind enough to provide the Foreword for the book, which was a diverse collection of baseball essays….

At any rate, there I was, finally on contract with a major-league ball club, the Giants, but in a dubious situation…. I did persuade them to trade Gary Lavelle to the Blue Jays, but instead of names like John Cerutti and Jimmy Key, whom I had suggested, Haller got Jim Gott, who gave the Giants one good year as a starter and two forgettable years in the pen, plus two guys who never made the majors. But deals for Ken Oberkfell and especially for John Tudor, which I lobbied for intensely, didn’t get made (Haller called 20 minutes too late to get Oberkfell). I still remember then-Giants owner Bob Lurie, when I was actually admitted to the Brain Trust sanctum on trade-deadline day, saying around his cigar, “What’s all this about John Tudor?” (Tudor, then openly available, had a high AL ERA because he was a lefty in Fenway — this was well before “splits” and “park effects” were commonplace concepts — and I tried to explain all that, but no dice; Tudor went on to an NL ERA of 2.66 over seven seasons.)

When Robinson was fired by the Giants, I knew that owing to guilt by association (remember, Robby wrote the Foreword to my book) I would soon be gone, and so I was. My term as a consultant with the Giants was about half a season. In that brief term, I had had some input into a few decisions, but most of what I advocated, while listened to, was never acted on.

But having once crossed the major-league threshold, I was not about to sink back into oblivion. Across the Bay was an organization with a famously more forward-looking front office, with which I had already had contact. I asked, they answered, and so my career with the A’s began.

Modern analysis has shown a whole treasure chest of interesting and often useful performance metrics, but it remains so that the bedrock principle of classic analysis is simple: out-making controls scoring. What I call “classic” analysis is the principles that I presented to the Oakland Athletics in the early 1980s, which governed their thinking through 20 or so successful seasons, and which were dubbed “moneyball” by Michael Lewis in his book of that title. Because of that book, there has arisen a belief that whatever the A’s do is, by definition, “moneyball”; with the decline in their fortunes in recent years has come a corresponding belief that “moneyball” is in decline — dead, some would say [1] — because the A’s and moneyball are seen as essentially one thing.

That is simply wrong…. “Moneyball,” as the name says, is about seeking undervalued commodities [emphasis added]. In my day, what I regard as the crucial aspects of run-generation, notably on-base percentage, were seriously undervalued, so “moneyball” consisted in finding batters with those skills.

A team that today sustains one of the lowest on-base percentages in baseball, and actively acquires players with drastically low career on-base numbers, is very obviously practicing a different “moneyball” than that for which it became famed. Today’s A’s, it seems, see the undervalued commodities as “defense and athletic players drafted out of high school” (as a recent article on the organization put it). These are not your father’s A’s. What success their new tack will have remains to be seen (their present fortunes are a transition state); but “moneyball” as practiced today by the A’s seems no longer to have at its core the same analytic principles that then-GM Sandy Alderson and I worked with a quarter-century ago, and that I presented to Billy Beane in that now semi-famous paper [“Winning Baseball”]….

In 1994, Sandy promoted Billy Beane to assistant GM. At the same time, he asked me to prepare an overview of the general principles of analysis for Billy, so that Billy could get in one sitting an idea of the way the organization was looking at talent. In the end, I delivered a report titled “Winning Baseball,” with the subtitle: “An objective, numerical, analytic analysis of the principles and practices involved in the design of a winning baseball team.” The report was 66 pages long; I still grit my teeth whenever I remember that Michael Lewis described it as a “pamphlet [on page 58 of this edition of Moneyball].”…

My goal in that report, which I seem to have met, was to put the ideas — not the detailed principles, just the ideas — forward in simple, clear language and logical order, so that they would be comprehensible by and reasonable to a working front-office executive. Sandy Alderson didn’t need a document like this, then or at the outset, but he was a Harvard-trained attorney; I considered myself to be writing not just to Billy Beane but to any veteran baseball man (which, as it turned out, was just as well)….

Lewis not only demotes “Winning Baseball” to a pamphlet, but also demotes Walker to passing mention on three pages of Moneyball: 58, 62, and 63 (in the paperback edition linked above). Why would Lewis slight and distort Walker’s contributions to “moneyball”? Remember that Lewis is not a scientist, mathematician, or statistician. He is a journalist with a B.A. in art history who happened to work at Salomon Brothers for a few years. I have read his first book, Liar’s Poker. It is obviously the work of a young man with a grievance and a flair for dramatization. Moneyball is obviously the work of a somewhat older man who has honed his flair for dramatization. Do not mistake it for a rigorous analysis of the origins and effectiveness of “moneyball.”

Just how effective was “moneyball,” as it was practiced by the Oakland Athletics? There is evidence to suggest that it was quite effective. For example:

[Chart: team winning percentage vs. payroll index, 1988–2011, with a black regression line for all teams and a dark green regression line for the A’s]

Sources and notes: Team won-lost records are from Baseball-Reference.com. Estimates of team payrolls are from USA Today’s database of salaries for professional sports teams, which begins in 1988 for major-league baseball (here). The payroll index measures the ratio of each team’s payroll in a given year to the major-league average for the same year.

The more that a team spends on player salaries, the better the team’s record. But payroll accounts for only about 18 percent of the variation in the records of major-league teams during the period 1988-2011. Which means that other factors, taken together, largely determine a team’s record. Among those factors is “moneyball” — the ability to identify, obtain, effectively use, and retain players who are “underpriced” relative to their potential. But the contribution of “moneyball” cannot be teased out of the data because, for one thing, it would be impossible to quantify the extent to which a team actually practices “moneyball.” That said, it is evident that during 1988-2011 the A’s did better than the average team, by the measure of wins per dollar of payroll: Compare the dark green regression line, representing the A’s, with the black regression line, representing all teams.
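For readers who want to see the arithmetic, here is a minimal sketch in Python (with made-up numbers standing in for the USA Today and Baseball-Reference figures) of how the payroll index and the variance-explained statistic are computed:

```python
import numpy as np

# Hypothetical team-season records: (year, payroll in $millions, wins, losses).
# Real figures would come from USA Today's salary database and
# Baseball-Reference.com, as noted above.
records = [
    (1999, 25.0, 87, 75),
    (1999, 88.0, 98, 64),
    (1999, 55.0, 75, 87),
    (2000, 32.0, 91, 70),
    (2000, 93.0, 87, 74),
    (2000, 58.0, 69, 92),
]

years = np.array([r[0] for r in records])
payrolls = np.array([r[1] for r in records])
win_pct = np.array([r[2] / (r[2] + r[3]) for r in records])

# Payroll index: a team's payroll divided by the league average for that year.
payroll_index = np.array(
    [p / payrolls[years == y].mean() for y, p in zip(years, payrolls)]
)

# Ordinary least-squares fit of winning percentage against payroll index.
slope, intercept = np.polyfit(payroll_index, win_pct, 1)

# R-squared: the share of the variation in records that payroll explains.
r_squared = np.corrcoef(payroll_index, win_pct)[0, 1] ** 2
print(f"win_pct = {intercept:.3f} + {slope:.3f} * index; R^2 = {r_squared:.2f}")
```

Run against the actual 1988-2011 data, the same computation yields an R-squared of roughly 0.18 — the 18 percent figure cited above.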

That is all well and good, but the purpose of a baseball team is not to win a high number of games per dollar of payroll; it is to win — period. By that measure, the A’s of the Alderson-Beane “moneyball” era have been successful, at times, but not uniquely so:

[Chart: the Athletics’ won-lost record by season, from the franchise’s founding through the “moneyball” era]

Source: Derived from Baseball-Reference.com.

The sometimes brilliant record of the Athletics franchise during 1901-1950 is owed to one man: Cornelius McGillicuddy (1862-1956). And the often dismal record of the franchise during 1901-1950 is owed to one man: the same Cornelius McGillicuddy. True fans of baseball (and collectors of trivia) know Cornelius McGillicuddy as Connie Mack, or more commonly as Mr. Mack. The latter is an honorific bestowed on Mack because of his dignified mien and distinguished career in baseball: catcher from 1886 to 1896; manager of the Pittsburgh Pirates from 1894 to 1896; manager of the Philadelphia Athletics from 1901 to 1950; part owner and then sole owner of the Athletics from 1901 to 1954. (He is also an ancestor of two political figures who bear his real name and alias: Connie Mack III and Connie Mack IV.)

Mack’s long leadership and ownership of the A’s is important because it points to the reasons for the A’s successes and failures during the fifty years that he led the team from the bench. Here, from Wikipedia, is a story that is familiar to persons who know their baseball history:

[Mack] was widely praised in the newspapers for his intelligent and innovative managing, which earned him the nickname “the Tall Tactician”. He valued intelligence and “baseball smarts”, always looking for educated players. (He traded away Shoeless Joe Jackson despite his talent because of his bad attitude and unintelligent play.[9]) “Better than any other manager, Mack understood and promoted intelligence as an element of excellence.”[10] He wanted men who were self-directed, self-disciplined and self-motivated; his ideal player was Eddie Collins.[11]

“Mack looked for seven things in a young player: physical ability, intelligence, courage, disposition, will power, general alertness and personal habits.”[12]

He also looked for players with quiet and disciplined personal lives, having seen many players destroy themselves and their teams through heavy drinking in his playing days. Mack himself never drank; before the 1910 World Series he asked all his players to “take the pledge” not to drink during the Series. When Topsy Hartsel told Mack he needed a drink the night before the final game, Mack told him to do what he thought best, but in these circumstances “if it was me, I’d die before I took a drink.”[13]

In any event, his managerial style was not tyrannical but easygoing.[14] He never imposed curfews or bed checks, and made the best of what he had; Rube Waddell was the best pitcher and biggest gate attraction of his first decade as A’s manager, so he put up with his drinking and general unreliability for years until it began to bring the team down and the other players asked Mack to get rid of him.[15]

Mack’s strength as a manager was finding the best players, teaching them well and letting them play. “He did not believe that baseball revolved around managerial strategy.”[10] He was “one of the first managers to work on repositioning his fielders” during the game, often directing the outfielders to move left or right, play shallow or deep, by waving his rolled-up scorecard from the bench.[12] After he became well known for doing this, he often passed his instructions to the fielders by way of other players, and simply waved his scorecard as a feint.[16]

*   *   *

Mack saw baseball as a business, and recognized that economic necessity drove the game. He explained to his cousin, Art Dempsey, that “The best thing for a team financially is to be in the running and finish second. If you win, the players all expect raises.” This was one reason he was constantly collecting players, signing almost anyone to a ten-day contract to assess his talent; he was looking ahead to future seasons when his veterans would either retire or hold out for bigger salaries than Mack could give them.

Unlike most baseball owners, Mack had almost no income apart from the A’s, so he was often in financial difficulties. Money problems — the escalation of his best players’ salaries (due both to their success and to competition from the new, well-financed Federal League), combined with a steep drop in attendance due to World War I — led to the gradual dispersal of his second championship team, the 1910–1914 team, who [sic] he sold, traded, or released over the years 1915–1917. The war hurt the team badly, leaving Mack without the resources to sign valuable players….

All told, the A’s finished dead last in the AL seven years in a row from 1915 to 1921, and would not reach .500 again until 1926. The rebuilt team won back-to-back championships in 1929–1930 over the Cubs and Cardinals, and then lost a rematch with the latter in 1931. As it turned out, these were the last WS titles and pennants the Athletics would win in Philadelphia or for another four decades.

With the onset of the Great Depression, Mack struggled financially again, and was forced to sell the best players from his second great championship team, such as Lefty Grove and Jimmie Foxx, to stay in business. Although Mack wanted to rebuild again and win more championships, he was never able to do so owing to a lack of funds.

Had an earlier Michael Lewis written Moneyball in the 1950s, as a retrospective on Mack’s career as a manager-owner, that Lewis would have said (correctly) that the A’s successes and failures were directly related to (a) the amount of money spent on the team’s payroll, (b) Connie Mack’s character-based criteria for selecting players, and (c) his particular approach to managing players. That is quite a different story than the one conveyed by the Moneyball written by the real Lewis.

Which version of Moneyball is correct? No one can say for sure. But the powerful evidence of Connie Mack’s long tenure suggests that it takes a combination of the two versions of Moneyball to be truly successful, that is, to post a winning record year after year. It seems that Lewis (inadvertently) jumped to a conclusion about what makes for a successful baseball team — probably because he was struck by the A’s then-recent success and did not look to the A’s history.

In any event, success through luck is not the moral of Moneyball; the moral is success through deliberate effort. But Michael Lewis ignored the moral of his own “masterwork” when he stood before an audience of Princeton graduates and told them that they are merely (or mainly) lucky. How does one graduate from Princeton merely (or mainly) by being lucky? Does it not require the application of one’s genetic talents? Did not most of the graduates of Princeton arrive there, in the first place, because they had applied their genetic talents well during their years in high school or prep school (and even before that)? Is one’s genetic inheritance merely a matter of luck, or is it the somewhat predictable result of the mating of two persons who were not thrown together randomly, but who had a lot in common — including (most likely) high intelligence?

Just as the cookie experiment invoked by Lewis is a load of pseudoscientific hogwash, the left-wing habit of finding luck at the bottom of every achievement is a load of politically correct hogwash. Worse, it is an excuse for punishing success.

Lewis’s peroration on luck is just a variation on a common left-wing theme: Success is merely a matter of luck, so it is the state’s right and duty to redistribute the spoils of luck.

Related posts:
Moral Luck
The Residue of Choice
Can Money Buy Excellence in Baseball?
Inventing “Liberalism”
Randomness Is Over-Rated
Fooled by Non-Randomness
Accountants of the Soul
Rawls Meets Bentham
Social Justice
Positive Liberty vs. Liberty
More Social Justice
Luck-Egalitarianism and Moral Luck
Nature Is Unfair
Elizabeth Warren Is All Wet
Luck and Baseball, One More Time
The Candle Problem: Balderdash Masquerading as Science
More about Luck and Baseball
Barack Channels Princess SummerFall WinterSpring
Obama’s Big Lie

The Candle Problem: Balderdash Masquerading as Science

For a complete treatment of the Candle Problem and several other cases of balderdash masquerading as science, go here.

In summary:

The Candle Problem is an interesting experiment, and probably valid with respect to the performance of specific tasks against tight deadlines. I think the results apply whether the stakes are money or any other kind of prize. The experiment illustrates the “choke” factor, and nothing more profound than that.

I question whether the experiment applies to the usual kind of incentive (e.g., a commission or bonus), where the “incentee” has ample time (months, years) for reflection and research that will enable him to improve his performance and attain a bigger commission or bonus (which usually isn’t an all-or-nothing arrangement).

There’s also the dissimilarity between the Candle Problem — which involves more-or-less randomly chosen subjects, working against an artificial deadline — and actual creative thinking, which usually involves persons who are experts (even if the expertise is as mundane as ditch-digging), working against looser deadlines or none at all.

Demystifying Science

“Science” is an unnecessarily daunting concept to the uninitiated, which is to say, almost everyone. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — often are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

WHAT IS SCIENCE?

Science is knowledge, but not all knowledge is science. A scientific body of knowledge is systematic; that is, the granular facts or phenomena which comprise the body of knowledge are connected in patterned ways. Moreover, the facts or phenomena represent reality; they are not mere concepts, which may be tools of science but are not science. Beyond that, science — unless it is a purely descriptive body of knowledge — is predictive about the characteristics of as-yet unobserved phenomena. These may be things that exist but have not yet been measured (in terms of the applicable science), or things that are yet to be (as in the effects of a new drug on a disease).

Above all, science is not a matter of “consensus” — AGW zealots to the contrary notwithstanding. Science is a matter of rigorously testing theories against facts, and doing it openly. Imagine the state of physics today if Galileo had been unable to question Aristotle’s theory of gravitation, if Newton had been unable to extend and generalize Galileo’s work, and if Einstein had deferred to Newton. The effort to “deny” a prevailing or popular theory is as old as science. There have been “deniers” in the thousands, each of them responsible for advancing some aspect of knowledge. Not all “deniers” have been as prominent as Einstein (consider Dan Shechtman, for example), but each is potentially as important as Einstein.

It is hard for scientists to rise above their human impulses. Einstein, for example, so much wanted quantum physics to be deterministic rather than probabilistic that he said “God does not play dice with the universe.” To which Niels Bohr replied, “Einstein, stop telling God what to do.” But the human urge to be “right” or to be on the “right side” of an issue does not excuse anti-scientific behavior, such as that of so-called scientists who have become invested in AGW.

There are many so-called scientists who subscribe to AGW without having done relevant research. Why? Because AGW is the “in” thing, and they do not wish to be left out. This is the stuff of which “scientific consensus” is made. If you would not buy a make of automobile just because it is endorsed by a celebrity who knows nothing about automotive engineering, why would you “buy” AGW just because it is endorsed by a herd of so-called scientists who have never done research that bears directly on it?

There are two lessons to take from this. The first is that no theory is ever proven. (A theory may, if it is well and openly tested, be a useful guide to action in certain rigorous disciplines, such as engineering and medicine.) Any theory — to be a truly scientific one — must be capable of being tested, even by (and especially by) others who are skeptical of the theory. Those others must be able to verify the facts upon which the theory is predicated, and to replicate the tests and calculations that seem to validate the theory. So-called scientists who restrict access to their data and methods are properly thought of as cultists with a political agenda, not scientists. Their theories are not to be believed — and certainly are not to be taken as guides to action.

The second lesson is that scientists are human and fallible. It is in the best tradition of science to distrust their claims and to dismiss their non-scientific utterances.

THE ROLE OF MATHEMATICS AND STATISTICS IN SCIENCE

Mathematics and statistics are not sciences, despite their vast and organized complexity. They offer ways of thinking about and expressing knowledge, but they are not knowledge. They are languages that enable scientists to converse with each other and outsiders who are fluent in the same languages.

Expressing a theory in mathematical terms may lend the theory a scientific aura. But a theory couched in mathematics (or its verbal equivalent) is not a scientific one unless (a) it can be tested against observable facts by rigorous statistical methods, (b) it is found, consistently, to accord with those facts, and (c) the introduction of new facts does not require adjustment or outright rejection of the theory. If the introduction of new facts requires the adjustment of a theory, then it is a new theory, which must be tested against new facts, and so on.

This “inconvenient fact” — that an adjusted theory is a new theory — is ignored routinely, especially in the application of regression analysis to a data set for the purpose of quantifying relationships among variables. If a “model” thus derived does a poor job when applied to data outside the original set, it is not an uncommon practice to combine the original and new data and derive a new “model” based on the combined set. This practice (sometimes called data-mining) does not yield scientific theories with predictive power; it yields information (of dubious value) about the data employed in the regression analysis. As a critic of regression models once put it: Regression is a way of predicting the past with great certainty.
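To make the point concrete, here is a minimal sketch in Python (synthetic data throughout, invented for illustration) of the practice just described: a “model” that fits its original data closely, stumbles on new data, and is then quietly replaced by a refit to the combined set:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Draw n observations from a simple 'true' process plus noise."""
    x = rng.uniform(0, 10, n)
    y = 2.0 * x + rng.normal(0, 5, n)
    return x, y

x_old, y_old = sample(20)   # the original data set
x_new, y_new = sample(20)   # facts the model has never seen

# A deliberately overfitted ninth-degree polynomial "model" derived from
# the original set: it predicts the past with great certainty.
coeffs_old = np.polyfit(x_old, y_old, 9)

def mse(coeffs, x, y):
    """Mean squared error of a polynomial fit on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("error on original data:", mse(coeffs_old, x_old, y_old))  # small
print("error on new data:     ", mse(coeffs_old, x_new, y_new))  # much larger

# Pooling old and new data and refitting does not rescue the model; it
# produces a different model, itself untested against facts it hasn't seen.
coeffs_new = np.polyfit(np.concatenate([x_old, x_new]),
                        np.concatenate([y_old, y_new]), 9)
print("coefficients unchanged?", bool(np.allclose(coeffs_old, coeffs_new)))
```

The refitted “model” has different coefficients, which is to say that it is a new theory awaiting its own test against new facts, not a vindication of the old one.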

A science may be descriptive rather than mathematical. In a descriptive science (e.g., plant taxonomy), particular phenomena sometimes are described numerically (e.g., the number of leaves on the stem of a species), but the relations among various phenomena are not reducible to mathematics. Nevertheless, a predominantly descriptive discipline will be scientific if the phenomena within its compass are connected in patterned ways.

NON-SCIENCE, SCIENCE, AND PSEUDO-SCIENCE

Non-scientific disciplines can be useful, whereas some purportedly scientific disciplines verge on charlatanism. Thus, for example:

  • History, by my reckoning, is not a science. But a knowledge of history is valuable, nevertheless, for the insights it offers into the influence of human nature on the outcomes of economic and political processes. I call the lessons of history “insights,” not scientific relationships, because history is influenced by so many factors that it does not allow for the rigorous testing of hypotheses.
  • Physics is a science in most of its sub-disciplines, but there are some (e.g., cosmology and certain interpretations of quantum mechanics) where it descends into the realm of speculation. Informed, fascinating speculation to be sure, but speculation all the same. It avoids being pseudo-scientific only because it might give rise to testable hypotheses.
  • Economics is a science only to the extent that it yields valid statistical insights about specific microeconomic issues (e.g., the effects of laws and regulations on the prices and outputs of goods and services). The postulates of macroeconomics, except to the extent that they are truisms, have no demonstrable validity. (See, for example, my treatment of the Keynesian multiplier.) Macroeconomics is a pseudo-science.

CONCLUSION

There is no such thing as “science,” writ large; that is, no one may appeal, legitimately, to “science” in the abstract. A particular discipline may be a science, but it is a science only to the extent that it comprises a factual body of knowledge and testable theories. Further, its data and methods must be open to verification and testing. And only a particular theory — one that has been put to the proper tests — can be called a scientific one.

For the reasons adduced in this post, scientists who claim to “know” that there is no God are not practicing science when they make that claim. They are practicing the religion that is known as atheism. The existence or non-existence of God is beyond testing, at least by any means yet known to man.

Related posts:
About Economic Forecasting
Is Economics a Science?
Economics as Science
Hemibel Thinking
Climatology
Physics Envy
Global Warming: Realities and Benefits
Words of Caution for the Cautious
Scientists in a Snit
Another Blow to Climatology?
A Telling Truth
Proof That “Smart” Economists Can Be Stupid
Bad News for Politically Correct Science
Another Blow to Chicken-Little Science
Same Old Story, Same Old Song and Dance
Atheism, Religion, and Science
The Limits of Science
Three Perspectives on Life: A Parable
Beware of Irrational Atheism
The Hockey Stick Is Broken
Talk about Brainwaves!
The Creation Model
The Thing about Science
Science in Politics, Politics in Science
Global Warming and Life
Evolution and Religion
Speaking of Religion…
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Global Warming and the Liberal Agenda
Science, Logic, and God
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
This Is Objectivism?
Objectivism: Tautologies in Search of Reality
Science’s Anti-Scientific Bent
Science, Axioms, and Economics
Global Warming in Perspective
Mathematical Economics
Economics: The Dismal (Non) Science
The Big Bang and Atheism
More Bad News for Global Warming Zealots
The Universe . . . Four Possibilities
Einstein, Science, and God
Atheism, Religion, and Science Redux
Warming, Anyone?
“Warmism”: The Myth of Anthropogenic Global Warming
Re: Climate “Science”
More Evidence against Anthropogenic Global Warming
Yet More Evidence against Anthropogenic Global Warming
A Non-Believer Defends Religion
Evolution as God?
Modeling Is Not Science
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
Beware the Rare Event
Landsburg Is Half-Right
Physics Envy
The Unreality of Objectivism
What Is Truth?
Evolution, Human Nature, and “Natural Rights”
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
We, the Children of the Enlightenment
Probability, Existence, and Creation
The Atheism of the Gaps
Probability, Existence, and Creation: A Footnote