Not-So-Random Thoughts (XXIII)

CONTENTS

Government and Economic Growth

Reflections on Defense Economics

Abortion: How Much Jail Time?

Illegal Immigration and the Welfare State

Prosperity Isn’t Everything

Google et al. As State Actors

The Transgender Trap


GOVERNMENT AND ECONOMIC GROWTH

Guy Sorman reviews Alan Greenspan and Adrian Wooldridge’s Capitalism in America: A History. Sorman notes that

the golden days of American capitalism are over—or so the authors opine. That conclusion may seem surprising, as the U.S. economy appears to be flourishing. But the current GDP growth rate of roughly 3 percent, after deducting a 1 percent demographic increase, is rather modest, the authors maintain, compared with the historic performance of the postwar years, when the economy grew at an annual average of 5 percent. Moreover, unemployment appears low only because a significant portion of the population is no longer looking for work.

Greenspan and Wooldridge reject the conventional wisdom on mature economies growing more slowly. They blame relatively slow growth in the U.S. on the increase in entitlement spending and the expansion of the welfare state—a classic free-market argument.

They are right to reject the conventional wisdom.  Slow growth is due to the expansion of government spending (including entitlements) and the regulatory burden. See “The Rahn Curve in Action” for details, including an equation that accurately explains the declining rate of growth since the end of World War II.


REFLECTIONS ON DEFENSE ECONOMICS

Arnold Kling opines about defense economics. Cost-effectiveness analysis was the big thing in the 1960s. Analysts applied non-empirical models of warfare and cost estimates that were often WAGs (wild-ass guesses) to the comparison of competing weapon systems. The results were about as accurate as global climate models, which is to say wildly inaccurate. (See “Modeling Is Not Science“.) And the results were worthless unless they comported with the prejudices of the “whiz kids” who worked for Robert Strange McNamara. (See “The McNamara Legacy: A Personal Perspective“.)


ABORTION: HOW MUCH JAIL TIME?

Georgi Boorman says “Yes, It Would Be Just to Punish Women for Aborting Their Babies“. But, as she says,

mainstream pro-lifers vigorously resist this argument. At the same time they insist that “the unborn child is a human being, worthy of legal protection,” as Sarah St. Onge wrote in these pages recently, they loudly protest when so-called “fringe” pro-lifers state the obvious: of course women who willfully hire abortionists to kill their children should be prosecuted.

Anna Quindlen addressed the same issue more than eleven years ago, in Newsweek:

Buried among prairie dogs and amateur animation shorts on YouTube is a curious little mini-documentary shot in front of an abortion clinic in Libertyville, Ill. The man behind the camera is asking demonstrators who want abortion criminalized what the penalty should be for a woman who has one nonetheless. You have rarely seen people look more gobsmacked. It’s as though the guy has asked them to solve quadratic equations. Here are a range of responses: “I’ve never really thought about it.” “I don’t have an answer for that.” “I don’t know.” “Just pray for them.”

You have to hand it to the questioner; he struggles manfully. “Usually when things are illegal there’s a penalty attached,” he explains patiently. But he can’t get a single person to be decisive about the crux of a matter they have been approaching with absolute certainty.

… If the Supreme Court decides abortion is not protected by a constitutional guarantee of privacy, the issue will revert to the states. If it goes to the states, some, perhaps many, will ban abortion. If abortion is made a crime, then surely the woman who has one is a criminal. But, boy, do the doctrinaire suddenly turn squirrelly at the prospect of throwing women in jail.

“They never connect the dots,” says Jill June, president of Planned Parenthood of Greater Iowa.

I addressed Quindlen, and queasy pro-lifers, eleven years ago:

The aim of Quindlen’s column is to scorn the idea of jail time as punishment for a woman who procures an illegal abortion. In fact, Quindlen’s “logic” reminds me of the classic definition of chutzpah: “that quality enshrined in a man who, having killed his mother and father, throws himself on the mercy of the court because he is an orphan.” The chutzpah, in this case, belongs to Quindlen (and others of her ilk) who believe that a woman should not face punishment for an abortion because she has just “lost” a baby.

Balderdash! If a woman illegally aborts her child, why shouldn’t she be punished by a jail term (at least)? She would be punished by jail (or confinement in a psychiatric prison) if she were to kill her new-born infant, her toddler, her ten-year-old, and so on. What’s the difference between an abortion and murder? None. (Read this, then follow the links in this post.)

Quindlen (who predictably opposes capital punishment) asks “How much jail time?” in a cynical effort to shore up the anti-life front. It ain’t gonna work, lady.

See also “Abortion Q & A“.


ILLEGAL IMMIGRATION AND THE WELFARE STATE

Add this to what I say in “The High Cost of Untrammeled Immigration“:

In a new analysis of the latest numbers [by the Center for Immigration Studies], from 2014, 63 percent of non-citizens are using a welfare program, and it grows to 70 percent for those here 10 years or more, confirming another concern that once immigrants tap into welfare, they don’t get off it.

See also “Immigration and Crime” and “Immigration and Intelligence“.

Milton Friedman, thinking like an economist, favored open borders only if the welfare state were abolished. But there’s more to a country than GDP. (See “Genetic Kinship and Society“.) Which leads me to…


PROSPERITY ISN’T EVERYTHING

Patrick T. Brown writes about Oren Cass’s The Once and Future Worker:

Responding to what he cutely calls “economic piety”—the belief that GDP per capita defines a country’s well-being, and the role of society is to ensure the economic “pie” grows sufficiently to allow each individual to consume satisfactorily—Cass offers a competing hypothesis….

[A]s Cass argues, if well-being is measured by considerations in addition to economic ones, a GDP-based measurement of how our society is doing might not only be insufficient now, but also more costly over the long term. The definition of success in our public policy (and cultural) efforts should certainly include some economic measures, but not at the expense of the health of community and family life.

Consider this line, striking in the way it subverts the dominant paradigm: “If, historically, two-parent families could support themselves with only one parent working outside the home, then something is wrong with ‘growth’ that imposes a de facto need for two incomes.”…

People need to feel needed. The hollowness at the heart of American—Western?—society can’t be satiated with shinier toys and tastier brunches. An overemphasis on production could, of course, be as fatal as an overemphasis on consumption, and certainly the realm of the meritocrats gives enough cause to worry on this score. But as a matter of policy—as a means of not just sustaining our fellow citizen in times of want but of helping him feel needed and essential in his family and community life—Cass’s redefinition of “efficiency” to include not just its economic sense but some measure of social stability and human flourishing is welcome. Frankly, it’s past due as a tenet of mainstream conservatism.

Cass goes astray by offering governmental “solutions”; for example:

Cass suggests replacing the current Earned Income Tax Credit (along with some related safety net programs) with a direct wage subsidy, which would be paid to workers by the government to “top off” their current wage. In lieu of a minimum wage, the government would set a “target wage” of, say, $12 an hour. If an employee received $9 an hour from his employer, the government would step up to fill in that $3 an hour gap.

That’s no solution at all, inasmuch as the cost of a subsidy must be borne by someone. That someone, ultimately, is the low-wage worker, whose wage is low because he is less productive than he otherwise would be. Why is he less productive? Because the high-income person who is taxed to pay for the subsidy has that much less money to invest in business capital that raises productivity.

The real problem is that America — and the West, generally — has turned into a spiritual and cultural wasteland. See, for example, “A Century of Progress?“, “Prosperity Isn’t Everything“, and “James Burnham’s Misplaced Optimism“.


GOOGLE ET AL. AS STATE ACTORS

In “Preemptive (Cold) Civil War” (03/18/18) I recommended treating Google et al. as state actors to enforce the free-speech guarantee of the First Amendment against them:

The Constitution is the supreme law of the land. (Article VI.)

Amendment I to the Constitution says that “Congress shall make no law … abridging the freedom of speech”.

Major entities in the telecommunications, news, entertainment, and education industries have exerted their power to suppress speech because of its content…. The collective actions of these entities — many of them government-licensed and government-funded — effectively constitute a governmental violation of the Constitution’s guarantee of freedom of speech. (See Smith v. Allwright, 321 U.S. 649 (1944), and Marsh v. Alabama, 326 U.S. 501 (1946).)

I recommended presidential action. But someone has moved the issue to the courts. Tucker Higgins has the story:

The Supreme Court has agreed to hear a case that could determine whether users can challenge social media companies on free speech grounds.

The case, Manhattan Community Access Corp. v. Halleck, No. 17-702, centers on whether a private operator of a public access television network is considered a state actor, which can be sued for First Amendment violations.

The case could have broader implications for social media and other media outlets. In particular, a broad ruling from the high court could open the country’s largest technology companies up to First Amendment lawsuits.

That could shape the ability of companies like Facebook, Twitter and Alphabet’s Google to control the content on their platforms as lawmakers clamor for more regulation and activists on the left and right spar over issues related to censorship and harassment.

The Supreme Court accepted the case on [October 12]….

[T]he court of Chief Justice John Roberts has shown a distinct preference for speech cases that concern conservative ideology, according to an empirical analysis conducted by researchers affiliated with Washington University in St. Louis and the University of Michigan.

The analysis found that the justices on the court appointed by Republican presidents sided with conservative speech nearly 70 percent of the time.

“More than any other modern Court, the Roberts Court has trained its sights on speech promoting conservative values,” the authors found.

Here’s hoping.


THE TRANSGENDER TRAP

Babette Francis and John Ballantine tell it like it is:

Dr. Paul McHugh, the University Distinguished Service Professor of Psychiatry at Johns Hopkins Medical School and the former psychiatrist-in-chief at Johns Hopkins Hospital, explains that “‘sex change’ is biologically impossible.” People who undergo sex-reassignment surgery do not change from men to women or vice versa.

In reality, gender dysphoria is more often than not a passing phase in the lives of certain children. The American Psychological Association’s Handbook of Sexuality and Psychology has revealed that, before the widespread promotion of transgender affirmation, 75 to 95 percent of pre-pubertal children who were uncomfortable or distressed with their biological sex eventually outgrew that distress. Dr. McHugh says: “At Johns Hopkins, after pioneering sex-change surgery, we demonstrated that the practice brought no important benefits. As a result, we stopped offering that form of treatment in the 1970s.”…

However, in today’s climate of political correctness, it is more than a health professional’s career is worth to offer a gender-confused patient an alternative to pursuing sex-reassignment. In some states, as Dr. McHugh has noted, “a doctor who would look into the psychological history of a transgendered boy or girl in search of a resolvable conflict could lose his or her license to practice medicine.”

In the space of a few years, these sorts of severe legal prohibitions—usually known as “anti-reparative” and “anti-conversion” laws—have spread to many more jurisdictions, not only across the United States, but also in Canada, Britain, and Australia. Transgender ideology, it appears, brooks no opposition from any quarter….

… Brown University succumbed to political pressure when it cancelled authorization of a news story of a recent study by one of its assistant professors of public health, Lisa Littman, on “rapid-onset gender dysphoria.” Science Daily reported:

Among the noteworthy patterns Littman found in the survey data: twenty-one percent of parents reported their child had one or more friends who become transgender-identified at around the same time; twenty percent reported an increase in their child’s social media use around the same time as experiencing gender dysphoria symptoms; and forty-five percent reported both.

A former dean of Harvard Medical School, Professor Jeffrey S. Flier, MD, defended Dr. Littman’s freedom to publish her research and criticized Brown University for censoring it. He said:

Increasingly, research on politically charged topics is subject to indiscriminate attack on social media, which in turn can pressure school administrators to subvert established norms regarding the protection of free academic inquiry. What’s needed is a campaign to mobilize the academic community to protect our ability to conduct and communicate such research, whether or not the methods and conclusions provoke controversy or even outrage.

The examples described above of the ongoing intimidation—sometimes, actual sackings—of doctors and academics who question transgender dogma represent only a small part of a very sinister assault on the independence of the medical profession from political interference. Dr. Whitehall recently reflected: “In fifty years of medicine, I have not witnessed such reluctance to express an opinion among my colleagues.”

For more about this outrage see “The Transgender Fad and Its Consequences“.

Macroeconomic Modeling Revisited

Modeling is not science. Take Professor Ray Fair, for example. He teaches macroeconomic theory, econometrics, and macroeconometric models at Yale University. He has been plying his trade since 1968, first at Princeton, then at M.I.T., and (since 1974) at Yale. Those are big-name schools, so I assume that Prof. Fair is a big name in his field.

Well, since 1983 Professor Fair has been forecasting changes in real GDP four quarters ahead. He has made dozens of forecasts based on a model that he has tweaked many times over the years. The current model can be found here. His forecasting track record is here.

How has he done? Here’s how:

1. The mean absolute error of his forecasts is 70 percent; that is, on average his predictions vary by 70 percent from actual rates of growth. (A sketch of this calculation follows the list.)

2. The median absolute error of his forecasts is 33 percent.

3. His forecasts are systematically biased: too high when real four-quarter GDP growth is less than 3 percent; too low when real four-quarter GDP growth is greater than 3 percent. (See figure 1.)

4. His forecasts have grown generally worse — not better — with time. (See figure 2.)

5. In sum, the overall predictive value of the model is weak. (See figures 3 and 4.)
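For readers who want to reproduce the error statistics in items 1 and 2, here is a minimal Python sketch. The predicted and actual values shown are hypothetical placeholders; substitute the four-quarter growth rates from Table 4 of Fair’s forecasting record.

```python
# Minimal sketch: absolute error of growth forecasts, expressed as a
# percentage of the actual growth rate. The values below are hypothetical
# placeholders, not Fair's data.
import statistics

predicted = [3.1, 2.4, 4.0, 1.8]   # forecast four-quarter growth rates (percent)
actual    = [2.0, 2.6, 2.5, 3.3]   # actual four-quarter growth rates (percent)

abs_pct_errors = [abs(p - a) / abs(a) * 100 for p, a in zip(predicted, actual)]

print("Mean absolute error (%):  ", round(statistics.mean(abs_pct_errors), 1))
print("Median absolute error (%):", round(statistics.median(abs_pct_errors), 1))
```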

FIGURE 1

Figures 1-4 are derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Fair’s website.

FIGURE 2

FIGURE 3

FIGURE 4

Given the foregoing, you might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

Could I do better? Well, I’ve done better, with the simple model that I devised to estimate the Rahn Curve. It’s described in “The Rahn Curve in Action“, which is part III of “Economic Growth Since World War II“.

The theory behind the Rahn Curve is simple — but not simplistic. A relatively small government with powers limited mainly to the protection of citizens and their property is worth more than its cost to taxpayers because it fosters productive economic activity (not to mention liberty). But additional government spending hinders productive activity in many ways, which are discussed in Daniel Mitchell’s paper, “The Impact of Government Spending on Economic Growth.” (I would add to Mitchell’s list the burden of regulatory activity, which grows even when government does not.)

What does the Rahn Curve look like? Mitchell estimates this relationship between government spending and economic growth:

[Figure: Mitchell’s estimate of the Rahn Curve]

The curve is dashed rather than solid at low values of government spending because it has been decades since the governments of developed nations have spent as little as 20 percent of GDP. But as Mitchell and others note, the combined spending of governments in the U.S. was 10 percent of GDP (and less) until the eve of the Great Depression. And it was in the low-spending, laissez-faire era from the end of the Civil War to the early 1900s that the U.S. enjoyed its highest sustained rate of economic growth.

Elsewhere, I estimated the Rahn curve that spans most of the history of the United States. I came up with this relationship (terms modified for simplicity, with a slight cosmetic change in terminology):

Yg = 0.054 - 0.066F

To be precise, it’s the annualized rate of growth over the most recent 10-year span (Yg), as a function of F (fraction of GDP spent by governments at all levels) in the preceding 10 years. The relationship is lagged because it takes time for government spending (and related regulatory activities) to wreak their counterproductive effects on economic activity. Also, I include transfer payments (e.g., Social Security) in my measure of F because there’s no essential difference between transfer payments and many other kinds of government spending. They all take money from those who produce and give it to those who don’t (e.g., government employees engaged in paper-shuffling, unproductive social-engineering schemes, and counterproductive regulatory activities).

When F is greater than the amount needed for national defense and domestic justice — no more than 0.1 (10 percent of GDP) — it discourages productive, growth-producing, job-creating activity. And because government spending weighs most heavily on taxpayers with above-average incomes, higher rates of F also discourage saving, which finances growth-producing investments in new businesses, business expansion, and capital (i.e., new and more productive business assets, both physical and intellectual).
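To make the estimated relationship concrete, here is a minimal Python sketch that simply evaluates the equation above for several values of F. It illustrates the estimated coefficients; it is not a reproduction of the underlying regression.

```python
# Illustrative evaluation of the simple Rahn-curve estimate:
#   Yg = 0.054 - 0.066 * F
# F  = fraction of GDP spent by all levels of government in the preceding 10 years
# Yg = annualized real growth rate over the most recent 10-year span

def predicted_growth(f):
    """Predicted 10-year annualized real GDP growth for spending fraction f."""
    return 0.054 - 0.066 * f

for f in (0.10, 0.20, 0.30, 0.40, 0.50):
    print(f"F = {f:.2f}  ->  predicted growth = {predicted_growth(f):.1%}")
```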

I’ve taken a closer look at the post-World War II numbers because of the marked decline in the rate of growth since the end of the war (Figure 2).

Here’s the revised result, which accounts for more variables:

Yg = 0.0275 - 0.340F + 0.0773A - 0.000336R - 0.131P

Where,

Yg = real rate of GDP growth in a 10-year span (annualized)

F = fraction of GDP spent by governments at all levels during the preceding 10 years

A = the constant-dollar value of private nonresidential assets (business assets) as a fraction of GDP, averaged over the preceding 10 years

R = average number of Federal Register pages, in thousands, for the preceding 10-year period

P = growth in the CPI-U during the preceding 10 years (annualized).

The r-squared of the equation is 0.74, and the p-value of the F-statistic is 1.60E-13. The p-values of the intercept and coefficients are 0.093, 3.98E-08, 4.83E-09, 6.05E-07, and 0.0071. The standard error of the estimate is 0.0049, that is, about half a percentage point.
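For readers curious about the mechanics, here is a minimal sketch of how an equation of this form can be estimated by ordinary least squares in Python. The input file and its column names are hypothetical stand-ins; the actual series would be built as trailing 10-year values, as described above.

```python
# Sketch: estimating an equation of the form
#   Yg = b0 + b1*F + b2*A + b3*R + b4*P
# by ordinary least squares. "rahn_postwar.csv" and its column names are
# hypothetical; the series are trailing 10-year values as described in the text.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("rahn_postwar.csv")           # one row per terminal year of a 10-year span
X = sm.add_constant(df[["F", "A", "R", "P"]])  # intercept plus the four regressors
y = df["Yg"]

model = sm.OLS(y, X).fit()
print(model.summary())                         # r-squared, F-statistic, coefficient p-values
print("Standard error of estimate:", model.mse_resid ** 0.5)
```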

Here’s how the equation stacks up against actual 10-year rates of real GDP growth:

What does the new equation portend for the next 10 years? Based on the values of F, A, R, and P for 2008-2017, the real rate of growth for the next 10 years will be about 2.0 percent.

There are signs of hope, however. The year-over-year rates of real growth in the four most recent quarters (2017Q4 – 2018Q3) were 2.4, 2.6, 2.9, and 3.0 percent, as against the dismal rates of 1.4, 1.2, 1.5, and 1.8 percent for the four quarters of 2016 — Obama’s final year in office. A possible explanation is the election of Donald Trump and the well-founded belief that his tax and regulatory policies would be more business-friendly.

I took the data set that I used to estimate the new equation and made a series of out-of-sample estimates of growth over the next 10 years. I began with the data for 1946-1964 to estimate the growth for 1965-1974. I continued by taking the data for 1946-1965 to estimate the growth for 1966-1975, and so on, until I had estimated the growth for every 10-year period from 1965-1974 through 2008-2017. In other words, like Professor Fair, I updated my model to reflect new data, and I estimated the rate of economic growth in the future. How did I do? Here’s a first look:
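Before that first look, here is a minimal Python sketch of the expanding-window procedure just described; the input file and column names are hypothetical stand-ins. The results appear in figure 5, below.

```python
# Sketch of the expanding-window, out-of-sample exercise described above:
# re-fit the equation on data through year t, then predict the 10-year growth
# rate for the span ending in year t + 10. (The predictors for that span are
# trailing values from the preceding decade, so they are known at the cutoff.)
# "rahn_postwar.csv" and its column names are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("rahn_postwar.csv")    # one row per terminal year of a 10-year span
predictors = ["F", "A", "R", "P"]

results = []
for cutoff in range(1964, 2008):        # estimation windows 1946-1964, 1946-1965, ...
    train = df[df["year"] <= cutoff]
    test = df[df["year"] == cutoff + 10]
    if test.empty:
        continue
    fit = sm.OLS(train["Yg"], sm.add_constant(train[predictors])).fit()
    pred = fit.predict(sm.add_constant(test[predictors], has_constant="add"))
    results.append((cutoff + 10, float(np.asarray(pred)[0]), float(test["Yg"].iloc[0])))

for year, predicted, actual in results:
    print(year, f"predicted {predicted:.2%}", f"actual {actual:.2%}")
```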

FIGURE 5

For ease of comparison, I made the scale of the vertical axis of figure 5 the same as the scale of the vertical axis of figure 2. It’s obvious that my estimate of the Rahn Curve does a much better job of predicting the real rate of GDP growth than does Fair’s model.

Not only that, but my model is less biased:

FIGURE 6

The systematic bias reflected in figure 6 is far weaker than the systematic bias in Fair’s estimates (figure 1).

Finally, unlike Fair’s model (figure 4), my model captures the downward trend in the rate of real growth:

FIGURE 7

The moral of the story: It’s futile to build complex models of the economy. They can’t begin to capture the economy’s real complexity, and they’re likely to obscure the important variables — the ones that will determine the future course of economic growth.

A final note: Elsewhere (e.g., here) I’ve disparaged economic aggregates, of which GDP is the apotheosis. And yet I’ve built this post around estimates of GDP. Am I contradicting myself? Not really. There’s a rough consistency in measures of GDP across time, and I’m not pretending that GDP represents anything but an estimate of the monetary value of those products and services to which monetary values can be ascribed.

As a practical matter, then, if you want to know the likely future direction and value of GDP, stick with simple estimation techniques like the one I’ve demonstrated here. Don’t get bogged down in the inconclusive minutiae of a model like Professor Fair’s.

Wildfires and “Climate Change”, Again

In view of the current hysteria about the connection between wildfires and “climate change”, I must point readers to a three-month-old post on the subject. The connection is nil, just like the bogus connection between tropical cyclone activity and “climate change”.

Ford, Kavanaugh, and Probability

I must begin by quoting the ever-quotable Theodore Dalrymple. In closing a post in which he addresses (inter alia) the high-tech low-life lynching of Brett Kavanaugh, he writes:

The most significant effect of the whole sorry episode is the advance of the cause of what can be called Femaoism, an amalgam of feminism and Maoism. For some people, there is a lot of pleasure to be had in hatred, especially when it is made the meaning of life.

Kavanaugh’s most “credible” accuser — Christine Blasey Ford (CBF) — was incredible (in the literal meaning of the word) for many reasons, some of which are given in the items listed at the end of “Where I Stand on Kavanaugh“.

Arnold Kling gives what is perhaps the best reason for believing Kavanaugh’s denial of CBF’s accusation, a reason that occurred to me at the time:

[Kavanaugh] came out early and emphatically with his denial. This risked having someone corroborate the accusation, which would have irreparably ruined his career. If he did it, it was much safer to own it than to attempt to get away with lying about it. If he lied, chances are he would be caught–at some point, someone would corroborate her story. The fact that he took that risk, along with the fact that there was no corroboration, even from her friend, suggests to me that he is innocent.

What does any of this have to do with probability? Kling’s post is about the results of a survey conducted by Scott Alexander, the proprietor of Slate Star Codex. Kling opens with this:

Scott Alexander writes,

I asked readers to estimate their probability that Judge Kavanaugh was guilty of sexually assaulting Dr. Ford. I got 2,350 responses (thank you, you are great). Here was the overall distribution of probabilities.

… A classical statistician would have refused to answer this question. In classical statistics, he is either guilty or he is not. A probability statement is nonsense. For a Bayesian, it represents a “degree of belief” or something like that. Everyone who answered the poll … either is a Bayesian or consented to act like one.

As a staunch adherent of the classical position (though I am not a statistician), I agree with Kling.

But the real issue in the recent imbroglio surrounding Kavanaugh wasn’t the “probability” that he had committed or attempted some kind of assault on CBF. The real issue was the ideological direction of the Supreme Court:

  1. With the departure of Anthony Kennedy from the Court, there arose an opportunity to secure a reliably conservative (constitutionalist) majority. (Assuming that Chief Justice Roberts remains in the fold.)
  2. Kavanaugh is seen to be a reliable constitutionalist.
  3. With Kavanaugh in the conservative majority, the average age of that majority would be (and now is) 63; whereas, the average age of the “liberal” minority is 72, and the two oldest justices (at 85 and 80) are “liberals”.
  4. Though the health and fitness of individual justices isn’t well known, there are more opportunities in the coming years for the enlargement of the Court’s conservative wing than for the enlargement of its “liberal” wing.
  5. This is bad news for the left because it dims the prospects for social and economic revolution via judicial decree — a long-favored leftist strategy. In fact, it brightens the prospects for the rollback of some of the left’s legislative and judicial “accomplishments”.

Thus the transparently fraudulent attacks on Brett Kavanaugh by desperate leftists and “tools” like CBF. That is to say, except for those who hold a reasoned position (e.g., Arnold Kling and me), one’s stance on Kavanaugh is driven by one’s politics.

Scott Alexander’s post supports my view:

Here are the results broken down by party (blue is Democrats, red is Republicans):

And here are the results broken down by gender (blue is men, pink is women):

Given that women are disproportionately Democrat, relative to men, the second graph simply tells us the same thing as the first graph: The “probability” of Kavanaugh’s “guilt” is strongly linked to political persuasion. (I am heartened to see that a large chunk of the female population hasn’t succumbed to Femaoism.)

Probability, in the proper meaning of the word, has nothing to do with the question of Kavanaugh’s “guilt”. A feeling or inclination isn’t a probability; it’s just a feeling or inclination. Putting a number on it is false quantification. Scott Alexander should know better.

Why I Don’t Believe in “Climate Change”

UPDATED AND EXTENDED, 11/01/18

There are lots of reasons to disbelieve in “climate change”, that is, a measurable and statistically significant influence of human activity on the “global” temperature. Many of the reasons can be found at my page on the subject — in the text, the list of related readings, and the list of related posts. Here’s the main one: Surface temperature data — the basis for the theory of anthropogenic global warming — simply do not support the theory.

As Dr. Tim Ball points out:

A fascinating 2006 paper by Essex, McKitrick, and Andresen asked, “Does a Global Temperature Exist?” Their introduction sets the scene:

It arises from projecting a sampling of the fluctuating temperature field of the Earth onto a single number (e.g. [3], [4]) at discrete monthly or annual intervals. Proponents claim that this statistic represents a measurement of the annual global temperature to an accuracy of ±0.05°C (see [5]). Moreover, they presume that small changes in it, up or down, have direct and unequivocal physical meaning.

The word “sampling” is important because, statistically, a sample has to be representative of a population. There is no way that a sampling of the “fluctuating temperature field of the Earth,” is possible….

… The reality is we have fewer stations now than in 1960 as NASA GISS explain (Figure 1a, # of stations and 1b, Coverage)….

Not only that, but the accuracy is terrible. US stations are supposedly the best in the world but as Anthony Watts’s project showed, only 7.9% of them achieve better than a 1°C accuracy. Look at the quote above. It says the temperature statistic is accurate to ±0.05°C. In fact, for most of the 406 years when instrumental measures of temperature were available (since 1612), they were incapable of yielding measurements better than 0.5°C.

The coverage numbers (1b) are meaningless because there are only weather stations for about 15% of the Earth’s surface. There are virtually no stations for

  • 70% of the world that is oceans,
  • 20% of the land surface that are mountains,
  • 20% of the land surface that is forest,
  • 19% of the land surface that is desert and,
  • 19% of the land surface that is grassland.

The result is we have inadequate measures in terms of the equipment and how it fits the historic record, combined with a wholly inadequate spatial sample. The inadequacies are acknowledged by the creation of the claim by NASA GISS and all promoters of anthropogenic global warming (AGW) that a station is representative of a 1200 km radius region.

I plotted an illustrative example on a map of North America (Figure 2).


Figure 2

Notice that the claim for the station in eastern North America includes the subarctic climate of southern James Bay and the subtropical climate of the Carolinas.

However, it doesn’t end there because this is only a meaningless temperature measured in a Stevenson Screen between 1.25 m and 2 m above the surface….

The Stevenson Screen data [are] inadequate for any meaningful analysis or as the basis of a mathematical computer model in this one sliver of the atmosphere, but there [are] even less [data] as you go down or up. The models create a surface grid that becomes cubes as you move up. The number of squares in the grid varies with the naïve belief that a smaller grid improves the models. It would if there [were] adequate data, but that doesn’t exist. The number of cubes is determined by the number of layers used. Again, theoretically, more layers would yield better results, but it doesn’t matter because there are virtually no spatial or temporal data….

So far, I have talked about the inadequacy of the temperature measurements in light of the two- and three-dimensional complexities of the atmosphere and oceans. However, one source identifies the most important variables for the models used as the basis for energy and environmental policies across the world.

“Sophisticated models, like Coupled General Circulation Models, combine many processes to portray the entire climate system. The most important components of these models are the atmosphere (including air temperature, moisture and precipitation levels, and storms); the oceans (measurements such as ocean temperature, salinity levels, and circulation patterns); terrestrial processes (including carbon absorption, forests, and storage of soil moisture); and the cryosphere (both sea ice and glaciers on land). A successful climate model must not only accurately represent all of these individual components, but also show how they interact with each other.”

The last line is critical and yet impossible. The temperature data [are] the best we have, and yet [they are] completely inadequate in every way. Pick any of the variables listed, and you find there [are] virtually no data. The answer to the question, “what are we really measuring,” is virtually nothing, and what we measure is not relevant to anything related to the dynamics of the atmosphere or oceans.

I am especially struck by Dr. Ball’s observation that the surface-temperature record applies to about 15 percent of Earth’s surface. Not only that, but as suggested by Dr. Ball’s figure 2, that 15 percent is poorly sampled.

Take the National Weather Service station for Austin, Texas, which is located 2.7 miles from my house. The station is on the grounds of Camp Mabry, a Texas National Guard base near the center of Austin, the fastest-growing large city in the U.S. The base is adjacent to a major highway (Texas Loop 1) that traverses Austin. The weather station is about 1/4 mile from the highway, 100 feet from a paved road on the base, and near a complex of buildings and parking areas.

Here’s a ground view of the weather station:

And here’s an aerial view; the weather station is the tan rectangle at the center of the photo:

As I have shown elsewhere, the general rise in temperatures recorded at the weather station over the past several decades is fully explained by the urban-heat-island effect due to the rise in Austin’s population during those decades.

Further, there is a consistent difference in temperature and rainfall between my house and Camp Mabry. My house is located farther from the center of Austin — northwest of Camp Mabry — in a topographically different area. The topography in my part of Austin is typical of the Texas Hill Country, which begins about a mile east of my house and covers a broad swath of land stretching as far as 250 miles from Austin.

The contrast is obvious in the next photo. Camp Mabry is at the “1” (for Texas Loop 1) near the lower edge of the image. Topographically, it belongs with the flat part of Austin that lies mostly east of Loop 1. It is unrepresentative of the huge chunk of Austin and environs that lies to its north and west.

Getting down to cases. I observed that in the past summer, when daily highs recorded at Camp Mabry hit 100 degrees or more 52 times, the daily high at my house reached 100 or more only on the handful of days when it reached 106-110 at Camp Mabry. That’s consistent with another observation; namely, that the daily high at my house is generally 6 degrees lower than the daily high at Camp Mabry when it is above 90 degrees there.

As for rainfall, my house seems to be in a different ecosystem than Camp Mabry’s. Take September and October of this year: 15.7 inches of rain fell at Camp Mabry, as against 21.0 inches at my house. The higher totals at my house are typical, and are due to a phenomenon called orographic lift. It affects areas to the north and west of Camp Mabry, but not Camp Mabry itself.

So the climate at Camp Mabry is not my climate. Nor is the climate at Camp Mabry typical of a vast area in and around Austin, despite the use of Camp Mabry’s climate to represent that area.

There is another official weather station at Austin-Bergstrom International Airport, which is in the flatland 9.5 miles to the southeast of Camp Mabry. Its rainfall total for September and October was 12.8 inches — almost 3 inches less than at Camp Mabry — but its average temperatures for the two months were within a degree of Camp Mabry’s. Suppose Camp Mabry’s weather station went offline. The weather station at ABIA would then record temperatures and precipitation even less representative of those at my house and similar areas to the north and west.

Speaking of precipitation — it is obviously related to cloud cover. The more it rains, the cloudier it will be. The cloudier it is, the lower the temperature, other things being the same (e.g., locale). This is true for Austin:

[Figure: 12-month average temperature vs. 12-month precipitation, Austin]

The correlation coefficient is highly significant, given the huge sample size. Note that the relationship is between precipitation in a given month and temperature a month later. Although cloud cover (and thus precipitation) has an immediate effect on temperature, precipitation has a residual effect in that wet ground absorbs more solar radiation than dry ground, so that there is less heat reflected from the ground to the air. The lagged relationship is strongest at 1 month, and considerably stronger than any relationship in which temperature leads precipitation.
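The one-month lag can be checked directly with monthly station data. Here is a minimal pandas sketch using a simplified month-to-month version of the relationship (the figure above uses 12-month averages); the file and column names are hypothetical stand-ins for the Camp Mabry record.

```python
# Sketch: correlation between precipitation in month t and average temperature
# in month t+1, as described above. "camp_mabry_monthly.csv" and its column
# names are hypothetical stand-ins for the station record.
import pandas as pd

df = pd.read_csv("camp_mabry_monthly.csv", parse_dates=["month"]).sort_values("month")

same_month = df["precip_in"].corr(df["avg_temp_f"])             # precip and temp in the same month
precip_leads = df["precip_in"].corr(df["avg_temp_f"].shift(-1)) # precip in t vs. temp in t+1

print("Same-month correlation:          ", round(same_month, 3))
print("Precipitation leading by 1 month:", round(precip_leads, 3))
```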

I bring up this aspect of Austin’s climate because of a post by Anthony Watts (“Data: Global Temperatures Fell As Cloud Cover Rose in the 1980s and 90s“, Watts Up With That?, November 1, 2018):

I was reminded about a study undertaken by Clive Best and Euan Mearns looking at the role of cloud cover four years ago:

Clouds have a net average cooling effect on the earth’s climate. Climate models assume that changes in cloud cover are a feedback response to CO2 warming. Is this assumption valid? Following a study with Euan Mearns showing a strong correlation in UK temperatures with clouds, we looked at the global effects of clouds by developing a combined cloud and CO2 forcing model to study how variations in both cloud cover [8] and CO2 [14] data affect global temperature anomalies between 1983 and 2008. The model as described below gives a good fit to HADCRUT4 data with a Transient Climate Response (TCR) = 1.6±0.3°C. The 17-year hiatus in warming can then be explained as resulting from a stabilization in global cloud cover since 1998. An Excel spreadsheet implementing the model as described below can be downloaded from http://clivebest.com/GCC.

The full post containing all of the detailed statistical analysis is here.

But this is the key graph:


Figure 1a showing the ISCCP global averaged monthly cloud cover from July 1983 to Dec 2008 over-laid in blue with Hadcrut4 monthly anomaly data. The fall in cloud cover coincides with a rapid rise in temperatures from 1983-1999. Thereafter the temperature and cloud trends have both flattened. The CO2 forcing from 1998 to 2008 increases by a further ~0.3 W/m2 which is evidence that changes in clouds are not a direct feedback to CO2 forcing.

In conclusion, natural cyclic change in global cloud cover has a greater impact on global average temperatures than CO2. There is little evidence of a direct feedback relationship between clouds and CO2. Based on satellite measurements of cloud cover (ISCCP), net cloud forcing (CERES) and CO2 levels (KEELING) we developed a model for predicting global temperatures. This results in a best-fit value for TCR = 1.4 ± 0.3°C. Summer cloud forcing has a larger effect in the northern hemisphere resulting in a lower TCR = 1.0 ± 0.3°C. Natural phenomena must influence clouds although the details remain unclear, although the CLOUD experiment has given hints that increased fluxes of cosmic rays may increase cloud seeding [19].  In conclusion, the gradual reduction in net cloud cover explains over 50% of global warming observed during the 80s and 90s, and the hiatus in warming since 1998 coincides with a stabilization of cloud forcing.

Why there was a decrease in cloud cover is another question of course.

In addition to Paul Homewood’s piece, we have this WUWT story from 2012:

Spencer’s posited 1-2% cloud cover variation found

A paper published last week finds that cloud cover over China significantly decreased during the period 1954-2005. This finding is in direct contradiction to the theory of man-made global warming which presumes that warming allegedly from CO2 ‘should’ cause an increase in water vapor and cloudiness. The authors also find the decrease in cloud cover was not related to man-made aerosols, and thus was likely a natural phenomenon, potentially a result of increased solar activity via the Svensmark theory or other mechanisms.

Case closed. (Not for the first time.)

Hurricane Hysteria, Updated

In view of Hurricane Michael, and the attendant claims about the role of “climate change”, I have updated “Hurricane Hysteria“. The bottom line remains the same: Global measures of accumulated cyclone energy (ACE) do not support the view that there is a correlation between “climate change” and tropical cyclone activity.

Atheistic Scientism Revisited

I recently had the great pleasure of reading The Devil’s Delusion: Atheism and Its Scientific Pretensions, by David Berlinski. (Many thanks to Roger Barnett for recommending the book to me.) Berlinski, who knows far more about science than I do, writes with flair and scathing logic. I can’t do justice to his book, but I will try to convey its gist.

Before I do that, I must tell you that I enjoyed Berlinski’s book not only because of the author’s acumen and biting wit, but also because he agrees with me. (I suppose I should say, in modesty, that I agree with him.) I have argued against atheistic scientism in many blog posts (see below).

Here is my version of the argument against atheism in its briefest form (June 15, 2011):

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

There is no reasonable basis — and certainly no empirical one — on which to prefer atheism to deism or theism. Strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.

As for scientism, I call upon Friedrich Hayek:

[W]e shall, wherever we are concerned … with slavish imitation of the method and language of Science, speak of “scientism” or the “scientistic” prejudice…. It should be noted that, in the sense in which we shall use these terms, they describe, of course, an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed. The scientistic as distinguished from the scientific view is not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it. [The Counter Revolution Of Science]

As Berlinski amply illustrates and forcibly argues, atheistic scientism is rampant in the so-called sciences. I have reproduced below some key passages from Berlinski’s book. They are representative, but far from exhaustive (though I did nearly exhaust the publisher’s copy limit on the Kindle edition). I have forgone the block-quotation style for ease of reading, and have inserted triple asterisks to indicate (sometimes subtle) changes of topic.

*   *   *

Richard Dawkins, the author of The God Delusion, … is not only an intellectually fulfilled atheist, he is determined that others should be as full as he. A great many scientists are satisfied that at last someone has said out loud what so many of them have said among themselves: Scientific and religious belief are in conflict. They cannot both be right. Let us get rid of the one that is wrong….

Because atheism is said to follow from various scientific doctrines, literary atheists, while they are eager to speak their minds, must often express themselves in other men’s voices. Christopher Hitchens is an example. With forthcoming modesty, he has affirmed his willingness to defer to the world’s “smart scientists” on any matter more exigent than finger-counting. Were smart scientists to report that a strain of yeast supported the invasion of Iraq, Hitchens would, no doubt, conceive an increased respect for yeast….

If nothing else, the attack on traditional religious thought marks the consolidation in our time of science as the single system of belief in which rational men and women might place their faith, and if not their faith, then certainly their devotion. From cosmology to biology, its narratives have become the narratives. They are, these narratives, immensely seductive, so much so that looking at them with innocent eyes requires a very deliberate act. And like any militant church, this one places a familiar demand before all others: Thou shalt have no other gods before me.

It is this that is new; it is this that is important….

For scientists persuaded that there is no God, there is no finer pleasure than recounting the history of religious brutality and persecution. Sam Harris is in this regard especially enthusiastic, The End of Faith recounting in lurid but lingering detail the methods of torture used in the Spanish Inquisition….

Nonetheless, there is this awkward fact: The twentieth century was not an age of faith, and it was awful. Lenin, Stalin, Hitler, Mao, and Pol Pot will never be counted among the religious leaders of mankind….

… Just who has imposed on the suffering human race poison gas, barbed wire, high explosives, experiments in eugenics, the formula for Zyklon B, heavy artillery, pseudo-scientific justifications for mass murder, cluster bombs, attack submarines, napalm, intercontinental ballistic missiles, military space platforms, and nuclear weapons?

If memory serves, it was not the Vatican….

What Hitler did not believe and what Stalin did not believe and what Mao did not believe and what the SS did not believe and what the Gestapo did not believe and what the NKVD did not believe and what the commissars, functionaries, swaggering executioners, Nazi doctors, Communist Party theoreticians, intellectuals, Brown Shirts, Black Shirts, gauleiters, and a thousand party hacks did not believe was that God was watching what they were doing.

And as far as we can tell, very few of those carrying out the horrors of the twentieth century worried overmuch that God was watching what they were doing either.

That is, after all, the meaning of a secular society….

Richard Weikart, … in his admirable treatise, From Darwin to Hitler: Evolutionary Ethics, Eugenics, and Racism in Germany, makes clear what anyone capable of reading the German sources already knew: A sinister current of influence ran from Darwin’s theory of evolution to Hitler’s policy of extermination.

*   *   *

It is wrong, the nineteenth-century British mathematician W. K. Clifford affirmed, “always, everywhere, and for anyone, to believe anything upon insufficient evidence.” I am guessing that Clifford believed what he wrote, but what evidence he had for his belief, he did not say.

Something like Clifford’s injunction functions as the premise in a popular argument for the inexistence of God. If God exists, then his existence is a scientific claim, no different in kind from the claim that there is tungsten to be found in Bermuda. We cannot have one set of standards for tungsten and another for the Deity….

There remains the obvious question: By what standards might we determine that faith in science is reasonable, but that faith in God is not? It may well be that “religious faith,” as the philosopher Robert Todd Carroll has written, “is contrary to the sum of evidence,” but if religious faith is found wanting, it is reasonable to ask for a restatement of the rules by which “the sum of evidence” is computed….

… The concept of sufficient evidence is infinitely elastic…. What a physicist counts as evidence is not what a mathematician generally accepts. Evidence in engineering has little to do with evidence in art, and while everyone can agree that it is wrong to go off half-baked, half-cocked, or half-right, what counts as being baked, cocked, or right is simply too variable to suggest a plausible general principle….

Neither the premises nor the conclusions of any scientific theory mention the existence of God. I have checked this carefully. The theories are by themselves unrevealing. If science is to champion atheism, the requisite demonstration must appeal to something in the sciences that is not quite a matter of what they say, what they imply, or what they reveal.

*   *   *

The universe in its largest aspect is the expression of curved space and time. Four fundamental forces hold sway. There are black holes and various infernal singularities. Popping out of quantum fields, the elementary particles appear as bosons or fermions. The fermions are divided into quarks and leptons. Quarks come in six varieties, but they are never seen, confined as they are within hadrons by a force that perversely grows weaker at short distances and stronger at distances that are long. There are six leptons in four varieties. Depending on just how things are counted, matter has as its fundamental constituents twenty-four elementary particles, together with a great many fields, symmetries, strange geometrical spaces, and forces that are disconnected at one level of energy and fused at another, together with at least a dozen different forms of energy, all of them active.

… It is remarkably baroque. And it is promiscuously catholic. For the atheist persuaded that materialism offers him a no-nonsense doctrinal affiliation, materialism in this sense comes to the declaration of a barroom drinker that he will have whatever he’s having, no matter who he is or what he is having. What he is having is what he always takes, and that is any concept, mathematical structure, or vagrant idea needed to get on with it. If tomorrow, physicists determine that particle physics requires access to the ubiquity of the body of Christ, that doctrine would at once be declared a physical principle and treated accordingly….

What remains of the ideology of the sciences? It is the thesis that the sciences are true— who would doubt it?— and that only the sciences are true. The philosopher Michael Devitt thus argues that “there is only one way of knowing, the empirical way that is the basis of science.” An argument against religious belief follows at once on the assumptions that theology is not science and belief is not knowledge. If by means of this argument it also follows that neither mathematics, the law, nor the greater part of ordinary human discourse have a claim on our epistemological allegiance, they must be accepted as casualties of war.

*   *   *

The claim that the existence of God should be treated as a scientific question stands on a destructive dilemma: If by science one means the great theories of mathematical physics, then the demand is unreasonable. We cannot treat any claim in this way. There is no other intellectual activity in which theory and evidence have reached this stage of development….

Is there a God who has among other things created the universe? “It is not by its conclusions,” C. F. von Weizsäcker has written in The Relevance of Science, “but by its methodological starting point that modern science excludes direct creation. Our methodology would not be honest if this fact were denied . . . such is the faith in the science of our time, and which we all share” (italics added).

In science, as in so many other areas of life, faith is its own reward….

The medieval Arabic argument known as the kalam is an example of the genre [cosmological argument].

Its first premise: Everything that begins to exist has a cause.

And its second: The universe began to exist.

And its conclusion: So the universe had a cause.

This is not by itself an argument for the existence of God. It is suggestive without being conclusive. Even so, it is an argument that in a rush covers a good deal of ground carelessly denied by atheists. It is one thing to deny that there is a God; it is quite another to deny that the universe has a cause….

The universe, orthodox cosmologists believe, came into existence as the expression of an explosion— what is now called the Big Bang. The word explosion is a sign that words have failed us, as they so often do, for it suggests a humanly comprehensible event— a gigantic explosion or a stupendous eruption. This is absurd. The Big Bang was not an event taking place at a time or in a place. Space and time were themselves created by the Big Bang, the measure along with the measured….

Whatever its name, as far as most physicists are concerned, the Big Bang is now a part of the established structure of modern physics….

… Many physicists have found the idea that the universe had a beginning alarming. “So long as the universe had a beginning,” Stephen Hawking has written, “we could suppose it had a creator.” God forbid!

… Big Bang cosmology has been confirmed by additional evidence, some of it astonishing. In 1963, the physicists Arno Penzias and Robert Wilson observed what seemed to be the living remnants of the Big Bang— and after 14 billion years!— when in 1962 they detected, by means of a hum in their equipment, a signal in the night sky they could only explain as the remnants of the microwave radiation background left over from the Big Bang itself.

More than anything else, this observation, and the inference it provoked, persuaded physicists that the structure of Big Bang cosmology was anchored into fact….

“Perhaps the best argument in favor of the thesis that the Big Bang supports theism,” the astrophysicist Christopher Isham has observed, “is the obvious unease with which it is greeted by some atheist physicists. At times this has led to scientific ideas, such as continuous creation or an oscillating universe, being advanced with a tenacity which so exceeds their intrinsic worth that one can only suspect the operation of psychological forces lying very much deeper than the usual academic desire of a theorist to support his or her theory.”…

… With the possibility of inexistence staring it in the face, why does the universe exist? To say that universe just is, as Stephen Hawking has said, is to reject out of hand any further questions. We know that it is. It is right there in plain sight. What philosophers such as ourselves wish to know is why it is. It may be that at the end of these inquiries we will answer our own question by saying that the universe exists for no reason whatsoever. At the end of these inquiries, and not the beginning….

Among physicists, the question of how something emerged from nothing has one decisive effect: It loosens their tongues. “One thing [that] is clear,” a physicist writes, “in our framing of questions such as ‘How did the Universe get started?’ is that the Universe was self-creating. This is not a statement on a ‘cause’ behind the origin of the Universe, nor is it a statement on a lack of purpose or destiny. It is simply a statement that the Universe was emergent, that the actual Universe probably derived from an indeterminate sea of potentiality that we call the quantum vacuum, whose properties may always remain beyond our current understanding.”

It cannot be said that “an indeterminate sea of potentiality” has anything like the clarifying effect needed by the discussion, and indeed, except for sheer snobbishness, physicists have offered no reason to prefer this description of the Source of Being to the one offered by Abu al-Hassan al Hashari in ninth-century Baghdad. The various Islamic versions of that indeterminate sea of being he rejected in a spasm of fierce disgust. “We confess,” he wrote, “that God is firmly seated on his throne. We confess that God has two hands, without asking how. We confess that God has two eyes, without asking how. We confess that God has a face.”…

Proposing to show how something might emerge from nothing, [the physicist Victor Stenger] introduces “another universe [that] existed prior to ours that tunneled through . . . to become our universe. Critics will argue that we have no way of observing such an earlier universe, and so this is not very scientific” (italics added). This is true. Critics will do just that. Before they do, they will certainly observe that Stenger has completely misunderstood the terms of the problem that he has set himself, and that far from showing how something can arise from nothing, he has shown only that something might arise from something else. This is not an observation that has ever evoked a firestorm of controversy….

… [A]ccording to the many-worlds interpretation [of quantum mechanics], at precisely the moment a measurement is made, the universe branches into two or more universes. The cat who was half dead and half alive gives rise to two separate universes, one containing a cat who is dead, the other containing a cat who is alive. The new universes cluttering up creation embody the quantum states that were previously in a state of quantum superposition.

The many-worlds interpretation of quantum mechanics is rather like the incarnation. It appeals to those who believe in it, and it rewards belief in proportion to which belief is sincere….

No less than the doctrines of religious belief, the doctrines of quantum cosmology are what they seem: biased, partial, inconclusive, and largely in the service of passionate but unexamined conviction.

*   *   *

The cosmological constant is a number controlling the expansion of the universe. If it were negative, the universe would appear doomed to contract in upon itself, and if positive, equally doomed to expand out from itself. Like the rest of us, the universe is apparently doomed no matter what it does. And here is the odd point: If the cosmological constant were larger than it is, the universe would have expanded too quickly, and if smaller, it would have collapsed too early, to permit the appearance of living systems….

“Scientists,” the physicist Paul Davies has observed, “are slowly waking up to an inconvenient truth— the universe looks suspiciously like a fix. The issue concerns the very laws of nature themselves. For 40 years, physicists and cosmologists have been quietly collecting examples of all too convenient ‘coincidences’ and special features in the underlying laws of the universe that seem to be necessary in order for life, and hence conscious beings, to exist. Change any one of them and the consequences would be lethal.”….

Why? Yes, why?

An appeal to still further physical laws is, of course, ruled out on the grounds that the fundamental laws of nature are fundamental. An appeal to logic is unavailing. The laws of nature do not seem to be logical truths. The laws of nature must be intrinsically rich enough to specify the panorama of the universe, and the universe is anything but simple. As Newton remarks, “Blind metaphysical necessity, which is certainly the same always and everywhere, could produce no variety of things.”

If the laws of nature are neither necessary nor simple, why, then, are they true?

Questions about the parameters and laws of physics form a single insistent question in thought: Why are things as they are when what they are seems anything but arbitrary?

One answer is obvious. It is the one that theologians have always offered: The universe looks like a put-up job because it is a put-up job.

*   *   *

Any conception of a contingent deity, Aquinas argues, is doomed to fail, and it is doomed to fail precisely because whatever He might do to explain the existence of the universe, His existence would again require an explanation. “Therefore, not all beings are merely possible, but there must exist something the existence of which is necessary.”…

… “We feel,” Wittgenstein wrote, “that even when all possible scientific questions have been answered, the problems of life remain completely untouched.” Those who do feel this way will see, following Aquinas, that the only inference calculated to overcome the way things are is one directed toward the way things must be….

“The key difference between the radically extravagant God hypothesis,” [Dawkins] writes, “and the apparently extravagant multiverse hypothesis, is one of statistical improbability.”

It is? I had no idea, the more so since Dawkins’s very next sentence would seem to undercut the sentence he has just written. “The multiverse, for all that it is extravagant, is simple,” because each of its constituent universes “is simple in its fundamental laws.”

If this is true for each of those constituent universes, then it is true for our universe as well. And if our universe is simple in its fundamental laws, what on earth is the relevance of Dawkins’s argument?

Simple things, simple explanations, simple laws, a simple God.

Bon appétit.

*   *   *

As a rhetorical contrivance, the God of the Gaps makes his effect contingent on a specific assumption: that whatever the gaps, they will in the course of scientific research be filled…. Western science has proceeded by filling gaps, but in filling them, it has created gaps all over again. The process is inexhaustible. Einstein created the special theory of relativity to accommodate certain anomalies in the interpretation of Clerk Maxwell’s theory of the electromagnetic field. Special relativity led directly to general relativity. But general relativity is inconsistent with quantum mechanics, the largest visions of the physical world alien to one another. Understanding has improved, but within the physical sciences, anomalies have grown great, and what is more, anomalies have grown great because understanding has improved….

… At the very beginning of his treatise Vertebrate Paleontology and Evolution, Robert Carroll observes quite correctly that “most of the fossil record does not support a strictly gradualistic account” of evolution. A “strictly gradualistic” account is precisely what Darwin’s theory demands: It is the heart and soul of the theory.

But by the same token, there are no laboratory demonstrations of speciation either, millions of fruit flies coming and going while never once suggesting that they were destined to appear as anything other than fruit flies. This is the conclusion suggested as well by more than six thousand years of artificial selection, the practice of barnyard and backyard alike. Nothing can induce a chicken to lay a square egg or to persuade a pig to develop wheels mounted on ball bearings….

… In a research survey published in 2001, and widely ignored thereafter, the evolutionary biologist Joel Kingsolver reported that in sample sizes of more than one thousand individuals, there was virtually no correlation between specific biological traits and either reproductive success or survival. “Important issues about selection,” he remarked with some understatement, “remain unresolved.”

Of those important issues, I would mention prominently the question whether natural selection exists at all.

Computer simulations of Darwinian evolution fail when they are honest and succeed only when they are not. Thomas Ray has for years been conducting computer experiments in an artificial environment that he has designated Tierra. Within this world, a shifting population of computer organisms meet, mate, mutate, and reproduce.

Sandra Blakeslee, writing for the New York Times, reported the results under the headline “Computer ‘Life Form’ Mutates in an Evolution Experiment: Natural Selection Is Found at Work in a Digital World.”

Natural selection found at work? I suppose so, for as Blakeslee observes with solemn incomprehension, “the creatures mutated but showed only modest increases in complexity.” Which is to say, they showed nothing of interest at all. This is natural selection at work, but it is hardly work that has worked to intended effect.

What these computer experiments do reveal is a principle far more penetrating than any that Darwin ever offered: There is a sucker born every minute….

… Daniel Dennett, like Mexican food, does not fail to come up long after he has gone down. “Contemporary biology,” he writes, “has demonstrated beyond all reasonable doubt that natural selection— the process in which reproducing entities must compete for finite resources and thereby engage in a tournament of blind trial and error from which improvements automatically emerge— has the power to generate breathtakingly ingenious designs” (italics added).

These remarks are typical in their self-enchanted self-confidence. Nothing in the physical sciences, it goes without saying— right?— has been demonstrated beyond all reasonable doubt. The phrase belongs to a court of law. The thesis that improvements in life appear automatically represents nothing more than Dennett’s conviction that living systems are like elevators: If their buttons are pushed, they go up. Or down, as the case may be. Although Darwin’s theory is very often compared favorably to the great theories of mathematical physics on the grounds that evolution is as well established as gravity, very few physicists have been heard observing that gravity is as well established as evolution. They know better and they are not stupid….

… The greater part of the debate over Darwin’s theory is not in service to the facts. Nor to the theory. The facts are what they have always been: They are unforthcoming. And the theory is what it always was: It is unpersuasive. Among evolutionary biologists, these matters are well known. In the privacy of the Susan B. Anthony faculty lounge, they often tell one another with relief that it is a very good thing the public has no idea what the research literature really suggests.

“Darwin?” a Nobel laureate in biology once remarked to me over his bifocals. “That’s just the party line.”

In the summer of 2007, Eugene Koonin, of the National Center for Biotechnology Information at the National Institutes of Health, published a paper entitled “The Biological Big Bang Model for the Major Transitions in Evolution.”

The paper is refreshing in its candor; it is alarming in its consequences. “Major transitions in biological evolution,” Koonin writes, “show the same pattern of sudden emergence of diverse forms at a new level of complexity” (italics added). Major transitions in biological evolution? These are precisely the transitions that Darwin’s theory was intended to explain. If those “major transitions” represent a “sudden emergence of new forms,” the obvious conclusion to draw is not that nature is perverse but that Darwin was wrong….

Koonin is hardly finished. He has just started to warm up. “In each of these pivotal nexuses in life’s history,” he goes on to say, “the principal ‘types’ seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate ‘grades’ or intermediate forms between different types are detectable.”…

… [H]is views are simply part of a much more serious pattern of intellectual discontent with Darwinian doctrine. Writing in the 1960s and 1970s, the Japanese mathematical biologist Motoo Kimura argued that on the genetic level— the place where mutations take place— most changes are selectively neutral. They do nothing to help an organism survive; they may even be deleterious…. Kimura was perfectly aware that he was advancing a powerful argument against Darwin’s theory of natural selection. “The neutral theory asserts,” he wrote in the introduction to his masterpiece, The Neutral Theory of Molecular Evolution, “that the great majority of evolutionary changes at the molecular level, as revealed by comparative studies of protein and DNA sequences, are caused not by Darwinian selection but by random drift of selectively neutral or nearly neutral mutations” (italics added)….

… Writing in the Proceedings of the National Academy of Sciences, the evolutionary biologist Michael Lynch observed that “Dawkins’s agenda has been to spread the word on the awesome power of natural selection.” The view that results, Lynch remarks, is incomplete and therefore “profoundly misleading.” Lest there be any question about Lynch’s critique, he makes the point explicitly: “What is in question is whether natural selection is a necessary or sufficient force to explain the emergence of the genomic and cellular features central to the building of complex organisms.”…

When asked what he was in awe of, Christopher Hitchens responded that his definition of an educated person is that you have some idea how ignorant you are. This seems very much as if Hitchens were in awe of his own ignorance, in which case he has surely found an object worthy of his veneration.

*   *   *

Do read the whole thing. It will take you only a few hours. And it will remind you — as we badly need reminding these days — that sanity reigns in some corners of the universe.


Related posts:

Same Old Story, Same Old Song and Dance
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Thing about Science
Evolution and Religion
Words of Caution for Scientific Dogmatists
The Legality of Teaching Intelligent Design
Science, Logic, and God
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
The Big Bang and Atheism
Atheism, Religion, and Science Redux
Religion as Beneficial Evolutionary Adaptation
A Non-Believer Defends Religion
The Greatest Mystery
Landsburg Is Half-Right
Evolution, Human Nature, and “Natural Rights”
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Religion on the Left
Scientism, Evolution, and the Meaning of Life
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Some Thoughts about Evolution
Rationalism, Empiricism, and Scientific Knowledge
Fine-Tuning in a Wacky Wrapper
Beating Religion with the Wrong End of the Stick
Quantum Mechanics and Free Will
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
The Fragility of Knowledge
Altruism, One More Time
Religion, Creation, and Morality
Evolution, Intelligence, and Race

New Pages

In case you haven’t noticed the list in the right sidebar, I have converted several classic posts to pages, for ease of access. Some have new names; many combine several posts on the same subject:

Abortion Q & A

Climate Change

Constitution: Myths and Realities

Economic Growth Since World War II

Intelligence

Keynesian Multiplier: Fiction vs. Fact

Leftism

Movies

Spygate

Wildfires and “Climate Change”

Regarding the claim that there are more wildfires because of “climate change”:

In case the relationship isn’t obvious, here it is:

Estimates of the number of fires are from the National Fire Protection Association, Number of Fires by Type of Fire. Specifically, the estimates are the sum of the columns for “Outside of Structures with Value Involved but no vehicle (outside storage crops, timber, etc)” and “Brush, Grass, Wildland (excluding crops and timber), with no value or loss involved”.

Estimates of the global temperature anomalies are annual averages of monthly satellite readings for the lower troposphere, published by the Earth System Science Center of the University of Alabama in Huntsville.
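
For readers who want to check the relationship themselves, here is a minimal sketch of the computation, assuming the NFPA fire counts and the UAH monthly anomalies have been transcribed into two CSV files. The file names and column labels are my placeholders, not NFPA’s or UAH’s:

```python
# Minimal sketch: correlate annual U.S. outdoor-fire counts with annual
# averages of UAH lower-troposphere temperature anomalies. The file names
# and column labels are hypothetical placeholders for data transcribed from
# the sources cited above.
import matplotlib.pyplot as plt
import pandas as pd

fires = pd.read_csv("nfpa_outdoor_fires.csv")   # columns: year, fires
uah = pd.read_csv("uah_lt_monthly.csv")         # columns: year, month, anomaly

# Annual averages of the monthly satellite readings.
annual_anomaly = uah.groupby("year", as_index=False)["anomaly"].mean()

# Join on year and compute the correlation between fire counts and anomalies.
merged = fires.merge(annual_anomaly, on="year")
print(merged[["fires", "anomaly"]].corr())

# A scatter plot makes the relationship (or lack of one) easy to see.
ax = merged.plot.scatter(x="anomaly", y="fires")
ax.set_xlabel("UAH lower-troposphere anomaly, annual mean (deg C)")
ax.set_ylabel("Estimated number of outdoor fires")
plt.show()
```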

Not-So-Random Thoughts (XXII)

This is a long-overdue entry; the previous one was posted on October 4, 2017. Accordingly, it is a long entry, consisting of these parts:

Censorship and Left-Wing Bias on the Web

The Real Collusion Story

“Suicide” of the West

Evolution, Intelligence, and Race

Will the Real Fascists Please Stand Up?

Consciousness

Empathy Is Over-Rated

“Nudging”



CENSORSHIP AND LEFT-WING BIAS ON THE WEB

It’s a hot topic these days. See, for example, this, this, this, this, and this. Also, this, which addresses Google’s slanting of search results about climate research. YouTube is at it, too.

A lot of libertarian and conservative commentators are loath to demand governmental intervention because the censorship is being committed by private companies: Apple, Facebook, Google, Twitter, YouTube, et al. Some libertarians and conservatives are hopeful that libertarian-conservative options will be successful (e.g., George Gilder). I am skeptical. I have seen and tried some of those options, and they aren’t in the same league as the left-wingers, which have pretty well locked up users and advertisers. (It’s called path-dependence.) And even if they finally succeed in snapping up a respectable share of the information market, the damage will have been done; libertarians and conservatives will have been marginalized, criminalized, and suppressed.

The time to roll out the big guns is now, as I explain here:

Given the influence that Google and the other members of the left-wing information-technology oligarchy exert in this country, that oligarchy is tantamount to a state apparatus….

These information-entertainment-media-academic institutions are important components of what I call the vast left-wing conspiracy in America. Their purpose and effect is the subversion of the traditional norms that made America a uniquely free, prosperous, and vibrant nation….

What will happen in America if that conspiracy succeeds in completely overthrowing “bourgeois culture”? The left will frog-march America in whatever utopian direction captures its “feelings” (but not its reason) at the moment…

Complete victory for the enemies of liberty is only a few election cycles away. The squishy center of the American electorate — as is its wont — will swing back toward the Democrat Party. With a Democrat in the White House, a Democrat-controlled Congress, and a few party switches in the Supreme Court, the dogmas of the information-entertainment-media-academic complex will become the law of the land….

[It is therefore necessary to] enforce the First Amendment against the information-entertainment-media-academic complex. This would begin with action against high-profile targets (e.g., Google and a few large universities that accept federal money). That should be enough to bring the others into line. If it isn’t, keep working down the list until the miscreants cry uncle.

What kind of action do I have in mind?…

Executive action against state actors to enforce the First Amendment:

Amendment I to the Constitution says that “Congress shall make no law … abridging the freedom of speech”.

Major entities in the telecommunications, news, entertainment, and education industries have exerted their power to suppress speech because of its content. (See appended documentation.) The collective actions of these entities — many of them government-licensed and government-funded — effectively constitute a governmental violation of the Constitution’s guarantee of freedom of speech. (See Smith v. Allwright, 321 U.S. 649 (1944), and Marsh v. Alabama, 326 U.S. 501 (1946).)

And so on. Read all about it here.



THE REAL COLLUSION STORY

Not quite as hot, but still in the news, is Spygate: collusion among the White House, CIA, and FBI (a) to use the Trump-Russia collusion story to swing the 2016 election to Clinton, and (b), failing that, to cripple Trump’s presidency and provide grounds for removing him from office. The latest twist in the story is offered by Byron York:

Emails in 2016 between former British spy Christopher Steele and Justice Department official Bruce Ohr suggest Steele was deeply concerned about the legal status of a Putin-linked Russian oligarch, and at times seemed to be advocating on the oligarch’s behalf, in the same time period Steele worked on collecting the Russia-related allegations against Donald Trump that came to be known as the Trump dossier. The emails show Steele and Ohr were in frequent contact, that they intermingled talk about Steele’s research and the oligarch’s affairs, and that Glenn Simpson, head of the dirt-digging group Fusion GPS that hired Steele to compile the dossier, was also part of the ongoing conversation….

The newly-released Ohr-Steele-Simpson emails are just one part of the dossier story. But if nothing else, they show that there is still much for the public to learn about the complex and far-reaching effort behind it.

My take is here. The post includes a long list of related — and enlightening — reading, to which I’ve just added York’s piece.



“SUICIDE” OF THE WEST

Less “newsy”, but a hot topic on the web a few weeks back, is Jonah Goldberg’s Suicide of the West. It received mixed reviews. It is also the subject of an excellent non-review by Hubert Collins.

Here’s my take:

The Framers held a misplaced faith in the Constitution’s checks and balances (see Madison’s Federalist No. 51 and Hamilton’s Federalist No. 81). The Constitution’s wonderful design — containment of a strictly limited central government through horizontal and vertical separation of powers — worked rather well until the Progressive Era. The design then cracked under the strain of greed and the will to power, as the central government began to impose national economic regulation at the behest of muckrakers and do-gooders. The design then broke during the New Deal, which opened the floodgates to violations of constitutional restraint (e.g., Medicare, Medicaid, Obamacare,  the vast expansion of economic regulation, and the destruction of civilizing social norms), as the Supreme Court has enabled the national government to impose its will in matters far beyond its constitutional remit.

In sum, the “poison pill” baked into the nation at the time of the Founding is human nature, against which no libertarian constitution is proof unless it is enforced resolutely by a benign power.

See also my review essay on James Burnham’s Suicide of the West: An Essay on the Meaning and Destiny of Liberalism.



EVOLUTION, INTELLIGENCE, AND RACE

Evolution is closely related to and intertwined with intelligence and race. Two posts and a page of mine (here, here, and here) delve into some of the complexities. The latter of the two posts draws on David Stove’s critique of evolutionary theory, “So You Think You Are a Darwinian?”.

Fred Reed is far more entertaining than Stove, and no less convincing. His most recent columns on evolution are here and here. In the first of the two, he writes this:

What are some of the problems with official Darwinism? First, the spontaneous generation of life has not been replicated…. Nor has anyone assembled in the laboratory a chemical structure able to metabolize, reproduce, and thus to evolve. It has not been shown to be mathematically possible….

Sooner or later, a hypothesis must be either confirmed or abandoned. Which? When? Doesn’t science require evidence, reproducibility, demonstrated theoretical possibility? These do not exist….

Other serious problems with the official story: Missing intermediate fossils–”missing links”– stubbornly remain missing. “Punctuated equilibrium,” a theory of sudden rapid evolution invented to explain the lack of fossil evidence, seems unable to generate genetic information fast enough. Many proteins bear no resemblance to any others and therefore cannot have evolved from them. On and on.

Finally, the more complex an event, the less likely it is to  occur by chance. Over the years, cellular mechanisms have been found to be  ever more complex…. Recently with the discovery of epigenetics, complexity has taken a great leap upward. (For anyone wanting to subject himself to such things, there is The Epigenetics Revolution. It is not light reading.)

Worth noting is that the mantra of evolutionists, that “in millions and millions and billions of years something must have evolved”, does not necessarily hold water. We have all heard of Sir James Jeans’s assertion that a monkey, typing randomly, would eventually produce all the books in the British Museum. (Actually he would not produce a single chapter in the accepted age of the universe, but never mind.) A strong case can be made that spontaneous generation is similarly of mathematically vanishing probability. If evolutionists could prove the contrary, they would immensely strengthen their case. They haven’t….

Suppose that you saw an actual monkey pecking at a keyboard and, on examining his output, saw that he was typing, page after page, The Adventures of Tom Sawyer, with no errors.

You would suspect fraud, for instance that the typewriter was really a computer programmed with Tom. But no, on inspection you find that it is a genuine typewriter. Well then, you think, the monkey must be a robot, with Tom in RAM. But  this too turns out to be wrong: The monkey in fact is one. After exhaustive examination, you are forced to conclude that Bonzo really is typing at random.

Yet he is producing Tom Sawyer. This being impossible, you would have to conclude that something was going on that you did not understand.

Much of biology is similar. For a zygote, barely visible, to turn into a baby is astronomically improbable, a suicidal assault on Murphy’s Law. Reading embryology makes this apparent. (Texts are prohibitively expensive, but Life Unfolding serves.) Yet every step in the process is in accord with chemical principles.

This doesn’t make sense. Not, anyway, unless one concludes that something deeper is going on that we do not understand. This brings to mind several adages that might serve to ameliorate our considerable arrogance. As Haldane said, “The world is not only queerer than we think, but queerer than we can think.” Or Fred’s Principle, “The smartest of a large number of hamsters is still a hamster.”

We may be too full of ourselves.
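
An aside on Jeans’s monkey: the arithmetic behind Fred’s parenthetical is easy to check. Here is a minimal sketch, using assumed numbers for chapter length, keyboard size, and typing capacity (none of them from Fred or Jeans), which shows why random typing gets nowhere near a single chapter:

```python
# Back-of-the-envelope check of the monkey-at-the-typewriter claim.
# All quantities below are assumptions chosen to be generous to the monkey.
import math

chapter_chars = 5_000   # assume a modest chapter of about 5,000 characters
keys = 30               # assume roughly 30 equally likely keys (letters, space, etc.)

# Probability that one random attempt reproduces the chapter exactly:
# p = (1/keys) ** chapter_chars, so log10(p) = -chapter_chars * log10(keys).
log10_p = -chapter_chars * math.log10(keys)
print(f"log10(p for one attempt) ~ {log10_p:,.0f}")        # about -7,386

# Be absurdly generous: 10**80 monkeys, each typing 10**9 chapters per second,
# for 10**18 seconds (the universe is only ~4 x 10**17 seconds old).
log10_attempts = 80 + 9 + 18
print(f"log10(total attempts) ~ {log10_attempts}")         # 107

# Expected number of correct chapters = attempts * p; still vanishingly small.
print(f"log10(expected successes) ~ {log10_attempts + log10_p:,.0f}")  # about -7,279
```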

On the subject of race, Fred is no racist, but he is a realist; for example:

We have black football players refusing to stand for the national anthem.  They think that young black males are being hunted down by cops. Actually of  course black males are hunting each other down in droves but black football players apparently have no objection to this. They do not themselves convincingly suffer discrimination. Where else can you get paid six million green ones a year for grabbing something and running? Maybe in a district of jewelers.

The non-standing is racial hostility to whites. The large drop in attendance of games, of television viewership, is racial blowback by whites. Millions of whites are thinking, that, if America doesn’t suit them, football players can afford a ticket to Kenya. While this line of reasoning is tempting, it doesn’t really address the problem and so would be a waste of time.

But what, really, is the problem?

It is one that dare not raise its head: that blacks cannot compete with whites, Asians, or Latin-Americans. Is there counter-evidence? This leaves them in an incurable state of resentment and thus hostility. I think we all know this: Blacks know it, whites know it, liberals know it, and conservatives know it. If any doubt this, the truth would be easy enough to determine with carefully done tests. [Which have been done.] The furious resistance to the very idea of measuring intelligence suggests awareness of the likely outcome. You don’t avoid a test if you expect good results.

So we do nothing while things worsen and the world looks on astounded. We have mob attacks by Black Lives Matter, the never-ending Knockout Game, flash mobs looting stores and subway trains, occasional burning cities, and we do nothing. Which makes sense, because there is nothing to be done short of restructuring the country.

Absolute, obvious, unacknowledged disaster.

Regarding which: Do we really want, any of us, what we are doing? In particular, has anyone asked ordinary blacks, not black pols and race hustlers. “Do you really want to live among whites, or would you prefer a safe middle-class black neighborhood? Do your kids want to go to school with whites? If so, why? Do you want them to? Why? Would you prefer black schools to decide what and how to teach your children? Keeping whites out of it? Would you prefer having only black police in your neighborhood?”

And the big one: “Do you, and the people you actually know in your neighborhood, really want integration? Or is it something imposed on you by oreo pols and white ideologues?”

But these are things we must never think, never ask.

Which brings me to my most recent post about blacks and crime, which is here. As for restructuring the country, Lincoln saw what was needed.

The touchy matter of intelligence — its heritability and therefore its racial component — is never far from my thoughts. I commend to you Gregory Hood’s excellent piece, “Forbidden Research: How the Study of Intelligence is Crippled by Ideology”. Hood mentions some of the scientists whose work I have cited in my writings about intelligence and its racial component. See this page, for example, which gives links to several related posts and excerpts of relevant research about intelligence. (See also the first part of Fred Reed’s post “Darwin’s Vigilantes, Richard Sternberg, and Conventional Pseudoscience”.)

As for the racial component, my most recent post on the subject (which provides links to related posts) addresses the question “Why study race and intelligence?”. Here’s why:

Affirmative action and similar race-based preferences are harmful to blacks. But those preferences persist because most Americans do not understand that there are inherent racial differences that prevent blacks, on the whole, from doing as well as whites (and Asians) in school and in jobs that require above-average intelligence. But magical thinkers (like [Professor John] McWhorter) want to deny reality. He admits to being driven by hope: “I have always hoped the black–white IQ gap was due to environmental causes.”…

Magical thinking — which is rife on the left — plays into the hands of politicians, most of whom couldn’t care less about the truth. They just want the votes of those blacks who relish being told, time and again, that they are “down” because they are “victims”, and Big Daddy government will come to their rescue. But unless you are the unusual black of above-average intelligence, or the more usual black who has exceptional athletic skills, dependence on Big Daddy is self-defeating because (like a drug addiction) it only leads to more of the same. The destructive cycle of dependency can be broken only by willful resistance to the junk being peddled by cynical politicians.

It is for the sake of blacks that the truth about race and intelligence ought to be pursued — and widely publicized. If they read and hear the truth often enough, perhaps they will begin to realize that the best way to better themselves is to make the best of available opportunities instead of moaning about racism and relying on preferences and handouts.



WILL THE REAL FASCISTS PLEASE STAND UP?

I may puke if I hear Trump called a fascist one more time. As I observe here,

[t]he idea … that Trump is the new Hitler and WaPo [The Washington Post] and its brethren will keep us out of the gas chambers by daring to utter the truth (not)…. is complete balderdash, inasmuch as WaPo and its ilk are enthusiastic hand-maidens of “liberal” fascism.

“Liberals” who call conservatives “fascists” are simply engaging in psychological projection. This is a point that I address at length here.

As for Mr. Trump, I call on Shawn Mitchell:

A lot of public intellectuals and writers are pushing an alarming thesis: President Trump is a menace to the American Republic and a threat to American liberties. The criticism is not exclusively partisan; it’s shared by prominent conservatives, liberals, and libertarians….

Because so many elites believe Trump should be impeached, or at least shunned and rendered impotent, it’s important to agree on terms for serious discussion. Authoritarian means demanding absolute obedience to a designated authority. It means that somewhere, someone, has unlimited power. Turning the focus to Trump, after 15 months in office, it’s impossible to assign him any of those descriptions….

…[T]here are no concentration camps or political arrests. Rather, the #Resistance ranges from fervent to rabid. Hollywood and media’s brightest stars regularly gather at galas to crudely declare their contempt for Trump and his deplorable supporters. Academics and reporters lodged in elite faculty lounges and ivory towers regularly malign his brains, judgment, and temperament. Activists gather in thousands on the streets to denounce Trump and his voters. None of these people believe Trump is an autocrat, or, if they do they are ignorant of the word’s meaning. None fear for their lives, liberty, or property.

Still, other elites pile on. Federal judges provide legal backup, contriving frivolous theories to block the administration’s moves. Some rule Trump lacks even the authority to undo by executive order things Obama himself introduced by executive order. Governors from states like California, Oregon, and New York announce they will not cooperate with administration policy (current law, really) on immigration, the environment, and other issues.

Amidst such widespread rebellion, waged with impunity against the constitutionally elected president, the critics’ dark warnings that America faces a dictator are more than wrong; they are surreal and damnable. They are what amounts to the howl of that half the nation still refusing to accept election results it dislikes.

Conceding Trump lacks an inmate or body count, critics still offer theories to categorize him in genus monsterus. The main arguments cite Trump’s patented belligerent personality and undisciplined tweets, his use of executive orders; his alleged obstruction in firing James Comey and criticizing Robert Mueller, his blasts at the media, and his immigration policies. These attacks weigh less than the paper they might be printed on.

Trump’s personality doubtless is sui generis for national office. If he doesn’t occasionally offend listeners they probably aren’t listening. But so what? Personality is not policy. A sensibility is not a platform, and bluster and spittle are not coercive state action. The Human Jerk-o-meter could measure Trump in the 99th percentile, and the effect would not change one law, eliminate one right, or jail one critic.

Executive Orders are misunderstood. All modern presidents used them. There is nothing wrong in concept with executive orders. Some are constitutional, some are not. What matters is whether they direct executive priorities within U.S. statutes or try to push authority beyond the law to change the rights and duties of citizens. For example, a president might order the EPA to focus on the Clean Air Act more than the Clean Water Act, or vice versa. That is fine. But, if a president orders the EPA to regulate how much people can water their lawns or what kind of lawns to plant, the president is trying to legislate and create new controls. That is unconstitutional.

Many of Obama’s executive orders were transgressive and unconstitutional. Most of Trump’s executive orders are within the law, and constitutional. However that debate turns out, though, it is silly to argue the issue implicates authoritarianism.

The partisan arguments over Trump’s response to the special counsel also miss key points. Presidents have authority to fire subordinates. The recommendation authored by Deputy Attorney General Rod Rosenstein provides abundant reason for Trump to have fired James Comey, who increasingly is seen as a bitter anti-Trump campaigner. As for Robert Mueller, criticizing is not usurping. Mueller’s investigation continues, but now readily is perceived as a target shoot, unmoored from the original accusations about Russia, in search of any reason to draw blood from Trump. Criticizing that is not dictatorial, it is reasonable.

No doubt Trump criticizes the media more than many modern presidents. But criticism is not oppression. It attacks not freedom of the press but the credibility of the press. That is civically uncomfortable, but the fact is, the war of words between Trump and the media is mutual. The media attacks Trump constantly, ferociously and very often inaccurately as Mollie Hemingway and Glenn Greenwald document from different political perspectives. Trump fighting back is not asserting government control. It is just challenging media assumptions and narratives in a way no president ever has. Reporters don’t like it, so they call it oppression. They are crybabies.

Finally, the accusation that Trump wants to enforce the border under current U.S. laws, as well as better vet immigration from a handful of failed states in the Middle East with significant militant activity, hardly makes him a tyrant. Voters elected Trump to step up border enforcement. Scrutinizing immigrants from a handful of countries with known terrorist networks is not a “Muslim ban.” The idea insults the intelligence since there are about 65 majority Muslim countries the order does not touch.

Trump is not Hitler. Critics’ attacks are policy disputes, not examples of authoritarianism. The debate is driven by sore losers who are willing to erode norms that have preserved the republic for 240 years.

Amen.



CONSCIOUSNESS

For a complete change of pace I turn to a post by Bill Vallicella about consciousness:

This is an addendum to Thomas Nagel on the Mind-Body Problem. In that entry I set forth a problem in the philosophy of mind, pouring it into the mold of an aporetic triad:

1) Conscious experience is not an illusion.

2) Conscious experience has an essentially subjective character that purely physical processes do not share.

3) The only acceptable explanation of conscious experience is in terms of physical properties alone.

Note first that the three propositions are collectively inconsistent: they cannot all be true.  Any two limbs entail the negation of the remaining one. Note second that each limb exerts a strong pull on our acceptance. But we cannot accept them all because they are logically incompatible.

This is one hard nut to crack.  So hard that many, following David Chalmers, call it, or something very much like it, the Hard Problem in the philosophy of mind.  It is so hard that it drives some into the loony bin. I am thinking of Daniel Dennett and those who have the chutzpah to deny (1)….

Sophistry aside, we either reject (2) or we reject (3).  Nagel and I accept (1) and (2) and reject (3). Those of a  scientistic stripe accept (1) and (3) and reject (2)….

I conclude that if our aporetic triad has a solution, the solution is by rejecting (3).

Vallicella reaches his conclusion by subtle argumentation, which I will not attempt to parse in this space.

My view is that (2) is false because the subjective character of conscious experience is an illusion that arises from the physical properties of the central nervous system. Consciousness itself is not an illusion. I accept (1) and (3). For more, see this and this.



EMPATHY IS OVER-RATED

Andrew Scull addresses empathy:

The basic sense in which most of us use “empathy” is analogous to what Adam Smith called “sympathy”: the capacity we possess (or can develop) to see the world through the eyes of another, to “place ourselves in his situation . . . and become in some measure the same person with him, and thence from some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them”….

In making moral choices, many would claim that empathy in this sense makes us more likely to care about others and to consider their interests when choosing our own course of action….

Conversely, understanding others’ feelings doesn’t necessarily lead one to treating them better. On the contrary: the best torturers are those who can anticipate and intuit what their victims most fear, and tailor their actions accordingly. Here, Bloom effectively invokes the case of Winston Smith’s torturer O’Brien in Orwell’s Nineteen Eighty-four, who is able to divine the former’s greatest dread, his fear of rats, and then use it to destroy him.

Guest blogger L.P. addressed empathy in several posts: here, here, here, here, here, and here. This is from the fourth of those posts:

Pro-empathy people think less empathetic people are “monsters.” However, as discussed in part 2 of this series, Baron-Cohen, Kevin Dutton in The Wisdom of Psychopaths, and other researchers establish that empathetic people, particularly psychopaths who have both affective and cognitive empathy, can be “monsters” too.

In fact, Kevin Dutton’s point about psychopaths generally being able to blend in and take on the appearance of the average person makes it obvious that they must have substantial emotional intelligence (linked to cognitive empathy) and experience of others’ feelings in order to mirror others so well….

Another point to consider however, as mentioned in part 1, is that those who try to empathize with others by imagining how they would experience another’s situation aren’t truly empathetic. They’re just projecting their own feelings onto others. This brings to mind Jonathan Haidt’s study on morality and political orientation. On the “Identification with All of Humanity Scale,” liberals most strongly endorsed the dimension regarding identification with “everyone around the world.” (See page 25 of “Understanding Libertarian Morality: The psychological roots of an individualist ideology.”) How can anyone empathize with billions of persons about whom one knows nothing, and a great number of whom are anything but liberal?

Haidt’s finding is a terrific example of problems with self-evaluation and self-reported data – liberals overestimating themselves in this case. I’m not judgmental about not understanding everyone in the world. There are plenty of people I don’t understand either. However, I don’t think people who overestimate their ability to understand people should be in a position that allows them to tamper with, or try to “improve,” the lives of people they don’t understand….

I conclude by quoting C. Daniel Batson who acknowledges the prevailing bias when it comes to evaluating altruism as a virtue. This is from his paper, “Empathy-Induced Altruistic Motivation,” written for the Inaugural Herzliya Symposium on Prosocial Motives, Emotions, and Behavior:

[W]hereas there are clear social sanctions against unbridled self-interest, there are not clear sanctions against altruism. As a result, altruism can at times pose a greater threat to the common good than does egoism.



“NUDGING”

I have addressed Richard Thaler and Cass Sunstein’s “libertarian” paternalism and “nudging” in many posts. (See this post, the list at the bottom of it, and this post.) Nothing that I have written — clever and incisive as it may be — rivals Deirdre McCloskey’s take on Thaler’s non-Nobel prize, “The Applied Theory of Bossing”:

Thaler is distinguished but not brilliant, which is par for the course. He works on “behavioral finance,” the study of mistakes people make when they talk to their stock broker. He can be counted as the second winner for “behavioral economics,” after the psychologist Daniel Kahneman. His prize was for the study of mistakes people make when they buy milk….

Once Thaler has established that you are in myriad ways irrational it’s much easier to argue, as he has, vigorously—in his academic research, in popular books, and now in a column for The New York Times—that you are too stupid to be treated as a free adult. You need, in the coinage of Thaler’s book, co-authored with the law professor and Obama adviser Cass Sunstein, to be “nudged.” Thaler and Sunstein call it “libertarian paternalism.”*…

Wikipedia lists fully 257 cognitive biases. In the category of decision-making biases alone there are anchoring, the availability heuristic, the bandwagon effect, the baseline fallacy, choice-supportive bias, confirmation bias, belief-revision conservatism, courtesy bias, and on and on. According to the psychologists, it’s a miracle you can get across the street.

For Thaler, every one of the biases is a reason not to trust people to make their own choices about money. It’s an old routine in economics. Since 1848, one expert after another has set up shop finding “imperfections” in the market economy that Smith and Mill and Bastiat had come to understand as a pretty good system for supporting human flourishing….

How to convince people to stand still for being bossed around like children? Answer: Persuade them that they are idiots compared with the great and good in charge. That was the conservative yet socialist program of Kahneman, who won the 2002 Nobel as part of a duo that included an actual economist named Vernon Smith…. It is Thaler’s program, too.

Like with the psychologist’s list of biases, though, nowhere has anyone shown that the imperfections in the market amount to much in damaging the economy overall. People do get across the street. Income per head since 1848 has increased by a factor of 20 or 30….

The amiable Joe Stiglitz says that whenever there is a “spillover” — my ugly dress offending your delicate eyes, say — the government should step in. A Federal Bureau of Dresses, rather like the one Saudi Arabia has. In common with Thaler and Krugman and most other economists since 1848, Stiglitz does not know how much his imagined spillovers reduce national income overall, or whether the government is good at preventing the spill. I reckon it’s about as good as the Army Corps of Engineers was in Katrina.

Thaler, in short, melds the list of psychological biases with the list of economic imperfections. It is his worthy scientific accomplishment. His conclusion, unsupported by evidence?

It’s bad for us to be free.

CORRECTION: Due to an editing error, an earlier version of this article referred to Thaler’s philosophy as “paternalistic libertarianism.” The correct term is “libertarian paternalism.”

No, the correct term is paternalism.

I will end on that note.

The Pretence of Knowledge

Updated, with links to a related article and additional posts, and republished.

Friedrich Hayek, in his Nobel Prize lecture of 1974, “The Pretence of Knowledge,” observes that

the great and rapid advance of the physical sciences took place in fields where it proved that explanation and prediction could be based on laws which accounted for the observed phenomena as functions of comparatively few variables.

Hayek’s particular target was the scientism then (and still) rampant in economics. In particular, there was (and is) a quasi-religious belief in the power of central planning (e.g., regulation, “stimulus” spending, control of the money supply) to attain outcomes superior to those that free markets would yield.

But, as Hayek says in closing,

There is danger in the exuberant feeling of ever growing power which the advance of the physical sciences has engendered and which tempts man to try, “dizzy with success” … to subject not only our natural but also our human environment to the control of a human will. The recognition of the insuperable limits to his knowledge ought indeed to teach the student of society a lesson of humility which should guard him against becoming an accomplice in men’s fatal striving to control society – a striving which makes him not only a tyrant over his fellows, but which may well make him the destroyer of a civilization which no brain has designed but which has grown from the free efforts of millions of individuals.

I was reminded of Hayek’s observations by John Cochrane’s post, “Groundhog Day” (The Grumpy Economist, May 11, 2014), wherein Cochrane presents this graph:

[Graph from Cochrane’s post: the Fed’s forecasting models are broken]

Cochrane adds:

Every serious forecast looked like this — Fed, yes, but also CBO, private forecasters, and the term structure of forward rates. Everyone has expected bounce-back growth and rise in interest rates to start next year, for the last 6 years. And every year it has not happened. Welcome to the slump. Every year, Sonny and Cher wake us up, and it’s still cold, and it’s still grey. But we keep expecting spring tomorrow.

Whether the corrosive effects of government microeconomic and regulatory policy, or a failure of those (unprintable adjectives) Republicans to just vote enough wasted-spending Keynesian stimulus, or a failure of the Fed to buy another $3 trillion of bonds, the question of the day really should be why we have this slump — which, let us be honest, no serious forecaster expected.

(I add the “serious forecaster” qualification on purpose. I don’t want to hear randomly mined quotes from bloviating prognosticators who got lucky once, and don’t offer a methodology or a track record for their forecasts.)

The Fed’s forecasting models are nothing more than sophisticated charlatanism — a term that Hayek applied to pseudo-scientific endeavors like macroeconomic modeling. Nor is charlatanism confined to economics and the other social “sciences.” It’s rampant in climate “science,” as Roy Spencer has shown. Consider, for example, this graph from Spencer’s post, “95% of Climate Models Agree: The Observations Must Be Wrong” (Roy Spencer, Ph.D., February 7, 2014):

[Graph from Spencer’s post: 95% of climate models agree; the observations must be wrong]
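
The comparison that Spencer’s chart makes is easy to reproduce in outline: fit a linear trend to each model run and to the observations, then see how many model trends exceed the observed trend. The sketch below uses made-up series in place of the actual model runs and satellite/balloon data, just to illustrate the mechanics:

```python
# Illustration only: comparing modeled and observed warming trends, with
# made-up numbers standing in for the model runs and the satellite/balloon
# series that Spencer's chart actually plots.
import numpy as np

years = np.arange(1979, 2014)
rng = np.random.default_rng(0)

# Hypothetical ensemble of model runs warming ~0.25 deg C per decade, plus noise.
models = 0.025 * (years - 1979)[None, :] + rng.normal(0, 0.08, (90, years.size))
# Hypothetical observed series warming ~0.11 deg C per decade.
observed = 0.011 * (years - 1979) + rng.normal(0, 0.08, years.size)

def trend_per_decade(series):
    """Least-squares linear trend, expressed in deg C per decade."""
    return 10 * np.polyfit(years, series, 1)[0]

model_trends = np.array([trend_per_decade(run) for run in models])
share = np.mean(model_trends > trend_per_decade(observed))
print(f"share of model runs warming faster than observations: {share:.0%}")
```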

Spencer has a lot more to say about the pseudo-scientific aspects of climate “science.” This example is from “Top Ten Good Skeptical Arguments” (May 1, 2014):

1) No Recent Warming. If global warming science is so “settled”, why did global warming stop over 15 years ago (in most temperature datasets), contrary to all “consensus” predictions?

2) Natural or Manmade? If we don’t know how much of the warming in the longer term (say last 50 years) is natural, then how can we know how much is manmade?

3) IPCC Politics and Beliefs. Why does it take a political body (the IPCC) to tell us what scientists “believe”? And when did scientists’ “beliefs” translate into proof? And when was scientific truth determined by a vote…especially when those allowed to vote are from the Global Warming Believers Party?

4) Climate Models Can’t Even Hindcast. How did climate modelers, who already knew the answer, still fail to explain the lack of a significant temperature rise over the last 30+ years? In other words, how do you botch a hindcast?

5) …But We Should Believe Model Forecasts? Why should we believe model predictions of the future, when they can’t even explain the past?

6) Modelers Lie About Their “Physics”. Why do modelers insist their models are based upon established physics, but then hide the fact that the strong warming their models produce is actually based upon very uncertain “fudge factor” tuning?

7) Is Warming Even Bad? Who decided that a small amount of warming is necessarily a bad thing?

8) Is CO2 Bad? How did carbon dioxide, necessary for life on Earth and only 4 parts in 10,000 of our atmosphere, get rebranded as some sort of dangerous gas?

9) Do We Look that Stupid? How do scientists expect to be taken seriously when their “theory” is supported by both floods AND droughts? Too much snow AND too little snow?

10) Selective Pseudo-Explanations. How can scientists claim that the Medieval Warm Period (which lasted hundreds of years), was just a regional fluke…yet claim the single-summer (2003) heat wave in Europe had global significance?

11) (Spinal Tap bonus) Just How Warm is it, Really? Why is it that every subsequent modification/adjustment to the global thermometer data leads to even more warming? What are the chances of that? Either a warmer-still present, or cooling down the past, both of which produce a greater warming trend over time. And none of the adjustments take out a gradual urban heat island (UHI) warming around thermometer sites, which likely exists at virtually all of them — because no one yet knows a good way to do that.

It is no coincidence that leftists believe in the efficacy of central planning and cling tenaciously to a belief in catastrophic anthropogenic global warming. The latter justifies the former, of course. And both beliefs exemplify the left’s penchant for magical thinking, about which I’ve written several times (e.g., here, here, here, here, and here).

Magical thinking is the pretense of knowledge in the nth degree. It conjures “knowledge” from ignorance and hope. And no one better exemplifies magical thinking than our hopey-changey president.


Related reading: Walter E. Williams, “The Experts Have Been Wrong About a Lot of Things, Here’s a Sample“, The Daily Signal, July 25, 2018

Related posts:
Modeling Is Not Science
The Left and Its Delusions
Economics: A Survey
AGW: The Death Knell
The Keynesian Multiplier: Phony Math
Modern Liberalism as Wishful Thinking
“The Science Is Settled”
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
Bayesian Irrationality
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Babe Ruth and the Hot-Hand Hypothesis
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable
A (Long) Footnote about Science
The Balderdash Chronicles
The Probability That Something Will Happen
Analytical and Scientific Arrogance

Analytical and Scientific Arrogance

It is customary in democratic countries to deplore expenditures on armaments as conflicting with the requirements of the social services. There is a tendency to forget that the most important social service that a government can do for its people is to keep them alive and free.

Marshal of the Royal Air Force Sir John Slessor, Strategy for the West

I’m returning to the past to make a timeless point: Analysis is a tool of decision-making, not a substitute for it.

That’s a point to which every analyst will subscribe, just as every judicial candidate will claim to revere the Constitution. But analysts past and present have tended to read their policy preferences into their analytical work, just as too many judges read their political preferences into the Constitution.

What is an analyst? Someone whose occupation requires him to gather facts bearing on an issue, discern robust relationships among the facts, and draw conclusions from those relationships.

Many professionals — from economists to physicists to so-called climate scientists — are more or less analytical in the practice of their professions. That is, they are not just seeking knowledge; they are also seeking to influence policies that depend on that knowledge.

There is also in this country (and in the West, generally) a kind of person who is an analyst first and a disciplinary specialist second (if at all). Such a person brings his pattern-seeking skills to the problems facing decision-makers in government and industry. Depending on the kinds of issues he addresses or the kinds of techniques that he deploys, he may be called a policy analyst, operations research analyst, management consultant, or something of that kind.

It is one thing to say, as a scientist or analyst, that a certain option (a policy, a system, a tactic) is probably better than the alternatives, when judged against a specific criterion (most effective for a given cost, most effective against a certain kind of enemy force). It is quite another thing to say that the option is the one that the decision-maker should adopt. The scientist or analyst is looking at a small slice of the world; the decision-maker has to take into account things that the scientist or analyst did not (and often could not) take into account (economic consequences, political feasibility, compatibility with other existing systems and policies).

It is (or should be) unconscionable for a scientist or analyst to state or imply that he has the “right” answer. But the clever arguer avoids coming straight out with the “right” answer; instead, he slants his presentation in a way that makes the “right” answer seem right.

A classic case in point is the hysteria surrounding the increase in “global” temperature in the latter part of the 20th century, and the coincidence of that increase with the rise in CO2. I have had much to say about the hysteria and the pseudo-science upon which it is based. (See links at the end of this post.) Here, I will take as a case study an event to which I was somewhat close: the treatment of the Navy’s proposal, made in the early 1980s, for an expansion to what was conveniently characterized as the 600-ship Navy. (The expansion would have involved personnel, logistics systems, ancillary war-fighting systems, stockpiles of parts and ammunition, and aircraft of many kinds — all in addition to a 25-percent increase in the number of ships in active service.)

The usual suspects, of an ilk I profiled here, wasted no time in making the 600-ship Navy seem like a bad idea. Of the many studies and memos on the subject, two by the Congressional Budget Office stand out as exemplars of slanted analysis by innuendo: “Building a 600-Ship Navy: Costs, Timing, and Alternative Approaches” (March 1982), and “Future Budget Requirements for the 600-Ship Navy: Preliminary Analysis” (April 1985). What did the “whiz kids” at CBO have to say about the 600-ship Navy? Here are excerpts of the concluding sections:

The Administration’s five-year shipbuilding plan, containing 133 new construction ships and estimated to cost over $80 billion in fiscal year 1983 dollars, is more ambitious than previous programs submitted to the Congress in the past few years. It does not, however, contain enough ships to realize the Navy’s announced force level goals for an expanded Navy. In addition, this plan—as has been the case with so many previous plans—has most of its ships programmed in the later out-years. Over half of the 133 new construction ships are programmed for the last two years of the five-year plan. Achievement of the Navy’s expanded force level goals would require adhering to the out-year building plans and continued high levels of construction in the years beyond fiscal year 1987. [1982 report, pp. 71-72]

Even the budget increases estimated here would be difficult to achieve if history is a guide. Since the end of World War II, the Navy has never sustained real increases in its budget for more than five consecutive years. The sustained 15-year expansion required to achieve and sustain the Navy’s present plans would result in a historic change in budget trends. [1985 report, p. 26]

The bias against the 600-ship Navy drips from the pages. The “argument” goes like this: If it hasn’t been done, it can’t be done and, therefore, shouldn’t be attempted. Why not? Because the analysts at CBO were a breed of cat that emerged in the 1960s, when Robert Strange McNamara and his minions used simplistic analysis (“tablesmanship”) to play “gotcha” with the military services:

We [I was one of the minions] did it because we were encouraged to do it, though not in so many words. And we got away with it, not because we were better analysts — most of our work was simplistic stuff — but because we usually had the last word. (Only an impassioned personal intercession by a service chief might persuade McNamara to go against SA [the Systems Analysis office run by Alain Enthoven] — and the key word is “might.”) The irony of the whole process was that McNamara, in effect, substituted “civilian judgment” for oft-scorned “military judgment.” McNamara revealed his preference for “civilian judgment” by elevating Enthoven and SA a level in the hierarchy in 1965, even though (or perhaps because) the services and JCS had been open in their disdain of SA and its snotty young civilians.

In the case of the 600-ship Navy, civilian analysts did their best to derail it by sending the barely disguised message that it was “unaffordable”. I was reminded of this “insight” by a colleague of long standing, who recently proclaimed that “any half-decent cost model would show a 600-ship Navy was unsustainable into this century.” How could a cost model show such a thing when the sustainability (affordability) of defense is a matter of political will, not arithmetic?

Defense spending fluctuates as a function of perceived necessity. Consider, for example, this graph (misleadingly labeled “Recent Defense Spending”) from usgovernmentspending.com, which shows defense spending as a percentage of GDP for fiscal year (FY) 1792 to FY 2017:

What was “unaffordable” before World War II suddenly became affordable. And so it has gone throughout the history of the republic. Affordability (or sustainability) is a political issue, not a line drawn in the sand by a smart-ass analyst who gives no thought to the consequences of spending too little on defense.

I will now zoom in on the era of interest.

CBO’s “Building a 600-Ship Navy: Costs, Timing, and Alternative Approaches“, which crystallized opposition to the 600-ship Navy, estimates the long-run, annual obligational authority required to sustain a 600-ship Navy (of the Navy’s design) to be about 20 percent higher in constant dollars than the FY 1982 Navy budget. (See Options I and II in Figure 2, p. 50.) The long run would have begun around FY 1994, following several years of higher spending associated with the buildup of forces. I don’t have a historical breakdown of the Department of Defense (DoD) budget by service, but I found values for all-DoD spending on military programs at Office of Management and Budget Historical Tables. Drawing on Tables 5.2 and 10.1, I constructed a constant-dollar index of DoD’s obligational authority (FY 1982 = 1); a sketch of the computation follows the table:

FY Index
1983 1.08
1984 1.13
1985 1.21
1986 1.17
1987 1.13
1988 1.11
1989 1.10
1990 1.07
1991 0.97
1992 0.97
1993 0.90
1994 0.82
1995 0.82
1996 0.80
1997 0.80
1998 0.79
1999 0.84
2000 0.86
2001 0.92
2002 0.98
2003 1.23
2004 1.29
2005 1.28
2006 1.36
2007 1.50
2008 1.65
2009 1.61
2010 1.66
2011 1.62
2012 1.51
2013 1.32
2014 1.32
2015 1.25
2016 1.29
2017 1.34
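
The computation behind the index is simple enough to sketch. The Python below uses placeholder numbers, not the actual figures from OMB Tables 5.2 and 10.1; it shows only the method: deflate each year’s obligational authority and divide by the deflated FY 1982 value.

```python
# Placeholder numbers, not the actual figures from OMB Historical
# Tables 5.2 (budget authority) and 10.1 (deflators).
nominal_ba = {1982: 212.0, 1983: 239.0, 1984: 265.0}   # current dollars, billions (hypothetical)
deflator   = {1982: 1.000, 1983: 1.040, 1984: 1.080}   # price index, FY 1982 = 1 (hypothetical)

base_real = nominal_ba[1982] / deflator[1982]

for fy in sorted(nominal_ba):
    real_ba = nominal_ba[fy] / deflator[fy]       # constant FY 1982 dollars
    print(fy, round(real_ba / base_real, 2))      # index, FY 1982 = 1
```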

There was no inherent reason that defense spending couldn’t have remained on the trajectory of the middle 1980s. The slowdown of the late 1980s was a reflection of improved relations between the U.S. and USSR. Those improved relations had much to do with the Reagan defense buildup, of which the goal of attaining a 600-ship Navy was an integral part.

The Reagan buildup helped to convince Soviet leaders (Gorbachev in particular) that trying to keep pace with the U.S. was futile and (actually) unaffordable. The rest — the end of the Cold War and the dissolution of the USSR — is history. The buildup, in other words, sowed the seeds of its own demise. But that couldn’t have been predicted with certainty in the early-to-middle 1980s, when CBO and others were doing their best to undermine political support for more defense spending. Had CBO and the other nay-sayers succeeded in their aims, the Cold War and the USSR might still be with us.

The defense drawdown of the mid-1990s was a deliberate response to the end of the Cold War and lack of other serious threats, not a historical necessity. It was certainly not on the table in the early 1980s, when the 600-ship Navy was being pushed. Had the Cold War not thawed and ended, there is no reason that U.S. defense spending couldn’t have continued at the pace of the middle 1980s, or higher. As is evident in the index values for recent years, even after drastic force reductions in Iraq, defense spending is now about one-third higher than it was in FY 1982.

John Lehman, Secretary of the Navy from 1981 to 1987, was rightly incensed that analysts — some of them on his payroll as civilian employees and contractors — were, in effect, undermining a deliberate strategy of pressing against a key Soviet weakness — the unsustainability of its defense strategy. There was much lamentation at the time about Lehman’s “war” on the offending parties, one of which was the think-tank for which I then worked. I can now admit openly that I was sympathetic to Lehman and offended by the arrogance of analysts who believed that it was their job to suggest that spending more on defense was “unaffordable”.

When I was a young analyst I was handed a pile of required reading material. One of the items was Methods of Operations Research, by Philip M. Morse and George E. Kimball. Morse, in the early months of America’s involvement in World War II, founded the civilian operations-research organization from which my think-tank evolved. Kimball was a leading member of that organization. Their book is notable not just as a compendium of analytical methods that were applied, with much success, to the war effort. It is also introspective — and properly humble — about the power and role of analysis.

Two passages, in particular, have stuck with me for the more than 50 years since I first read the book. Here is one of them:

[S]uccessful application of operations research usually results in improvements by factors of 3 or 10 or more…. In our first study of any operation we are looking for these large factors of possible improvement…. They can be discovered if the [variables] are given only one significant figure,…any greater accuracy simply adds unessential detail.

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. [p. 38]
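
To make “hemibel thinking” concrete, here is a minimal sketch in Python of rounding an estimate to the nearest hemibel. It is my illustration of the idea, not anything found in Morse and Kimball:

```python
import math

def nearest_hemibel(x):
    """Round a positive estimate to the nearest hemibel (half a bel, a factor of ~3.16)."""
    half_bels = round(2 * math.log10(x))
    return 10 ** (half_bels / 2)

# Estimates that round to the same hemibel are, for first-cut purposes, the same.
for estimate in (8, 25, 90, 110):
    print(estimate, "->", round(nearest_hemibel(estimate), 1))
```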

Morse and Kimball — two brilliant scientists and analysts, who had worked with actual data (pardon the redundancy) about combat operations — counseled against making too much of quantitative estimates given the uncertainties inherent in combat. But, as I have seen over the years, analysts eager to “prove” something nevertheless make a huge deal out of minuscule differences in quantitative estimates — estimates based not on actual combat operations but on theoretical values derived from models of systems and operations yet to see the light of day. (I also saw, and still see, too much “analysis” about soft subjects, such as domestic politics and international relations. The amount of snake oil emitted by “analysts” — sometimes called scholars, journalists, pundits, and commentators — would fill the Great Lakes. Their perceptions of reality have an uncanny way of supporting their unabashed decrees about policy.)

The second memorable passage from Methods of Operations Research goes directly to the point of this post:

Operations research done separately from an administrator in charge of operations becomes an empty exercise. [p. 10].

In the case of CBO and other opponents of the 600-ship Navy, substitute “cost estimate” for “operations research”, “responsible defense official” for “administrator in charge”, and “strategy” for “operations”. The principle is the same: The CBO and its ilk knew the price of the 600-ship Navy, but had no inkling of its value.

Too many scientists and analysts want to make policy. On the evidence of my close association with scientists and analysts over the years — including a stint as an unsparing reviewer of their products — I would say that they should learn to think clearly before they inflict their views on others. But too many of them — even those with Ph.D.s in STEM disciplines — are incapable of thinking clearly, and more than capable of slanting their work to support their biases. Exhibit A: Michael Mann, James Hansen (more), and their co-conspirators in the catastrophic-anthropogenic-global-warming scam.


Related posts:
The Limits of Science
How to View Defense Spending
Modeling Is Not Science
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
The McNamara Legacy: A Personal Perspective
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
Verbal Regression Analysis, the “End of History,” and Think-Tanks
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
AGW in Austin? (II)
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Words Fail Us
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Babe Ruth and the Hot-Hand Hypothesis
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable
A (Long) Footnote about Science
Further Thoughts about Probability
Climate Scare Tactics: The Mythical Ever-Rising 30-Year Average
A Grand Strategy for the United States

The Probability That Something Will Happen

A SINGLE EVENT DOESN’T HAVE A PROBABILITY

A believer in single-event probabilities takes the view that a single flip of a coin or roll of a die has a probability. I do not. A probability represents the frequency with which an outcome occurs over the very long run, and it is only an average that conceals random variations.

The outcome of a single coin flip can’t be reduced to a percentage or probability. It can only be described in terms of its discrete, mutually exclusive possibilities: heads (H) or tails (T). The outcome of a single roll of a die or pair of dice can only be described in terms of the number of points that may come up, 1 through 6 or 2 through 12.

Yes, the expected frequencies of H, T, and various point totals can be computed by simple mathematical operations. But those are only expected frequencies. They say nothing about the next coin flip or dice roll, nor do they more than approximate the actual frequencies that will occur over the next 100, 1,000, or 10,000 such events.

Of what value is it to know that the probability of H is 0.5 when H fails to occur in 11 consecutive flips of a fair coin? Of what value is it to know that the probability of rolling a 7 is 0.167 — meaning that 7 comes up once every 6 rolls, on average — when 7 may not appear for 56 consecutive rolls? These examples are drawn from simulations of 10,000 coin flips and 1,000 dice rolls. They are simulations that I ran once, not simulations that I cherry-picked from many runs. (The Excel file is at https://drive.google.com/open?id=1FABVTiB_qOe-WqMQkiGFj2f70gSu6a82. Coin flips are at the first tab, dice rolls are at the second tab.)
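
A few lines of Python reproduce the flavor of those simulations. This sketch is not the Excel workbook linked above; it is an illustrative re-run, and the droughts it turns up will differ from run to run, which is the point:

```python
import random

def longest_drought(events, target):
    """Longest run of consecutive trials in which `target` fails to appear."""
    longest = current = 0
    for e in events:
        current = 0 if e == target else current + 1
        longest = max(longest, current)
    return longest

# Results differ from run to run, which is the point.
flips = [random.choice("HT") for _ in range(10_000)]
rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(1_000)]

print("share of heads:           ", flips.count("H") / len(flips))
print("longest run without heads:", longest_drought(flips, "H"))
print("share of sevens:          ", rolls.count(7) / len(rolls))
print("longest run without a 7:  ", longest_drought(rolls, 7))
```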

Let’s take another example, one that is more interesting and has generated much controversy over the years. It’s the Monty Hall problem,

a brain teaser, in the form of a probability puzzle, loosely based on the American television game show Let’s Make a Deal and named after its original host, Monty Hall. The problem was originally posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975…. It became famous as a question from a reader’s letter quoted in Marilyn vos Savant’s “Ask Marilyn” column in Parade magazine in 1990 … :

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Vos Savant’s response was that the contestant should switch to the other door…. Under the standard assumptions, contestants who switch have a 2/3 chance of winning the car, while contestants who stick to their initial choice have only a 1/3 chance.

Vos Savant’s answer is correct, but only if the contestant is allowed to play an unlimited number of games. A player who adopts a strategy of “switch” in every game will, in the long run, win about 2/3 of the time (explanation here). That is, the player has a better chance of winning if he chooses “switch” rather than “stay”.
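
The long-run claim is easy to verify by simulation. The sketch below, my own illustration with the standard assumptions baked in, plays a large number of games under each fixed strategy:

```python
import random

def play(switch, trials=100_000):
    """Fraction of games won under a fixed 'switch' or 'stay' strategy."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a goat door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials

print("always switch:", play(switch=True))    # about 2/3 over many games
print("always stay:  ", play(switch=False))   # about 1/3 over many games
```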

Read the preceding paragraph carefully and you will spot the logical defect that underlies the belief in single-event probabilities: The long-run winning strategy (“switch”) is transformed into a “better chance” to win a particular game. What does that mean? How does an average frequency of 2/3 improve one’s chances of winning a particular game? It doesn’t. Game results are utterly random; that is, the average frequency of 2/3 has no bearing on the outcome of a single game.

I’ll try to drive the point home by returning to the coin-flip game, with money thrown into the mix. A $1 bet on H means a gain of $1 if H turns up, and a loss of $1 if T turns up. The expected value of the bet — if repeated over a very large number of trials — is zero. The bettor expects to win and lose the same number of times, and to walk away no richer or poorer than when he started. And for a very large number of games, the bettor will walk away approximately (but not necessarily exactly) neither richer nor poorer than when he started. How many games? In the simulation of 10,000 games mentioned earlier, H occurred 50.6 percent of the time. A very large number of games is probably at least 100,000.

Let us say, for the sake of argument, that a bettor has played 100,000 coin-flip games at $1 a game and come out exactly even. What does that mean for the play of the next game? Does it have an expected value of zero?

To see why the answer is “no”, let’s make it interesting and say that the bet on the next game — the next coin flip — is $10,000. The size of the bet should wonderfully concentrate the bettor’s mind. He should now see the situation for what it really is: There are two possible outcomes, and only one of them will be realized. An average of the two outcomes is meaningless. The single coin flip doesn’t have a “probability” of 0.5 H and 0.5 T and an “expected payoff” of zero. The coin will come up either H or T, and the bettor will either lose $10,000 or win $10,000.

To repeat: The outcome of a single coin flip doesn’t have an expected value for the bettor. It has two possible values, and the bettor must decide whether he is willing to lose $10,000 on the single flip of a coin.

By the same token (or coin), the outcome of a single roll of a pair of dice doesn’t have a 1-in-6 probability of coming up 7. It has 36 possible outcomes and 11 possible point totals, and the bettor must decide how much he is willing to lose if he puts his money on the wrong combination or outcome.

In summary, it is a logical fallacy to ascribe a probability to a single event. A probability represents the observed or computed average value of a very large number of like events. A single event cannot possess that average value. A single event has a finite number of discrete and mutually exclusive possible outcomes. Those outcomes will not “average out” in that single event. Only one of them will obtain, just as Schrödinger’s cat, when the box is opened, turns out to be either alive or dead.

To say or suggest that the outcomes will average out — which is what a probability implies — is tantamount to saying that Jack Sprat and his wife were neither skinny nor fat because their body-mass indices averaged to a normal value. It is tantamount to saying that one can’t drown by walking across a pond with an average depth of 1 foot, when that average conceals the existence of a 100-foot-deep hole.

It should go without saying that a specific event that might occur — rain tomorrow, for example — doesn’t have a probability.

WHAT ABOUT THE PROBABILITY OF PRECIPITATION?

Weather forecasters (meteorologists) are constantly saying things like “there’s an 80-percent probability of precipitation (PoP) in __________ tomorrow”. What do such statements mean? Not much:

It is not surprising that this issue is difficult for the general public, given that it is debated even within the scientific community. Some propose a “frequentist” interpretation: there will be at least a minimum amount of rain on 80% of days with weather conditions like they are today. Although preferred by many scientists, this explanation may be particularly difficult for the general public to grasp because it requires regarding tomorrow as a class of events, a group of potential tomorrows. From the perspective of the forecast user, however, tomorrow will happen only once. A perhaps less abstract interpretation is that PoP reflects the degree of confidence that the forecaster has that it will rain. In other words, an 80% chance of rain means that the forecaster strongly believes that there will be at least a minimum amount of rain tomorrow. The problem, from the perspective of the general public, is that when PoP is forecasted, none of these interpretations is specified.

There are clearly some interpretations that are not correct. The percentage expressed in PoP neither refers directly to the percent of area over which precipitation will fall nor does it refer directly to the percent of time precipitation will be observed on the forecast day. Although both interpretations are clearly wrong, there is evidence that the general public holds them to varying degrees. Such misunderstandings are critical because they may affect the decisions that people make. If people misinterpret the forecast as percent time or percent area, they may be more inclined to take precautionary action than are those who have the correct probabilistic interpretation, because they think that it will rain somewhere or some time tomorrow. The negative impact of such misunderstandings on decision making, both in terms of unnecessary precautions as well as erosion in user trust, could well eliminate any potential benefit of adding uncertainty information to the forecast. [Susan Joslyn, Nimor Nadav-Greenberg, and Rebecca M. Nichols, “Probability of Precipitations: Assessment and Enhancement of End-User Understanding“, Journal of the American Meteorological Society, February 2009, citations omitted]

The frequentist interpretation is close to being correct, but it still involves a great deal of guesswork. Rainfall in a particular location is influenced by many variables (e.g., atmospheric pressure, direction and rate of change of atmospheric pressure, ambient temperature, local terrain, presence or absence of bodies of water, vegetation, moisture content of the atmosphere, height of clouds above the terrain, depth of cloud cover). It is nigh unto impossible to say that today’s (or tomorrow’s or next week’s) weather conditions are like (or will be like) those that in the past resulted in rainfall in a particular location 80 percent of the time.

That leaves the Bayesian interpretation, in which the forecaster combines some facts (e.g., the presence or absence of a low-pressure system in or toward the area, the presence or absence of a flow of water vapor in or toward the area) with what he has observed in the past to arrive at a guess about future weather. He then attaches a probability to his guess to indicate the strength of his confidence in it.

Thus:

Bayesian probability represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials.

An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.”

And thus:

The Bayesian approach to learning is based on the subjective interpretation of probability. The value of the proportion p is unknown, and a person expresses his or her opinion about the uncertainty in the proportion by means of a probability distribution placed on a set of possible values of p.
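
For concreteness, here is what such a subjective assessment might look like arithmetically, with made-up numbers. Suppose the forecaster judges that rain falls on 30 percent of all days, that a low-pressure system of today’s kind precedes rain 90 percent of the time, and that it precedes dry days 10 percent of the time. Bayes’s rule then gives:

```latex
P(\text{rain} \mid \text{system}) =
  \frac{P(\text{system} \mid \text{rain})\,P(\text{rain})}
       {P(\text{system} \mid \text{rain})\,P(\text{rain}) + P(\text{system} \mid \text{no rain})\,P(\text{no rain})}
  = \frac{0.9 \times 0.3}{0.9 \times 0.3 + 0.1 \times 0.7} \approx 0.79
```

The “80 percent” is only as good as the subjective inputs on the right-hand side of the equation.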

It is impossible to attach a probability — as properly defined in the first part of this article — to something that hasn’t happened, and may not happen. So when you read or hear a statement like “the probability of rain tomorrow is 80 percent”, you should mentally translate it into language like this:

X guesses that Y will (or will not) happen at time Z, and the “probability” that he attaches to his guess indicates his degree of confidence in it.

The guess may be well-informed by systematic observation of relevant events, but it remains a guess, as most Americans have learned and relearned over the years when rain has failed to materialize or has spoiled an outdoor event that was supposed to be rain-free.

BUT AREN’T SOME THINGS MORE LIKELY TO HAPPEN THAN OTHERS?

Of course. But only one thing will happen at a given time and place.

If a person walks across a shooting range where live ammunition is being used, he is more likely to be killed than if he walks across the same patch of ground when no one is shooting. And a clever analyst could concoct a probability of a person’s being shot by writing an equation that includes such variables as his size, the speed with which he walks, the number of shooters, their rate of fire, and the distance across the shooting range.

What would the probability estimate mean? It would mean that if a very large number of persons walked across the shooting range under identical conditions, approximately S percent of them would be shot. But the clever analyst cannot specify which of the walkers would be among the S percent.

Here’s another way to look at it. One person wearing head-to-toe bullet-proof armor could walk across the range a large number of times and expect to be hit by a bullet on S percent of his crossings. But the hardy soul wouldn’t know on which of the crossings he would be hit.

Suppose the hardy soul became a foolhardy one and made a bet that he could cross the range without being hit. Further, suppose that S is estimated to be 0.75; that is, 75 percent of a string of walkers would be hit, or a single (bullet-proof) walker would be hit on 75 percent of his crossings. Knowing the value of S, the foolhardy fellow offers to claim $3 million if he crosses the range unscathed — one time — and to pay out $1 million (from his own pocket or his estate) if he is shot. In expected-value terms, that’s a fair bet, isn’t it?

No it isn’t. This situation is exactly analogous to the $10,000 bet on a single coin flip, discussed above. But I will dissect this one in a different way, to the same end.

The bet should be understood for what it is, an either-or proposition. The foolhardy walker will either lose $1 million or win $3 million. The bettor (or bettors) who take the other side of the bet will either win $1 million or lose $3 million.

As anyone with elementary reading and reasoning skills should be able to tell, those possible outcomes are not the same as the outcome that would obtain (approximately) if the foolhardy fellow could walk across the shooting range 1,000 times. If he could, he would come very close to breaking even, as would those who bet against him.
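
The asymmetry is easy to see in a simulation. The sketch below is illustrative only; it uses the stakes of the bet described above and contrasts a single crossing with a long string of crossings:

```python
import random

P_HIT = 0.75       # assumed long-run fraction of crossings on which the walker is hit
WIN, LOSS = 3, 1   # $ millions: claim 3 if unscathed, pay 1 if hit

def net_winnings(crossings):
    """Net result, in $ millions, of betting on every one of `crossings` crossings."""
    net = 0
    for _ in range(crossings):
        net += -LOSS if random.random() < P_HIT else WIN
    return net

print("one crossing:   ", net_winnings(1))     # either -1 or +3; nothing in between
print("1,000 crossings:", net_winnings(1000))  # near zero, relative to the sums wagered
```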

To put it as simply as possible:

When an event has more than one possible outcome, a single trial cannot replicate the average outcome of a large number of trials (replications of the event).

It follows that the average outcome of a large number of trials — the probability of each possible outcome — cannot occur in a single trial.

It is therefore meaningless to ascribe a probability to any possible outcome of a single trial.

MODELING AND PROBABILITY

Sometimes, when things interact, the outcome of the interactions will conform to an expected value — if that value is empirically valid. For example, if a pot of pure water is put over a flame at sea level, the temperature of the water will rise to 212 degrees Fahrenheit and water molecules will then begin to escape into the air in a gaseous form (steam).  If the flame is kept hot enough and applied long enough, the water in the pot will continue to vaporize until the pot is empty.

That isn’t a probabilistic description of boiling. It’s just a description of what’s known to happen to water under certain conditions.

But it bears a similarity to a certain kind of probabilistic reasoning. For example, in a paper that I wrote long ago about warfare models, I said this:

Consider a five-parameter model, involving the conditional probabilities of detecting, shooting at, hitting, and killing an opponent — and surviving, in the first place, to do any of these things. Such a model might easily yield a cumulative error of a hemibel [a factor of 3], given a twenty five percent error in each parameter.

Mathematically, 1.25^5 ≈ 3.05. Which is true enough, but also misleadingly simple.

A mathematical model of that kind rests on the crucial assumption that the component probabilities are based on observations of actual events occurring in similar conditions. It is safe to say that the values assigned to the parameters of warfare models, econometric models, sociological models, and most other models outside the realm of physics, chemistry, and other “hard” sciences fail to satisfy that assumption.

Further, a mathematical model yields only the expected (average) outcome of a large number of events occurring under conditions similar to those from which the component probabilities were derived. (A Monte Carlo model merely yields a quantitative estimate of the spread around the average outcome.) Again, this precludes most models outside the “hard” sciences, and even some within that domain.
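
Here is a minimal sketch of that point, using notional parameter errors rather than values from any actual warfare model: multiply five factors, each misestimated by as much as 25 percent, and look at the spread of the product.

```python
import random

def compounded_error():
    """Product of five parameters, each misestimated by as much as 25 percent."""
    product = 1.0
    for _ in range(5):
        product *= random.uniform(0.75, 1.25)
    return product

results = sorted(compounded_error() for _ in range(100_000))
low, high = results[2_500], results[97_500]        # middle 95 percent of outcomes
print(f"95 percent of compounded errors fall between {low:.2f}x and {high:.2f}x")
print(f"the extremes approach {0.75 ** 5:.2f}x and {1.25 ** 5:.2f}x")   # about a hemibel
```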

The moral of the story: Don’t be gulled by a statement about the expected outcome of an event, even when the statement seems to be based on a rigorous mathematical formula. Look behind the formula for an empirical foundation. And not just any empirical foundation, but one that is consistent with the situation to which the formula is being applied.

And when you’ve done that, remember that the formula expresses a point estimate around which there’s a wide — very wide — range of uncertainty. Which was the real point of the passage quoted above. The only sure things in life are death, taxes, and regulation.

PROBABILITY VS. OPPORTUNITY

Warfare models, as noted, deal with interactions among large numbers of things. If a large unit of infantry encounters another large unit of enemy infantry, and the units exchange gunfire, it is reasonable to expect the following consequences:

  • As the numbers of infantrymen increase, more of them will be shot, for a given rate of gunfire.
  • As the rate of gunfire increases, more of the infantrymen will be shot, for a given number of infantrymen.

These consequences don’t represent probabilities, though an inveterate modeler will try to represent them with a probabilistic model. They represent opportunities — opportunities for bullets to hit bodies. It is entirely possible that some bullets won’t hit bodies and some bodies won’t be hit by bullets. But more bullets will hit bodies if there are more bodies in a given space. And a higher proportion of a given number of bodies will be hit as more bullets enter a given space.

That’s all there is to it.

It has nothing to do with probability. The actual outcome of a past encounter is the actual outcome of that encounter, and the number of casualties has everything to do with the minutiae of the encounter and nothing to do with probability. A fortiori, the number of casualties resulting from a possible future encounter would have everything to do with the minutiae of that encounter and nothing to do with probability. Given the uniqueness of any given encounter, it would be wrong to characterize its outcome (e.g., number of casualties per infantryman) as a probability.


Related posts:
Understanding the Monty Hall Problem
The Compleat Monty Hall Problem
Some Thoughts about Probability
My War on the Misuse of Probability
Scott Adams Understands Probability
Further Thoughts about Probability

The Balderdash Chronicles

Balderdash is nonsense, to put it succinctly. Less succinctly, balderdash is stupid or illogical talk; senseless rubbish. Rather thoroughly, it is

balls, bull, rubbish, shit, rot, crap, garbage, trash, bunk, bullshit, hot air, tosh, waffle, pap, cobblers, bilge, drivel, twaddle, tripe, gibberish, guff, moonshine, claptrap, hogwash, hokum, piffle, poppycock, bosh, eyewash, tommyrot, horsefeathers, or buncombe.

I have encountered innumerable examples of balderdash in my 35 years of full-time work, 14 subsequent years of blogging, and many overlapping years as an observer of the political scene. This essay documents some of the worst balderdash that I have come across.

THE LIMITS OF SCIENCE

Science (or what too often passes for it) generates an inordinate amount of balderdash. Consider an article in The Christian Science Monitor: “Why the Universe Isn’t Supposed to Exist”, which reads in part:

The universe shouldn’t exist — at least according to a new theory.

Modeling of conditions soon after the Big Bang suggests the universe should have collapsed just microseconds after its explosive birth, the new study suggests.

“During the early universe, we expected cosmic inflation — this is a rapid expansion of the universe right after the Big Bang,” said study co-author Robert Hogan, a doctoral candidate in physics at King’s College in London. “This expansion causes lots of stuff to shake around, and if we shake it too much, we could go into this new energy space, which could cause the universe to collapse.”

Physicists draw that conclusion from a model that accounts for the properties of the newly discovered Higgs boson particle, which is thought to explain how other particles get their mass; faint traces of gravitational waves formed at the universe’s origin also inform the conclusion.

Of course, there must be something missing from these calculations.

“We are here talking about it,” Hogan told Live Science. “That means we have to extend our theories to explain why this didn’t happen.”

No kidding!

Though there’s much more to come, this example should tell you all that you need to know about the fallibility of scientists. If you need more examples, consider these.

MODELS LIE WHEN LIARS MODEL

Not that there’s anything wrong with being wrong, but there’s a great deal wrong with seizing on a transitory coincidence between two variables (CO2 emissions and “global” temperatures in the late 1900s) and spurring a massively wrong-headed “scientific” mania — the mania of anthropogenic global warming.

What it comes down to is modeling, which is simply a way of baking one’s assumptions into a pseudo-scientific mathematical concoction. Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed?

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a general explanation of the sham, see this.

SCIENCE VS. SCIENTISM: STEVEN PINKER’S BALDERDASH

The examples that I’ve adduced thus far (and most of those that follow) demonstrate a mode of thought known as scientism: the application of the tools and language of science to create a pretense of knowledge.

No less a personage than Steven Pinker defends scientism in “Science Is Not Your Enemy”. Actually, Pinker doesn’t overtly defend scientism, which is indefensible; he just redefines it to mean science:

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.

After that slippery performance, it’s all smooth sailing — or so Pinker thinks — because all he has to do is point out all the good things about science. And if scientism=science, then scientism is good, right?

Wrong. Scientism remains indefensible, and there’s a lot of scientism in what passes for science. Pinker says this, for example:

The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation.

There is nothing new in this, as Pinker admits by adverting to Madison. Nor was the understanding of human nature “submerged” except in the writings of scientistic social “scientists”. We ordinary mortals were never fooled. Moreover, Pinker’s idea of scientific political science seems to be data-dredging:

With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively.

As explained here, data-dredging is about as scientistic as it gets:

When enough hypotheses are tested, it is virtually certain that some falsely appear statistically significant, since every data set with any degree of randomness contains some spurious correlations. Researchers using data mining techniques if they are not careful can be easily misled by these apparently significant results, even though they are mere artifacts of random variation.
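
The mechanics are easy to demonstrate. The sketch below is an illustration, not anyone’s published study; it correlates pairs of pure-noise series and counts how many of them clear the conventional 5-percent significance bar by luck alone:

```python
import random
from statistics import NormalDist

# Illustration only: correlate pairs of pure-noise series and count how
# many "findings" clear the conventional 5-percent significance bar.
N, HYPOTHESES = 30, 100
false_positives = 0
for _ in range(HYPOTHESES):
    x = [random.gauss(0, 1) for _ in range(N)]
    y = [random.gauss(0, 1) for _ in range(N)]
    mx, my = sum(x) / N, sum(y) / N
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / (sxx * syy) ** 0.5          # Pearson correlation of unrelated series
    z = abs(r) * N ** 0.5                 # rough normal approximation to the test statistic
    if 2 * (1 - NormalDist().cdf(z)) < 0.05:
        false_positives += 1

print(f"{false_positives} of {HYPOTHESES} pure-noise correlations look 'significant'")
```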

Turning to the humanities, Pinker writes:

[T]here can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate [sic] and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of a progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

What on earth is Pinker talking about? This is over-the-top bafflegab worthy of Professor Irwin Corey. But because it comes from the keyboard of a noted (self-promoting) academic, we are meant to take it seriously.

Yes, art, culture, and society are products of human brains. So what? Poker is, too, and it’s a lot more amenable to explication by the mathematical tools of science. But the successful application of those tools depends on traits that are more art than science (e.g., bluffing, spotting “tells”, and avoiding “tells”).

More “explanatory depth” in the humanities means a deeper pile of B.S. Great art, literature, and music aren’t concocted formulaically. If they could be, modernism and postmodernism wouldn’t have yielded mountains of trash.

Oh, I know: It will be different next time. As if the tools of science are immune to misuse by obscurantists, relativists, and practitioners of political correctness. Tell it to those climatologists who dare to challenge the conventional wisdom about anthropogenic global warming. Tell it to the “sub-human” victims of the Third Reich’s medical experiments and gas chambers.

Pinker anticipates this kind of objection:

At a 2011 conference, [a] colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

But the Tuskegee study was only a one-time failure in the sense that it was the only Tuskegee study. As a type of failure — the misuse of science (witting and unwitting) — it goes hand-in-hand with the advance of scientific knowledge. Should science be abandoned because of that? Of course not. But the hard fact is that science, qua science, is powerless against human nature.

Pinker plods on by describing ways in which science can contribute to the visual arts, music, and literary scholarship:

The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

I wonder how Rembrandt and the Impressionists (among other pre-moderns) managed to create visual art of such evident excellence without relying on the kinds of scientific mechanisms invoked by Pinker. I wonder what music scholars would learn about excellence in composition that isn’t already evident in the general loathing of audiences for most “serious” modern and contemporary music.

As for literature, great writers know instinctively and through self-criticism how to tell stories that realistically depict character, social psychology, culture, conflict, and all the rest. Scholars (and critics), at best, can acknowledge what rings true and has dramatic or comedic merit. Scientistic pretensions in scholarship (and criticism) may result in promotions and raises for the pretentious, but they do not add to the sum of human enjoyment — which is the real test of literature.

Pinker inveighs against critics of scientism (science, in Pinker’s vocabulary) who cry “reductionism” and “simplification”. With respect to the former, Pinker writes:

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

It is reductionist to explain a complex happening in terms of a deeper principle when that principle fails to account for the complex happening. Pinker obscures that essential point by offering a silly and irrelevant example about World War I. This bit of misdirection is unsurprising, given Pinker’s foray into reductionism, The Better Angels of Our Nature: Why Violence Has Declined, discussed later.

As for simplification, Pinker says:

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic.

Pinker again dodges the issue. Simplification is simplistic when the “general principles” fail to account adequately for the phenomenon in question.

Much of the problem arises because of a simple fact that is too often overlooked: Scientists, for the most part, are human beings with a particular aptitude for pattern-seeking and the manipulation of abstract ideas. They can easily get lost in such pursuits and fail to notice that their abstractions have taken them a long way from reality (e.g., Einstein’s special theory of relativity).

In sum, scientists are human and fallible. It is in the best tradition of science to distrust their scientific claims and to dismiss their non-scientific utterances.

ECONOMICS: PHYSICS ENVY AT WORK

Economics is rife with balderdash cloaked in mathematics. Economists who rely heavily on mathematics like to say (and perhaps even believe) that mathematical expression is more precise than mere words. But, as Arnold Kling points out in “An Important Emerging Economic Paradigm”, mathematical economics is a language of faux precision, which is useful only when applied to well-defined, narrow problems. It can’t address the big issues — such as economic growth — which depend on variables such as the rule of law and social norms, which defy mathematical expression and quantification.

I would go a step further and argue that mathematical economics borders on obscurantism. It’s a cult whose followers speak an arcane language not only to communicate among themselves but to obscure the essentially bankrupt nature of their craft from others. Mathematical expression actually hides the assumptions that underlie it. It’s far easier to identify and challenge the assumptions of “literary” economics than it is to identify and challenge the assumptions of mathematical economics.

I daresay that this is true even for persons who are conversant in mathematics. They may be able to manipulate easily the equations of mathematical economics, but they are able to do so without grasping the deeper meanings — the assumptions and complexities — hidden by those equations. In fact, the ease of manipulating the equations gives them a false sense of mastery of the underlying, real concepts.

Much of the economics profession is nevertheless dedicated to the protection and preservation of the essential incompetence of mathematical economists. This is from “An Important Emerging Economic Paradigm”:

One of the best incumbent-protection rackets going today is for mathematical theorists in economics departments. The top departments will not certify someone as being qualified to have an advanced degree without first subjecting the student to the most rigorous mathematical economic theory. The rationale for this is reminiscent of fraternity hazing. “We went through it, so should they.”

Mathematical hazing persists even though there are signs that the prestige of math is on the decline within the profession. The important Clark Medal, awarded to the most accomplished American economist under the age of 40, has not gone to a mathematical theorist since 1989.

These hazing rituals can have real consequences. In medicine, the controversial tradition of long work hours for medical residents has come under scrutiny over the last few years. In economics, mathematical hazing is not causing immediate harm to medical patients. But it probably is working to the long-term detriment of the profession.

The hazing ritual in economics has at least two real and damaging consequences. First, it discourages entry into the economics profession by persons who, like Kling, can discuss economic behavior without resorting to the sterile language of mathematics. Second, it leads to economics that’s irrelevant to the real world — and dead wrong.

How wrong? Economists are notoriously bad at constructing models that adequately predict near-term changes in GDP. That task should be easier than sorting out the microeconomic complexities of the labor market.

Take Professor Ray Fair, for example. Professor Fair teaches macroeconomic theory, econometrics, and macroeconometric models at Yale University. He has been plying his trade since 1968, first at Princeton, then at M.I.T., and (since 1974) at Yale. Those are big-name schools, so I assume that Prof. Fair is a big name in his field.

Well, since 1983, Prof. Fair has been forecasting changes in real GDP over the next four quarters. He has made 80 such forecasts based on a model that he has undoubtedly tweaked over the years. The current model is here. His forecasting track record is here. How has he done? Here’s how (the error arithmetic is sketched after the list):

1. The median absolute error of his forecasts is 30 percent.

2. The mean absolute error of his forecasts is 70 percent.

3. His forecasts are rather systematically biased: too high when real, four-quarter GDP growth is less than 4 percent; too low when real, four-quarter GDP growth is greater than 4 percent.

4. His forecasts have grown generally worse — not better — with time.
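
In case the error statistics in that list seem mysterious, here is the arithmetic, sketched with hypothetical forecast-and-actual pairs rather than Prof. Fair’s published numbers:

```python
from statistics import mean, median

# Hypothetical (predicted, actual) four-quarter real-growth pairs, in percent.
# These are NOT Prof. Fair's numbers; they only show how the statistics are computed.
forecasts = [(3.1, 2.4), (2.0, 3.5), (4.2, 4.0), (1.5, 0.9), (2.8, 2.7)]

abs_pct_errors = [abs(pred - act) / abs(act) * 100 for pred, act in forecasts]

print(f"median absolute error: {median(abs_pct_errors):.0f} percent")
print(f"mean absolute error:   {mean(abs_pct_errors):.0f} percent")
```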

Prof. Fair is still at it. And his forecasts continue to grow worse with time:

[Figure: Fair-model forecasting errors vs. time]
This and later graphs pertaining to Prof. Fair’s forecasts were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing; 8 percent of the errors exceed 200 percent.

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

[Figure: Fair-model estimated vs. actual growth rate]

It fails the most important test; that is, it doesn’t reflect the downward trend in economic growth:

[Figure: Fair-model year-over-year growth, estimated and actual]

THE INVISIBLE ELEPHANT IN THE ROOM

Professor Fair and his prognosticating ilk are pikers compared with John Maynard Keynes and his disciples. The Keynesian multiplier is the fraud of all frauds, not just in economics but in politics, where it is too often invoked as an excuse for taking money from productive uses and pouring it down the rathole of government spending.

The Keynesian (fiscal) multiplier is defined as

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?
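
For reference, the textbook algebra behind that hope runs roughly as follows, ignoring taxes and trade for simplicity: consumption C is assumed to be a fixed fraction b of income Y, and the accounting identity is solved for Y. This is the standard derivation, reproduced here only so that the objections below have a target.

```latex
Y = C + I + G, \qquad C = bY
\quad\Longrightarrow\quad
Y = \frac{I + G}{1 - b}
\quad\Longrightarrow\quad
\Delta Y = \frac{1}{1 - b}\,\Delta G
```

With b = 0.8, for example, the purported multiplier is 1/(1 − 0.8) = 5, the “common mathematical estimate” noted below.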

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier. Please go to “The Keynesian Multiplier: Fiction vs. Fact” for a detailed explanation of the phony math and a derivation of the true multiplier, which is decidedly negative. Here’s the short version:

  • The phony math involves the use of an accounting identity that can be manipulated in many ways, to “prove” many things. But the accounting identity doesn’t express an operational (or empirical) relationship between a change in government spending and a change in GDP.
  • The true value of the multiplier isn’t 5 (a common mathematical estimate), 1.5 (a common but mistaken empirical estimate used for government purposes), or any positive number. The true value represents the negative relationship between the change in government spending (including transfer payments) as a fraction of GDP and the change in the rate of real GDP growth. Specifically, where F represents government spending as a fraction of GDP,

a rise in F from 0.24 to 0.33 (the actual change from 1947 to 2007) would reduce the real rate of economic growth by 3.1 percentage points. The real rate of growth from 1947 to 1957 was 4 percent. Other things being the same, the rate of growth would have dropped to 0.9 percent in the period 2008-2017. It actually dropped to 1.4 percent, which is within the standard error of the estimate.

  • That kind of drop makes a huge difference in the incomes of Americans. In 10 years, GDP rises by almost 50 percent when the rate of growth is 4 percent, but by only about 15 percent when the rate of growth is 1.4 percent (see the arithmetic below). Think of the tens of millions of people who would be living in comfort rather than squalor were it not for Keynesian balderdash, which turns reality on its head in order to promote big government.
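
The arithmetic behind that comparison, compounding growth over a decade:

```latex
1.04^{10} \approx 1.48 \quad \text{(almost 50 percent more GDP after 10 years)}
\qquad
1.014^{10} \approx 1.15 \quad \text{(only about 15 percent more)}
```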

MANAGEMENT “SCIENCE”

A hot new item in management “science” a few years ago was the Candle Problem. Graham Morehead describes the problem and discusses its broader, “scientifically” supported conclusions:

The Candle Problem was first presented by Karl Duncker. Published posthumously in 1945, “On problem solving” describes how Duncker provided subjects with a candle, some matches, and a box of tacks. He told each subject to affix the candle to a cork board wall in such a way that when lit, the candle won’t drip wax on the table below (see figure at right). Can you think of the answer?

The only answer that really works is this: 1. Dump the tacks out of the box, 2. Tack the box to the wall, 3. Light the candle and affix it atop the box as if it were a candle-holder. Incidentally, the problem was much easier to solve if the tacks weren’t in the box at the beginning. When the tacks were in the box the participant saw it only as a tack-box, not something they could use to solve the problem. This phenomenon is called “Functional fixedness.”

Sam Glucksberg added a fascinating twist to this finding in his 1962 paper, “Influence of strength of drive on functional fixedness and perceptual recognition” (Journal of Experimental Psychology, 1962, Vol. 63, No. 1, 36-41). He studied the effect of financial incentives on solving the candle problem. To one group he offered no money. To the other group he offered an amount of money for solving the problem fast.

Remember, there are two candle problems. Let the “Simple Candle Problem” be the one where the tacks are outside the box — no functional fixedness. The solution is straightforward. Here are the results for those who solved it:

Simple Candle Problem Mean Times :

  • WITHOUT a financial incentive : 4.99 min
  • WITH a financial incentive : 3.67 min

Nothing unexpected here. This is a classical incentivization effect anybody would intuitively expect.

Now, let “In-Box Candle Problem” refer to the original description where the tacks start off in the box.

In-Box Candle Problem Mean Times :

  • WITHOUT a financial incentive : 7.41 min
  • WITH a financial incentive : 11.08 min

How could this be? The financial incentive made people slower? It gets worse — the slowness increases with the incentive. The higher the monetary reward, the worse the performance! This result has been repeated many times since the original experiment.

Glucksberg and others have shown this result to be highly robust. Daniel Pink calls it a legally provable “fact.” How should we interpret the above results?

When your employees have to do something straightforward, like pressing a button or manning one stage in an assembly line, financial incentives work. It’s a small effect, but they do work. Simple jobs are like the simple candle problem.

However, if your people must do something that requires any creative or critical thinking, financial incentives hurt. The In-Box Candle Problem is the stereotypical problem that requires you to think “Out of the Box,” (you knew that was coming, didn’t you?). Whenever people must think out of the box, offering them a monetary carrot will keep them in that box.

A monetary reward will help your employees focus. That’s the point. When you’re focused you are less able to think laterally. You become dumber. This is not the kind of thing we want if we expect to solve the problems that face us in the 21st century.

All of this is found in a video (to which Morehead links), wherein Daniel Pink (an author and journalist whose actual knowledge of science and business appears to be close to zero) expounds the lessons of the Candle Problem. Pink displays his (no-doubt-profitable) conviction that the Candle Problem and related “science” reveal (a) the utter bankruptcy of capitalism and (b) the need to replace managers with touchy-feely gurus (like himself, I suppose). That Pink has worked for two of the country’s leading anti-capitalist airheads — Al Gore and Robert Reich — should tell you all that you need to know about Pink’s real agenda.

Here are my reasons for sneering at Pink and his ilk:

1. I have been there and done that. That is to say, as a manager, I lived through (and briefly bought into) the touchy-feely fads of the ’80s and ’90s. Think In Search of Excellence, The One Minute Manager, The Seven Habits of Highly Effective People, and so on. What did anyone really learn from those books and the lectures and workshops based on them? A perceptive person would have learned that it is easy to make up plausible stories about the elements of success, and having done so, it is possible to make a lot of money peddling those stories. But the stories are flawed because (a) they are based on exceptional cases; (b) they attribute success to qualitative assessments of behaviors that seem to be present in those exceptional cases; and (c) they do not properly account for the surrounding (and critical) circumstances that really led to success, among which are luck and rare combinations of personal qualities (e.g., high intelligence, perseverance, people-reading skills). In short, Pink and his predecessors are guilty of reductionism and the post hoc ergo propter hoc fallacy.

2. Also at work is an undue generalization about the implications of the Candle Problem. It may be true that workers will perform better — at certain kinds of tasks (very loosely specified) — if they are not distracted by incentives that are related to the performance of those specific tasks. But what does that have to do with incentives in general? Not much, because the Candle Problem is unlike any work situation that I can think of. Tasks requiring creativity are not performed under deadlines of a few minutes; tasks requiring creativity are (usually) assigned to persons who have demonstrated a creative flair, not to randomly picked subjects; most work, even in this day, involves the routine application of protocols and tools that were designed to produce a uniform result of acceptable quality; it is the design of protocols and tools that requires creativity, and that kind of work is not done under the kind of artificial constraints found in the Candle Problem.

3. The Candle Problem, with its anti-incentive “lesson”, is therefore inapplicable to the real world, where incentives play a crucial and positive role:

  • The profit incentive leads firms to invest resources in the development and/or production of things that consumers are willing to buy because those things satisfy wants at the right price.
  • Firms acquire resources to develop and produce things by bidding for those resources, that is, by offering monetary incentives to attract the resources required to make the things that consumers are willing to buy.
  • The incentives (compensation) offered to workers of various kinds (from scientists with doctorates to burger-flippers) are generally commensurate with the contributions made by those workers to the production of things of value to consumers, and to the value placed on those things by consumers.
  • Workers agree to the terms and conditions of employment (including compensation) before taking a job. The incentive for most workers is to keep a job by performing adequately over a sustained period — not by demonstrating creativity in a few minutes. Some workers (but not a large fraction of them) are striving for performance-based commissions, bonuses, and profit-sharing distributions. But those distributions are based on performance over a sustained period, during which the striving workers have plenty of time to think about how they can perform better.
  • Truly creative work is done, for the most part, by persons who are hired for such work on the basis of their credentials (education, prior employment, test results). Their compensation is based on their credentials, initially, and then on their performance over a sustained period. If they are creative, they have plenty of psychological space in which to exercise and demonstrate their creativity.
  • On-the-job creativity — the improvement of protocols and tools by workers using them — does not occur under conditions of the kind assumed in the Candle Problem. Rather, on-the-job creativity flows from actual work and insights about how to do the work better. It happens when it happens, and has nothing to do with artificial time constraints and monetary incentives to be “creative” within those constraints.
  • Pink’s essential pitch is that incentives can be replaced by offering jobs that yield autonomy (self-direction), mastery (the satisfaction of doing difficult things well), and purpose (the satisfaction of contributing to the accomplishment of something important). Well, good luck with that, but I (and millions of other consumers) want what we want, and if workers want to make a living they will just have to provide what we want, not what turns them on. Yes, there is a lot to be said for autonomy, mastery, and purpose, but there is also a lot to be said for getting a paycheck. And, contrary to Pink’s implication, getting a paycheck does not rule out autonomy, mastery, and purpose — where those happen to go with the job.

Pink and company’s “insights” about incentives and creativity are 180 degrees off-target. McDonald’s could use the Candle Problem to select creative burger-flippers who will perform well under tight deadlines because their compensation is unrelated to the creativity of their burger-flipping. McDonald’s customers should be glad that McDonald’s has taken creativity out of the picture by reducing burger-flipping to the routine application of protocols and tools.

In summary:

  • The Candle Problem is an interesting experiment, and probably valid with respect to the performance of specific tasks against tight deadlines. I think the results apply whether the stakes are money or any other kind of prize. The experiment illustrates the “choke” factor, and nothing more profound than that.
  • I question whether the experiment applies to the usual kind of incentive (e.g., a commission or bonus), where the “incentee” has ample time (months, years) for reflection and research that will enable him to improve his performance and attain a bigger commission or bonus (which usually isn’t an all-or-nothing arrangement).
  • There’s also the dissimilarity of the Candle Problem — which involves more-or-less randomly chosen subjects, working against an artificial deadline — and actual creative thinking — usually involving persons who are experts (even if the expertise is as mundane as ditch-digging), working against looser deadlines or none at all.

PARTISAN POLITICS IN THE GUISE OF PSEUDO-SCIENCE

There’s plenty of it to go around, but this one is a whopper. Peter Singer outdoes his usual tendentious self in this review of Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. In the course of the review, Singer writes:

Pinker argues that enhanced powers of reasoning give us the ability to detach ourselves from our immediate experience and from our personal or parochial perspective, and frame our ideas in more abstract, universal terms. This in turn leads to better moral commitments, including avoiding violence. It is just this kind of reasoning ability that has improved during the 20th century. He therefore suggests that the 20th century has seen a “moral Flynn effect, in which an accelerating escalator of reason carried us away from impulses that lead to violence” and that this lies behind the long peace, the new peace, and the rights revolution. Among the wide range of evidence he produces in support of that argument is the tidbit that since 1946, there has been a negative correlation between an American president’s I.Q. and the number of battle deaths in wars involving the United States.

Singer does not give the source of the IQ estimates on which Pinker relies, but the supposed correlation points to a discredited piece of historiometry by Dean Keith Simonton, who jumps through various hoops to assess the IQs of every president from Washington to Bush II — to one decimal place. That is a feat on a par with reconstructing the final thoughts of Abel, ere Cain slew him.

Before I explain the discrediting of Simonton’s obviously discreditable “research”, there is some fun to be had with the Pinker-Singer story of presidential IQ (Simonton-style) for battle deaths. First, of course, there is the convenient cutoff point of 1946. Why 1946? Well, it enables Pinker-Singer to avoid the inconvenient fact that the Civil War, World War I, and World War II happened while the presidency was held by three men who (in Simonton’s estimation) had high IQs: Lincoln, Wilson, and FDR.

The next several graphs depict best-fit relationships between Simonton’s estimates of presidential IQ and the U.S. battle deaths that occurred during each president’s term of office.* The presidents, in order of their appearance in the titles of the graphs, are Harry S Truman (HST), George W. Bush (GWB), Franklin Delano Roosevelt (FDR), (Thomas) Woodrow Wilson (WW), Abraham Lincoln (AL), and George Washington (GW). The number of battle deaths is rounded to the nearest thousand, so that the prevailing value is 0, even in the case of the Spanish-American War (385 U.S. combat deaths) and George H.W. Bush’s Gulf War (147 U.S. combat deaths).

This is probably the relationship referred to by Singer, though Pinker may show a linear fit, rather than the tighter polynomial fit used here:

It looks bad for the low “IQ” presidents — if you believe Simonton’s estimates of IQ, which you shouldn’t, and if you believe that battle deaths are a bad thing per se, which they aren’t. I will come back to those points. For now, just suspend your well-justified disbelief.

If the relationship for the HST-GWB era were statistically meaningful, it would not change much with the introduction of additional statistics about “IQ” and battle deaths, but it does:




If you buy the brand of snake oil being peddled by Pinker-Singer, you must believe that the “dumbest” and “smartest” presidents are unlikely to get the U.S. into wars that result in a lot of battle deaths, whereas some (but, mysteriously, not all) of the “medium-smart” presidents (Lincoln, Wilson, FDR) are likely to do so.

In any event, if you believe in Pinker-Singer’s snake oil, you must accept the consistent “humpback” relationship that is depicted in the preceding four graphs, rather than the highly selective, one-shot negative relationship of the HST-GWB graph.

More seriously, the relationship in the HST-GWB graph is an evident ploy to discredit certain presidents (especially GWB, I suspect), which is why it covers only the period since WWII. Why not just say that you think GWB is a chimp-like, war-mongering moron and be done with it? Pseudo-statistics of the kind offered up by Pinker-Singer are nothing more than a talking point for those already convinced that Bush=Hitler.

But as long as this silly game is in progress, let us continue it, with a new rule. Let us advance from one to two explanatory variables. The second explanatory variable that strongly suggests itself is political party. And because it is not good practice to omit relevant statistics (a favorite gambit of liars), I estimated an equation based on “IQ” and battle deaths for the 27 men who served as president from the first Republican presidency (Lincoln’s) through the presidency of GWB.  The equation looks like this:

U.S. battle deaths (000) “owned” by a president =

-80.6 + 0.841 x “IQ” – 31.3 x party (where 0 = Dem, 1 = GOP)

In other words, battle deaths rise at the rate of 841 per IQ point (so much for Pinker-Singer). But there will be fewer deaths with a Republican in the White House (so much for Pinker-Singer’s implied swipe at GWB).
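For the curious, here is a minimal sketch of the two-variable least-squares fit that produces an equation of that form. The arrays are placeholders, not Simonton’s “IQ” figures or the actual per-president battle-death tallies; only the method is illustrated.

```python
# A minimal sketch of a two-variable least-squares fit of the kind described above,
# using numpy only. The arrays are placeholders -- the actual Simonton "IQ" figures
# and the per-president battle-death tallies are not reproduced here.
import numpy as np

iq = np.array([130.0, 145.0, 120.0, 150.0, 125.0, 140.0])   # hypothetical "IQ" scores
party = np.array([1, 0, 1, 0, 1, 0])                        # 0 = Dem, 1 = GOP
deaths = np.array([0.0, 53.0, 0.0, 292.0, 5.0, 34.0])       # battle deaths (thousands)

X = np.column_stack([np.ones_like(iq), iq, party])          # intercept, "IQ", party
coef, *_ = np.linalg.lstsq(X, deaths, rcond=None)
intercept, b_iq, b_party = coef
print(f"intercept = {intercept:.1f}, IQ coefficient = {b_iq:.3f}, party coefficient = {b_party:.1f}")
```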

All of this is nonsense, of course, for two reasons: Simonton’s estimates of IQ are hogwash, and the number of U.S. battle deaths is a meaningless number, taken by itself.

With regard to the hogwash, Simonton’s estimates of presidents’ IQs put every one of them — including the “dumbest,” U.S. Grant — in the top 2.3 percent of the population. And the mean of Simonton’s estimates puts the average president in the top 0.1 percent (one-tenth of one percent) of the population. That is literally incredible. Good evidence of the unreliability of Simonton’s estimates is found in an entry by Thomas C. Reeves at George Mason University’s History News Network. Reeves is the author of A Question of Character: A Life of John F. Kennedy, the negative reviews of which are evidently the work of JFK idolators who refuse to be disillusioned by facts. Anyway, here is Reeves:

I’m a biographer of two of the top nine presidents on Simonton’s list and am highly familiar with the histories of the other seven. In my judgment, this study has little if any value. Let’s take JFK and Chester A. Arthur as examples.

Kennedy was actually given an IQ test before entering Choate. His score was 119…. There is no evidence to support the claim that his score should have been more than 40 points higher [i.e., the IQ of 160 attributed to Kennedy by Simonton]. As I described in detail in A Question Of Character [link added], Kennedy’s academic achievements were modest and respectable, his published writing and speeches were largely done by others (no study of Kennedy is worthwhile that downplays the role of Ted Sorensen)….

Chester Alan Arthur was largely unknown before my Gentleman Boss was published in 1975. The discovery of many valuable primary sources gave us a clear look at the president for the first time. Among the most interesting facts that emerged involved his service during the Civil War, his direct involvement in the spoils system, and the bizarre way in which he was elevated to the GOP presidential ticket in 1880. His concealed and fatal illness while in the White House also came to light.

While Arthur was a college graduate, and was widely considered to be a gentleman, there is no evidence whatsoever to suggest that his IQ was extraordinary. That a psychologist can rank his intelligence 2.3 points ahead of Lincoln’s suggests access to a treasure of primary sources from and about Arthur that does not exist.

This historian thinks it impossible to assign IQ numbers to historical figures. If there is sufficient evidence (as there usually is in the case of American presidents), we can call people from the past extremely intelligent. Adams, Wilson, TR, Jefferson, and Lincoln were clearly well above average intellectually. But let us not pretend that we can rank them by tenths of a percentage point or declare that a man in one era stands well above another from a different time and place.

My educated guess is that this recent study was designed in part to denigrate the intelligence of the current occupant of the White House….

That is an excellent guess.

The meaninglessness of battle deaths as a measure of anything — but battle deaths — should be evident. But in case it is not evident, here goes:

  • Wars are sometimes necessary, sometimes not. (I give my views about the wisdom of America’s various wars at this post.) Necessary or not, presidents usually act in accordance with popular and elite opinion about the desirability of a particular war. Imagine, for example, the reaction if FDR had not gone to Congress on December 8, 1941, to ask for a declaration of war against Japan, or if GWB had not sought the approval of Congress for action in Afghanistan.
  • Presidents may have a lot to do with the decision to enter a war, but they have little to do with the external forces that help to shape that decision. GHWB, for example, had nothing to do with Saddam’s decision to invade Kuwait and thereby threaten vital U.S. interests in the Middle East. GWB, to take another example, was not a party to the choices of earlier presidents (GHWB and Clinton) that enabled Saddam to stay in power and encouraged Osama bin Laden to believe that America could be brought to its knees by a catastrophic attack.
  • The number of battle deaths in a war depends on many things outside the control of a particular president; for example, the size and capabilities of enemy forces, the size and capabilities of U.S. forces (which have a lot to do with the decisions of earlier administrations and Congresses), and the scope and scale of a war (again, largely dependent on the enemy).
  • Battle deaths represent personal tragedies, but — in and of themselves — are not a measure of a president’s wisdom or acumen. Whether the deaths were in vain is a separate issue that depends on the aforementioned considerations. To use battle deaths as a single, negative measure of a president’s ability is rank cynicism — the rankness of which is revealed in Pinker’s decision to ignore Lincoln and FDR and their “good” but deadly wars.

To put the last point another way, if the number of battle deaths is a bad thing, Lincoln and FDR should be rotting in hell for the wars that brought an end to slavery and Hitler.
__________
* The numbers of U.S. battle deaths, by war, are available at infoplease.com, “America’s Wars: U.S. Casualties and Veterans”. The deaths are “assigned” to presidents as follows (numbers in parentheses indicate thousands of deaths):

All of the deaths (2) in the War of 1812 occurred on Madison’s watch.

All of the deaths (2) in the Mexican-American War occurred on Polk’s watch.

I count only Union battle deaths (140) during the Civil War; all are “Lincoln’s.” Let the Confederate dead be on the head of Jefferson Davis. This is a gift, of sorts, to Pinker-Singer because if Confederate dead were also assigned to Lincoln, with his high “IQ,” it would make Pinker-Singer’s hypothesis even more ludicrous than it is.

WW is the sole “owner” of WWI battle deaths (53).

Some of the U.S. battle deaths in WWII (292) occurred while HST was president, but Truman was merely presiding over the final months of a war that was almost won when FDR died. Truman’s main role was to hasten the end of the war in the Pacific by electing to drop the A-bombs on Hiroshima and Nagasaki. So FDR gets “credit” for all WWII battle deaths.

The Korean War did not end until after Eisenhower succeeded Truman, but it was “Truman’s war,” so he gets “credit” for all Korean War battle deaths (34). This is another “gift” to Pinker-Singer because Ike’s “IQ” is higher than Truman’s.

Vietnam was “LBJ’s war,” but I’m sure that Singer would not want Nixon to go without “credit” for the battle deaths that occurred during his administration. Moreover, LBJ had effectively lost the Vietnam war through his gradualism, but Nixon chose nevertheless to prolong the agony. So I have shared the “credit” for Vietnam War battle deaths between LBJ (deaths in 1965-68: 29) and RMN (deaths in 1969-73: 17). To do that, I apportioned total Vietnam War battle deaths, as given by infoplease.com, according to the total number of U.S. deaths in each year of the war, 1965-1973.
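Here is a sketch of that proportional split. The yearly totals are placeholders rather than the infoplease.com figures, and the 46 (thousand) total is simply the sum implied by the 29 + 17 split above; only the method is the point.

```python
# A sketch of the proportional apportionment described above. The yearly counts are
# placeholders, not the infoplease.com figures.

total_battle_deaths = 46.0  # thousands; the sum implied by the 29 + 17 split above

deaths_by_year = {  # placeholder yearly U.S. death counts, 1965-1973
    1965: 2_000, 1966: 6_000, 1967: 11_000, 1968: 17_000,
    1969: 12_000, 1970: 6_000, 1971: 2_400, 1972: 600, 1973: 200,
}

total = sum(deaths_by_year.values())
lbj_share = sum(v for yr, v in deaths_by_year.items() if yr <= 1968) / total
print(f"LBJ 'owns' {total_battle_deaths * lbj_share:.1f}K battle deaths, "
      f"RMN 'owns' {total_battle_deaths * (1 - lbj_share):.1f}K")
```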

The wars in Afghanistan and Iraq are “GWB’s wars,” even though Obama has continued them. So I have “credited” GWB with all the battle deaths in those wars, as of May 27, 2011 (5).

The relative paucity of U.S. combat deaths in other post-WWII actions (e.g., Lebanon, Somalia, Persian Gulf) is attested to by “Post-Vietnam Combat Casualties”, at infoplease.com.

A THIRD APPEARANCE BY PINKER

Steven Pinker, whose ignominious outpourings I have addressed twice here, deserves a third strike (which he shall duly be awarded). Pinker’s The Better Angels of Our Nature is cited gleefully by leftists and cockeyed optimists as evidence that human beings, on the whole, are becoming kinder and gentler because of:

  • The Leviathan – The rise of the modern nation-state and judiciary “with a monopoly on the legitimate use of force,” which “can defuse the [individual] temptation of exploitative attack, inhibit the impulse for revenge, and circumvent…self-serving biases”;
  • Commerce – The rise of “technological progress [allowing] the exchange of goods and services over longer distances and larger groups of trading partners,” so that “other people become more valuable alive than dead” and “are less likely to become targets of demonization and dehumanization”;
  • Feminization – Increasing respect for “the interests and values of women”;
  • Cosmopolitanism – The rise of forces such as literacy, mobility, and mass media, which “can prompt people to take the perspectives of people unlike themselves and to expand their circle of sympathy to embrace them”;
  • The Escalator of Reason – An “intensifying application of knowledge and rationality to human affairs,” which “can force people to recognize the futility of cycles of violence, to ramp down the privileging of their own interests over others’, and to reframe violence as a problem to be solved rather than a contest to be won.”

I can tell you that Pinker’s book is hogwash because two very bright leftists — Peter Singer and Will Wilkinson — have strongly and wrongly endorsed some of its key findings. I dispatched Singer earlier. As for Wilkinson, he praises statistics adduced by Pinker that show a decline in the use of capital punishment:

In the face of such a decisive trend in moral culture, we can say a couple different things. We can say that this is just change and says nothing in particular about what is really right or wrong, good or bad. Or we can say this is evidence of moral progress, that we have actually become better. I prefer the latter interpretation for basically the same reasons most of us see the abolition of slavery and the trend toward greater equality between races and sexes as progress and not mere morally indifferent change. We can talk about the nature of moral progress later. It’s tricky. For now, I want you to entertain the possibility that convergence toward the idea that execution is wrong counts as evidence that it is wrong.

I would count convergence toward the idea that execution is wrong as evidence that it is wrong, if that idea were (a) increasingly held by individuals who (b) had arrived at their “enlightenment” uninfluenced by operatives of the state (legislatures and judges), who take it upon themselves to flout popular support of the death penalty. What we have, in the case of the death penalty, is moral regress, not moral progress.

Moral regress because the abandonment of the death penalty puts innocent lives at risk. Capital punishment sends a message, and the message is effective when it is delivered: it deters homicide. And even if it didn’t, it would at least remove killers from our midst, permanently. By what standard of morality can one claim that it is better to spare killers than to protect innocents? For that matter, by what standard of morality is it better to kill innocents in the womb than to spare killers? Proponents of abortion (like Singer and Wilkinson) — who by and large oppose capital punishment — are completely lacking in moral authority.

Returning to Pinker’s thesis that violence has declined, I quote a review at Foseti:

Pinker’s basic problem is that he essentially defines “violence” in such a way that his thesis that violence is declining becomes self-fulfilling. “Violence” to Pinker is fundamentally synonymous with behaviors of older civilizations. On the other hand, modern practices are defined to be less violent than older practices.

A while back, I linked to a story about a guy in my neighborhood who’s been arrested over 60 times for breaking into cars. A couple hundred years ago, this guy would have been killed for this sort of vandalism after he got caught the first time. Now, we feed him and shelter him for a while and then we let him back out to do this again. Pinker defines the new practice as a decline in violence – we don’t kill the guy anymore! Someone from a couple hundred years ago would be appalled that we let the guy continue destroying other peoples’ property without consequence. In the mind of those long dead, “violence” has in fact increased. Instead of a decline in violence, this practice seems to me like a decline in justice – nothing more or less.

Here’s another example: Pinker uses creative definitions to show that the conflicts of the 20th Century pale in comparison to previous conflicts. For example, all the Mongol Conquests are considered one event, even though they cover 125 years. If you lump all these various conquests together and you split up WWI, WWII, Mao’s takeover in China, the Bolshevik takeover of Russia, the Russian Civil War, and the Chinese Civil War (yes, he actually considers this a separate event from Mao), you unsurprisingly discover that the events of the 20th Century weren’t all that violent compared to events in the past! Pinker’s third most violent event is the “Mideast Slave Trade” which he says took place between the 7th and 19th Centuries. Seriously. By this standard, all the conflicts of the 20th Century are related. Is the Russian Revolution or the rise of Mao possible without WWII? Is WWII possible without WWI? By this consistent standard, the 20th Century wars of Communism would have seen the worst conflict by far. Of course, if you fiddle with the numbers, you can make any point you like.

There’s much more to the review, including some telling criticisms of Pinker’s five reasons for the (purported) decline in violence. That the reviewer somehow still wants to believe in the rightness of Pinker’s thesis says more about the reviewer’s optimism than it does about the validity of Pinker’s thesis.

That thesis is fundamentally flawed, as Robert Epstein points out in a review at Scientific American:

[T]he wealth of data [Pinker] presents cannot be ignored—unless, that is, you take the same liberties as he sometimes does in his book. In two lengthy chapters, Pinker describes psychological processes that make us either violent or peaceful, respectively. Our dark side is driven by an evolution-based propensity toward predation and dominance. On the angelic side, we have, or at least can learn, some degree of self-control, which allows us to inhibit dark tendencies.

There is, however, another psychological process—confirmation bias—that Pinker sometimes succumbs to in his book. People pay more attention to facts that match their beliefs than those that undermine them. Pinker wants peace, and he also believes in his hypothesis; it is no surprise that he focuses more on facts that support his views than on those that do not. The SIPRI arms data are problematic, and a reader can also cherry-pick facts from Pinker’s own book that are inconsistent with his position. He notes, for example, that during the 20th century homicide rates failed to decline in both the U.S. and England. He also describes in graphic and disturbing detail the savage way in which chimpanzees—our closest genetic relatives in the animal world—torture and kill their own kind.

Of greater concern is the assumption on which Pinker’s entire case rests: that we look at relative numbers instead of absolute numbers in assessing human violence. But why should we be content with only a relative decrease? By this logic, when we reach a world population of nine billion in 2050, Pinker will conceivably be satisfied if a mere two million people are killed in war that year.

The biggest problem with the book, though, is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Pinker’s belief that violence is on the decline reminds me of “it’s different this time”, a phrase that was on the lips of hopeful stock-pushers, stock-buyers, and pundits during the stock-market bubble of the late 1990s. That bubble ended, of course, in the spectacular crash of 2000.

Predictions about the future of humankind are better left in the hands of writers who see human nature whole, and who are not out to prove that it can be shaped or contained by the kinds of “liberal” institutions that Pinker so obviously favors.

Consider this, from an article by Robert J. Samuelson at The Washington Post:

[T]he Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar. Amid the controversy over leaks from the National Security Agency, this looms as an even bigger downside.

By cyberwarfare, I mean the capacity of groups — whether nations or not — to attack, disrupt and possibly destroy the institutions and networks that underpin everyday life. These would be power grids, pipelines, communication and financial systems, business record-keeping and supply-chain operations, railroads and airlines, databases of all types (from hospitals to government agencies). The list runs on. So much depends on the Internet that its vulnerability to sabotage invites doomsday visions of the breakdown of order and trust.

In a report, the Defense Science Board, an advisory group to the Pentagon, acknowledged “staggering losses” of information involving weapons design and combat methods to hackers (not identified, but probably Chinese). In the future, hackers might disarm military units. “U.S. guns, missiles and bombs may not fire, or may be directed against our own troops,” the report said. It also painted a specter of social chaos from a full-scale cyberassault. There would be “no electricity, money, communications, TV, radio or fuel (electrically pumped). In a short time, food and medicine distribution systems would be ineffective.”

But Pinker wouldn’t count the resulting chaos as violence, as long as human beings were merely starving and dying of various diseases. That violence would ensue, of course, is another story, which is told by John Gray in The Silence of Animals: On Progress and Other Modern Myths. Gray’s book — published 18 months after Better Angels — could be read as a refutation of Pinker’s book, though Gray doesn’t mention Pinker or his book.

The gist of Gray’s argument is faithfully recounted in a review of Gray’s book by Robert W. Merry at The National Interest:

The noted British historian J. B. Bury (1861–1927) … wrote, “This doctrine of the possibility of indefinitely moulding the characters of men by laws and institutions . . . laid a foundation on which the theory of the perfectibility of humanity could be raised. It marked, therefore, an important stage in the development of the doctrine of Progress.”

We must pause here over this doctrine of progress. It may be the most powerful idea ever conceived in Western thought—emphasizing Western thought because the idea has had little resonance in other cultures or civilizations. It is the thesis that mankind has advanced slowly but inexorably over the centuries from a state of cultural backwardness, blindness and folly to ever more elevated stages of enlightenment and civilization—and that this human progression will continue indefinitely into the future…. The U.S. historian Charles A. Beard once wrote that the emergence of the progress idea constituted “a discovery as important as the human mind has ever made, with implications for mankind that almost transcend imagination.” And Bury, who wrote a book on the subject, called it “the great transforming conception, which enables history to define her scope.”

Gray rejects it utterly. In doing so, he rejects all of modern liberal humanism. “The evidence of science and history,” he writes, “is that humans are only ever partly and intermittently rational, but for modern humanists the solution is simple: human beings must in future be more reasonable. These enthusiasts for reason have not noticed that the idea that humans may one day be more rational requires a greater leap of faith than anything in religion.” In an earlier work, Straw Dogs: Thoughts on Humans and Other Animals, he was more blunt: “Outside of science, progress is simply a myth.”

… Gray has produced more than twenty books demonstrating an expansive intellectual range, a penchant for controversy, acuity of analysis and a certain political clairvoyance.

He rejected, for example, Francis Fukuyama’s heralded “End of History” thesis—that Western liberal democracy represents the final form of human governance—when it appeared in this magazine in 1989. History, it turned out, lingered long enough to prove Gray right and Fukuyama wrong….

Though for decades his reputation was confined largely to intellectual circles, Gray’s public profile rose significantly with the 2002 publication of Straw Dogs, which sold impressively and brought him much wider acclaim than he had known before. The book was a concerted and extensive assault on the idea of progress and its philosophical offspring, secular humanism. The Silence of Animals is in many ways a sequel, plowing much the same philosophical ground but expanding the cultivation into contiguous territory mostly related to how mankind—and individual humans—might successfully grapple with the loss of both metaphysical religion of yesteryear and today’s secular humanism. The fundamentals of Gray’s critique of progress are firmly established in both books and can be enumerated in summary.

First, the idea of progress is merely a secular religion, and not a particularly meaningful one at that. “Today,” writes Gray in Straw Dogs, “liberal humanism has the pervasive power that was once possessed by revealed religion. Humanists like to think they have a rational view of the world; but their core belief in progress is a superstition, further from the truth about the human animal than any of the world’s religions.”

Second, the underlying problem with this humanist impulse is that it is based upon an entirely false view of human nature—which, contrary to the humanist insistence that it is malleable, is immutable and impervious to environmental forces. Indeed, it is the only constant in politics and history. Of course, progress in scientific inquiry and in resulting human comfort is a fact of life, worth recognition and applause. But it does not change the nature of man, any more than it changes the nature of dogs or birds. “Technical progress,” writes Gray, again in Straw Dogs, “leaves only one problem unsolved: the frailty of human nature. Unfortunately that problem is insoluble.”

That’s because, third, the underlying nature of humans is bred into the species, just as the traits of all other animals are. The most basic trait is the instinct for survival, which is placed on hold when humans are able to live under a veneer of civilization. But it is never far from the surface. In The Silence of Animals, Gray discusses the writings of Curzio Malaparte, a man of letters and action who found himself in Naples in 1944, shortly after the liberation. There he witnessed a struggle for life that was gruesome and searing. “It is a humiliating, horrible thing, a shameful necessity, a fight for life,” wrote Malaparte. “Only for life. Only to save one’s skin.” Gray elaborates:

Observing the struggle for life in the city, Malaparte watched as civilization gave way. The people the inhabitants had imagined themselves to be—shaped, however imperfectly, by ideas of right and wrong—disappeared. What were left were hungry animals, ready to do anything to go on living; but not animals of the kind that innocently kill and die in forests and jungles. Lacking a self-image of the sort humans cherish, other animals are content to be what they are. For human beings the struggle for survival is a struggle against themselves.

When civilization is stripped away, the raw animal emerges. “Darwin showed that humans are like other animals,” writes Gray in Straw Dogs, expressing in this instance only a partial truth. Humans are different in a crucial respect, captured by Gray himself when he notes that Homo sapiens inevitably struggle with themselves when forced to fight for survival. No other species does that, just as no other species has such a range of spirit, from nobility to degradation, or such a need to ponder the moral implications as it fluctuates from one to the other. But, whatever human nature is—with all of its capacity for folly, capriciousness and evil as well as virtue, magnanimity and high-mindedness—it is embedded in the species through evolution and not subject to manipulation by man-made institutions.

Fourth, the power of the progress idea stems in part from the fact that it derives from a fundamental Christian doctrine—the idea of providence, of redemption….

“By creating the expectation of a radical alteration in human affairs,” writes Gray, “Christianity . . . founded the modern world.” But the modern world retained a powerful philosophical outlook from the classical world—the Socratic faith in reason, the idea that truth will make us free; or, as Gray puts it, the “myth that human beings can use their minds to lift themselves out of the natural world.” Thus did a fundamental change emerge in what was hoped of the future. And, as the power of Christian faith ebbed, along with its idea of providence, the idea of progress, tied to the Socratic myth, emerged to fill the gap. “Many transmutations were needed before the Christian story could renew itself as the myth of progress,” Gray explains. “But from being a succession of cycles like the seasons, history came to be seen as a story of redemption and salvation, and in modern times salvation became identified with the increase of knowledge and power.”

Thus, it isn’t surprising that today’s Western man should cling so tenaciously to his faith in progress as a secular version of redemption. As Gray writes, “Among contemporary atheists, disbelief in progress is a type of blasphemy. Pointing to the flaws of the human animal has become an act of sacrilege.” In one of his more brutal passages, he adds:

Humanists believe that humanity improves along with the growth of knowledge, but the belief that the increase of knowledge goes with advances in civilization is an act of faith. They see the realization of human potential as the goal of history, when rational inquiry shows history to have no goal. They exalt nature, while insisting that humankind—an accident of nature—can overcome the natural limits that shape the lives of other animals. Plainly absurd, this nonsense gives meaning to the lives of people who believe they have left all myths behind.

In the Silence of Animals, Gray explores all this through the works of various writers and thinkers. In the process, he employs history and literature to puncture the conceits of those who cling to the progress idea and the humanist view of human nature. Those conceits, it turns out, are easily punctured when subjected to Gray’s withering scrutiny….

And yet the myth of progress is so powerful in part because it gives meaning to modern Westerners struggling, in an irreligious era, to place themselves in a philosophical framework larger than just themselves….

Much of the human folly catalogued by Gray in The Silence of Animals makes a mockery of the earnest idealism of those who later shaped and molded and proselytized humanist thinking into today’s predominant Western civic philosophy.

RACE AS A SOCIAL CONSTRUCT

David Reich‘s hot new book, Who We Are and How We Got Here, is causing a stir in genetic-research circles. Reich, who takes great pains to assure everyone that he isn’t a racist, and who deplores racism, is nevertheless candid about race:

I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”

Groundbreaking advances in DNA sequencing technology have been made over the last two decades. These advances enable us to measure with exquisite accuracy what fraction of an individual’s genetic ancestry traces back to, say, West Africa 500 years ago — before the mixing in the Americas of the West African and European gene pools that were almost completely isolated for the last 70,000 years. With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real….

Self-identified African-Americans turn out to derive, on average, about 80 percent of their genetic ancestry from enslaved Africans brought to America between the 16th and 19th centuries. My colleagues and I searched, in 1,597 African-American men with prostate cancer, for locations in the genome where the fraction of genes contributed by West African ancestors was larger than it was elsewhere in the genome. In 2006, we found exactly what we were looking for: a location in the genome with about 2.8 percent more African ancestry than the average.

When we looked in more detail, we found that this region contained at least seven independent risk factors for prostate cancer, all more common in West Africans. Our findings could fully account for the higher rate of prostate cancer in African-Americans than in European-Americans. We could conclude this because African-Americans who happen to have entirely European ancestry in this small section of their genomes had about the same risk for prostate cancer as random Europeans.

Did this research rely on terms like “African-American” and “European-American” that are socially constructed, and did it label segments of the genome as being probably “West African” or “European” in origin? Yes. Did this research identify real risk factors for disease that differ in frequency across those populations, leading to discoveries with the potential to improve health and save lives? Yes.

While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another.

But whether we like it or not, that line has already been crossed. A recent study led by the economist Daniel Benjamin compiled information on the number of years of education from more than 400,000 people, almost all of whom were of European ancestry. After controlling for differences in socioeconomic background, he and his colleagues identified 74 genetic variations that are over-represented in genes known to be important in neurological development, each of which is incontrovertibly more common in Europeans with more years of education than in Europeans with fewer years of education.

It is not yet clear how these genetic variations operate. A follow-up study of Icelanders led by the geneticist Augustine Kong showed that these genetic variations also nudge people who carry them to delay having children. So these variations may be explaining longer times at school by affecting a behavior that has nothing to do with intelligence.

This study has been joined by others finding genetic predictors of behavior. One of these, led by the geneticist Danielle Posthuma, studied more than 70,000 people and found genetic variations in more than 20 genes that were predictive of performance on intelligence tests.

Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly. And since all traits influenced by genetics are expected to differ across populations (because the frequencies of genetic variations are rarely exactly the same across populations), the genetic influences on behavior and cognition will differ across populations, too.

You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work. Indeed, the study led by Dr. Kong showed that in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century….

So how should we prepare for the likelihood that in the coming years, genetic studies will show that many traits are influenced by genetic variations, and that these traits will differ on average across human populations? It will be impossible — indeed, anti-scientific, foolish and absurd — to deny those differences. [“How Genetics Is Changing Our Understanding of ‘Race’“, The New York Times, March 23, 2018]
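To make the method in the quoted passage concrete, here is a toy sketch of a local-ancestry scan: compare each locus’s average African-ancestry fraction among cases with the genome-wide average and flag the outliers. The data are simulated, the sample is exaggerated so the planted +2.8 percent signal stands out, and this is not Reich’s data or pipeline.

```python
# A toy illustration of the kind of local-ancestry scan described in the quoted passage:
# compare the fraction of African ancestry at each genomic locus (among cases) with the
# genome-wide average and flag loci that sit well above it. The data are simulated, and
# the sample size is exaggerated so the planted +2.8% signal is obvious.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_loci = 10_000, 500
baseline = 0.80  # ~80 percent average African ancestry, per the quoted passage

# ancestry[i, j] = fraction of African ancestry for case i at locus j (0, 0.5, or 1)
ancestry = rng.binomial(2, baseline, size=(n_cases, n_loci)) / 2
ancestry[:, 123] = rng.binomial(2, baseline + 0.028, size=n_cases) / 2  # planted enriched locus

per_locus = ancestry.mean(axis=0)   # mean ancestry fraction at each locus
genome_avg = per_locus.mean()
excess = per_locus - genome_avg
top = int(np.argmax(excess))
print(f"most enriched locus: {top}, +{excess[top]:.3f} above the genome-wide average")
```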

Reich engages in a lot of non-scientific wishful thinking about racial differences and how they should be treated by “society” — none of which is in his purview as a scientist. Reich’s forays into psychobabble have been addressed at length by Steve Sailer (here and here) and Gregory Cochran (here, here, here, here, and here). Suffice it to say that Reich is trying in vain to minimize the scientific fact of racial differences that show up crucially in intelligence and rates of violent crime.

The lesson here is that it’s all right to show that race isn’t a social construct as long as you proclaim that it is a social construct. This is known as talking out of both sides of one’s mouth — another manifestation of balderdash.

DIVERSITY IS GOOD, EXCEPT WHEN IT ISN’T

I now invoke Robert Putnam, a political scientist known mainly for his book Bowling Alone: The Collapse and Revival of American Community (2000), in which he

makes a distinction between two kinds of social capital: bonding capital and bridging capital. Bonding occurs when you are socializing with people who are like you: same age, same race, same religion, and so on. But in order to create peaceful societies in a diverse multi-ethnic country, one needs to have a second kind of social capital: bridging. Bridging is what you do when you make friends with people who are not like you, like supporters of another football team. Putnam argues that those two kinds of social capital, bonding and bridging, do strengthen each other. Consequently, with the decline of the bonding capital mentioned above inevitably comes the decline of the bridging capital leading to greater ethnic tensions.

In later work on diversity and trust within communities, Putnam concludes that

other things being equal, more diversity in a community is associated with less trust both between and within ethnic groups….

Even when controlling for income inequality and crime rates, two factors which conflict theory states should be the prime causal factors in declining inter-ethnic group trust, more diversity is still associated with less communal trust.

Lowered trust in areas with high diversity is also associated with:

  • Lower confidence in local government, local leaders and the local news media.
  • Lower political efficacy – that is, confidence in one’s own influence.
  • Lower frequency of registering to vote, but more interest and knowledge about politics and more participation in protest marches and social reform groups.
  • Higher political advocacy, but lower expectations that it will bring about a desirable result.
  • Less expectation that others will cooperate to solve dilemmas of collective action (e.g., voluntary conservation to ease a water or energy shortage).
  • Less likelihood of working on a community project.
  • Less likelihood of giving to charity or volunteering.
  • Fewer close friends and confidants.
  • Less happiness and lower perceived quality of life.
  • More time spent watching television and more agreement that “television is my most important form of entertainment”.

It’s not as if Putnam is a social conservative who is eager to impart such news. To the contrary, as Michael Jonas writes in “The Downside of Diversity”, Putnam’s

findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis … , he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

“Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk,” wrote conservative commentator Ilana Mercer….

After releasing the initial results in 2001, Putnam says he spent time “kicking the tires really hard” to be sure the study had it right. Putnam realized, for instance, that more diverse communities tended to be larger, have greater income ranges, higher crime rates, and more mobility among their residents — all factors that could depress social capital independent of any impact ethnic diversity might have.

“People would say, ‘I bet you forgot about X,’” Putnam says of the string of suggestions from colleagues. “There were 20 or 30 X’s.”

But even after statistically taking them all into account, the connection remained strong: Higher diversity meant lower social capital. In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.”

“People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes….

In a recent study, [Harvard economist Edward] Glaeser and colleague Alberto Alesina demonstrated that roughly half the difference in social welfare spending between the US and Europe — Europe spends far more — can be attributed to the greater ethnic diversity of the US population. Glaeser says lower national social welfare spending in the US is a “macro” version of the decreased civic engagement Putnam found in more diverse communities within the country.

Economists Matthew Kahn of UCLA and Dora Costa of MIT reviewed 15 recent studies in a 2003 paper, all of which linked diversity with lower levels of social capital. Greater ethnic diversity was linked, for example, to lower school funding, census response rates, and trust in others. Kahn and Costa’s own research documented higher desertion rates in the Civil War among Union Army soldiers serving in companies whose soldiers varied more by age, occupation, and birthplace.

Birds of different feathers may sometimes flock together, but they are also less likely to look out for one another. “Everyone is a little self-conscious that this is not politically correct stuff,” says Kahn….

In his paper, Putnam cites the work done by Page and others, and uses it to help frame his conclusion that increasing diversity in America is not only inevitable, but ultimately valuable and enriching. As for smoothing over the divisions that hinder civic engagement, Putnam argues that Americans can help that process along through targeted efforts. He suggests expanding support for English-language instruction and investing in community centers and other places that allow for “meaningful interaction across ethnic lines.”

Some critics have found his prescriptions underwhelming. And in offering ideas for mitigating his findings, Putnam has drawn scorn for stepping out of the role of dispassionate researcher. “You’re just supposed to tell your peers what you found,” says John Leo, senior fellow at the Manhattan Institute, a conservative think tank. [Michael Jonas, “The downside of diversity,” The Boston Globe (boston.com), August 5, 2007]

What is it about academics like Reich and Putnam that they can’t bear to face the very facts they have uncovered? The magic word is “academics”. They are denizens of a milieu in which the facts of life about race, guns, sex, and many other things are habitually suppressed in favor of “hope and change”, and the facts be damned.

ONE MORE BIT OF RACE-RELATED BALDERDASH

I was unaware of the Implicit Association Test (IAT) until a few years ago, when I took a test at YourMorals.Org that purported to measure my implicit racial preferences. The IAT has been exposed as junk. As John J. Ray writes:

Psychologists are well aware that people often do not say what they really think.  It is therefore something of a holy grail among them to find ways that WILL detect what people really think. A very popular example of that is the Implicit Associations test (IAT).  It supposedly measures racist thoughts whether you are aware of them or not.  It sometimes shows people who think they are anti-racist to be in fact secretly racist.

I dismissed it as a heap of junk long ago (here and here) but it has remained very popular and is widely accepted as revealing truth.  I am therefore pleased that a very long and thorough article has just appeared which comes to the same conclusion that I did.

The article in question (which has the same title as Ray’s post) is by Jesse Singal. It appeared at Science of Us on January 11, 2017. Here are some excerpts:

Perhaps no new concept from the world of academic psychology has taken hold of the public imagination more quickly and profoundly in the 21st century than implicit bias — that is, forms of bias which operate beyond the conscious awareness of individuals. That’s in large part due to the blockbuster success of the so-called implicit association test, which purports to offer a quick, easy way to measure how implicitly biased individual people are….

Since the IAT was first introduced almost 20 years ago, its architects, as well as the countless researchers and commentators who have enthusiastically embraced it, have offered it as a way to reveal to test-takers what amounts to a deep, dark secret about who they are: They may not feel racist, but in fact, the test shows that in a variety of intergroup settings, they will act racist….

[The] co-creators are Mahzarin Banaji, currently the chair of Harvard University’s psychology department, and Anthony Greenwald, a highly regarded social psychology researcher at the University of Washington. The duo introduced the test to the world at a 1998 press conference in Seattle — the accompanying press release noted that they had collected data suggesting that 90–95 percent of Americans harbored the “roots of unconscious prejudice.” The public immediately took notice: Since then, the IAT has been mostly treated as a revolutionary, revelatory piece of technology, garnering overwhelmingly positive media coverage….

Maybe the biggest driver of the IAT’s popularity and visibility, though, is the fact that anyone can take the test on the Project Implicit website, which launched shortly after the test was unveiled and which is hosted by Harvard University. The test’s architects reported that, by October 2015, more than 17 million individual test sessions had been completed on the website. As will become clear, learning one’s IAT results is, for many people, a very big deal that changes how they view themselves and their place in the world.

Given all this excitement, it might feel safe to assume that the IAT really does measure people’s propensity to commit real-world acts of implicit bias against marginalized groups, and that it does so in a dependable, clearly understood way….

Unfortunately, none of that is true. A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior — even the test’s creators have now admitted as such.

How does IAT work? Singal summarizes:

You sit down at a computer where you are shown a series of images and/or words. First, you’re instructed to hit ‘i’ when you see a “good” term like pleasant, or to hit ‘e’ when you see a “bad” one like tragedy. Then, hit ‘i’ when you see a black face, and hit ‘e’ when you see a white one. Easy enough, but soon things get slightly more complex: Hit ‘i’ when you see a good word or an image of a black person, and ‘e’ when you see a bad word or an image of a white person. Then the categories flip to black/bad and white/good. As you peck away at the keyboard, the computer measures your reaction times, which it plugs into an algorithm. That algorithm, in turn, generates your score.

If you were quicker to associate good words with white faces than good words with black faces, and/or slower to associate bad words with white faces than bad words with black ones, then the test will report that you have a slight, moderate, or strong “preference for white faces over black faces,” or some similar language. You might also find you have an anti-white bias, though that is significantly less common. By the normal scoring conventions of the test, positive scores indicate bias against the out-group, while negative ones indicate bias against the in-group.

The rough idea is that, as humans, we have an easier time connecting concepts that are already tightly linked in our brains, and a tougher time connecting concepts that aren’t. The longer it takes to connect “black” and “good” relative to “white” and “good,” the thinking goes, the more your unconscious biases favor white people over black people.
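
Singal’s description can be made concrete with a little arithmetic. What follows is a minimal Python sketch of a simplified IAT-style score, meant only to illustrate the “reaction times plugged into an algorithm” step. It is not the Project Implicit scoring algorithm (the published “D” measure adds error penalties, trial filtering, and block-by-block pooling), and the function name and latencies are invented for the example.

from statistics import mean, stdev

def simplified_iat_score(compatible_ms, incompatible_ms):
    # Compare reaction times (in milliseconds) from the "compatible" block
    # (e.g., white/good and black/bad pairings) with those from the
    # "incompatible" block (white/bad and black/good pairings).
    # A positive score means the incompatible pairings took longer, which
    # the test's logic reads as an implicit preference for the in-group.
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Made-up latencies for one imaginary test-taker:
compatible = [612, 580, 645, 630, 598, 570]
incompatible = [690, 720, 705, 680, 732, 699]
print(round(simplified_iat_score(compatible, incompatible), 2))  # about 1.7, which the test would call a "strong preference"

As the sketch suggests, the score is nothing more than a standardized difference in average reaction times between the two pairings.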

Singal continues (at great length) to pile up the mountain of evidence against IAT, and to caution against reading anything into the results it yields.

Having become aware of the debunking of the IAT, I went to the website of Project Implicit. When I reached this page, I was surprised to learn that I could find out not only whether I’m a closet racist but also whether I prefer dark or light skin tones, Asians or non-Asians, Trump or a previous president, and several other things or their opposites. I chose to discover my true feelings about Trump vs. a previous president, and was faced with a choice between Trump and Clinton.

What was the result of my several minutes of tapping “e” and “i” on the keyboard of my PC? This:

Your data suggest a moderate automatic preference for Bill Clinton over Donald Trump.

Balderdash! Though Trump is obviously not of better character than Clinton, he’s obviously not of worse character. And as far as policy goes, the difference between Trump and Clinton is somewhat like the difference between a non-silent Calvin Coolidge and an FDR without the patriotism. (With apologies to the memory of Coolidge, my favorite president.)

What did I learn from the IAT? I must have very good reflexes. A person who processes information rapidly and then almost instantly translates it into a physical response should be able to “beat” the IAT. And that’s probably what I did in the Trump vs. Clinton test.

Perhaps the IAT for racism could be used to screen candidates for fighter-pilot training. Only “non-racists” would be admitted. Anyone who isn’t quick enough to avoid the “racist” label isn’t quick enough to win a dogfight.

OTHER “LIBERAL” DELUSIONS

There are plenty of them under the heading of balderdash. It’s also known as magical thinking, in which “ought” becomes “is” and the forces of nature and human nature can be held in abeyance by edict. The following examples revisit some ground already covered here:

  • Men are unnecessary.
  • Women can do everything that men can do, but it doesn’t work the other way … just because.
  • Mothers can work outside the home without damage to their children.
  • Race is a “social construct”; there is no such thing as intelligence; women and men are mentally and physically equal in all respects; and the under-representation of women and blacks in certain fields is therefore due to rank discrimination (but it’s all right if blacks dominate certain sports and women now far outnumber men on college campuses).
  • A minimum wage can be imposed without an increase in unemployment, a “fact” which can be “proven” only by concocting special cases of limited applicability.
  • Taxes can be raised without discouraging investment and therefore reducing the rate of economic growth.
  • Regulation doesn’t reduce the rate of economic growth and foster “crony capitalism”. There can “free lunches” all around.
  • Health insurance premiums will go down while the number of mandates is increased.
  • The economy can be stimulated through the action of the Keynesian multiplier, which is nothing but phony math.
  • “Green” programs create jobs (but only because they are inefficient).
  • Every “right” under the sun can be granted without cost (e.g., affirmative action racial-hiring quotas, which penalize blameless whites; the Social Security Ponzi scheme, which burdens today’s workers and cuts into growth-inducing saving).

There’s much more in a different vein here.

BALDERDASH AS EUPHEMISTIC THINKING

Balderdash, as I have sampled it here, isn’t just nonsense — it’s nonsense in the service of an agenda. The agenda is too often the expansion of government power. Those who favor the expansion of government power don’t like to think that it hurts people. (“We’re from the government and we’re here to help.”) This is a refusal to face facts, which is amply if not exhaustively illustrated in the preceding entries.

But there’s a lot more where that comes from; for example:

  • Crippled became handicapped, which became disabled and then differently abled or something-challenged.
  • Stupid became learning disabled, which became special needs (a euphemistic category that houses more than the stupid).
  • Poor became underprivileged, which became economically disadvantaged, which became (though isn’t overtly called) entitled (as in entitled to other people’s money).
  • Colored persons became Negroes, who became blacks, then African-Americans, and now (often) persons of color.

Why do lefties — lovers of big government — persist in varnishing the truth? They are — they insist — strong supporters of science, which is (ideally) the pursuit of truth. Well, that’s because they aren’t really supporters of science (witness their devotion to the “unsettled” science of AGW, among many fabrications). Nor do they really want the truth. They simply want to portray the world as they would like it to be, or to lie about it so that they can strive to reshape it to their liking.

BALDERDASH IN THE SERVICE OF SLAVERY, MODERN STYLE

I will end with this one, which is less conclusive than what has gone before, but which further illustrates the left’s penchant for evading reality in the service of growing government.

Thomas Nagel writes:

Some would describe taxation as a form of theft and conscription as a form of slavery — in fact some would prefer to describe taxation as slavery too, or at least as forced labor. Much might be said against these descriptions, but that is beside the point. For within proper limits, such practices when engaged in by governments are acceptable, whatever they are called. If someone with an income of $2000 a year trains a gun on someone with an income of $100000 a year and makes him hand over his wallet, that is robbery. If the federal government withholds a portion of the second person’s salary (enforcing the laws against tax evasion with threats of imprisonment under armed guard) and gives some of it to the first person in the form of welfare payments, food stamps, or free health care, that is taxation. In the first case it is (in my opinion) an impermissible use of coercive means to achieve a worthwhile end. In the second case the means are legitimate, because they are impersonally imposed by an institution designed to promote certain results. Such general methods of distribution are preferable to theft as a form of private initiative and also to individual charity. This is true not only for reasons of fairness and efficiency, but also because both theft and charity are disturbances of the relations (or lack of them) between individuals and involve their individual wills in a way that an automatic, officially imposed system of taxation does not. [Mortal Questions, “Ruthlessness in Public Life,” pp. 87-88]

How many logical and epistemic errors can a supposedly brilliant philosopher make in one (long) paragraph? Too many:

  • “For within proper limits” means that Nagel is about to beg the question by shaping an answer that fits his idea of proper limits.
  • Nagel then asserts that the use by government of coercive means to achieve the same end as robbery is “legitimate, because [those means] are impersonally imposed by an institution designed to promote certain results.” Balderdash! Nagel’s vision of government as some kind of omniscient, benevolent arbiter is completely at odds with reality.  The “certain results” (redistribution of income) are achieved by functionaries, armed or backed with the force of arms, who themselves share in the spoils of coercive redistribution. Those functionaries act under the authority of bare majorities of elected representatives, who are chosen by bare majorities of voters. And those bare majorities are themselves coalitions of interested parties — hopeful beneficiaries of redistributionist policies, government employees, government contractors, and arrogant statists — who believe, without justification, that forced redistribution is a proper function of government.
  • On the last point, Nagel ignores the sordid history of the unconstitutional expansion of the powers of government. Without justification, he aligns himself with proponents of the “living Constitution.”
  • Nagel’s moral obtuseness is fully revealed when he equates coercive redistribution with “fairness and efficiency,” as if property rights and liberty were of no account.
  • The idea that coercive redistribution fosters efficiency is laughable. It does quite the opposite because it removes resources from productive uses — including job-creating investments. The poor are harmed by coercive redistribution because it drastically curtails economic growth, from which they would benefit as job-holders and (where necessary) recipients of private charity (the resources for which would be vastly greater in the absence of coercive redistribution).
  • Finally (though not exhaustively), Nagel’s characterization of private charity as a “disturbance of the relations … between individuals” is so wrong-headed that it leaves me dumbstruck. Private charity arises from real relations among individuals — from a sense of community and feelings of empathy. It is the “automatic, officially imposed system of taxation” that distorts and thwarts (“disturbs”) the social fabric.

In any event, taxation for the purpose of redistribution is slavery: the subjection of one person to others, namely, agents of the government and the recipients of the taxes extracted from the person who pays them under threat of punishment. It’s slavery without whips and chains, but slavery nevertheless.

Suicide

Suicide has garnered a lot of attention in recent days. As noted in a study by the Centers for Disease Control and Prevention, the rate has been rising steadily since it bottomed out in 2000. I discussed suicide at some length in “Suicidal Despair and the ‘War on Whites’” (June 26, 2017). I have updated a few graphs and a bit of text to accommodate the latest figures. But the bottom line remains unchanged. What is it? The “war on whites” is a red herring. Go there and see for yourself.

Selected Writings about Intelligence

I have treated intelligence many times; for example:

Positive Rights and Cosmic Justice: Part IV
Race and Reason: The Achievement Gap — Causes and Implications
“Wading” into Race, Culture, and IQ
The Harmful Myth of Inherent Equality
Bigger, Stronger, and Faster — But Not Quicker?
The IQ of Nations
Some Notes about Psychology and Intelligence
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
More about Intelligence
Not-So-Random Thoughts (XXI), fifth item
Intelligence and Intuition
Intelligence, Personality, Politics, and Happiness
Intelligence As a Dirty Word

The material below consists entirely of quotations from cited sources. The quotations are consistent with and confirm several points made in the earlier posts:

  • Intelligence has a strong genetic component; it is heritable.
  • Race is a real manifestation of genetic differences among sub-groups of human beings. Those subgroups are not only racial but also ethnic in character.
  • Intelligence therefore varies by race and ethnicity, though it is influenced by environment.
  • Specifically, intelligence varies in the following way: There are highly intelligent persons of all races and ethnicities, but the proportion of highly intelligent persons is highest among Ashkenazi Jews, followed in order by East Asians, Northern Europeans, Hispanics (of European/Amerindian descent), and sub-Saharan Africans — and the American descendants of each group.
  • Males are disproportionately represented among highly intelligent persons, relative to females. Males have greater quantitative skills (including spatio-temporal aptitude) than females, whereas females have greater verbal skills than males.
  • Intelligence is positively correlated with attractiveness, health, and longevity.
  • The Flynn effect (rising IQ) is a transitory effect brought about by environmental factors (e.g., better nutrition) and practice (e.g., the learning and application of technical skills). The Woodley effect is (probably) a long-term dysgenic effect among people whose survival and reproduction depend more on technology (devised by a relatively small portion of the populace) than on the ability to cope with environmental threats (i.e., intelligence).

I have moved the supporting material to a new page: “Intelligence“.

Recommended Reading

Leftism, Political Correctness, and Other Lunacies (Dispatches from the Fifth Circle Book 1)

 

On Liberty: Impossible Dreams, Utopian Schemes (Dispatches from the Fifth Circle Book 2)

 

We the People and Other American Myths (Dispatches from the Fifth Circle Book 3)

 

Americana, Etc.: Language, Literature, Movies, Music, Sports, Nostalgia, Trivia, and a Dash of Humor (Dispatches from the Fifth Circle Book 4)

“Conservative” Confusion

Keith Burgess-Jackson is a self-styled conservative with whom I had a cordial online relationship about a dozen years ago. Our relationship foundered for reasons that are trivial and irrelevant to this post. I continued to visit KBJ’s eponymous blog occasionally (see the first item in “related posts”, below), and learned of its disappearance when I tried to visit it in December 2017. It had disappeared in the wake of a controversy that I will address in a future post.

In any event, KBJ has started a new blog, Just Philosophy, which I learned of and began to follow about a week ago. The posts at Just Philosophy were unexceptionable until February 5, when KBJ posted “Barry M. Goldwater (1909-1998) on the Graduated Income Tax”.

KBJ opens the post by quoting Goldwater:

The graduated [income] tax is a confiscatory tax. Its effect, and to a large extent its aim, is to bring down all men to a common level. Many of the leading proponents of the graduated tax frankly admit that their purpose is to redistribute the nation’s wealth. Their aim is an egalitarian society—an objective that does violence both to the charter of the Republic and [to] the laws of Nature. We are all equal in the eyes of God but we are equal in no other respect. Artificial devices for enforcing equality among unequal men must be rejected if we would restore that charter and honor those laws.

He then adds this “note from KBJ”:

The word “confiscate” means “take or seize (someone’s property) with authority.” Every tax, from the lowly sales tax to the gasoline tax to the cigarette tax to the estate tax to the property tax to the income tax, is by definition confiscatory in that sense, so what is Goldwater’s point in saying that the graduated (i.e., progressive) income tax is confiscatory? He must mean something stronger, namely, completely taken away. But this is absurd. We have had a progressive (“graduated”) income tax for generations, and income inequality is at an all-time high. Nobody’s income or wealth is being confiscated by the income tax, if by “confiscated” Goldwater means completely taken away. Only in the fevered minds of libertarians (such as Goldwater) is a progressive income tax designed to “bring down all men to a common level.” And what’s wrong with redistributing wealth? Every law and every public policy redistributes wealth. The question is not whether to redistribute wealth; it’s how to do so. Either we redistribute wealth honestly and intelligently or we do so with our heads in the sand. By the way, conservatives, as such, are not opposed to progressive income taxation. Conservatives want people to have good lives, and that may require progressive income taxation. Those who have more than they need (especially those who have not worked for it) are and should be required to provide for those who, through no fault of their own, have less than they need.

Yes, Goldwater obviously meant something stronger by applying “confiscatory” to the graduated income tax. But what he meant can’t be “completely taken away” because the graduated income tax is one of progressively higher marginal tax rates, none of which has ever reached 100 percent in the United States. And as KBJ acknowledges, a tax of less than 100 percent, “from the lowly sales tax to the gasoline tax to the cigarette tax to the estate tax to the property tax to the income tax, is by definition confiscatory in [the] sense” of “tak[ing] or seiz[ing] (someone’s property) with authority”. What Goldwater must have meant — despite KBJ’s obfuscation — is that the income tax is confiscatory in an especially destructive way, which Goldwater elucidates.
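
To make the arithmetic of a graduated tax explicit, here is a minimal Python sketch using hypothetical brackets; they are invented for illustration and correspond to no actual U.S. schedule. Each marginal rate applies only to the slice of income that falls inside its bracket, so the average rate never reaches the top marginal rate, let alone 100 percent.

# Hypothetical brackets (upper bound of bracket, marginal rate); not an actual tax schedule.
BRACKETS = [
    (10_000, 0.10),
    (50_000, 0.25),
    (float("inf"), 0.40),
]

def graduated_tax(income):
    # Apply each marginal rate only to the portion of income inside its bracket.
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

income = 100_000
tax = graduated_tax(income)
print(tax, tax / income)  # 31000.0 and 0.31: an average rate well below the 40% top rate

Whether such a schedule is just is a separate question; the sketch shows only what “graduated” means mechanically.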

KBJ asks “what’s wrong with redistributing wealth?”, and justifies his evident belief that there’s nothing wrong with it by saying that “Every law and every public policy redistributes wealth.” Wow! It follows, by KBJ’s logic, that there’s nothing wrong with murder because it has been committed for millennia.

Government policy inevitably results in some redistribution of income and wealth. But that is an accident of policy in a regime of limited government, not the aim of policy. KBJ is being disingenuous (at best) when he equates an accidental outcome with the deliberate, massive redistribution of income and wealth that has been going on in the United States for more than a century. It began in earnest with the graduated income tax, became embedded in the fabric of governance with Social Security, and has been reinforced since by Medicare, Medicaid, food stamps, etc., etc., etc. Many conservatives (or “conservatives”) have been complicit in redistributive measures, but the impetus for those measures has come from the left.

KBJ then trots out this assertion: “Conservatives, as such, are not opposed to progressive income taxation.” I don’t know which conservatives KBJ has been reading or listening to (himself, perhaps, though his conservatism is now in grave doubt). In fact, the quotation in KBJ’s post is from Goldwater’s Conscience of a Conservative. For that is what Goldwater considered himself to be, not a libertarian as KBJ asserts. Goldwater was nothing like the typical libertarian who eschews the “tribalism” of patriotism. Goldwater was a patriot through-and-through.

Goldwater was a principled conservative — a consistent defender of liberty within a framework of limited government, which defends the citizenry and acts as a referee of last resort. That position is the nexus of classical liberalism (sometimes called libertarianism) and conservatism, but it is conservatism nonetheless. It is a manifestation of the conservative disposition:

A conservative’s default position is to respect prevailing social norms, taking them as a guide to conduct that will yield productive social and economic collaboration. Conservatism isn’t merely a knee-jerk response to authority. It reflects an understanding, if only an intuitive one, that tradition reflects wisdom that has passed the test of time. It also reflects a preference for changing tradition — where it needs changing — from the inside out, a bit at a time, rather from the outside in. The latter kind of change is uninformed by first-hand experience and therefore likely to be counterproductive, that is, destructive of social and economic cohesion and cooperation.

The essential ingredient in conservative governance is the preservation and reinforcement of the beneficial norms that are cultivated in the voluntary institutions of civil society: family, religion, club, community (where it is close-knit), and commerce. When those institutions are allowed to flourish, much of the work of government is done without the imposition of taxes and regulations, including the enforcement of moral codes and the care of those who are unable to care for themselves.

In the conservative view, government would then be limited to making and enforcing the few rules that are required to adjudicate what Oakeshott calls “collisions”. And there are always foreign and domestic predators who are beyond the effective reach of voluntary social institutions and must be dealt with by the kind of superior force wielded by government.

By thus limiting government to the roles of referee and defender of last resort, civil society is allowed to flourish, both economically and socially. Social conservatism is analogous to the market liberalism of libertarian economics. The price signals that help to organize economic production have their counterpart in the “market” for social behavior. That behavior which is seen to advance a group’s well-being is encouraged; that behavior which is seen to degrade a group’s well-being is discouraged.

Finally on this point, personal responsibility and self-reliance are core conservative values. Conservatives therefore oppose state actions that undermine those values. Progressive income taxation punishes those who take personal responsibility and strive to be self-reliant, while encouraging and rewarding those who shirk personal responsibility and prefer dependency on others.

KBJ’s next assertion is that “Conservatives want people to have good lives, and that may require progressive income taxation.” Conservatives are hardly unique in wanting people to have good lives. Though most leftists, it seems, want to control other people’s lives, there are some leftists who sincerely want people to have good lives, and who strongly believe that this does require progressive income taxation. Not only that, but they usually justify that belief in exactly the way that KBJ does:

Those who have more than they need (especially those who have not worked for it) are and should be required to provide for those who, through no fault of their own, have less than they need.

Did I miss KBJ’s announcement that he has become a “liberal”-“progressive”-pinko? It is one thing to provide for the liberty and security of the populace; it is quite another — and decidedly not conservative — to sit in judgment as to who have “more than they need” and who have “less than they need”, and whether that is “through no fault of their own”. This is the classic “liberal” formula for the arbitrary redistribution of income and wealth. There’s not a conservative thought in that formula.

KBJ seems to have rejected, out of hand (or out of ignorance), the demonstrable truth that everyone would be better off, far better off, with a lot less government involvement in economic (and social) affairs, not more of it. That is my position, as a conservative, and it is the position of the many articulate conservatives whose blogs I read regularly.

It is a position that is consistent with the values of personal responsibility and self-reliance. Conservatives embrace those values not only because they bestow dignity on those who observe them, but also because the observance fosters general as well as personal prosperity. This is another instance of the wisdom that is embedded in traditional values.

Positive law often conflicts with and undermines traditional values. That is why it is a conservative virtue to oppose, resist, and strive to overturn positive law of that kind (e.g., Roe v. Wade, Obergefell v. Hodges, Obamacare). It is a “conservative” vice to accept it just because it’s “the law of the land”.

I am left wondering if KBJ is really a conservative, or just a “conservative“.


Related reading: Yuval Levin, “The Roots of a Reforming Conservatism“, Intercollegiate Review, Spring 2015

Related posts:
Gains from Trade (A critique of KBJ’s “conservative” views on trade)
Why Conservatism Works
Liberty and Society
The Eclipse of “Old America”
Genetic Kinship and Society
Defending Liberty against (Pseudo) Libertarians
Defining Liberty
Conservatism as Right-Minarchism
The Pseudo-Libertarian Temperament
Parsing Political Philosophy (II)
My View of Libertarianism
The War on Conservatism
Another Look at Political Labels
Rescuing Conservatism
If Men Were Angels
Libertarianism, Conservatism, and Political Correctness
Disposition and Ideology

A Glimmer of Hope on the Education Front

Gregory Cochran (West Hunter) points to an item from 2014 that gives the annual distribution of bachelor’s degrees by field of study for 1970-2011. (I would say “major”, but many of the categories encompass several related majors.) I extracted the values for 1970, 1990, and 2011, and assigned a “hardness” value to each field of study:

The distribution of degrees seems to have been shifting away from “soft” fields to “middling” and “hard” ones:

The number of graduates has increased with time, of course, so there are still more soft bachelor’s degrees being granted now than in 1970. But the shift toward harder fields is comforting because soft fields seem to attract squishy-minded leftists in disproportionate numbers.

The graph suggests that the college-educated workforce of the future will be somewhat less dominated by squishy-minded leftists than it has been since 1970. It was around then that many of the flower-children and radicals of the 1960s graduated and went on to positions of power and prominence in the media, the academy, and politics.

It’s faint hope for a future that’s less dominated by leftists than the recent past and present — but it is hope.

CAVEATS:

1. The results shown in the graph are sensitive to my designation of each field’s level of “hardness”. If you disagree with any of those assignments, let me know and I’ll change the inputs and see what difference they make. The table and graph are in a spreadsheet, and changes in the table will instantly show up as changes in the graph.
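
For anyone who wants to replay that sensitivity check outside a spreadsheet, here is a minimal Python sketch. The field names, shares, and hardness labels below are placeholders, not the actual data behind the table and graph; the point is only to show how reassigning a field’s “hardness” changes the soft/middling/hard split.

# Placeholder shares of all bachelor's degrees and "hardness" labels; not the actual data.
hardness = {"Education": "soft", "English": "soft",
            "Business": "middling", "Psychology": "middling",
            "Engineering": "hard", "Biology": "hard"}

shares_2011 = {"Education": 0.06, "English": 0.03, "Business": 0.21,
               "Psychology": 0.06, "Engineering": 0.05, "Biology": 0.05}

def distribution_by_hardness(shares, labels):
    # Sum each field's share of degrees into its hardness category.
    totals = {"soft": 0.0, "middling": 0.0, "hard": 0.0}
    for field, share in shares.items():
        totals[labels[field]] += share
    return totals

print(distribution_by_hardness(shares_2011, hardness))
# Reassign a field (e.g., hardness["Psychology"] = "soft") and rerun to see how
# sensitive the soft/middling/hard distribution is to the designations.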

2. The decline of “soft” fields is due mainly to the sharp decline of Education as a percentage of all bachelor’s degrees, which occurred between 1971 and 1985. To the extent that some Education majors migrated to STEM fields, the overall shift toward “hard” fields is overstated. A prospective teacher who happens to major in math is probably of less-squishy stock than a prospective teacher who happens to major in English, History, or similar “soft” fields — but he is likely to be more squishy than the math major who intends to pursue an advanced degree in his field, and to “do” rather than teach at any level.