Abortion Q and A

A new entry at Realities. Using a Q&A format, this article summarizes my writings on abortion over the past 14 years, since I first voiced my opposition to it.

Recommended Reading

Leftism, Political Correctness, and Other Lunacies (Dispatches from the Fifth Circle Book 1)

 

On Liberty: Impossible Dreams, Utopian Schemes (Dispatches from the Fifth Circle Book 2)

 

We the People and Other American Myths (Dispatches from the Fifth Circle Book 3)

 

Americana, Etc.: Language, Literature, Movies, Music, Sports, Nostalgia, Trivia, and a Dash of Humor (Dispatches from the Fifth Circle Book 4)

“Conservative” Confusion

Keith Burgess-Jackson is a self-styled conservative with whom I had a cordial online relationship about a dozen years ago. Our relationship foundered for reasons that are trivial and irrelevant to this post. I continued to visit KBJ’s eponymous blog occasionally (see first item in “related posts”, below), and learned of its disappearance when I tried to visit it in December 2017. It had disappeared in the wake of a controversy that I will address in a future post.

In any event, KBJ has started a new blog, Just Philosophy, which I learned of and began to follow about a week ago. The posts at Just Philosophy were unexceptionable until February 5, when KBJ posted “Barry M. Goldwater (1909-1998) on the Graduated Income Tax”.

KBJ opens the post by quoting Goldwater:

The graduated [income] tax is a confiscatory tax. Its effect, and to a large extent its aim, is to bring down all men to a common level. Many of the leading proponents of the graduated tax frankly admit that their purpose is to redistribute the nation’s wealth. Their aim is an egalitarian society—an objective that does violence both to the charter of the Republic and [to] the laws of Nature. We are all equal in the eyes of God but we are equal in no other respect. Artificial devices for enforcing equality among unequal men must be rejected if we would restore that charter and honor those laws.

He then adds this “note from KBJ”:

The word “confiscate” means “take or seize (someone’s property) with authority.” Every tax, from the lowly sales tax to the gasoline tax to the cigarette tax to the estate tax to the property tax to the income tax, is by definition confiscatory in that sense, so what is Goldwater’s point in saying that the graduated (i.e., progressive) income tax is confiscatory? He must mean something stronger, namely, completely taken away. But this is absurd. We have had a progressive (“graduated”) income tax for generations, and income inequality is at an all-time high. Nobody’s income or wealth is being confiscated by the income tax, if by “confiscated” Goldwater means completely taken away. Only in the fevered minds of libertarians (such as Goldwater) is a progressive income tax designed to “bring down all men to a common level.” And what’s wrong with redistributing wealth? Every law and every public policy redistributes wealth. The question is not whether to redistribute wealth; it’s how to do so. Either we redistribute wealth honestly and intelligently or we do so with our heads in the sand. By the way, conservatives, as such, are not opposed to progressive income taxation. Conservatives want people to have good lives, and that may require progressive income taxation. Those who have more than they need (especially those who have not worked for it) are and should be required to provide for those who, through no fault of their own, have less than they need.

Yes, Goldwater obviously meant something stronger by applying “confiscatory” to the graduated income tax. But what he meant can’t be “completely taken away” because the graduated income tax is one of progressively higher marginal tax rates, none of which has ever reached 100 percent in the United States. And as KBJ acknowledges, a tax of less than 100 percent, “from the lowly sales tax to the gasoline tax to the cigarette tax to the estate tax to the property tax to the income tax, is by definition confiscatory in [the] sense” of “tak[ing] or seiz[ing] (someone’s property) with authority”. What Goldwater must have meant — despite KBJ’s obfuscation — is that the income tax is confiscatory in an especially destructive way, which Goldwater elucidates.

KBJ asks “what’s wrong with redistributing wealth?”, and justifies his evident belief that there’s nothing wrong with it by saying that “Every law and every public policy redistributes wealth.” Wow! It follows, by KBJ’s logic, that there’s nothing wrong with murder because it has been committed for millennia.

Government policy inevitably results in some redistribution of income and wealth. But that is an accident of policy in a regime of limited government, not the aim of policy. KBJ is being disingenuous (at best) when he equates an accidental outcome with the deliberate, massive redistribution of income and wealth that has been going on in the United States for more than a century. It began in earnest with the graduated income tax, became embedded in the fabric of governance with Social Security, and has been reinforced since by Medicare, Medicaid, food stamps, etc., etc., etc. Many conservatives (or “conservatives”) have been complicit in redistributive measures, but the impetus for those measures has come from the left.

KBJ then trots out this assertion: “Conservatives, as such, are not opposed to progressive income taxation.” I don’t know which conservatives KBJ has been reading or listening to (himself, perhaps, though his conservatism is now in grave doubt). In fact, the quotation in KBJ’s post is from Goldwater’s Conscience of a Conservative. For that is what Goldwater considered himself to be, not a libertarian as KBJ asserts. Goldwater was nothing like the typical libertarian who eschews the “tribalism” of patriotism. Goldwater was a patriot through and through.

Goldwater was a principled conservative — a consistent defender of liberty within a framework of limited government, which defends the citizenry and acts as a referee of last resort. That position is the nexus of classical liberalism (sometimes called libertarianism) and conservatism, but it is conservatism nonetheless. It is a manifestation of the conservative disposition:

A conservative’s default position is to respect prevailing social norms, taking them as a guide to conduct that will yield productive social and economic collaboration. Conservatism isn’t merely a knee-jerk response to authority. It reflects an understanding, if only an intuitive one, that tradition reflects wisdom that has passed the test of time. It also reflects a preference for changing tradition — where it needs changing — from the inside out, a bit at a time, rather than from the outside in. The latter kind of change is uninformed by first-hand experience and therefore likely to be counterproductive, that is, destructive of social and economic cohesion and cooperation.

The essential ingredient in conservative governance is the preservation and reinforcement of the beneficial norms that are cultivated in the voluntary institutions of civil society: family, religion, club, community (where it is close-knit), and commerce. When those institutions are allowed to flourish, much of the work of government is done without the imposition of taxes and regulations, including the enforcement of moral codes and the care of those who are unable to care for themselves.

In the conservative view, government would then be limited to making and enforcing the few rules that are required to adjudicate what Oakeshott calls “collisions”. And there are always foreign and domestic predators who are beyond the effective reach of voluntary social institutions and must be dealt with by the kind of superior force wielded by government.

By thus limiting government to the roles of referee and defender of last resort, civil society is allowed to flourish, both economically and socially. Social conservatism is analogous to the market liberalism of libertarian economics. The price signals that help to organize economic production have their counterpart in the “market” for social behavior. That behavior which is seen to advance a group’s well-being is encouraged; that behavior which is seen to degrade a group’s well-being is discouraged.

Finally on this point, personal responsibility and self-reliance are core conservative values. Conservatives therefore oppose state actions that undermine those values. Progressive income taxation punishes those who take personal responsibility and strive to be self-reliant, while encouraging and rewarding those who shirk personal responsibility and prefer dependency on others.

KBJ’s next assertion is that “Conservatives want people to have good lives, and that may require progressive income taxation.” Conservatives are hardly unique in wanting people to have good lives. Though most leftists, it seems, want to control other people’s lives, there are some leftists who sincerely want people to have good lives, and who strongly believe that this does require progressive income taxation. Not only that, but they usually justify that belief in exactly the way that KBJ does:

Those who have more than they need (especially those who have not worked for it) are and should be required to provide for those who, through no fault of their own, have less than they need.

Did I miss KBJ’s announcement that he has become a “liberal”-“progressive”-pinko? It is one thing to provide for the liberty and security of the populace; it is quite another — and decidedly not conservative — to sit in judgment as to who have “more than they need” and who have “less than they need”, and whether that is “through no fault of their own”. This is the classic “liberal” formula for the arbitrary redistribution of income and wealth. There’s not a conservative thought in that formula.

KBJ seems to have rejected, out of hand (or out of ignorance), the demonstrable truth that everyone would be better off — far better off — with a lot less government involvement in economic (and social) affairs, not more of it. That is my position, as a conservative, and it is the position of the many articulate conservatives whose blogs I read regularly.

It is a position that is consistent with the values of personal responsibility and self-reliance. Conservatives embrace those values not only because they bestow dignity on those who observe them, but also because the observance fosters general as well as personal prosperity. This is another instance of the wisdom that is embedded in traditional values.

Positive law often conflicts with and undermines traditional values. That is why it is a conservative virtue to oppose, resist, and strive to overturn positive law of that kind (e.g., Roe v. Wade, Obergefell v. Hodges, Obamacare). It is a “conservative” vice to accept it just because it’s “the law of the land”.

I am left wondering if KBJ is really a conservative, or just a “conservative“.


Related reading: Yuval Levin, “The Roots of a Reforming Conservatism“, Intercollegiate Review, Spring 2015

Related posts:
Gains from Trade (A critique of KBJ’s “conservative” views on trade)
Why Conservatism Works
Liberty and Society
The Eclipse of “Old America”
Genetic Kinship and Society
Defending Liberty against (Pseudo) Libertarians
Defining Liberty
Conservatism as Right-Minarchism
The Pseudo-Libertarian Temperament
Parsing Political Philosophy (II)
My View of Libertarianism
The War on Conservatism
Another Look at Political Labels
Rescuing Conservatism
If Men Were Angels
Libertarianism, Conservatism, and Political Correctness
Disposition and Ideology

A Glimmer of Hope on the Education Front

Gregory Cochran (West Hunter) points to an item from 2014 that gives the annual distribution of bachelor’s degrees by field of study for 1970-2011. (I would say “major”, but many of the categories encompass several related majors.) I extracted the values for 1970, 1990, and 2011, and assigned a “hardness” value to each field of study:

The distribution of degrees seems to have been shifting away from “soft” fields to “middling” and “hard” ones:

The number of graduates has increased with time, of course, so there are still more soft bachelor’s degrees being granted now than in 1970. But the shift toward harder fields is comforting because soft fields seem to attract squishy-minded leftists in disproportionate numbers.
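
For concreteness, here is a minimal sketch in Python of the computation behind the table and graph: group degree counts by their assigned “hardness” and compute each group’s share of all degrees in a given year. The fields, ratings, and counts below are made-up placeholders, not the actual 1970-2011 data.

```python
# Hypothetical illustration: group bachelor's-degree counts by a "hardness"
# rating and compute each group's share of all degrees in a given year.
# Fields, ratings, and counts are placeholders, not the actual data.
fields = {
    # field: (hardness, degrees in year A, degrees in year B)
    "Education":   ("soft",     176000, 104000),
    "English":     ("soft",      64000,  53000),
    "Business":    ("middling", 115000, 365000),
    "Engineering": ("hard",      45000,  77000),
    "Biology":     ("hard",      38000,  90000),
}

def shares(year_index):
    totals = {"soft": 0, "middling": 0, "hard": 0}
    for hardness, *counts in fields.values():
        totals[hardness] += counts[year_index]
    grand_total = sum(totals.values())
    return {k: round(100 * v / grand_total, 1) for k, v in totals.items()}

print("Year A shares (%):", shares(0))
print("Year B shares (%):", shares(1))
```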

The graph suggests that the college-educated workforce of the future will be somewhat less dominated by squishy-minded leftists than it has been since 1970. It was around then that many of the flower-children and radicals of the 1960s graduated and went on to positions of power and prominence in the media, the academy, and politics.

It’s faint hope for a future that’s less dominated by leftists than the recent past and present — but it is hope.

CAVEATS:

1. The results shown in the graph are sensitive to my designation of each field’s level of “hardness”. If you disagree with any of those assignments, let me know and I’ll change the inputs and see what difference they make. The table and graph are in a spreadsheet, and changes in the table will instantly show up as changes in the graph.

2. The decline of “soft” fields is due mainly to the sharp decline of Education as a percentage of all bachelor’s degrees, which occurred between 1971 and 1985. To the extent that some Education majors migrated to STEM fields, the overall shift toward “hard” fields is overstated. A prospective teacher who happens to major in math is probably of less-squishy stock than a prospective teacher who happens to major in English, History, or similar “soft” fields — but he is likely to be more squishy than the math major who intends to pursue an advanced degree in his field, and to “do” rather than teach at any level.

Hot Is Better Than Cold: A Small Case Study

I’ve been trying to find wandering classmates as the 60th anniversary of our graduation from high school looms. Not all are enthusiastic about returning to our home town in Michigan for a reunion next August. Nor am I, truth be told.

A sunny, August day in Michigan is barely warm enough for me. I’m far from alone in holding that view, as anyone with a casual knowledge of inter-State migration knows.

Take my graduating class, for example. Of the 79 living graduates whose whereabouts are known, 45 are still in Michigan; 24 are in warmer States (Arizona, California, Florida, Georgia, Kentucky, Louisiana, Mississippi, Tennessee, and Texas — moi); and 10 (inexplicably) have opted for other States at about the same latitude. In sum: 30 percent have opted for warmer climes; only 13 percent have chosen to leave a cold State for another cold State.
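
The percentages follow directly from the counts:

```python
# A quick check of the percentages cited above (counts from the post).
living = 79
warmer = 24         # moved to warmer States
cold_to_cold = 10   # moved to other States at about the same latitude
print(f"warmer: {100 * warmer / living:.0f}%")              # ~30%
print(f"cold-to-cold: {100 * cold_to_cold / living:.0f}%")  # ~13%
```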

It would be a good thing if the world were warming a tad, as it might be.

Further Thoughts about Probability

BACKGROUND

A few weeks ago I posted “A Bayesian Puzzle”. I took it down because Bayesianism warranted more careful treatment than I had given it. But while the post was live at Ricochet (where I cross-posted in September-November), I had an exchange with a reader who is an obdurate believer in single-event probabilities, such as “the probability of heads on the next coin flip is 50 percent” and “the probability of 12 on the next roll of a pair of dice is 1/36”. That wasn’t the first exchange of its kind that I’ve had; “Some Thoughts about Probability” reports an earlier and more thoughtful exchange with a believer in single-event probabilities.

DISCUSSION

A believer in single-event probabilities takes the view that a single flip of a coin or roll of dice has a probability. I do not. A probability represents the frequency with which an outcome occurs over the very long run, and it is only an average that conceals random variations.

The outcome of a single coin flip can’t be reduced to a percentage or probability. It can only be described in terms of its discrete, mutually exclusive possibilities: heads (H) or tails (T). The outcome of a single roll of a die or pair of dice can only be described in terms of the number of points that may come up, 1 through 6 or 2 through 12.

Yes, the expected frequencies of H, T, and various point totals can be computed by simple mathematical operations. But those are only expected frequencies. They say nothing about the next coin flip or dice roll, nor do they more than approximate the actual frequencies that will occur over the next 100, 1,000, or 10,000 such events.

Of what value is it to know that the probability of H is 0.5 when H fails to occur in 11 consecutive flips of a fair coin? Of what value is it to know that the probability of rolling a 7 is 0.167 — meaning that 7 comes up once every 6 rolls, on average — when 7 may not appear for 56 consecutive rolls? These examples are drawn from simulations of 10,000 coin flips and 1,000 dice rolls. They are simulations that I ran once — not simulations that I cherry-picked from many runs. (The Excel file is at https://drive.google.com/open?id=1FABVTiB_qOe-WqMQkiGFj2f70gSu6a82 — coin flips are on the first tab, dice rolls are on the second tab.)
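
For readers who prefer code to a spreadsheet, here is a minimal sketch of the same kind of single-run simulation: 10,000 coin flips and 1,000 rolls of a pair of dice, reporting the longest drought of heads and the longest drought of 7s. (It is an illustration, not a reproduction of the linked Excel workbook.)

```python
# One run of 10,000 fair-coin flips and 1,000 rolls of a pair of dice,
# reporting the longest stretch without heads and without a 7.
import random

def longest_drought(events):
    longest = current = 0
    for hit in events:
        current = 0 if hit else current + 1
        longest = max(longest, current)
    return longest

flips = [random.random() < 0.5 for _ in range(10_000)]            # True = heads
rolls = [random.randint(1, 6) + random.randint(1, 6) == 7
         for _ in range(1_000)]                                    # True = rolled 7

print("Heads frequency:", sum(flips) / len(flips))
print("Longest run without heads:", longest_drought(flips))
print("Longest run without a 7:", longest_drought(rolls))
```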

Let’s take another example, which is more interesting, and has generated much controversy of the years. It’s the Monty Hall problem,

a brain teaser, in the form of a probability puzzle, loosely based on the American television game show Let’s Make a Deal and named after its original host, Monty Hall. The problem was originally posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975…. It became famous as a question from a reader’s letter quoted in Marilyn vos Savant’s “Ask Marilyn” column in Parade magazine in 1990 … :

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Vos Savant’s response was that the contestant should switch to the other door…. Under the standard assumptions, contestants who switch have a 2/3 chance of winning the car, while contestants who stick to their initial choice have only a 1/3 chance.

Vos Savant’s answer is correct, but only if the contestant is allowed to play an unlimited number of games. A player who adopts a strategy of “switch” in every game will, in the long run, win about 2/3 of the time (explanation here). That is, the player has a better chance of winning if he chooses “switch” rather than “stay”.

Read the preceding paragraph carefully and you will spot the logical defect that underlies the belief in single-event probabilities: The long-run winning strategy (“switch”) is transformed into a “better chance” to win a particular game. What does that mean? How does an average frequency of 2/3 improve one’s chances of winning a particular game? It doesn’t. As I show here, game results are utterly random; that is, the average frequency of 2/3 has no bearing on the outcome of a single game.
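
A minimal simulation sketch makes the distinction concrete: over many games the “switch” strategy wins about 2/3 of the time, yet any single game is simply a win or a loss.

```python
# Long-run Monty Hall result under the standard assumptions: switching wins
# about 2/3 of the time over many games; a single game is just win or lose.
import random

def play(switch: bool) -> bool:
    doors = [False, False, False]
    doors[random.randrange(3)] = True          # car behind one door
    pick = random.randrange(3)
    # Host opens a goat door that is neither the contestant's pick nor the car.
    opened = next(d for d in range(3) if d != pick and not doors[d])
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return doors[pick]

games = 100_000
wins = sum(play(switch=True) for _ in range(games))
print("Long-run win frequency when switching:", wins / games)     # ~0.667
print("Outcome of one game:", "win" if play(switch=True) else "loss")
```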

I’ll try to drive the point home by returning to the coin-flip game, with money thrown into the mix. A $1 bet on H means a gain of $1 if H turns up, and a loss of $1 if T turns up. The expected value of the bet — if repeated over a very large number of trials — is zero. The bettor expects to win and lose the same number of times, and to walk away no richer or poorer than when he started. And for a very large number of games, the bettor will walk away approximately (but not necessarily exactly) neither richer nor poorer than when he started. How many games? In the simulation of 10,000 games mentioned earlier, H occurred 50.6 percent of the time. A very large number of games is probably at least 100,000.
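
A minimal sketch of the repeated $1 bet: the average payoff per bet drifts toward zero as the number of bets grows, but a single run will rarely land exactly at zero.

```python
# Repeated $1 coin-flip bets: the per-bet average approaches zero, but the
# running total still wanders.
import random

bets = 100_000
total = 0
for _ in range(bets):
    total += 1 if random.random() < 0.5 else -1

print("Net winnings after", bets, "bets:", total)
print("Average payoff per bet:", total / bets)   # close to, but rarely exactly, 0
```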

Let us say, for the sake of argument, that a bettor has played 100,000 coin-flip games at $1 a game and come out exactly even. What does that mean for the play of the next game? Does it have an expected value of zero?

To see why the answer is “no”, let’s make it interesting and say that the bet on the next game — the next coin flip — is $10,000. The size of the bet should wonderfully concentrate the bettor’s mind. He should now see the situation for what it really is: There are two possible outcomes, and only one of them will be realized. An average of the two outcomes is meaningless. The single coin flip doesn’t have a “probability” of 0.5 H and 0.5 T and an “expected payoff” of zero. The coin will come up either H or T, and the bettor will either lose $10,000 or win $10,000.

To repeat: The outcome of a single coin flip doesn’t have an expected value for the bettor. It has two possible values, and the bettor must decide whether he is willing to lose $10,000 on the single flip of a coin.

By the same token (or coin), the outcome of a single roll of a pair of dice doesn’t have a 1-in-6 probability of coming up 7. It has 36 possible outcomes and 11 possible point totals, and the bettor must decide how much he is willing to lose if he puts his money on the wrong combination or outcome.

CONCLUSION

It is a logical fallacy to ascribe a probability to a single event. A probability represents the observed or computed average value of a very large number of like events. A single event cannot possess that average value. A single event has a finite number of discrete and mutually exclusive outcomes. Those outcomes will not “average out” — only one of them will obtain, like Schrödinger’s cat.

To say that the outcomes will average out — which is what a probability implies — is tantamount to saying that Jack Sprat and his wife were neither skinny nor fat because their body-mass indices averaged to a normal value. It is tantamount to saying that one can’t drown by walking across a pond with an average depth of 1 foot, when that average conceals the existence of a 100-foot-deep hole.


Related posts:
Understanding the Monty Hall Problem
The Compleat Monty Hall Problem
Some Thoughts about Probability
My War on the Misuse of Probability
Scott Adams Understands Probability

A (Long) Footnote about Science

In “Deduction, Induction, and Knowledge” I make a case that knowledge (as opposed to belief) can only be inductive, that is, limited to specific facts about particular phenomena. It’s true that a hypothesis or theory about a general pattern of relationships (e.g., the general theory of relativity) can be useful, and even necessary. As I say at the end of “Deduction…”, the fact that a general theory can’t be proven

doesn’t — and shouldn’t — stand in the way of acting as if we possess general knowledge. We must act as if we possess general knowledge. To do otherwise would result in stasis, or analysis-paralysis.

Which doesn’t mean that a general theory should be accepted just because it seems plausible. Some general theories — such as global climate models (or GCMs) — are easily falsified. They persist only because pseudo-scientists and true believers refuse to abandon them. (There is no such thing as “settled science”.)

Neil Lock, writing at Watts Up With That?, offers this perspective on inductive vs. deductive thinking:

Bottom up thinking is like the way we build a house. Starting from the ground, we work upwards, using what we’ve done already as support for what we’re working on at the moment. Top down thinking, on the other hand, starts out from an idea that is a given. It then works downwards, seeking evidence for the idea, or to add detail to it, or to put it into practice….

The bottom up thinker seeks to build, using his senses and his mind, a picture of the reality of which he is a part. He examines, critically, the evidence of his senses. He assembles this evidence into percepts, things he perceives as true. Then he pulls them together and generalizes them into concepts. He uses logic and reason to seek understanding, and he often stops to check that he is still on the right lines. And if he finds he has made an error, he tries to correct it.

The top down thinker, on the other hand, has far less concern for logic or reason, or for correcting errors. He tends to accept new ideas only if they fit his pre-existing beliefs. And so, he finds it hard to go beyond the limitations of what he already knows or believes. [“‘Bottom Up’ versus ‘Top Down’ Thinking — On Just about Everything“, October 22, 2017]

(I urge you to read the whole thing, in which Lock applies the top down-bottom up dichotomy to a broad range of issues.)

Lock overstates the distinction between the two modes of thought. A lot of “bottom up” thinkers derive general hypotheses from their observations about particular events. But — and this is a big “but” — they are also amenable to revising their hypotheses when they encounter facts that contradict them. The best scientists are bottom-up and top-down thinkers whose beliefs are based on bottom-up thinking.

General hypotheses are indispensable guides to “everyday” living. Some of them (e.g., fire burns, gravity causes objects to fall) are such reliable guides that it’s foolish to assume their falsity. Nor does it take much research to learn, for example, that there are areas within a big city where violent crime is rampant. A prudent person — even a “liberal” one — will therefore avoid those areas.

There are also general patterns — now politically incorrect to mention — with respect to differences in physical, psychological, and intellectual traits and abilities between men and women and among races. (See this, this, and this, for example.) These patterns explain disparities in achievement, but they are ignored by true believers who would wish away the underlying causes and penalize those who are more able (in a relevant dimension) for the sake of ersatz equality. The point is that a good many people — perhaps most people — studiously ignore facts of some kind in order to preserve their cherished beliefs about themselves and the world around them.

Which brings me back to science and scientists. Scientists, for the most part, are human beings with a particular aptitude for pattern-seeking and the manipulation of abstract ideas. They can easily get lost in such pursuits and fail to notice that their abstractions have taken them a long way from reality (e.g., Einstein’s special theory of relativity).

This is certainly the case in physics, where scientists admit that the standard model of sub-atomic physics “proves” that the universe shouldn’t exist. (See Andrew Griffin, “The Universe Shouldn’t Exist, Scientists Say after Finding Bizarre Behaviour of Anti-Matter“, The Independent, October 23, 2017.) It is most certainly the case in climatology, where many pseudo-scientists have deployed hopelessly flawed models in the service of policies that would unnecessarily cripple the economy of the United States.

As I say here,

scientists are human and fallible. It is in the best tradition of science to distrust their claims and to dismiss their non-scientific utterances.

Non-scientific utterances are not only those which have nothing to do with a scientist’s field of specialization, but also include those that are based on theories which derive from preconceptions more than facts. It is scientific to admit lack of certainty. It is unscientific — anti-scientific, really — to proclaim certainty about something that is so little understood as the origin of the universe or Earth’s climate.


Related posts:
Hemibel Thinking
The Limits of Science
The Thing about Science
Science in Politics, Politics in Science
Global Warming and the Liberal Agenda
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
Science’s Anti-Scientific Bent
“Warmism”: The Myth of Anthropogenic Global Warming
Modeling Is Not Science
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
Pinker Commits Scientism
AGW: The Death Knell
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II)
Is Science Self-Correcting?
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
Bayesian Irrationality
Mettenheim on Einstein’s Relativity
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable

Much Ado about the Unknown and Unknowable

New items are added occasionally to the list of related reading that follows the text of this post.

The “official” GISS set of temperature records (here) comprises surface thermometer records going back to January 1880. It takes a lot of massaging to construct a monthly time series of “global” temperatures that spans 137 years, with spotty coverage of Earth’s surface (even now), and wide variability in site conditions. There’s the further issue of data manipulation, the most recent example of which was the erasure of the pause that had lasted for almost 19 years.

Taking the GISS numbers at face value, for the moment, what do they suggest about changes in Earth’s temperature (whatever that means)? Almost nothing, when viewed in proper perspective. When viewed, that is, in terms of absolute (Kelvin) temperature readings:

Yes, there’s an upward trend of about 1 degree K (or 1 degree C) per century. And, yes, it’s statistically significant. But the statistical significance is due to the strong correlation between time and temperature. The trend doesn’t explain why Earth’s temperature is what it is. Nor does it explain why it has varied over the past 137 years.

Those variations have been minute. The maximum of 288.79K is only 1.1 percent higher than the minimum of 285.68K. This minuscule difference must be swamped by measurement and estimation errors. It is credible that Earth’s average temperature — had it been measured consistently over the past 137 years — would have changed less (or more) than the GISS record indicates. It is credible that the observed uptrend is an artifact of selective observation and interpretation. It has become warmer over the past 30 years where I live, for example, but the warming is explained entirely by the urban heat-island effect.
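
For the record, here is a minimal sketch of the two calculations just described: the max-min spread expressed as a percentage of the absolute (Kelvin) reading, using the values quoted above, and a linear trend fitted to a synthetic monthly series standing in for the GISS record.

```python
# The percentage spread uses the quoted max and min; the trend is fitted to a
# synthetic series (not the GISS data) built with a 1 K/century drift.
import numpy as np

t_max, t_min = 288.79, 285.68
print("Spread as % of minimum:", 100 * (t_max - t_min) / t_min)   # ~1.1%

months = np.arange(137 * 12)                        # 137 years of monthly values
synthetic = 287.0 + (1.0 / 1200) * months + np.random.normal(0, 0.1, months.size)
slope_per_month, _ = np.polyfit(months, synthetic, 1)
print("Fitted trend (K per century):", slope_per_month * 1200)
```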

A proper explanation of the minute variations in Earth’s temperature — if real — would incorporate all of the factors that influence Earth’s temperature, starting from Earth’s core and going out into the far reaches of the universe (e.g., to account for the influence of cosmic radiation). Among many other things, a proper explanation would encompass the effects of the expansion of the universe, the position and movement of the Milky Way, the position and movement of the Solar System, the position and movement of Earth within the Solar System, and variations in Earth’s magnetic field.

But global climate models (or GCMs) focus entirely on temperature changes and are limited to superficial factors that are hypothesized to cause those changes — but only those factors that can be measured or estimated by complex and often-dubious methods (e.g., the effects of cloud cover). This is equivalent to searching for one’s car keys under a street lamp because that’s where the light is, even though the car keys were dropped 100 feet away.

The deeper and probably more relevant causes of Earth’s ambient temperature are to be found, I believe, in Earth’s core, magma, plate dynamics, ocean currents and composition, magnetic field, exposure to cosmic radiation, and dozens of other things that — to my knowledge — are ignored by GCMs. Moreover, the complexity of the interactions of such factors, and others that are usually included in GCMs, cannot possibly be modeled.

In sum:

  • Changes in Earth’s temperature are unknown with any degree of confidence.
  • At best, the changes are minute.
  • The causes of the changes are unknown.
  • It is impossible to model Earth’s temperature or changes in it.

It is therefore impossible to say whether and to what extent human activity causes Earth’s temperature to change.

It is further impossible for a group of scientists, legislators, or opinionizers to say whether Earth’s warming — if indeed it is warming — is a bad thing. It is a good thing for agriculture — up to some point. It’s a good thing for human comfort (thus the flight of “snowbirds”) — up to some point. But for reasons given above, it’s truly unknown whether those points, and others, will be reached. If they are, human beings will adapt, as they have in the past — unless their ability to adapt is preempted or hampered by costly regulations and counterproductive resource reallocations.

Science is not on the side of the doom-sayers, no matter how loudly they protest that it is.


Related reading (listed chronologically):
Freeman Dyson, “Heretical Thoughts about Science and Society“, from A Many-Colored Glass: Reflections on the Place of Life in the Universe, University of Virginia Press, 2007
Ron Clutz, “Temperatures According to Climate Models“, Science Matters, March 24, 2015
Dr. Tim Ball, “Long-Term Climate Change: What Is a Reasonable Sample Size?“, Watts Up With That?, February 7, 2016
The Global Warming Policy Foundation, Climate Science: Assumptions, Policy Implications, and the Scientific Method, 2017
John Mauer, “Through the Looking Glass with NASA GISS“, Watts Up With That?, February 22, 2017
David R. Henderson and Charles L. Hooper, “Flawed Climate Models“, Hoover Institution, April 4, 2017
Mike Jonas, “Indirect Effects of the Sun on Earth’s Climate“, Watts Up With That?, June 10, 2017
George White, “A Consensus of Convenience“, Watts Up With That?, August 20, 2017
Jennifer Marohasy, “Most of the Recent Warming Could be Natural“, Jennifer Marohasy, August 21, 2017
Richard Taylor, “News from Vostok Ice Cores“, Watts Up With That?, October 8, 2017
Ian Flanigan, “Core of Climate Science Is in the Real-World Data“, Watts Up With That?, November 22, 2017
Eric Worrall, “Claim: Climate Driven Human Extinction ‘in the Coming Decades or Sooner’“, Watts Up With That?, November 23, 2017
Rupert Darwall, “A Veneer of Certainty Stoking Climate Alarm“, Competitive Enterprise Institute, November 28, 2017
Anthony Watts, “New Paper: The Missing Link between Cosmic Rays, Clouds, and Climate Change on Earth“, Watts Up With That?, December 19, 2017
David Archibald, “Baby It’s Cold Outside – Evidence of Solar Cycle Affecting Earth’s Cloud Cover“, Watts Up With That?, December 31, 2017
Anthony Watts, “‘Flaws in Applying Greenhouse Warming to Climate Variability’: A Post-Mortem Paper by Dr. Bill Gray“,  Watts Up With That?, January 18, 2018
Dale Leuck, “Fake News and 2017 Near-Record Temperatures“, Watts Up With That?, January 21, 2017
Christopher Booker, Global Warming: A Case Study in Groupthink, The Global Warming Policy Foundation, February 2018
Will Happer, “Can Climate Models Predict Climate Change?“, PragerU, February 5, 2018
Anthony Watts, “A Never-Before Western-Published Paleoclimate Study from China Suggests Warmer Temperatures in the Past“, Watts Up With That?, February 11, 2018
H. Sterling Burnett, “Alarmist Climate Researchers Abandon Scientific Method“, The American Spectator, February 27, 2018
Anthony Watts, “Alarmists Throw In the Towel on Poor Quality Surface Temperature Data – Pitch for a New Global Climate Reference Network“, Watts Up With That?, March 2, 2018
David Archibald, “The Modern Warm Period Delimited“, Watts Up With That?, March 10, 2018 (This piece offers further evidence — not put forward as “proof” — of the influence of solar flux on cosmic radiation, which affects cloud formation and thus climate. The time scale analyzed is far longer than the 25-year period in which the coincidence of rising CO2 emissions and temperatures led many climate scientists — and many more non-scientists — to become “global warming” “climate change” “climate catastrophe” hysterics.)
Anthony Watts, “New Paper Tries to Disentangle Global Warming from Natural Ocean Variations“, Watts Up With That?, March 15, 2018
Anthony Watts, “Climate Scientist Admits Embarrassment over Future Climate Uncertainty“, Watts Up With That?, March 16, 2018
Renee Hannon, “Modern Warming – Climate Variability or Climate Change?“, Watts Up With That?, March 28, 2018
David Archibald, “It Was the Sun All Along — So Say the Bulgarians“, Watts Up With That?, April 9, 2018
Mark Fife, “Reconstructing a Temperature History Using Complete and Partial Data“, Watts Up With That?, April 19, 2018
Jamal Munshi, “The Charney Sensitivity of Homicides to Atmospheric CO2: A Parody“, Watts Up With That?, April 20, 2018
Peter L. Ward, “Ozone Depletion, Not Greenhouse Gases Cause for Global Warming, Says Researcher“, R&D Magazine, April 20, 2018 (It makes as much sense, if not more sense, than global climate models, which only predict the past.)

Related posts:
AGW: The Death Knell (with many links to related reading and earlier posts)
Not-So-Random Thoughts (XIV) (second item)
AGW in Austin?
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II) (with more links to related reading)
Global-Warming Hype

Deduction, Induction, and Knowledge

Syllogism:

All Greek males are bald.

Herodotus is a Greek male.

Therefore, Herodotus is bald.

The conclusion is false because Herodotus isn’t bald, at least not as he is portrayed.

Moreover, the conclusion depends on a premise — all Greek males are bald — which can’t be known with certainty. The disproof of the premise by a single observation exemplifies the Humean-Popperian view of the scientific method. A scientific proposition is one that can be falsified — contradicted by observed facts. If a proposition isn’t amenable to falsification, it is non-scientific.

In the Humean-Popperian view, a general statement such as “all Greek males are bald” can never be proven. (The next Greek male to come into view may have a full head of hair.) In this view, knowledge consists only of the accretion of discrete facts. General statements are merely provisional inferences based on what has been observed, and cannot be taken as definitive statements about what has not been observed.

Is there a way to prove a general statement about a class of things by showing that there is something about such things which necessitates the truth of a general statement about them? That approach begs the question. The “something about such things” can be discovered only by observation of a finite number of such things. The unobserved things are still lurking out of view, and any of them might not possess the “something” that is characteristic of the observed things.

All general statements about things, their characteristics, and their relationships are therefore provisional. This inescapable truth has been dressed up in the guise of inductive probability, which is a fancy way of saying the same thing.

Not all is lost, however. If it weren’t for provisional knowledge about such things as heat and gravity, many more human beings would succumb to the allure of flames and cliffs, and man would never have stood on the Moon. If it weren’t for provisional knowledge about the relationship between matter and energy, nuclear power and nuclear weapons wouldn’t exist. And on and on.

The Humean-Popperian view is properly cautionary, but it doesn’t — and shouldn’t — stand in the way of acting as if we possess general knowledge. We must act as if we possess general knowledge. To do otherwise would result in stasis, or analysis-paralysis.

Altruism, One More Time

I am reading and generally enjoying Darwinian Fairytales: Selfish Genes, Errors of Heredity and Other Fables of Evolution by the late Australian philosopher, David Stove. I say generally enjoying because in Essay 6, which I just finished reading, Stove goes off the rails.

The title of Essay 6 is “Tax and the Selfish Girl, Or Does ‘Altruism’ Need Inverted Commas?”. Stove expends many words in defense of altruism as it is commonly thought of: putting others before oneself. He also expends some words (though not many) in defense of taxation as an altruistic act.

Stove, whose writing is refreshingly informal instead of academically stilted, is fond of calling things “ridiculous” and “absurd”. Well, Essay 6 is both of those things. Stove’s analysis of altruism is circular: He parades examples of what he considers altruistic conduct, and says that because there is such conduct there must be altruism.

His target is a position that I have taken, and still hold despite Essay 6. My first two essays about altruism are here and here. I will quote a third essay, in which I address philosopher Jason Brennan’s defense of altruism:

What about Brennan’s assertion that he is genuinely altruistic because he doesn’t merely want to avoid bad feelings, but wants to help his son for his son’s sake? That’s called empathy. But empathy is egoistic. Even strong empathy — the ability to “feel” another person’s pain or anguish — is “felt” by the empathizer. It is the empathizer’s response to the other person’s pain or anguish.

Brennan inadvertently makes that point when he invokes sociopathy:

Sociopaths don’t care about other people for their own sake–they view them merely as instruments. Sociopaths don’t feel guilt for failing to help others.

The difference between a sociopath and a “normal” person is found in caring (feeling). But caring (feeling) is something that the I does — or fails to do, if the I is a sociopath. I = ego:

the “I” or self of any person; a thinking, feeling, and conscious being, able to distinguish itself from other selves.

I am not deprecating the kind of laudable act that is called altruistic. I am simply trying to point out what should be an obvious fact: Human beings necessarily act in their own interests, though their own interests often coincide with the interests of others for emotional reasons (e.g., love, empathy), as well as practical ones (e.g., loss of income or status because of the death of a patron).

It should go without saying that the world would be a better place if it had fewer sociopaths in it. Voluntary, mutually beneficial relationships are more than merely transactional; they thrive on the mutual trust and respect that arise from social bonds, including the bonds of love and affection.

Where Stove goes off the rails is with his claim that the existence of classes of people like soldiers, priests, and doctors is evidence of altruism. (NB: Stove was an atheist, so his inclusion of priests isn’t any kind of defense of religion.)

People become soldiers, priests, and doctors for various reasons, including (among many non-altruistic things) a love of danger (soldiers), a desire to control the lives of others (soldiers, priests, and doctors), an intellectual challenge that has nothing to do with caring for others (doctors), earning a lot of money (doctors), prestige (high-ranking soldiers, priests, and doctors), and job security (priests and doctors). Where’s the altruism in any of that?

Where Stove really goes off the rails is with his claim that redistributive taxation is evidence of altruism. As if human beings live in monolithic societies (like ant colonies), where the will of one is the will of all. And as if government represents the “will of the people”, when all it represents is the will of a small number of people who have been granted the power to govern by garnering a bare minority of votes cast by a minority of the populace, by their non-elected bureaucratic agents, and by (mostly) non-elected judges.

 

Hurricane Hysteria

UPDATED 09/15/17 AND 09/16/17

Yes, hurricanes are bad things when they kill and injure people, destroy property, and saturate the soil with seawater. But hurricanes are in the category of “stuff happens”.

Contrary to the true believers in catastrophic anthropogenic global warming (CAGW), hurricanes are not the fault of human beings. Hurricanes are not nature’s “retribution” for mankind’s “sinful” ways, such as the use of fossil fuels.

How do I know? Because there are people who actually look at the numbers. See, for example, “Hate on Display: Climate Activists Go Bonkers Over #Irma and Nonexistent Climate Connection” by Anthony Watts (Watts Up With That?, September 11, 2017). See also Michel de Rougemont’s “Correlation of Accumulated Cyclone Energy and Atlantic Multidecadal Oscillations” (Watts Up With That?, September 4, 2017).

M. de Rougemont’s post addresses accumulated cyclone energy (ACE):

The total energy accumulated each year by tropical storms and hurricanes (ACE) is also showing such a cyclic pattern.

NOAA’s Hurricane Research Division explanations on ACE: “the ACE is calculated by squaring the maximum sustained surface wind in the system every six hours (knots) and summing it up for the season. It is expressed in 10⁴ kt².” Direct instrumental observations are available as monthly series since 1848. A historic reconstruction since 1851 was done by NOAA (yearly means).


Figure 2: Yearly accumulated cyclone energy (ACE). ACE_7y: centered running average over 7 years.

A correlation between ACE and AMO [Atlantic Multidecadal Oscillation] is confirmed by regression analysis.


Figure 3: Correlation ACE = f(AMO), using the running averages over 7 years. AMO: yearly means of the Atlantic Multidecadal Oscillation. ACE_7y: yearly observed accumulated cyclone energy. ACE_calc: calculated ACE using the indicated formula.

Regression formula: [given as images in the original post; per the accompanying text, ACE is expressed as a linear function of AMO, with a direct term and an 18-year-lagged term]

Thus, a simple, linear relation ties ACE to AMO, in part directly, and in part with an 18 years delay. The correlation coefficient is astonishingly good.
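
For readers who want to see the arithmetic, here is a minimal sketch of the ACE calculation as defined in the NOAA passage quoted above. The wind values are made up, and the cutoff at roughly 35 knots (tropical-storm strength) is the conventional threshold, not something stated in the quotation.

```python
# ACE per the NOAA definition quoted above: square the 6-hourly maximum
# sustained winds (knots), sum them, and express the result in 10^4 kt^2.
# Only readings at tropical-storm strength (~35 kt or more) are counted.
def accumulated_cyclone_energy(six_hourly_winds_kt):
    return sum(w ** 2 for w in six_hourly_winds_kt if w >= 35) / 1e4

storm_a = [35, 45, 60, 75, 80, 70, 50, 30]   # knots, every six hours (made up)
storm_b = [40, 55, 65, 55, 40]
season_ace = accumulated_cyclone_energy(storm_a) + accumulated_cyclone_energy(storm_b)
print("Season ACE (10^4 kt^2):", season_ace)
```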

Anthony Watts adds fuel to this fire (or ice to this cocktail) in “Report: Ocean Cycles, Not Humans, May Be Behind Most Observed Climate Change” (Watts Up With That?, September 15, 2017). There, he discusses a report by Anastasios Tsonis, which I have added to the list of related reading, below:

… Anastasios Tsonis, emeritus distinguished professor of atmospheric sciences at the University of Wisconsin-Milwaukee, describes new and cutting-edge research into natural climatic cycles, including the well known El Nino cycle and the less familiar North Atlantic Oscillation and Pacific Decadal Oscillation.

He shows how interactions between these ocean cycles have been shown to drive changes in the global climate on timescales of several decades.

Professor Tsonis says:

We can show that at the start of the 20th century, the North Atlantic Oscillation pushed the global climate into a warming phase, and in 1940 it pushed it back into cooling mode. The famous “pause” in global warming at the start of the 21st century seems to have been instigated by the North Atlantic Oscillation too.

In fact, most of the changes in the global climate over the period of the instrumental record seem to have their origins in the North Atlantic.

Tsonis’ insights have profound implications for the way we view calls for climate alarm.

It may be that another shift in the North Atlantic could bring about another phase shift in the global climate, leading to renewed cooling or warming for several decades to come.

These climatic cycles are entirely natural, and can tell us nothing about the effect of carbon dioxide emissions. But they should inspire caution over the slowing trajectory of global warming we have seen in recent decades.

As Tsonis puts it:

While humans may play a role in climate change, other natural forces may play important roles too.

There are other reasons to be skeptical of CAGW, and even of AGW. For one thing, temperature records are notoriously unreliable, especially records from land-based thermometers. (See, for example, these two posts at Watts Up With That?: “Press Release – Watts at #AGU15 The Quality of Temperature Station Siting Matters for Temperature Trends” by Anthony Watts on December 17, 2015, and “Ooops! Australian BoM Climate Readings May Be invalid Due To Lack of Calibration“, on September 11, 2017.) And when those records aren’t skewed by siting and lack-of-coverage problems, they’re skewed by fudging the numbers to “prove” CAGW. (See my post, “Global-Warming Hype“, August 22, 2017.) Moreover, the models that “prove” CAGW and AGW are terrible, to put it bluntly. (Again, see “Global-Warming Hype“, and also Dr. Tim Ball’s post of September 16, 2017, “Climate Models Can’t Even Approximate Reality Because Atmospheric Structure and Movements are Virtually Unknown” at Watts Up With That?)

It’s certainly doubtful that NOAA’s reconstruction of ACE is accurate and consistent as far back as 1851. I hesitate to give credence to a data series that predates the confluence of satellite observations, ocean buoys, and specially equipped aircraft. The history of weather satellites casts doubt on the validity of aggregate estimates for any period preceding the early 1960s.

As it happens, the data sets for tropical cyclone activity that are maintained by the Tropical Meteorology Project at Colorado State University cover all six of the relevant ocean basins as far back as 1972. And excluding the North Indian Ocean basin — which is by far the least active — the coverage goes back to 1961 (and beyond).

Here’s a graph of the annual values for each basin from 1961 through 2016:

Here’s a graph of the annual totals for 1961-2016, without the North Indian Ocean basin:

The red line is the sum of ACE for all five basins, including the Northwest Pacific basin; the yellow line is the sum of ACE for four basins, including the Northeast Pacific basin; etc.
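
Here, with made-up ACE values standing in for the Colorado State University data, is a minimal sketch of how such stacked totals are built:

```python
# Stacked totals: for each year, basins are added one at a time and the
# running totals plotted. ACE values below are placeholders, not the CSU data.
basins = ["South Indian", "Southwest Pacific", "North Atlantic",
          "Northeast Pacific", "Northwest Pacific"]      # North Indian excluded
ace_by_basin = {                                         # hypothetical values
    "South Indian":      {2015: 110.0, 2016:  95.0},
    "Southwest Pacific": {2015:  60.0, 2016:  48.0},
    "North Atlantic":    {2015:  63.0, 2016: 141.0},
    "Northeast Pacific": {2015: 287.0, 2016: 183.0},
    "Northwest Pacific": {2015: 306.0, 2016: 265.0},
}

for year in (2015, 2016):
    running = 0.0
    for basin in basins:
        running += ace_by_basin[basin][year]
        print(year, f"through {basin}:", running)
```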

The exclusion of the North Indian Ocean basin makes little difference in the totals, which look like this with the inclusion of that basin:

I have these observations about the numbers represented in the preceding graphs:

If one is a believer in CAGW (the G stands for global), it is a lie (by glaring omission) to focus on random, land-falling hurricanes hitting the U.S.

Tropical cyclone activity in the North Atlantic basin, which includes storms that hit the U.S., is not a major factor in the level of global activity.

The level of activity in the North Atlantic basin is practically flat between 1961 and 2016.

The overall level of activity is practically flat between 1961 and 2016, with the exception of spikes that seem to coincide with strong El Niño events.

There is a “pause” in the overall level of activity between the late 1990s and 2015 (with the exception of an El Niño-related spike in 2004). The pause coincides with the pause in global temperatures, which suggests an unsurprising correlation between the level of tropical cyclone activity and the warming of the globe — or lack thereof. But it doesn’t explain that warming, and climate models that “explain” it primarily as a function of the accumulation of atmospheric CO2 are notoriously unreliable.

In fact, NOAA’s reconstruction of ACE in the North Atlantic basin — which, if anything, probably understates ACE before the early 1960s — is rather suggestive:

The recent spikes in ACE are not unprecedented. And there are many prominent spikes that predate the late-20th-century temperature rise on which “warmism” is predicated.

I am very sorry for the victims of Harvey, Irma, and every other weather-related disaster — and of every other disaster, whether man-made or not. But I am not about to reduce my carbon footprint because of the Luddite hysterics who dominate and cling to the quasi-science of climatology.


Other related reading:
Ron Clutz, “Temperatures According to Climate Models“, Science Matters, March 24, 2015
Dr. Tim Ball, “Long-Term Climate Change: What Is a Reasonable Sample Size?“, Watts Up With That?, February 7, 2016
The Global Warming Policy Foundation, Climate Science: Assumptions, Policy Implications, and the Scientific Method, 2017
John Mauer, “Through the Looking Glass with NASA GISS“, Watts Up With That?, February 22, 2017
George White, “A Consensus of Convenience“, Watts Up With That?, August 20, 2017
Jennifer Marohasy, “Most of the Recent Warming Could be Natural“, Jennifer Marohasy, August 21, 2017
Anthony Watts, “What You Need to Know and Are Not Told about Hurricanes“, Watts Up With That?, September 15, 2017
Anastasios Tsonis, The Little Boy: El Niño and Natural Climate Change, Global Warming Policy Foundation, GWPF Report 26, 2017

Other related posts:
AGW: The Death Knell (with many links to related reading and earlier posts)
Not-So-Random Thoughts (XIV) (second item)
AGW in Austin?
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II) (with more links to related reading)

Babe Ruth and the Hot-Hand Hypothesis

According to Wikipedia, the so-called hot-hand fallacy is that “a person who has experienced success with a seemingly random event has a greater chance of further success in additional attempts.” The article continues:

[R]esearchers for many years did not find evidence for a “hot hand” in practice. However, later research has questioned whether the belief is indeed a fallacy. More recent studies using modern statistical analysis have shown that there is evidence for the “hot hand” in some sporting activities.

I won’t repeat the evidence cited in the Wikipedia article, nor will I link to the many studies about the hot-hand effect. You can follow the link and read it all for yourself.

What I will do here is offer an analysis that supports the hot-hand hypothesis, taking Babe Ruth as a case in point. Ruth was a regular position player (non-pitcher) from 1919 through 1934. In that span of 16 seasons he compiled 688 home runs (HR) in 7,649 at-bats (AB) for an overall record of 0.0900 HR/AB. Here are the HR/AB tallies for each of the 16 seasons:

Year HR/AB
1919 0.067
1920 0.118
1921 0.109
1922 0.086
1923 0.079
1924 0.087
1925 0.070
1926 0.095
1927 0.111
1928 0.101
1929 0.092
1930 0.095
1931 0.086
1932 0.090
1933 0.074
1934 0.060

Despite the fame that accrues to Ruth’s 1927 season, when he hit 60 home runs, his best season for HR/AB came in 1920. In 1919, Ruth set a new single-season record with 29 HR. He almost doubled that number in 1920, getting 54 HR in 458 AB for 0.118 HR/AB.

Here’s what that season looks like, in graphical form:

The word for it is “streaky”, which isn’t surprising. That’s the way of most sports. Streaks include not only cold spells but also hot spells. Look at the relatively brief stretches in which Ruth was shut out in the HR department. And look at the relatively long stretches in which he readily exceeded his HR/AB for the season. (For more about the hot hand and streakiness, see Brett Green and Jeffrey Zwiebel, “The Hot-Hand Fallacy: Cognitive Mistakes or Equilibrium Adjustments? Evidence from Major League Baseball“, Stanford Graduate School of Business, Working Paper No. 3101, November 2013.)
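
As a rough check on how long a homerless stretch pure chance alone would produce, here is a sketch that treats the 1920 season as 458 at-bats with a constant 0.118 chance of a home run in each at-bat (an obvious simplification):

```python
# Longest homerless stretch in a simulated 1920-style season (458 AB, constant
# 0.118 HR probability per AB), repeated many times to find a typical value.
import random

def longest_drought(p=0.118, at_bats=458):
    longest = current = 0
    for _ in range(at_bats):
        if random.random() < p:
            current = 0
        else:
            current += 1
            longest = max(longest, current)
    return longest

droughts = [longest_drought() for _ in range(10_000)]
print("Median longest homerless stretch:", sorted(droughts)[len(droughts) // 2])
```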

The same pattern can be inferred from this composite picture of Ruth’s 1919-1934 seasons:

Here’s another way to look at it:

If hitting home runs were a random thing — which it would be if the hot hand were a fallacy — the distribution would be tightly clustered around the mean of 0.0900 HR/AB. Nor would there be a gap between 0 HR/AB and the 0.03 to 0.06 bin. In fact, the gap is wider than that; it goes from 0 to 0.042 HR/AB. When Ruth broke out of a home-run slump, he broke out with a vengeance, because he had the ability to do so.
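
The composite graphs don’t specify the span over which the plotted HR/AB rates were computed, so the sketch below arbitrarily uses successive 100-at-bat blocks and shows how a constant 0.0900 per-at-bat probability would spread them around the mean:

```python
# Spread of HR/AB over successive 100-at-bat blocks if home runs were a purely
# random (constant-probability) process. The 100-AB block size is an
# illustrative assumption, not something stated in the post.
import random

p, at_bats, block = 0.0900, 7_649, 100
outcomes = [random.random() < p for _ in range(at_bats)]
rates = [sum(outcomes[i:i + block]) / block
         for i in range(0, at_bats - block + 1, block)]

print("Min block rate: ", min(rates))
print("Max block rate: ", max(rates))
print("Share of blocks between 0.06 and 0.12:",
      sum(0.06 <= r <= 0.12 for r in rates) / len(rates))
```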

In other words, Ruth’s hot streaks weren’t luck. They were the sum of his ability and focus (or “flow“); he was “putting it all together”. The flow was broken at times — by a bit of bad luck, a bout of indigestion, a lack of sleep, a hangover, an opponent who “had his number”, etc. But a great athlete like Ruth bounces back and puts it all together again and again, until his skills fade to the point that he can’t overcome his infirmities by waiting for his opponents to make mistakes.

The hot hand is the default condition for a great player like a Ruth or a Cobb. The cold hand is the exception until the great player’s skills finally wither. And there’s no sharp dividing line between the likes of Cobb and Ruth and lesser mortals. Anyone who has the ability to play a sport at a professional level (and many an amateur, too) will play with a hot hand from time to time.

The hot hand isn’t a fallacy or a matter of pure luck (or randomness). It’s an artifact of skill.


Related posts:
Flow
Fooled by Non-Randomness
Randomness Is Over-Rated
Luck and Baseball, One More Time
Pseudoscience, “Moneyball,” and Luck
Ty Cobb and the State of Science
The American League’s Greatest Hitters: III

Pattern-Seeking

UPDATED 09/04/17

Scientists and analysts are reluctant to accept the “stuff happens” explanation for similar but disconnected events. The blessing and curse of the scientific-analytic mind is that it always seeks patterns, even where there are none to be found.

UPDATE 1

The version of this post that appears at Ricochet includes the following comments and replies:

Comment — Cool stuff, but are you thinking of any particular patter/maybe-not-pattern in particular?

My reply — The example that leaps readily to mind is “climate change”, the gospel of which is based on the fleeting (25-year) coincidence of rising temperatures and rising CO2 emissions. That, in turn, leads to the usual kind of hysteria about “climate change” when something like Harvey occurs.

Comment — It’s not a coincidence when the numbers are fudged.

My reply — The temperature numbers have been fudged to some extent, but even qualified skeptics accept the late-20th-century temperature rise and the long-term rise in CO2. What’s really at issue is the cause of the temperature rise. The true believers seized on CO2 to the near-exclusion of other factors. How else could they then justify their puritanical desire to control the lives of others, or (if not that) their underlying anti-scientific mindset, which seeks patterns instead of truths?

Another example, which applies to non-scientists and (some) scientists, is the identification of random arrangements of stars as “constellations”, simply because they “look” like something. Yet another example is the penchant for invoking conspiracy theories to explain (or rationalize) notorious events.

Returning to science, it is pattern-seeking which drives scientists to develop explanations that are later discarded and even discredited as wildly wrong. I list a succession of such explanations in my post “The Science Is Settled”.

UPDATE 2

Political pundits, sports writers, and sports commentators are notorious for making predictions that rely on tenuous historical parallels. I herewith offer an example, drawn from this very blog.

Here is the complete text of “A Baseball Note: The 2017 Astros vs. the 1951 Dodgers“, which I posted on the 14th of last month:

If you were following baseball in 1951 (as I was), you’ll remember how that season’s Brooklyn Dodgers blew a big lead, wound up tied with the New York Giants at the end of the regular season, and lost a 3-game playoff to the Giants on Bobby Thomson’s “shot heard ’round the world” in the bottom of the 9th inning of the final playoff game.

On August 11, 1951, the Dodgers took a doubleheader from the Boston Braves and gained their largest lead over the Giants — 13 games. The Dodgers at that point had a W-L record of 70-36 (.660), and would top out at .667 two games later. But their W-L record for the rest of the regular season was only .522. So the Giants caught them and went on to win what is arguably the most dramatic playoff in the history of professional sports.

The 2017 Astros peaked earlier than the 1951 Dodgers, attaining a season-high W-L record of .682 on July 5, and leading the second-place team in the AL West by 18 games on July 28. The Astros’ lead has dropped to 12 games, and the team’s W-L record since the July 5 peak is only .438.

The Los Angeles Angels might be this year’s version of the 1951 Giants. The Angels have come from 19 games behind the Astros on July 28, to trail by 12. In that span, the Angels have gone 11-4 (.733).

Hold onto your hats.

Since I wrote that, the Angels have gone 10-9, while the Astros have gone 12-8 and increased their lead over the Angels to 13.5 games. It’s still possible that the Astros will collapse and the Angels will surge. But the contest between the two teams no longer resembles the Dodgers-Giants duel of 1951, when the Giants had closed to 5.5 games behind the Dodgers at this point in the season.
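The standings arithmetic in the preceding paragraphs is easy to verify. A minimal sketch, in Python (the helper functions are my own, for illustration only):

def winning_pct(wins, losses):
    """Winning percentage, e.g. 70-36 yields .660."""
    return wins / (wins + losses)

def games_gained(leader_record, trailer_record):
    """Change in games-behind over a stretch:
    ((leader wins - trailer wins) + (trailer losses - leader losses)) / 2."""
    (lw, ll), (tw, tl) = leader_record, trailer_record
    return ((lw - tw) + (tl - ll)) / 2

print(f"{winning_pct(70, 36):.3f}")         # 1951 Dodgers on August 11: 0.660
# Since the August post: Astros 12-8, Angels 10-9, starting 12 games apart.
print(12 + games_gained((12, 8), (10, 9)))  # 13.5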

My “model” of the 2017 contest between the Astros and Angels was on a par with the disastrously wrong models that “prove” the inexorability of catastrophic anthropogenic global warming. Those models are not merely wrong; they are disastrous, because they are used to push government policy in counterproductive directions: wasting money on “green energy” while shutting down efficient sources of energy, at the cost of real jobs and economic growth.


Related posts:
Hemibel Thinking
The Limits of Science
The Thing about Science
Words of Caution for Scientific Dogmatists
What’s Wrong with Game Theory
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
Science’s Anti-Scientific Bent
Mathematical Economics
Modeling Is Not Science
Beware the Rare Event
Physics Envy
What Is Truth?
The Improbability of Us
We, the Children of the Enlightenment
In Defense of Subjectivism
The Atheism of the Gaps
The Ideal as a False and Dangerous Standard
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Luck and Baseball, One More Time
Are the Natural Numbers Supernatural?
The Candle Problem: Balderdash Masquerading as Science
More about Luck and Baseball
Combinatorial Play
Pseudoscience, “Moneyball,” and Luck
The Fallacy of Human Progress
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
Verbal Regression Analysis, the “End of History,” and Think-Tanks
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
The “Marketplace” of Ideas
Time and Reality
My War on the Misuse of Probability
Ty Cobb and the State of Science
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
Is Science Self-Correcting?
Taleb’s Ruinous Rhetoric
Words Fail Us
Fine-Tuning in a Wacky Wrapper
Tricky Reasoning
Modeling Revisited
Bayesian Irrationality
The Fragility of Knowledge

The Fragility of Knowledge

A recent addition to the collection of essays at “Einstein’s Errors” relies mainly on Christoph von Mettenheim’s Popper versus Einstein. One of Mettenheim’s key witnesses for the prosecution of Einstein’s special theory of relativity (STR) is Alfred Tarski, a Polish-born logician and mathematician. According to Mettenheim, Tarski showed

that all the axioms of geometry [upon which STR is built] are in fact nominalistic definitions, and therefore have nothing to do with truth, but only with expedience. [p. 86]

Later:

Tarski has demonstrated that logical and mathematical inferences can never yield an increase of empirical information because they are based on nominalistic definitions of the most simple terms of our language. We ourselves give them their meaning and cannot, therefore, get out of them anything but what we ourselves have put into them. They are tautological in the sense that any information contained in the conclusion must also have been contained in the premises. This is why logic and mathematics alone can never lead to scientific discoveries. [p. 100]

Mettenheim refers also to Alfred North Whitehead, a great English mathematician and philosopher who preceded Tarski. I am reading Whitehead’s Science and the Modern World thanks to my son, who recently wrote about it. I had heretofore only encountered the book in bits and snatches. I will have more to say about it in future posts. For now, I am content to quote this relevant passage, which presages Tarski’s theme and goes beyond it:

Thought is abstract; and the intolerant use of abstractions is the major vice of the intellect. This vice is not wholly corrected by the recurrence to concrete experience. For after all, you need only attend to those aspects of your concrete experience which lie within some limited scheme. There are two methods for the purification of ideas. One of them is dispassionate observation by means of the bodily senses. But observation is selection. [p. 18]

More to come.

Mettenheim on Einstein’s Relativity

I have added “Mettenheim on Einstein’s Relativity – Part I” to “Einstein’s Errors”. The new material draws on Part I of Christoph von Mettenheim’s Popper versus Einstein: On the Philosophical Foundations of Physics (Tübingen: Mohr Siebeck, 1998). Mettenheim strikes many telling blows against STR. These go to the heart of STR and Einstein’s view of science:

[T]o Einstein the axiomatic method of Euclidean geometry was the method of all science; and the task of the scientist was to find those fundamental truths from which all other statements of science could then be derived by purely logical inference. He explicitly said that the step from geometry to physics was to be achieved by simply adding to the axioms of Euclidean geometry one single further axiom, namely the sentence

Regarding the possibilities of their position solid physical bodies will behave like the bodies of Euclidean geometry.

Popper versus Einstein, p. 30

*     *     *

[T]he theory of relativity as Einstein stated it was a mathematical theory. To him the logical necessity of his theory served as an explanation of its results. He believed that nature itself will observe the rules of logic. His words were that

experience of course remains the sole criterion of the serviceability of a mathematical construction for physics, but the truly creative principle resides in mathematics.

Popper versus Einstein, pp. 61-62

*     *     *

There’s much, much more. Go there and see for yourself.

Bayesian Irrationality

I just came across a strange and revealing statement by Tyler Cowen:

I am frustrated by the lack of Bayesianism in most of the religious belief I observe. I’ve never met a believer who asserted: “I’m really not sure here. But I think Lutheranism is true with p = .018, and the next strongest contender comes in only at .014, so call me Lutheran.” The religious people I’ve known rebel against that manner of framing, even though during times of conversion they may act on such a basis.

I don’t expect all or even most religious believers to present their views this way, but hardly any of them do. That in turn inclines me to think they are using belief for psychological, self-support, and social functions.

I wouldn’t expect anyone to say something like “Lutheranism is true with p = .018”. Lutheranism is either true or false, just as a person on trial is either guilty or innocent. One may have doubts about the truth of Lutheranism or the guilt of a defendant, but those doubts have nothing to do with probability. Neither does Bayesianism.

In defense of probability, I will borrow heavily from myself. According to Wikipedia (as of December 19, 2014):

Bayesian probability represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials.

An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.”

Or consider this account:

The Bayesian approach to learning is based on the subjective interpretation of probability. The value of the proportion p is unknown, and a person expresses his or her opinion about the uncertainty in the proportion by means of a probability distribution placed on a set of possible values of p….

“Level of certainty” and “subjective interpretation” mean “guess”. The guess may be “educated”. It’s well known, for example, that a balanced coin will come up heads about half the time, in the long run. But to say “I’m 50-percent confident that the coin will come up heads” is to say nothing meaningful about the outcome of a single coin toss. There are as many such “probabilities” for a single toss as there are bystanders willing to say “I’m x-percent confident that the coin will come up heads.” In other words, a single toss doesn’t have a probability, though it can be the subject of many opinions as to its outcome.
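What does have meaning is the long-run relative frequency. A few lines of simulation make the point (a sketch only; the seed and the checkpoints are arbitrary choices of mine):

import random

def running_frequency(total_tosses, seed=42):
    """Toss a simulated fair coin and record the relative frequency of heads
    at a few checkpoints along the way."""
    rng = random.Random(seed)
    checkpoints = {10, 100, 1000, 10000, 100000}
    heads = 0
    results = {}
    for n in range(1, total_tosses + 1):
        heads += rng.random() < 0.5
        if n in checkpoints:
            results[n] = heads / n
    return results

for n, freq in running_frequency(100000).items():
    print(f"{n:>6} tosses: relative frequency of heads = {freq:.4f}")

No single toss acquires a probability from any of this; only the long sequence of tosses — what von Mises calls a “collective” — does.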

Returning to reality, Richard von Mises eloquently explains frequentism in Probability, Statistics and Truth (second revised English edition, 1957). Here are some excerpts:

The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations. [P. 11]

*     *     *

In games of dice, the individual event is a single throw of the dice from the box and the attribute is the observation of the number of points shown by the dice. In the game of “heads or tails”, each toss of the coin is an individual event, and the side of the coin which is uppermost is the attribute. [P. 11]

*     *     *

We must now introduce a new term…. This term is “the collective”, and it denotes a sequence of uniform events or processes which differ by certain observable attributes…. All the throws of dice made in the course of a game [of many throws] form a collective wherein the attribute of the single event is the number of points thrown…. The definition of probability which we shall give is concerned with ‘the probability of encountering a single attribute in a given collective’. [Pp. 11-12]

*     *     *

[A] collective is a mass phenomenon or a repetitive event, or, simply, a long sequence of observations for which there are sufficient reasons to believe that the relative frequency of the observed attribute would tend to a fixed limit if the observations were indefinitely continued. The limit will be called the probability of the attribute considered within the collective. [P. 15, emphasis in the original]

*     *     *

The result of each calculation … is always … nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? [P. 33, emphasis added]

Cowen has always struck me as intellectually askew — looking at things from odd angles just for the sake of doing so. In that respect he reminds me of a local news anchor whose suits, shirts, ties, and pocket handkerchiefs almost invariably clash in color and pattern. If there’s a method to his madness, other than attention-getting, it’s lost on me — as is Cowen’s skewed, attention-getting way of thinking.

Modeling Revisited

Arnold Kling comments on a post by John Taylor, who writes about the Macroeconomic Modelling and Model Comparison Network (MMCN), which

is one part of a larger project called the Macroeconomic Model Comparison Initiative (MMCI)…. That initiative includes the Macroeconomic Model Data Base, which already has 82 models that have been developed by researchers at central banks, international institutions, and universities. Key activities of the initiative are comparing solution methods for speed and accuracy, performing robustness studies of policy evaluations, and providing more powerful and user-friendly tools for modelers.

Kling says: “Why limit the comparison to models? Why not compare models with verbal reasoning?” I say: a pox on economic models, whether they are mathematical or verbal.

That said, I do harbor special disdain for mathematical models, including statistical estimates of them. Reality is nuanced. Verbal descriptions of reality, being more nuanced than mathematics, can represent it more closely.

Mathematical modelers are quick to point out that a mathematical model can express complex relationships which are difficult to express in words. True, but the words must always precede the mathematics. Long usage may enable a person to grasp the meaning of 2 + 2 = 4 without consciously putting it into words, but only because he has already done so and committed the formula to memory.

Do you remember word problems? As I remember them, the words came first:

John is twenty years younger than Amy, and in five years’ time he will be half her age. What is John’s age now?

Then came the math:

Solve for J [John’s age]:

J = A − 20
J + 5 = (A + 5) / 2

[where A = Amy’s age]

What would be the point of presenting the math, then asking for the words?
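For the record, the equations above pin down a single answer — John is 15 and Amy is 35. Here is a brute-force check in Python, offered purely as illustration:

# Substituting J = A - 20 into J + 5 = (A + 5) / 2 gives A - 15 = (A + 5) / 2,
# so A = 35 and J = 15. The loop below merely confirms that over plausible ages.
for A in range(1, 121):
    J = A - 20
    if 2 * (J + 5) == A + 5:               # same condition as J + 5 = (A + 5) / 2, kept in integers
        print(f"John is {J}, Amy is {A}")  # prints: John is 15, Amy is 35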

Mathematics is a man-made tool. It probably started with counting. Sheep? Goats? Bananas? It doesn’t matter what it was. What matters is that the actual thing, which had a spoken name, came before the numbering convention that enabled people to refer to three sheep without having to draw or produce three actual sheep.

But … when it came to bartering sheep for loaves of bread, or whatever, those wily ancestors of ours knew that sheep come in many sizes, ages, states of health, and degrees of fecundity, and in two sexes. (Though I suppose that the LGBTQ movement has by now “discovered” homosexual and transgender sheep, and transsexual sheep may be in the offing.) Anyway, there are so many possible combinations of size, age, health, and fecundity that it was (and is) impractical to reduce them to numbers. A quick verbal approximation would have to do in the absence of the real thing. And the real thing would have to be produced before Grog and Grok actually exchanged X sheep for Y loaves of bread, unless they absolutely trusted each other’s honesty and descriptive ability.

Things are somewhat different in this age of mass production and commodification. But even if it’s possible to add sheep that have been bred for near-uniformity, or nearly identical loaves of bread, or Paper Mate Mirado Woodcase Pencils, HB 2, Yellow Barrel, it’s not possible to add those pencils to the sheep and the loaves of bread. The best that one could do is to list the components of such a conglomeration by name and number, with the caveat that there’s a lot of variability in the sheep, goats, bananas, and bread.

An economist would say that it is possible to add a collection of disparate things: Just take the sales price of each one, multiply it by the quantity sold, and if you do that for every product and service produced in the U.S. during a year you have an estimate of GDP. (I’m being a bit loose with the definition of GDP, but it’s good enough for the point I wish to make.) Further, some economists will tout this or that model which estimates changes in the value of GDP as a function of such things as interest rates, the rate of government spending, and estimates of projected consumer spending.
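To make that aggregation concrete, here is a toy sketch, in Python, of the price-times-quantity arithmetic just described. The goods and figures are invented; the point is only that the total is a sum of dollar amounts, not of the things themselves or of their value to the parties who traded them:

# Toy GDP-style aggregation: sum of price x quantity over final goods and services.
transactions = [
    ("sheep",          120.00,  3),
    ("loaf of bread",    2.50, 40),
    ("No. 2 pencil",     0.30, 12),
    ("haircut",         18.00,  2),
]

nominal_total = sum(price * quantity for _, price, quantity in transactions)
print(f"Toy 'GDP': ${nominal_total:,.2f}")  # $499.60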

I don’t disagree that GDP can be computed or that economic models can be concocted. But such computations and models, aside from being notoriously inaccurate (even though they deal in dollars, not in quantities of various products and services), are essentially meaningless. Beyond the errors that are inevitable in the use of sampling to estimate the dollar value of billions of transactions, there is the essential meaninglessness of the dollar value itself. Every transaction represented in an estimate of GDP (or any lesser aggregation) has a different real value to each participant in the transaction. Further, those real values, even if they could be measured and expressed in “utils”, can’t be summed, because “utils” are incommensurate — there is no such thing as a social-welfare function.

Quantitative aggregations are not only meaningless, but their existence simply encourages destructive government interference in economic affairs. Mathematical modeling of “aggregate economic activity” (there is no such thing) may serve as an amusing and even lucrative pastime, but it does nothing to advance the lives and fortunes of the vast majority of Americans. In fact, it serves to retard their lives and fortunes.

All of that because pointy-headed academics, power-lusting politicians, and bamboozled bureaucrats believe that economic aggregates and quantitative economic models are meaningful. If they spent more than a few minutes thinking about what those models are supposed to represent — and don’t and can’t represent — they would at least use them with a slight pang of conscience. (I hold little hope that they would abandon them. The allure of power and the urge to “do something” are just too strong.)

Economic aggregates and models gain meaning and precision only as their compass shrinks to discrete markets for closely similar products and services. But even in the quantification of such markets there will always be some kind of misrepresentation by aggregation, if only because tastes, preferences, materials, processes, and relative prices change constantly. Only a fool believes that a quantitative economic model (of any kind) is more than a rough approximation of past reality — an approximation that will fade quickly as time marches on.

Economist Tony Lawson puts it this way:

Given the modern emphasis on mathematical modelling it is important to determine the conditions in which such tools are appropriate or useful. In other words we need to uncover the ontological presuppositions involved in the insistence that mathematical methods of a certain sort be everywhere employed. The first thing to note is that all these mathematical methods that economists use presuppose event regularities or correlations. This makes modern economics a form of deductivism. A closed system in this context just means any situation in which an event regularity occurs. Deductivism is a form of explanation that requires event regularities.

Now event regularities can just be assumed to hold, even if they cannot be theorised, and some econometricians do just that and dedicate their time to trying to uncover them. But most economists want to theorise in economic terms as well. But clearly they must do so in terms that guarantee event regularity results. The way to do this is to formulate theories in terms of isolated atoms. By an atom I just mean a factor that has the same independent effect whatever the context. Typically human individuals are portrayed as the atoms in question, though there is nothing essential about this.

Notice too that most debates about the nature of rationality are beside the point. Mainstream modellers just need to fix the actions of the individual of their analyses to render them atomistic, i.e., to fix their responses to given conditions. It is this implausible fixing of actions that tends to be expressed though, or is the task of, any rationality axiom. But in truth any old specification will do, including fixed rule or algorithm following as in, say, agent based modelling; the precise assumption used to achieve this matters little.

Once some such axiom or assumption-fixing behaviour is made economists can predict/deduce what the factor in question will do if stimulated. Finally the specification in this way of what any such atom does in given conditions allows the prediction activities of economists ONLY if nothing is allowed to counteract the actions of the atoms of analysis. Hence these atoms must additionally be assumed to act in isolation. It is easy to show that this ontology of closed systems of isolated atoms characterises all of the substantive theorising of mainstream economists.

It is also easy enough to show that the real world, the social reality in which we actually live, is of a nature that is anything but a set of closed systems of isolated atoms (see Lawson, [Economics and Reality, London and New York: Routledge] 1997, [Reorienting Economics, London and New York: Routledge] 2003).

Mathematical-statistical descriptions of economic phenomena are either faithful (if selective) depictions of one-off events (which are unlikely to recur) or highly stylized renditions of complex chains of events (which almost certainly won’t recur). As Arnold Kling says in his review of Richard Bookstaber’s The End of Theory,

people are assumed to know, now and for the indefinite future, the entire range of possibilities, and the likelihood of each. The alternative assumption, that the future has aspects that are not foreseeable today, goes by the name of “radical uncertainty.” But we might just call it the human condition. Bookstaber writes that radical uncertainty “leads the world to go in directions we had never imagined…. The world could be changing right now in ways that will blindside you down the road.”

I’m picking on economics because it’s an easy target. But the “hard sciences” have their problems, too. See, for example, my work in progress about Einstein’s special theory of relativity.


Related reading:

John Cochrane, “Mallaby, the Fed, and Technocratic Illusions”, The Grumpy Economist, July 5, 2017

Vincent Randall, “The Uncertainty Monster: Lessons from Non-Orthodox Economics”, Climate Etc., July 5, 2017

Related posts:

Modeling Is Not Science
Microeconomics and Macroeconomics
Why the “Stimulus” Failed to Stimulate
Baseball Statistics and the Consumer Price Index
The Keynesian Multiplier: Phony Math
Further Thoughts about the Keynesian Multiplier
The Wages of Simplistic Economics
The Essence of Economics
Economics and Science
Economists As Scientists
Mathematical Economics
Economic Modeling: A Case of Unrewarded Complexity
Economics from the Bottom Up
Unorthodox Economics: 1. What Is Economics?
Unorthodox Economics: 2. Pitfalls
Unorthodox Economics: 3. What Is Scientific about Economics?
Unorthodox Economics 4: A Parable of Political Economy

“Science” vs. Science: The Case of Evolution, Race, and Intelligence

If you were to ask those people who marched for science whether they believe in evolution, they would answer with a resounding “yes”. Ask them whether they believe that all branches of the human race evolved identically and you will be met with hostility. The problem, for them, is that an admission of the obvious — differential evolution, resulting in broad racial differences — leads to a fact that they don’t want to admit: there are broad racial differences in intelligence, differences that must have evolutionary origins.

“Science” — the cherished totem of left-wing ideologues — isn’t the same thing as science. The totemized version consists of whatever set of facts and hypotheses suit the left’s agenda. In the case of “climate change”, for example, the observation that in the late 1900s temperatures rose for a period of about 25 years coincident with a reported rise in the level of atmospheric CO2 occasioned the hypothesis that the generation of CO2 by humans causes temperatures to rise. This is a reasonable hypothesis, given the long-understood, positive relationship between temperature and so-called greenhouse gases. But it comes nowhere close to confirming what leftists seem bent on believing and “proving” with hand-tweaked models, which is that if humans continue to emit CO2, and do so at a higher rate than in the past, temperatures will rise to the point that life on Earth will become difficult if not impossible to sustain. There is ample evidence to support the null hypothesis (that “climate change” isn’t catastrophic) and the alternative view (that recent warming is natural and caused mainly by things other than human activity).

Leftists want to believe in catastrophic anthropogenic global warming because it suits the left’s puritanical agenda, as did Paul Ehrlich’s discredited thesis that population growth would outstrip the availability of food and resources, leading to mass starvation and greater poverty. Population control therefore became a leftist mantra, and remains one despite the generally rising prosperity of the human race and the diminution of scarcity (except where leftist governments, like Venezuela’s, create misery).

Why are leftists so eager to believe in problems that portend catastrophic consequences which “must” be averted through draconian measures, such as enforced population control, taxes on soft drinks above a certain size, the prohibition of smoking not only in government buildings but in all buildings, and decreed reductions in CO2-emitting activities (which would, in fact, help to impoverish humans)? The common denominator of such measures is control. And yet, by the process of psychological projection, leftists are always screaming “fascist” at libertarians and conservatives who resist control.

Returning to evolution, why are leftists so eager to embrace it — or, rather, what they choose to believe about it? My answers are that (a) it’s “science” (it’s only science when it’s spelled out in detail, uncertainties and all), and (b) it gives leftists (who usually are atheists) a stick with which to beat “creationists”.

But when it comes to race, leftists insist on denying what’s in front of their eyes: evolutionary disparities in such phenomena as skin color, hair texture, facial structure, running and jumping ability, cranial capacity, and intelligence.

Why? Because the urge to control others is of a piece with the superiority with which leftists believe they’re endowed because they are mainly white persons of European descent and above-average intelligence (just smart enough to be dangerous). Blacks and Hispanics who vote left do so mainly for the privileges it brings them. White leftists are their useful idiots.

Leftism, in other words, is a manifestation of “white privilege”, which white leftists feel compelled to overcome through paternalistic condescension toward blacks and other persons of color. (But not East Asians or the South Asians who have emigrated to the U.S., because the high intelligence of those groups is threatening to white leftists’ feelings of superiority.) What could be more condescending, and less scientific, than to deny what evolution has wrought in order to advance a political agenda?

Leftist race-denial, which has found its way into government policy, is akin to Stalin’s support of Lysenkoism, which its author cleverly aligned with Marxism. Lysenkoism

rejected Mendelian inheritance and the concept of the “gene”; it departed from Darwinian evolutionary theory by rejecting natural selection.

This brings me to Stephen Jay Gould, a leading neo-Lysenkoist and a fraudster of “science” who did much to deflect science from the question of race and intelligence:

[In The Mismeasure of Man] Gould took the work of a 19th century physical anthropologist named Samuel George Morton and made it ridiculous. In his telling, Morton was a fool and an unconscious racist — his project of measuring skull sizes of different ethnic groups conceived in racism and executed in same. Why, Morton clearly must have thought Caucasians had bigger brains than Africans, Indians, and Asians, and then subconsciously mismeasured the skulls to prove they were smarter.

The book then casts the entire project of measuring brain function — psychometrics — in the same light of primitivism.

Gould’s antiracist book was a hit with reviewers in the popular press, and many of its ideas about the morality and validity of testing intelligence became conventional wisdom, persisting today among the educated folks. If you’ve got some notion that IQ doesn’t measure anything but the ability to take IQ tests, that intelligence can’t be defined or may not be real at all, that multiple intelligences exist rather than a general intelligence, you can thank Gould….

Then, in 2011, a funny thing happened. Researchers at the University of Pennsylvania went and measured old Morton’s skulls, which turned out to be just the size he had recorded. Gould, according to one of the co-authors, was nothing but a “charlatan.”

The study itself couldn’t matter, though, could it? Well, recent work using MRI technology has established that descendants of East Asia have slightly more cranial capacity than descendants of Europe, who in turn have a little more than descendants of Africa. Another meta-analysis finds a mild correlation between brain size and IQ performance.

You see where this is going, especially if you already know about the racial disparities in IQ testing, and you’d probably like to hit the brakes before anybody says… what, exactly? It sounds like we’re perilously close to invoking science to argue for genetic racial superiority.

Am I serious? Is this a joke?…

… The reason the joke feels dangerous is that it incorporates a fact that is rarely mentioned in public life. In America, white people on average score higher than black people on IQ tests, by a margin of 12-15 points. And there’s one man who has been made to pay the price for that fact — the scholar Charles Murray.

Murray didn’t come up with a hypothesis of racial disparity in intelligence testing. He simply co-wrote a book, The Bell Curve, that publicized a fact well known within the field of psychometrics, a fact that makes the rest of us feel tremendously uncomfortable.

Nobody bears more responsibility for the misunderstanding of Murray’s work than Gould, who reviewed The Bell Curve savagely in the New Yorker. The IQ tests couldn’t be explained away — here he is acknowledging the IQ gap in 1995 — but the validity of IQ testing could be challenged. That was no trouble for the old Marxist.

Gould should have known that he was dead wrong about his central claim — that general intelligence, or g, as psychologists call it, was unreal. In fact, “Psychologists generally agree that the greatest success of their field has been in intelligence testing,” biologist Bernard D. Davis wrote in the Public Interest in 1983, in a long excoriation of Gould’s strange ideas.

Psychologists have found that performance on almost any test of cognition will have some correlation to other tests of cognition, even in areas that might seem distant from pure logic, such as recognizing musical notes. The more demanding tests have a higher correlation, or a high g load, as they term it.

IQ is very closely related to this measure, and turns out to be extraordinarily predictive not just for how well one does on tests, but on all sorts of real-life outcomes.

Since the publication of The Bell Curve, the data have demonstrated not just those points, but that intelligence is highly heritable (around 50 to 80 percent, Murray says), and that there’s little that can be done to permanently change the part that’s dependent on the environment….

The liberal explainer website Vox took a swing at Murray earlier this year, publishing a rambling 3,300-word hit job on Murray that made zero references to the scientific literature….

Vox might have gotten the last word, but a new outlet called Quillette published a first-rate rebuttal this week, which sent me down a three-day rabbit hole. I came across some of the most troubling facts I’ve ever encountered — IQ scores by country — and then came across some more reassuring ones from Thomas Sowell, suggesting that environment could be the main or exclusive factor after all.

The classic analogy from the environment-only crowd is of two handfuls of genetically identical seed corn, one planted in Iowa and the other in the Mojave Desert. One group flourishes; the other is stunted. While all of the variation within one group will be due to genetics, its flourishing relative to the other group will be strictly due to environment.

Nobody doubts that the United States is richer soil than Equatorial Guinea, but the analogy doesn’t prove the case. The idea that there exists a mean for human intelligence and that all racial subgroups would share it given identical environments remains a metaphysical proposition. We may want this to be true quite desperately, but it’s not something we know to be true.

For all the lines of attack, all the brutal slander thrown Murray’s way, his real crime is having an opinion on this one key issue that’s open to debate. Is there a genetic influence on the IQ testing gap? Murray has written that it’s “likely” genetics explains “some” of the difference. For this, he’s been crucified….

Murray said [in a recent interview] that the assumption “that everyone is equal above the neck” is written into social policy, employment policy, academic policy and more.

He’s right, of course, especially as ideas like “disparate impact” come to be taken as proof of discrimination. There’s no scientifically valid reason to expect different ethnic groups to have a particular representation in this area or that. That much is utterly clear.

The universities, however, are going to keep hollering about institutional racism. They are not going to accept Murray’s views, no matter what develops. [Jon Cassidy, “Mau Mau Redux: Charles Murray Comes in for Abuse, Again“, The American Spectator, June 9, 2017]

And so it goes in the brave new world of alternative facts, most of which seem to come from the left. But the left, with its penchant for pseudo-intellectualism (“science” vs. science), calls it postmodernism:

Postmodernists … eschew any notion of objectivity, perceiving knowledge as a construct of power differentials rather than anything that could possibly be mutually agreed upon…. [S]cience therefore becomes an instrument of Western oppression; indeed, all discourse is a power struggle between oppressors and oppressed. In this scheme, there is no Western civilization to preserve—as the more powerful force in the world, it automatically takes on the role of oppressor and therefore any form of equity must consequently then involve the overthrow of Western “hegemony.” These folks form the current Far Left, including those who would be described as communists, socialists, anarchists, Antifa, as well as social justice warriors (SJWs). These are all very different groups, but they all share a postmodernist ethos. [Michael Aaron, “Evergreen State and the Battle for Modernity“, Quillette, June 8, 2017]


Other related reading (listed chronologically):

Molly Hensley-Clancy, “Asians With “Very Familiar Profiles”: How Princeton’s Admissions Officers Talk About Race“, BuzzFeed News, May 19, 2017

Warren Meyer, “Princeton Appears To Penalize Minority Candidates for Not Obsessing About Their Race“, Coyote Blog, May 24, 2017

B. Wineguard et al., “Getting Voxed: Charles Murray, Ideology, and the Science of IQ“, Quillette, June 2, 2017

James Thompson, “Genetics of Racial Differences in Intelligence: Updated“, The Unz Review: James Thompson Archive, June 5, 2017

Raymond Wolters, “We Are Living in a New Dark Age“, American Renaissance, June 5, 2017

F. Roger Devlin, “A Tactical Retreat for Race Denial“, American Renaissance, June 9, 2017

Scott Johnson, “Mugging Mr. Murray: Mr. Murray Speaks“, Power Line, June 9, 2017


Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Round Up the Usual Suspects
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race
Affirmative Action Comes Home to Roost
The IQ of Nations
Race and Social Engineering
Some Notes about Psychology and Intelligence