Science and Understanding

Some Thoughts about Probability

This is the final version of a post that was accidentally published in draft form on December 2, 2014.

This post is prompted by a reader’s comments about “The Compleat Monty Hall Problem.” I open with a discussion of probability and its inapplicability to single games of chance (e.g., one toss of a coin). With that as background, I then address the reader’s specific comments. I close with a discussion of the debasement of the meaning of probability.


What is probability? Is it a property of a thing (e.g., a coin), a property of an event involving a thing (e.g., a toss of the coin), or a description of the average outcome of a large number of such events (e.g., “heads” and “tails” will come up about the same number of times)? I take the third view.

What does it mean to say, for example, that there’s a probability of 0.5 (50 percent) that a tossed coin will come up “heads” (H), and a probability of 0.5 that it will come up “tails” (T)? Does such a statement have any bearing on the outcome of a single toss of a coin? No, it doesn’t. The statement is only a short way of saying that in a sufficiently large number of tosses, approximately half will come up H and half will come up T. The result of each toss, however, is a random event — it has no probability.
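This long-run reading is easy to illustrate with a short simulation (a sketch of my own; the function name and seed are illustrative): the relative frequency of heads settles near 0.5 only as the number of tosses grows, while any single toss yields exactly 0 or 1 heads, never "0.5".

```python
import random

def heads_fraction(n_tosses, seed=0):
    """Simulate n_tosses of a fair coin; return the fraction that come up heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# A single toss produces 0.0 or 1.0; the 0.5 figure emerges only in the long run.
for n in (1, 10, 100, 10_000, 1_000_000):
    print(n, heads_fraction(n))
```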

That is the standard, frequentist interpretation of probability, to which I subscribe. It replaced the classical interpretation, which is problematic:

If a random experiment can result in N mutually exclusive and equally likely outcomes and if N_A of these outcomes result in the occurrence of the event A, the probability of A is defined by

P(A) = N_A / N.

There are two clear limitations to the classical definition. Firstly, it is applicable only to situations in which there is only a ‘finite’ number of possible outcomes. But some important random experiments, such as tossing a coin until it rises heads, give rise to an infinite set of outcomes. And secondly, you need to determine in advance that all the possible outcomes are equally likely without relying on the notion of probability to avoid circularity….
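The classical definition is straightforward to state in code for a finite set of outcomes (a minimal sketch of my own; the function name is illustrative). Note that the equal-likelihood assumption must be supplied from outside the computation, which is exactly the circularity at issue:

```python
from fractions import Fraction

def classical_probability(outcomes, event):
    """Classical definition: P(A) = N_A / N over equally likely outcomes."""
    favorable = sum(1 for o in outcomes if event(o))
    return Fraction(favorable, len(outcomes))

# P(even face) on one die: 3 of 6 outcomes, stipulated equally likely.
print(classical_probability(range(1, 7), lambda face: face % 2 == 0))  # 1/2
```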

A similar charge has been laid against frequentism:

It is of course impossible to actually perform an infinity of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials. If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge the fact that we only can measure a probability with some error of measurement attached, we still get into problems as the error of measurement can only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular.

Not so:

  • There is no “real probability.” If there were, the classical theory would measure it, but the classical theory is circular, as explained above.
  • It is therefore meaningless to refer to “error of measurement.” Estimates of probability may well vary from one series of trials to another. But they will “tend to a fixed limit” over many trials (see below).

There are other approaches to probability. (See, for example, this, this, and this.) One approach is known as propensity probability:

Propensities are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate a given outcome type at a persistent rate. A central aspect of this explanation is the law of large numbers. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will (with high probability) be close to the probability of heads on each single toss. This law suggests that stable long-run frequencies are a manifestation of invariant single-case probabilities.

This is circular. You observe the relative frequencies of outcomes and, lo and behold, you have found the “propensity” that yields those relative frequencies.

Another approach is Bayesian probability:

Bayesian probability represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials.

An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.”

Or consider this account:

The Bayesian approach to learning is based on the subjective interpretation of probability. The value of the proportion p is unknown, and a person expresses his or her opinion about the uncertainty in the proportion by means of a probability distribution placed on a set of possible values of p….

“Level of certainty” and “subjective interpretation” mean “guess.” The guess may be “educated.” It’s well known, for example, that a balanced coin will come up heads about half the time, in the long run. But to say that “I’m 50-percent confident that the coin will come up heads” is to say nothing meaningful about the outcome of a single coin toss. There are as many “probabilities” for the outcome of a coin toss as there are bystanders who are willing to make a statement like “I’m x-percent confident that the coin will come up heads.” Which means that a single toss doesn’t have a probability, though it can be the subject of many opinions as to the outcome.

Returning to reality, Richard von Mises eloquently explains frequentism in Probability, Statistics and Truth (second revised English edition, 1957). Here are some excerpts:

The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations. [P. 11]

*     *     *

In games of dice, the individual event is a single throw of the dice from the box and the attribute is the observation of the number of points shown by the dice. In the game of “heads or tails”, each toss of the coin is an individual event, and the side of the coin which is uppermost is the attribute. [P. 11]

*     *     *

We must now introduce a new term…. This term is “the collective”, and it denotes a sequence of uniform events or processes which differ by certain observable attributes…. All the throws of dice made in the course of a game [of many throws] form a collective wherein the attribute of the single event is the number of points thrown…. The definition of probability which we shall give is concerned with ‘the probability of encountering a single attribute in a given collective’. [Pp. 11-12]

*     *     *

[A] collective is a mass phenomenon or a repetitive event, or, simply, a long sequence of observations for which there are sufficient reasons to believe that the relative frequency of the observed attribute would tend to a fixed limit if the observations were indefinitely continued. The limit will be called the probability of the attribute considered within the collective. [P. 15, emphasis in the original]

*     *     *

The result of each calculation … is always … nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? [P. 33, emphasis added]
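Von Mises’s “fixed limit” can be watched emerging in a simulated collective (a sketch of my own; names and seed are illustrative): the relative frequency of heads wobbles at small sample sizes and steadies as the sequence of observations lengthens.

```python
import random

def running_frequencies(n_tosses, checkpoints, seed=42):
    """Relative frequency of heads at successive checkpoints in one long sequence."""
    rng = random.Random(seed)
    cps = set(checkpoints)
    heads, freqs = 0, {}
    for i in range(1, n_tosses + 1):
        heads += rng.random() < 0.5
        if i in cps:
            freqs[i] = heads / i
    return freqs

# Early frequencies scatter; later ones crowd toward the limit, 0.5.
for n, f in running_frequencies(100_000, [10, 100, 1_000, 100_000]).items():
    print(f"{n:>7}: {f:.4f}")
```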

As stated earlier, it is simply meaningless to say that the probability of H or T coming up in a single toss is 0.5. Here’s the proper way of putting it: There is no reason to expect a single coin toss to have a particular outcome (H or T), given that the coin is balanced, the toss isn’t made in such a way as to favor H or T, and there are no other factors that might push the outcome toward H or T. But to say that P(H) is 0.5 for a single toss is to misrepresent the meaning of probability, and to assert something meaningless about a single toss.

If you believe that probabilities attach to a single event, you must also believe that a single event has an expected value. Let’s say, for example, that you’re invited to toss a coin once, for money. You get $1 if H comes up; you pay $1 if T comes up. As a believer in single-event probabilities, you “know” that you have a “50-50 chance” of winning or losing. Would you play a single game, which has an expected value of $0? If you would, it wouldn’t be because of the expected value of the game; it would be because you might win $1, and because losing $1 would mean little to you.

Now, change the bet from $1 to $1,000. The “expected value” of the single game remains the same: $0. But the size of the stake wonderfully concentrates your mind. You suddenly see through the “expected value” of the game. You are struck by the unavoidable fact that what really matters is the prospect of winning $1,000 or losing $1,000, because those are the only possible outcomes.

Your decision about playing a single game for $1,000 will depend on your finances (e.g., you may be very wealthy or very desperate for money) and your tolerance for risk (e.g., you may be averse to risk-taking or addicted to it). But — if you are rational — you will not make your decision on the basis of the fictional expected value of a single game, which derives from the fictional single-game probabilities of H and T. You will decide whether you’re willing and able to risk the loss of $1,000.
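The point about stakes can be made concrete with a simulation (a sketch of my own; the helper name and seed are illustrative): the per-game average of many fair bets hovers near the $0 “expected value,” but a single game settles at exactly plus or minus the stake, nothing in between.

```python
import random

def simulate_bets(stake, n_games, seed=1):
    """Net winnings from n_games fair coin bets at the given stake."""
    rng = random.Random(seed)
    return sum(stake if rng.random() < 0.5 else -stake for _ in range(n_games))

one_game = simulate_bets(1000, 1)
many_games = simulate_bets(1000, 100_000)
print(one_game)              # exactly +1000 or -1000
print(many_games / 100_000)  # per-game average, close to 0 over many games
```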

Do I mean to say that probability is irrelevant to a single play of the Monty Hall problem, or to a choice between games of chance? If you’re a proponent of propensity, you might say that in the Monty Hall game the prize has a propensity to be behind the other unopened door (i.e., the door not chosen by you and not opened by the host). But does that tell you anything about the actual location of the prize in a particular game? No, because the “propensity” merely reflects the outcomes of many games; it says nothing about a single game, which (like Schrödinger’s cat) can have only a single outcome (prize or no prize), not 2/3 of one.

If you’re a proponent of Bayesian probability, you might say that you’re confident with “probability” 2/3 that the prize is behind the other unopened door. But that’s just another way of saying that contestants win 2/3 of the time if they always switch doors. That’s the background knowledge that you bring to your statement of confidence. But someone who’s ignorant of the Monty Hall problem might be confident with 1/2 “probability” that the prize is behind the other unopened door. And he could be right about a particular game, despite his lower level of confidence.

So, yes, I do mean to say that there’s no such thing as a single-case probability. You may have an opinion (or a hunch or a guess) about the outcome of a single game, but it’s only your opinion (hunch, guess). In the end, you have to bet on a discrete outcome. If it gives you comfort to switch to the unopened door because that’s the winning door 2/3 of the time (according to classical probability) and about 2/3 of the time (according to the frequentist interpretation), be my guest. I might do the same thing, for the same reason: to be comfortable about my guess. But I’d be able to separate my psychological need for comfort from the reality of the situation:

A single game is just one event in the long series of events from which probabilities emerge. I can win the Monty Hall game about 2/3 of the time in repeated plays if I always switch doors. But that probability has nothing to do with a single game, the outcome of which is a random occurrence.


I now turn to the reader’s specific comments, which refer to “The Compleat Monty Hall Problem.” (You should read it before continuing with this post if you’re unfamiliar with the Monty Hall problem or my analysis of it.) The reader’s comments — which I’ve rearranged slightly — are in italic type. (Here and there, I’ve elaborated on the reader’s comments; my elaborations are placed in brackets and set in roman type.) My replies are in bold type.

I find puzzling your statement that a probability cannot “describe” a single instance, eg one round of the Monty Hall problem.

See my introductory remarks.

While the long run result serves to prove the probability of a particular outcome, that does not mean that that probability may not be assigned to a smaller number of instances. That is the beauty of probability.

The long-run result doesn’t “prove” the probability of a particular outcome; it determines the relative frequency of occurrence of that outcome — and nothing more. There is no probability associated with a “smaller number of instances,” certainly not 1 instance. Again, see my introductory remarks.

If the [Monty Hall] game is played once [and I don’t switch doors], I should budget for one car [the prize that’s usually cited in discussions of the Monty Hall problem], and if it is played 100 times [and I never switch doors], I budget for 33….

“Budget” seems to refer to the expected number of cars won, given the number of plays of the game and a strategy of never switching doors. The reader contradicts himself by “budgeting” for 1 car in a single play of the Monty Hall problem. In doing so, he is being unfaithful to his earlier statement: “While the long run result serves to prove the probability of a particular outcome, that does not mean that that probability may not be assigned to a smaller number of instances.” Removing the double negatives, we get “probability may be assigned to a smaller number of instances.” Given that 1 is a smaller number than 100, it follows, by the reader’s logic, that his “budget” for a single game should be 1/3 car (assuming, as he does, a strategy of not switching doors). The reader’s problem here is his insistence that a probability expresses something other than the long-run relative frequency of a particular outcome.

To justify your contrary view, you ask how you can win 2/3 of a car [the long-run average if the contestant plays many games and always switches doors]; you can win or you can not win, you say, you cannot partly win. Is this not sophistry or a straw man, sloppy reasoning at best, to convince uncritical thinkers who agree that you cannot drive 2/3 of a car?

My “contrary view” of what? My view of statistics isn’t “contrary.” Rather, it’s in line with the standard, frequentist interpretation.

It’s a simple statement of obvious fact that you can’t win 2/3 of a car. There’s no “sophistry” or “straw man” about it. If you can’t win 2/3 of a car, what does it mean to assign a probability of 2/3 to winning a car by adopting the switching strategy? As discussed above, it means only one thing: A long series of games will be won about 2/3 of the time if all contestants adopt the switching strategy.

On what basis other than an understanding of probability would you be optimistic at the prospect of being offered one chance of picking a single Golden Ball worth $1m from a bag of just three balls and pessimistic about your prospects of picking the sole Golden Ball from a barrel of 10,000 balls?

The only difference between the two games is that on the one hand you have a decent (33%) chance of winning and on the other hand you have a lousy (0.01%) chance. Isn’t it these disparate probabilities that give you cause for optimism or pessimism, as the case may be?

“Optimism” and “pessimism” — like “comfort” — are subjective terms for ill-defined states of mind. There are persons who will be “optimistic” about a given situation, and persons who will be “pessimistic” about the same situation. For example: There are hundreds of millions of persons who are “optimistic” about winning various lotteries, even though they know that the grand prize in each lottery will be assigned to only one of millions of possible numbers. By the same token, there are hundreds of millions of persons who, knowing the same facts, refuse to buy lottery tickets because they are “pessimistic” about the likely outcome of doing so. But “optimism” and “pessimism” — like “comfort” — have nothing to do with probability, which isn’t an attribute of a single game.

If probability cannot describe the chances of each of the two one-off “games”, does that mean I could not provide a mathematical basis for my advice that you play the game with 3 balls (because you have a one-in-three chance of winning) rather than the ball in the barrel game which offers a one in ten thousand chance of winning?

You can provide a mathematical basis for preferring the game with 3 balls. But you must, in honesty, state that the mathematical basis applies only to many games, and that the outcome of a single game is unpredictable.
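That many-games basis for preferring one ball game over the other can be sketched in a simulation (my own illustration; names and seed are arbitrary): over a long series the win frequencies separate cleanly, while any single game of either kind ends simply in a win or a loss.

```python
import random

def play_ball_game(n_balls, n_games, seed=3):
    """Win frequency when drawing one ball (one golden) from n_balls, over n_games."""
    rng = random.Random(seed)
    wins = sum(rng.randrange(n_balls) == 0 for _ in range(n_games))
    return wins / n_games

print(play_ball_game(3, 100_000))       # near 1/3 over many games
print(play_ball_game(10_000, 100_000))  # near 1/10,000 over many games
```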

It might be that probability cannot reliably describe the actual outcome of a single event because the sample size of 1 game is too small to reflect the long-run average that proves the probability. However, comparing the probabilities for winning the two games describes the relative likelihood of winning each game and informs us as to which game will more likely provide the prize.

If not by comparing the probability of winning each game, how do we know which of the two games has a better chance of delivering a win? One cannot compare the probability of selecting the Golden Ball from each of the two games unless the probability of each game can be expressed, or described, as you say.

Here, the reader comes close to admitting that a probability can’t describe the (expected) outcome of a single event (“reliably” is superfluous). But he goes off course when he says that “comparing the probabilities for the two games … informs us as to which game will more likely provide the prize.” That statement is true only for many plays of the two ball games. It has nothing to do with a single play of either ball game. The choice there must be based on subjective considerations: “optimism,” “pessimism,” “comfort,” a guess, a hunch, etc.

Can I not tell a smoker that their lifetime risk of developing lung cancer is 23% even though smokers either get lung cancer or they do not? No one gets 23% cancer. Did someone say they did? No one has 0.2 of a child either but, on average, every family in a census did at one stage have 2.2 children.

No, the reader may not (honestly) tell a smoker that his lifetime risk of developing lung cancer is 23 percent, or any specific percentage. The smoker has one life to live; he will either get lung cancer or he will not. What the reader may honestly tell the smoker is that statistics based on the fates of a large number of smokers over many decades indicate that a certain percentage of those smokers contracted lung cancer. The reader should also tell the smoker that the frequency of the incidence of lung cancer in a large population varies according to the number of cigarettes smoked daily. (According to Wikipedia: “For every 3–4 million cigarettes smoked, one lung cancer death occurs.”) Further, the reader should note that the incidence of lung cancer also varies with the duration of smoking at various rates, and with genetic and environmental factors that vary from person to person.

As for family size, given that the census counts only post-natal children (who come in integer values), how could “every family in a census … at one stage have 2.2 children”? The average number of children across a large number of families may be 2.2, but surely the reader knows that “every family” did not somehow have 2.2 children “at one stage.” And surely the reader knows that average family size isn’t a probabilistic value, one that measures the relative frequency of an event (e.g., “heads”) given many repetitions of the same trial (e.g., tossing a fair coin), under the same conditions (e.g., no wind blowing). Each event is a random occurrence within the long string of repetitions. The reader may have noticed that family size is in fact strongly determined (especially in Western countries) by non-random events (e.g., deliberate decisions by couples to reproduce, or not). In sum, probabilities may represent averages, but not all (or very many) averages represent probabilities.

If not [by comparing probabilities], how do we make a rational recommendation and justify it in terms the board of a think-tank would accept? [This seems to be a reference to my erstwhile position as an officer of a defense think-tank.]

Here, the reader extends an inappropriate single-event view of probability to an inappropriate unique-event view. I would not have gone before the board and recommended a course of action — such as bidding on a contract for a new line of work — based on a “probability of success.” That would be an absurd statement to make about an event that is defined by unique circumstances (e.g., the composition of the think-tank’s staff at that time, the particular kind of work to be done, the qualifications of prospective competitors’ staffs). I would simply have spelled out the facts and the uncertainties. And if I had a hunch about the likely success or failure of the venture, I would have recommended for or against it, giving specific reasons for my hunch (e.g., the relative expertise of our staff and competitors’ staffs). But it would have been nothing more than a hunch; it wouldn’t have been my (impossible) assessment of the probability of a unique event.

Boards (and executives) don’t base decisions on (non-existent) probabilities; they base decisions on unique sets of facts, and on hunches (preferably hunches rooted in knowledge and experience). Those hunches may sometimes be stated as probabilities, as in “We’ve got a 50-50 chance of winning the contract.” (Though I would never say such a thing.) But such statements are only idiomatic, and have nothing to do with probability as it is properly understood.


The reader’s comments reflect the popular debasement of the meaning of probability. The word has been adapted to many inappropriate uses: the probability of precipitation (a quasi-subjective concept), the probability of success in a business venture (a concept that requires the repetition of unrepeatable events), the probability that a batter will get a hit in his next at-bat (ditto, given the many unique conditions that attend every at-bat), and on and on. The effect of all such uses (and, often, the purpose of such uses) is to make a guess seem like a “scientific” prediction.


The Harmful Myth of Inherent Equality

Malcolm Gladwell popularized the 10,000-hour rule in Outliers: The Story of Success. According to the Wikipedia article about the book,

…Gladwell repeatedly mentions the “10,000-Hour Rule”, claiming that the key to success in any field is, to a large extent, a matter of practicing a specific task for a total of around 10,000 hours….

…[T]he “10,000-Hour Rule” [is] based on a study by Anders Ericsson. Gladwell claims that greatness requires enormous time, using the source of The Beatles’ musical talents and Gates’ computer savvy as examples….

Reemphasizing his theme, Gladwell continuously reminds the reader that genius is not the only or even the most important thing when determining a person’s success….

For “genius” read “genes.” Gladwell’s borrowed theme reinforces the left’s never-ending effort to sell the idea that all men and women are born with the same potential. And, of course, it’s the task of the almighty state to ensure that outcomes (e.g., housing, jobs, college admissions, and income) conform to nature’s design.

I encountered the 10,000-hour rule several years ago, and referred to it in this post, where I observed that “outcomes are skewed … because talent is distributed unevenly.” By “talent” I mean inherent ability of a particular kind — high intelligence and athletic prowess, for example — the possession of which obviously varies from person to person and (on average) from gender to gender and race to race. Efforts to deny such variations are nothing less than anti-scientific. They exemplify the left’s penchant for magical thinking.

There’s plenty of evidence of the strong link between inherent ability and success in any endeavor. I’ve offered some evidence here, here, here, and here. Now comes “Practice Does Not Make Perfect” (Slate, September 28, 2014). The piece veers off into social policy (with a leftish tinge) and an anemic attempt to rebut the race-IQ correlation, but it’s good on the facts. First, the authors frame the issue:

…What makes someone rise to the top in music, games, sports, business, or science? This question is the subject of one of psychology’s oldest debates.

The “debate” began sensibly enough:

In the late 1800s, Francis Galton—founder of the scientific study of intelligence and a cousin of Charles Darwin—analyzed the genealogical records of hundreds of scholars, artists, musicians, and other professionals and found that greatness tends to run in families. For example, he counted more than 20 eminent musicians in the Bach family. (Johann Sebastian was just the most famous.) Galton concluded that experts are “born.”

Then came the experts-are-made view and the 10,000-hour rule:

Nearly half a century later, the behaviorist John Watson countered that experts are “made” when he famously guaranteed that he could take any infant at random and “train him to become any type of specialist [he] might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents.”

The experts-are-made view has dominated the discussion in recent decades. In a pivotal 1993 article published in Psychological Review—psychology’s most prestigious journal—the Swedish psychologist K. Anders Ericsson and his colleagues proposed that performance differences across people in domains such as music and chess largely reflect differences in the amount of time people have spent engaging in “deliberate practice,” or training exercises specifically designed to improve performance…. For example, the average for elite violinists was about 10,000 hours, compared with only about 5,000 hours for the least accomplished group. In a second study, the difference for pianists was even greater—an average of more than 10,000 hours for experts compared with only about 2,000 hours for amateurs. Based on these findings, Ericsson and colleagues argued that prolonged effort, not innate talent, explained differences between experts and novices.

But reality has a way of making itself known:

[R]ecent research has demonstrated that deliberate practice, while undeniably important, is only one piece of the expertise puzzle—and not necessarily the biggest piece. In the first study to convincingly make this point, the cognitive psychologists Fernand Gobet and Guillermo Campitelli found that chess players differed greatly in the amount of deliberate practice they needed to reach a given skill level in chess. For example, the number of hours of deliberate practice to first reach “master” status (a very high level of skill) ranged from 728 hours to 16,120 hours. This means that one player needed 22 times more deliberate practice than another player to become a master.

A recent meta-analysis by Case Western Reserve University psychologist Brooke Macnamara and her colleagues (including the first author of this article for Slate) came to the same conclusion. We searched through more than 9,000 potentially relevant publications and ultimately identified 88 studies that collected measures of activities interpretable as deliberate practice and reported their relationships to corresponding measures of skill…. [P]eople who reported practicing a lot tended to perform better than those who reported practicing less. But the correlations were far from perfect: Deliberate practice left more of the variation in skill unexplained than it explained. For example, deliberate practice explained 26 percent of the variation for games such as chess, 21 percent for music, and 18 percent for sports. So, deliberate practice did not explain all, nearly all, or even most of the performance variation in these fields. In concrete terms, what this evidence means is that racking up a lot of deliberate practice is no guarantee that you’ll become an expert. Other factors matter.
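As a quick arithmetic aside (my own, and it assumes the quoted percentages are variance-explained figures, i.e., squared correlations, as is standard in such meta-analyses), the corresponding correlations between practice and skill are modest:

```python
import math

# Variance explained is the square of the correlation coefficient (r^2),
# so the Slate figures translate into correlations of roughly 0.4-0.5.
variance_explained = {"games": 0.26, "music": 0.21, "sports": 0.18}
for domain, r_squared in variance_explained.items():
    r = math.sqrt(r_squared)
    print(f"{domain}: r^2 = {r_squared:.2f} -> correlation r = {r:.2f}")
```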

Genes are among the other factors:

There is now compelling evidence that genes matter for success, too. In a study led by the King’s College London psychologist Robert Plomin, more than 15,000 twins in the United Kingdom were identified through birth records and recruited to perform a battery of tests and questionnaires, including a test of drawing ability in which the children were asked to sketch a person. In a recently published analysis of the data, researchers found that there was a stronger correspondence in drawing ability for the identical twins than for the fraternal twins. In other words, if one identical twin was good at drawing, it was quite likely that his or her identical sibling was, too. Because identical twins share 100 percent of their genes, whereas fraternal twins share only 50 percent on average, this finding indicates that differences across people in basic artistic ability are in part due to genes. In a separate study based on this U.K. sample, well over half of the variation between expert and less skilled readers was found to be due to genes.

In another study, a team of researchers at the Karolinska Institute in Sweden led by psychologist Miriam Mosing had more than 10,000 twins estimate the amount of time they had devoted to music practice and complete tests of basic music abilities, such as determining whether two melodies carry the same rhythm. The surprising discovery of this study was that although the music abilities were influenced by genes—to the tune of about 38 percent, on average—there was no evidence they were influenced by practice. For a pair of identical twins, the twin who practiced music more did not do better on the tests than the twin who practiced less. This finding does not imply that there is no point in practicing if you want to become a musician. The sort of abilities captured by the tests used in this study aren’t the only things necessary for playing music at a high level; things such as being able to read music, finger a keyboard, and commit music to memory also matter, and they require practice. But it does imply that there are limits on the transformative power of practice. As Mosing and her colleagues concluded, practice does not make perfect.

This is bad news for the blank-slate crowd on the left:

Ever since John Locke laid the groundwork for the Enlightenment by proposing that we are born as tabula rasa—blank slates—the idea that we are created equal has been the central tenet of the “modern” worldview. Enshrined as it is in the Declaration of Independence as a “self-evident truth,” this idea has special significance for Americans. Indeed, it is the cornerstone of the American dream—the belief that anyone can become anything they want with enough determination….

Wouldn’t it be better to just act as if we are equal, evidence to the contrary notwithstanding? That way, no people will be discouraged from chasing their dreams—competing in the Olympics or performing at Carnegie Hall or winning a Nobel Prize. The answer is no, for two reasons. The first is that failure is costly, both to society and to individuals. Pretending that all people are equal in their abilities will not change the fact that a person with an average IQ is unlikely to become a theoretical physicist, or the fact that a person with a low level of music ability is unlikely to become a concert pianist. It makes more sense to pay attention to people’s abilities and their likelihood of achieving certain goals, so people can make good decisions about the goals they want to spend their time, money, and energy pursuing…. Pushing someone into a career for which he or she is genetically unsuited will likely not work.

With regard to the latter point, Richard Sander has shown that aspiring blacks are chief among the victims of the form of “pushing” known as affirmative action. A few years ago, Sander was a guest blogger at The Volokh Conspiracy, where he posted thrice on the subject. In his first post, Sander writes:

As some readers will recall, a little more than seven years ago I published an analysis of law school affirmative action in the Stanford Law Review. The article was the first to present detailed data on the operation and effects of racial preferences in law schools (focusing on blacks).

I also laid out evidence suggesting that large preferences seemed to be worsening black outcomes. I argued that this was plausibly due to a “mismatch effect”; students receiving large preferences (for whatever reason) were likely to find themselves in academic environments where they had to struggle just to keep up; professor instruction would typically be aimed at the “median” student, so students with weaker academic preparation would tend to fall behind, and, even if they did not become discouraged and give up, would tend to learn less than they would have learned in an environment where their level of academic preparation was closer to the class median.

I suggested that the “mismatch effect” could explain as much as half of the black-white gap in first-time bar passage rates (the full gap is thirty to forty percentage points). I also suggested that “mismatch” might so worsen black outcomes that, on net, contemporary affirmative action was not adding to the total number of black lawyers, and might even be lowering the total number of new, licensed black attorneys.

This is from Sander’s second post:

Some of the most significant recent work on affirmative action concerns a phenomenon called “science mismatch”. The idea behind science mismatch is very intuitive: if you are a high school senior interested in becoming, for example, a chemist, you may seriously harm your chances of success by attending a school where most of the other would-be chemists have stronger academic preparation than you do. Professors will tend to pitch their class at the median student, not you; and if you struggle or fall behind in the first semester of inorganic chemistry, you will be in even worse shape in the second semester, and in very serious trouble when you hit organic chemistry. You are likely to get bad grades and to either transfer out of chemistry or fail to graduate altogether….

Duke economists Peter Arcidiacono, Esteban Aucejo, and Ken Spenner last year completed a study that looked at a number of ways that differences in admissions standards at Duke affected academic outcomes. In one of many useful analyses they did, they found that 54% of black men at Duke who, as freshmen, had been interested in STEM fields or economics, had switched out of those fields before graduation; the comparative rate for white men was 8%. Importantly, they found that “these cross-race differences in switching patterns can be fully explained by differences in academic background.” In other words, preferences – not race – was the culprit.

In research conducted by FTC economist Marc Luppino and me, using data from the University of California, we have found important peer effects and mismatch effects that affect students of all races; our results show that one’s chances of completing a science degree fall sharply, at a given level of academic preparation, as one attends more and more elite schools within the UC system. At Berkeley, there is a seven-fold difference in STEM degree completion between students with high and low pre-college credentials.

As is always the case with affirmative action, ironies abound. Although young blacks are about one-seventh as likely as young whites to eventually earn a Ph.D. in STEM fields, academically strong blacks in high school are more likely than similar whites to aspire to science careers. And although a U.S. Civil Rights Commission report in 2010 documented the “science mismatch” phenomenon in some detail, President Obama’s new initiative to improve the nation’s production of scientists neither recognizes nor addresses mismatch….

Science mismatch is, of course, relevant to the general affirmative action debate in showing that preferences can boomerang on their intended beneficiaries. But it also has a special relevance to Fisher v. University of Texas. The university’s main announced purpose in reintroducing racial preferences in 2004 was to increase “classroom” diversity. The university contended that, even though over a fifth of its undergraduates were black or Hispanic, many classrooms had no underrepresented minorities. It sought to use direct (and very large) racial preferences to increase campus URM numbers and thus increase the number of URMs in classes that lacked them. But science mismatch shows that this strategy, too, can be self-defeating. The larger a university’s preferences, the more likely it is that preferenced students will have trouble competing in STEM fields and other majors that are demanding and grade sternly. These students will tend to drop out of the tough fields and congregate in comparatively less demanding ones. Large preferences, in other words, can increase racial segregation across majors and courses within a university, and thus hurt classroom diversity.

And this is from Sander’s third post:

[In the previous post] I discussed a body of research – all of it uncontroverted – that documents a serious flaw in affirmative action programs pursued by elite colleges. Students who receive large preferences and arrive on campus hoping to major in STEM fields (e.g., Science, Technology, Engineering and Math) tend to migrate out of those fields at very high rates, or, if they remain in those fields, often either fail to graduate or graduate with very low GPAs. There is thus a strong tension between receiving a large admissions preference to a more elite school, and one’s ability to pursue a STEM career.

Is it possible for contemporary American universities to engage constructively with this type of research? …

Colleges and universities are committed to the mythology that diversity happens merely because they want it and put resources into it, and that all admitted students arrive with all the prerequisites necessary to flourish in any way they choose. Administrators work hard to conceal the actual differences in academic preparation that almost invariably accompany the aggressive use of preferences. Any research that documents the operation and effects of affirmative action therefore violates this “color-blind” mythology and accompanying norms; minority students are upset, correctly realizing that either the research is wrong or that administrators have misled them. In this scenario, administrators invariably resort to the same strategy: dismiss the research without actually lying about it; reassure the students that the researchers are misguided, but that the university can’t actually punish the researchers because of “academic freedom”….

Leftists — academic and other — cannot abide the truth when it refutes their prejudices. Affirmative action, as it turns out, is harmful to aspiring blacks. Most leftists will deny it because their leftist faith — their magical thinking — is more important to them than the well-being of those whose cause they claim to champion.


Evolution, Culture, and “Diversity”

The “satirical and opinionated,” but well-read, Fred Reed poses some questions about evolution. He wisely asks John Derbyshire to answer them. In the absence of a response from Derbyshire, I will venture some answers, and then offer some general observations about evolution and two closely related subjects: culture and “diversity.” (The “sneer quotes” mean that “diversity,” as used by leftists, really means favoritism toward their clientele — currently blacks and Hispanics, especially illegal immigrants).

Herewith, Reed’s questions (abridged, in italics) and my answers:

(1) In evolutionary principle, traits that lead to more surviving children proliferate. In practice, when people learn how to have fewer or no children, they do…. [W]hat selective pressures lead to a desire not to reproduce, and how does this fit into a Darwinian framework?

As life becomes less fraught for homo sapiens, reproduction becomes less necessary. First, the ability of the species (and families) to survive and thrive becomes less dependent on sheer numbers and more dependent on technological advances. Second (and consequently), more couples are able to trade the time and expense of child-rearing for what would have been luxuries in times past (e.g., a nicer home, bigger cars, more luxurious vacations, a more comfortable retirement).

As suggested by the second point, human behavior isn’t determined solely by genes; it has a strong cultural component. There is an interplay between genes and culture, as I’ll discuss, but culture can (and does) influence evolution. An emergent aspect of culture is an inverse relationship between the number of children and social status. Social status is enhanced by the acquisition and display of goods made affordable by limiting family size.

(2) Morality. In evolution as I understand it, there are no absolute moral values: Morals evolved as traits allowing social cooperation, conducing to the survival of the group and therefore to the production of more surviving children….

Question: Why should I not indulge my hobby of torturing to death the severely genetically retarded? This would seem beneficial. We certainly don’t want them to reproduce, they use resources better invested in healthy children, and it makes no evolutionary difference whether they die quietly or screaming.

Here Reed clearly (if tacitly) acknowledges the role of culture as a (but not the) determinant of behavior. Morals may “evolve,” but not in the same way as physiological characteristics. Morals may nevertheless influence the survival of a species, as Reed suggests. Morals may also influence biological evolution to the extent that selective mating favors those who adhere to a beneficial morality, and yields offspring who are genetically predisposed toward that morality.

Religion — especially religion in the Judeo-Christian tradition — fosters beneficial morality. This is from David Sloan Wilson’s “Beyond Demonic Memes: Why Richard Dawkins Is Wrong about Religion” (July 4, 2007):

On average, religious believers are more prosocial than non-believers, feel better about themselves, use their time more constructively, and engage in long-term planning rather than gratifying their impulsive desires. On a moment-by-moment basis, they report being more happy, active, sociable, involved and excited. Some of these differences remain even when religious and non-religious believers are matched for their degree of prosociality. More fine-grained comparisons reveal fascinating differences between liberal vs. conservative protestant denominations, with more anxiety among the liberals and conservatives feeling better in the company of others than when alone…

In Darwin’s Cathedral, I initiated a survey of religions drawn at random from the 16-volume Encyclopedia of World Religions, edited by the great religious scholar Mircea Eliade. The results are described in an article titled “Testing Major Evolutionary Hypotheses about Religion with a Random Sample,” which was published in the journal Human Nature and is available on my website. The beauty of random sampling is that, barring a freak sampling accident, valid conclusions for the sample apply to all of the religions in the encyclopedia from which the sample was taken. By my assessment, the majority of religions in the sample are centered on practical concerns, especially the definition of social groups and the regulation of social interactions within and between groups. New religious movements usually form when a constituency is not being well served by current social organizations (religious or secular) in practical terms and is better served by the new movement. The seemingly irrational and otherworldly elements of religions in the sample usually make excellent practical sense when judged by the only gold standard that matters from an evolutionary perspective — what they cause the religious believers to do.

What religions do (on the whole) is to cause their adherents to live more positive and productive lives, as Wilson notes in the first part of the quotation.

Despite the decline of religious observance in the West, most Westerners are still strongly influenced by the moral tenets of the Judeo-Christian tradition. Why? Because the observance of those traditions fosters beneficial cooperation, and beneficial cooperation fosters happiness and prosperity. (For a detailed exposition of this point, see “Religion and Liberty” in “Facets of Liberty.”)

Therefore, one answer to Reed’s rhetorical question — “Why should I not indulge my hobby of torturing to death the severely genetically retarded?” — is that such behavior doesn’t comport with Judeo-Christian morality. A second answer is that empathy causes most people to eschew actions that cause suffering in others (except in the defense of self, kin, and country), and empathy may be a genetic (i.e., evolutionary) trait.

(3) Abiogenesis. This is not going to be a fair question as there is no way anyone can know the answer, but I pose it anyway. The theory, which I cannot refute, is that a living, metabolizing, reproducing gadget formed accidentally in the ancient seas. Perhaps it did. I wasn’t there. It seems to me, though, that the more complex one postulates the First Critter to have been, the less likely, probably exponentially so, it would have been to form. The less complex one postulates it to have been, the harder to explain why biochemistry, which these days is highly sophisticated, cannot reproduce the event. Question: How many years would have to pass without replication of the event, if indeed it be not replicated, before one might begin to suspect that it didn’t happen?

How many years? 250 million to 1 billion. That’s roughly the length of time between the formation of Earth and the beginning of life, according to current estimates. (See the first paragraph of the Wikipedia article about abiogenesis.) That could be plenty of time for untold billions of random interactions of matter to have produced a life form that could, with further development, reproduce and become more complex. But who knows? And even if someone in a lab somewhere happens to produce such a “critter,” it may well be different than Reed’s First Critter.

I certainly hew to the possibility that seems to lurk in Reed’s mind; namely, that the First Critter was the handiwork of the Creator, or at least came to be because of the physical laws established by the Creator. (See “Existence and Creation,” possibility 5.)

(4) … Straight-line evolution, for example in which Eohippus gradually gets larger until it reaches Clydesdale, is plausible because each intervening step is a viable animal. In fact this is just selective breeding. Yet many evolutionary transformations seem to require intermediate stages that could not survive.

For example there are two-cycle bugs (insects, arachnids) that lay eggs that hatch into tiny replicas of the adults, which grow, lay eggs, and repeat the cycle. The four-cycle bugs go through egg, larva, pupa, adult. Question: What are the viable steps needed to evolve from one to the other? Or from anything to four-cycle? …

Lacking the technical wherewithal requisite to a specific answer, I fall back on time — the billions of years encompassed in evolution.

(5) … Mr. Derbyshire believes strongly in genetic determinism—that we are what we are and behave as we do because of genetic programming….

… A physical (to include chemical) system cannot make decisions. All subsequent states of a physical system are determined by the initial state. So, if one accepts the electrochemical premise (which, again, seems to be correct) it follows that we do not believe things because they are true, but because we are predestined to believe them. Question: Does not genetic determinism (with which I have no disagreement) lead to a  paradox: that the thoughts we think we are thinking we only think to be thoughts when they are really utterly predetermined by the inexorable working of physics and chemistry?

This smacks of Cartesian dualism, the view that “there are two fundamental kinds of substance: mental and material.” It seems to me easier to believe that there is only the material substance of the nervous system (with its focal point in the brain). Experimental psychologists have amply documented the links between brain activity (i.e., mental states) and behavior.

The real question is whether behavior is strictly determined by genes. The obvious answer is “no” because every instance of behavior is conditioned by immediate circumstances, which are not always (usually?) determined by the actor.

Further, free will is consistent with a purely physiological interpretation of behavioral decisions:

Suppose I think that I might want to eat some ice cream. I go to the freezer compartment and pull out an unopened half-gallon of vanilla ice cream and an unopened half-gallon of chocolate ice cream. I can’t decide between vanilla, chocolate, some of each, or none. I ask a friend to decide for me by using his random-number generator, according to rules of his creation. He chooses the following rules:

  • If the random number begins in an odd digit and ends in an odd digit, I will eat vanilla.
  • If the random number begins in an even digit and ends in an even digit, I will eat chocolate.
  • If the random number begins in an odd digit and ends in an even digit, I will eat some of each flavor.
  • If the random number begins in an even digit and ends in an odd digit, I will not eat ice cream.

Suppose that the number generated by my friend begins in an even digit and ends in an even digit: the choice is chocolate. I act accordingly.

I didn’t inevitably choose chocolate because of events that led to the present state of my body’s chemistry, which might otherwise have dictated my choice. That is, I broke any link between my past and my choice about a future action.

I call that free will.
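The decision procedure described above is mechanical enough to sketch in a few lines of Python. This is a minimal illustration of the four digit-parity rules; the function name and the encoding of outcomes as strings are my own assumptions, not anything from the post:

```python
import random

def flavor_choice(n: int) -> str:
    """Apply the four digit-parity rules to the friend's random number."""
    digits = str(n)
    first_odd = int(digits[0]) % 2 == 1
    last_odd = int(digits[-1]) % 2 == 1
    if first_odd and last_odd:
        return "vanilla"
    if not first_odd and not last_odd:
        return "chocolate"
    if first_odd:              # odd first digit, even last digit
        return "some of each"
    return "no ice cream"      # even first digit, odd last digit

# The "impartial friend": an external random number settles the choice,
# severing any link between the chooser's past bodily states and the outcome.
print(flavor_choice(random.randint(10, 99)))
```

The point of the sketch is that the choice depends only on the externally generated number, not on any prior state of the chooser — which is the sense in which the post calls the result free will.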

I suspect that our brains are constructed in such a way as to produce the same kind of result in many situations, though certainly not in all situations. That is, we have within us the equivalent of an impartial friend and an (informed) decision-making routine, which together enable us to exercise something we can call free will.

My suspicion is well-founded. The brains of human beings are complex, and consist of many “centers” that perform different functions. That complexity enables self-awareness; a person may “stand back” from himself and view his actions critically. Human beings, in other words, aren’t simple machines that operate according to hard-wired routines.

(6) … In principle, traits spread through a population because they lead to the having of greater numbers of children….

… Genes already exist in populations for extraordinary superiority of many sorts—for the intelligence of Stephen Hawking, the body of Mohammed Ali, for 20/5 vision, for the astonishing endurance in running of the Tarahumara Indians, and so on. To my unschooled understanding, these traits offer clear and substantial advantage in survival and reproduction, yet they do not become universal, or even common. The epicanthic fold does. Question: Why do seemingly trivial traits proliferate while clearly important ones do not?

First, survival depends on traits that are suited to the environment in which a group finds itself. Not all — or even most — challenges to survival demand the intelligence of a Hawking, the body of an Ali, etc. Further, cooperative groups find that acting together they possess high intelligence of a kind that’s suited to the group’s situation. Similarly, the strength of many is usually sufficient to overcome obstacles and meet challenges.

Second, mating isn’t driven entirely by a focus on particular traits — high intelligence, superior athletic ability, etc. Such traits therefore remain relatively rare unless they are essential to survival, which might explain the “astonishing endurance in running of the Tarahumara Indians.”

(7) … Looking at the human body, I see many things that appear to have no relation to survival or more vigorous reproduction, and that indeed work against it, yet are universal in the species. For example, the kidneys contain the nervous tissue that makes kidney stones agonizingly painful, yet until recently the victim has been able to do nothing about them….

What is the reproductive advantage of crippling pain (migraines can be crippling) about which pre-recently, the sufferer could do nothing?

What is the reproductive advantage of Tay-Sachs disease, which is found disproportionately among Ashkenazi Jews? Here is a reasonable hypothesis:

Gregory Cochran proposes that the mutant alleles causing Tay–Sachs confer higher intelligence when present in carrier form, and provided a selective advantage in the historical period when Jews were restricted to intellectual occupations.[9][10] Peter Frost argues for a similar heterozygote advantage for mutant alleles being responsible for the prevalence of Tay Sachs disease in Eastern Quebec.[11]

In sum, the bad sometimes goes with the good. That’s just the way evolution is. In the case of migraines, it may be that those who are prone to them are also in some way attractive as mates. Who knows? But if every genetic disadvantage worked against survival, human beings would have become extinct long ago.

(8) Finally, the supernatural. Unfairly, as it turned out, in regard to religion I had expected Mr. Derbyshire to strike the standard “Look at me, I’m an atheist, how advanced I am” pose. I was wrong. In fact he says that he believes in a God. (Asked directly, he responded, “Yes, to my own satisfaction, though not necessarily to yours.”) His views are reasoned, intellectually modest, and, though I am not a believer, I see nothing with which to quarrel, though for present purposes this is neither here nor there. Question: If one believes in or suspects the existence of God or gods, how does one exclude the possibility that He, She, or It meddles in the universe—directing evolution, for example?

A belief in gods would seem to leave the door open to Intelligent Design, the belief that the intricacies of life came about not by accident but were crafted by Somebody or Something. The view, anathema in evolutionary circles, is usually regarded as emanating from Christianity, and usually does….

In the piece by Derbyshire to which Reed links, Derbyshire writes:

I belong to the 16 percent of Americans who, in the classification used for a recent survey, believe in a “Critical God.”… He is the Creator….

I am of the same persuasion, though Derbyshire and I may differ in our conception of God’s role in the Universe:

1. There is necessarily a creator of the universe [see this], which comprises all that exists in “nature.”

2. The creator is not part of nature; that is, he stands apart from his creation and is neither of its substance nor governed by its laws. (I use “he” as a term of convenience, not to suggest that the creator is some kind of human or animate being, as we know such beings.)

3. The creator designed the universe, if not in detail then in its parameters. The parameters are what we know as matter-energy (substance) and its various forms, motions, and combinations (the laws that govern the behavior of matter-energy).

4. The parameters determine everything that is possible in the universe. But they do not necessarily dictate precisely the unfolding of events in the universe. Randomness and free will are evidently part of the creator’s design.

5. The human mind and its ability to “do science” — to comprehend the laws of nature through observation and calculation — are artifacts of the creator’s design.

6. Two things probably cannot be known through science: the creator’s involvement in the unfolding of natural events; the essential character of the substance on which the laws of nature operate.

Points 3 and 4 say as much as I am willing to say about Intelligent Design.

I turn now to the interaction of culture and biological evolution, which figures in my answers to several of Reed’s questions. Consider this, from an article by evolutionary psychologist Joseph Henrich (“A Cultural Species: How Culture Drove Human Evolution,” Psychological Science Agenda, American Psychological Association, November 2011; citations omitted):

Once a species is sufficiently reliant on learning from others for at least some aspects of its behavioral repertoire, cultural evolutionary processes can arise, and these processes can alter the environment faced by natural selection acting on genes….

Models of cumulative cultural evolution suggest two important, and perhaps non-intuitive, features of our species. First, our ecological success, technology, and adaptation to diverse environments is not due to our intelligence. Alone and stripped of our culture, we are hopeless as a species. Cumulative cultural evolution has delivered both our fancy technologies as well as the subtle and unconscious ways that humans have adapted their behavior and thinking to tackle environmental challenges. The smartest among us could not in a single lifetime devise even a small fraction of the techniques and technologies that allow any foraging society to survive. Second, the available formal models make clear that the effectiveness of this cumulative cultural evolutionary process depends crucially on the size and interconnectedness of our populations and social networks. It’s the ability to freely exchange information that sparks and accelerates adaptive cultural evolution, and creates innovation…. Sustaining complex technologies depends on maintaining a large and well-interconnected population of minds.

…In the case of ethnic groups, for example, such models explore how genes and culture coevolve. This shows how cultural evolution will, under a wide range of conditions, create a landscape in which different social groups tend to share both similar behavioral expectations and similar arbitrary “ethnic markers” (like dialect or language). In the wake of this culturally constructed world, genes evolve to create minds that are inclined to preferentially interact with and imitate those who share their markers. This guarantees that individuals most effectively coordinate with those who share their culturally learned behavioral expectations (say about marriage or child rearing). These purely theoretical predictions were subsequently confirmed by experiments with both children and adults.

This approach also suggests that cultural evolution readily gives rise to social norms, as long as learners can culturally acquire the standards by which they judge others. Many models robustly demonstrate that cultural evolution can sustain almost any behavior or preference that is common in a population (including cooperation), if it is not too costly. This suggests that different groups will end up with different norms and begin to compete with each other. Competition among groups with different norms will favor those particular norms that lead to success in intergroup competition. My collaborators and I have argued that cultural group selection has shaped the cultural practices, institutions, beliefs and psychologies that are common in the world today, including those associated with anonymous markets, prosocial religions with big moralizing gods, and monogamous marriage. Each of these cultural packages, which have emerged relatively recently in human history, impacts our psychology and behavior. Priming “markets” and “God”, for example, increase trust and giving (respectively) in behavioral experiments, though “God primes” only work on theists. Such research avenues hold the promise of explaining, rather than merely documenting, the patterns of psychological variation observed across human populations.

The cultural evolution of norms over tens or hundreds of thousands of years, and their shaping by cultural group selection, may have driven genetic evolution to create a suite of cognitive adaptations we call norm psychology. This aspect of our evolved psychology emerged and coevolved in response to cultural evolution’s production of norms. This suite facilitates, among other things, our identification and learning of social norms, our expectation of sanctions for norm violations, and our ability to internalize normative behavior as motivations….

Biological evolution continues, and with it, cultural evolution. But there are some “constants” that seem to remain embedded in the norms of most cultural-genetic groups. Among them, moral codes that exclude gratuitous torture of innocent children, a belief in God, and status-consciousness (which, for example, reinforces a diminished need to reproduce for survival of the species).

Henrich hits upon one of the reasons — perhaps the main reason — why efforts to integrate various biological-cultural groups under the banner of “diversity” are doomed to failure:

[G]enes evolve to create minds that are inclined to preferentially interact with and imitate those who share their markers. This guarantees that individuals most effectively coordinate with those who share their culturally learned behavioral expectations (say about marriage or child rearing).

As I say here,

genetic kinship will always be a strong binding force, even where the kinship is primarily racial. Racial kinship boundaries, by the way, are not always and necessarily the broad ones suggested by the classic trichotomy of Caucasoid, Mongoloid, Negroid. (If you want to read for yourself about the long, convoluted, diffuse, and still controversial evolutionary chains that eventuated in the sub-species homo sapiens sapiens, to which all humans are assigned arbitrarily, without regard for their distinctive differences, begin here, here, here, and here.)

The obverse of genetic kinship is “diversity,” which often is touted as a good thing by anti-tribalist social engineers. But “diversity” is not a good thing when it comes to social bonding.

At that point, I turn to an article by Michael Jonas about a study by Harvard political scientist Robert Putnam, “E Pluribus Unum: Diversity and Community in the Twenty-first Century“:

It has become increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.

But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings….

…Putnam’s work adds to a growing body of research indicating that more diverse populations seem to extend themselves less on behalf of collective needs and goals….

(That’s from Jonas’s article, “The Downside of Diversity,” The Boston Globe, August 5, 2007. See this post for more about genetic kinship and “diversity.”)

In a later post, I add this:

Yes, human beings are social animals, but human beings are not “brothers under the skin,” and there is no use in pretending that we are. Trying to make us so, by governmental fiat, isn’t only futile but also wasteful and harmful. The futility of forced socialization is as true of the United States — a vast and varied collection of races, ethnicities, religions, and cultures — as it is of the world.

Despite the blatant reality of America’s irreconcilable diversity, Americans increasingly are being forced to lead their lives according to the dictates of the central government. Some apologists for this state of affairs will refer to the “common good,” which is a fiction that I address in [“Modern Utilitarianism,” “The Social Welfare Function,” and “Utilitarianism vs. Liberty”].

Human beings, for the most part, may be bigger, stronger, and healthier than ever, but their physical progress depends heavily on technology, and would be reversed by a cataclysm that disables technology. Further, technologically based prosperity masks moral squalor. Strip away that prosperity, and the West would look like the warring regions of Central and South America, Eastern Europe, the Middle East, Africa, and South and Southeast Asia: racial and ethnic war without end. Much of urban and suburban America — outside affluent, well-guarded, and mostly “liberal” enclaves — would look like Ferguson.

Human beings are not “brothers under the skin,” and no amount of wishful thinking or forced integration can make us so. That is the lesson to be learned from biological and cultural evolution, which makes human beings different — perhaps irreconcilably so — but not necessarily better.


*     *     *

Related posts:
Crime, Explained
Society and the State
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
Liberty and Society
Tolerance on the Left
The Eclipse of “Old America”
Genetic Kinship and Society
“Conversing” about Race
The Fallacy of Human Progress
“We the People” and Big Government
Evolution and Race
The Social Animal and the “Social Contract”
“Wading” into Race, Culture, and IQ
Poverty, Crime, and Big Government

Poverty, Crime, and Big Government

Dr. James Thompson (Psychological Comments) reports the results of a thorough study of the link between poverty and crime. Near the end of the piece, Dr. Thompson quotes The Economist‘s summary of the study’s implications:

That suggests two, not mutually exclusive, possibilities. One is that a family’s culture, once established, is “sticky”—that you can, to put it crudely, take the kid out of the neighbourhood, but not the neighbourhood out of the kid. Given, for example, children’s propensity to emulate elder siblings whom they admire, that sounds perfectly plausible. The other possibility is that genes which predispose to criminal behaviour (several studies suggest such genes exist) are more common at the bottom of society than at the top, perhaps because the lack of impulse-control they engender also tends to reduce someone’s earning capacity.

Neither of these conclusions is likely to be welcome to social reformers. The first suggests that merely topping up people’s incomes, though it may well be a good idea for other reasons, will not by itself address questions of bad behaviour. The second raises the possibility that the problem of intergenerational poverty may be self-reinforcing, particularly in rich countries like Sweden where the winnowing effects of education and the need for high levels of skill in many jobs will favour those who can control their behaviour, and not those who rely on too many chemical crutches to get them through the day.

In brief, there is a strong connection between genes and criminal behavior. Inasmuch as there are also strong connections between genes and intelligence, on the one hand, and intelligence and income, on the other hand, it follows that:

  • Criminal behavior will be more prevalent in genetic groups with below-average intelligence.
  • Poverty will be more prevalent in genetic groups with below-average intelligence.
  • The correlation between crime and poverty must, therefore, reflect (to some extent) the correlation between below-average intelligence and poverty.

As The Economist notes, “merely topping up people’s incomes … will not by itself address questions of bad behaviour.” This would seem to contradict my finding of a strongly negative relationship between economic growth and the rate of violent-and-property crime.

But there is no contradiction. Not all persons who commit crimes are incorrigible. At the margin, there are persons who will desist from criminal activity when presented with the alternative of attaining money without running the risk of being punished for their efforts.

How much less crime would there be if economic growth weren’t suppressed by the dead hand of big government? A lot less.

*     *     *

Related posts:
Crime, Explained
Lock ‘Em Up
Estimating the Rahn Curve: Or, How Government Spending Inhibits Economic Growth
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ

Not-So-Random Thoughts (X)

How Much Are Teachers Worth?

David Harsanyi writes:

“The bottom line,” says the Center for American Progress, “is that mid- and late-career teachers are not earning what they deserve, nor are they able to gain the salaries that support a middle-class existence.”

Alas, neither liberal think tanks nor explainer sites have the capacity to determine the worth of human capital. And contrasting the pay of a person who has a predetermined government salary with the pay earned by someone in a competitive marketplace tells us little. Public-school teachers’ compensation is determined by contracts negotiated long before many of them even decided to teach. These contracts hurt the earning potential of good teachers and undermine the education system. And it has nothing to do with what anyone “deserves.”

So if teachers believe they aren’t making what they’re worth — and they may well be right about that — let’s free them from union constraints and let them find out what the job market has to offer. Until then, we can’t really know. Because a bachelor’s degree isn’t a dispensation from the vagaries of economic reality. And teaching isn’t the first step toward sainthood. Regardless of what you’ve heard. (“Are Teachers Underpaid? Let’s Find Out,” July 25, 2014)

Harsanyi is right, but too kind. Here’s my take, from “The Public-School Swindle“:

[P]ublic “education” — at all levels — is not just a rip-off of taxpayers, it is also an employment scheme for incompetents (especially at the K-12 level) and a paternalistic redirection of resources to second- and third-best uses.

And, to top it off, public education has led to the creation of an army of left-wing zealots who, for many decades, have inculcated America’s children and young adults in the advantages of collective, non-market, anti-libertarian institutions, where paternalistic “empathy” supplants personal responsibility.

Utilitarianism, Once More

EconLog bloggers Bryan Caplan and Scott Sumner are enjoying an esoteric exchange about utilitarianism (samples here and here), which is a kind of cost-benefit calculus in which the calculator presumes to weigh the costs and benefits that accrue to other persons.  My take is that utilitarianism borders on psychopathy. In “Utilitarianism and Psychopathy,” I quote myself to this effect:

Here’s the problem with cost-benefit analysis — the problem it shares with utilitarianism: One person’s benefit can’t be compared with another person’s cost. Suppose, for example, the City of Los Angeles were to conduct a cost-benefit analysis that “proved” the wisdom of constructing yet another freeway through the city in order to reduce the commuting time of workers who drive into the city from the suburbs.

Before constructing the freeway, the city would have to take residential and commercial property. The occupants of those homes and owners of those businesses (who, in many cases would be lessees and not landowners) would have to start anew elsewhere. The customers of the affected businesses would have to find alternative sources of goods and services. Compensation under eminent domain can never be adequate to the owners of taken property because the property is taken by force and not sold voluntarily at a true market price. Moreover, others who are also harmed by a taking (lessees and customers in this example) are never compensated for their losses. Now, how can all of this uncompensated cost and inconvenience be “justified” by, say, the greater productivity that might (emphasize might) accrue to those commuters who would benefit from the construction of yet another freeway?

Yet, that is how cost-benefit analysis works. It assumes that group A’s cost can be offset by group B’s benefit: “the greatest amount of happiness altogether.”

America’s Financial Crisis

Timothy Taylor tackles the looming debt crisis:

First, the current high level of government debt, and the projections for the next 25 years, mean that the U.S. government lacks fiscal flexibility….

Second, the current spending patterns of the U.S. government are starting to crowd out everything except health care, Social Security, and interest payments….

Third, large government borrowing means less funding is available for private investment….

…CBO calculates an “alternative fiscal scenario,” in which it sets aside some of these spending and tax changes that are scheduled to take effect in five years or ten years or never…. [T]he extended baseline scenario projected that the debt/GDP ratio would be 106% by 2039. In the alternative fiscal scenario, the debt-GDP ratio is projected to reach 183% of GDP by 2039. As the report notes: “CBO’s extended alternative fiscal scenario is based on the assumptions that certain policies that are now in place but are scheduled to change under current law will be continued and that some provisions of law that might be difficult to sustain for a long period will be modified. The scenario, therefore, captures what some analysts might consider to be current policies, as opposed to current laws.”…

My own judgement is that the path of future budget deficits in the next decade or so is likely to lean toward the alternative fiscal scenario. But long before we reach a debt/GDP ratio of 183%, something is going to give. I don’t know what will change. But as an old-school economist named Herb Stein used to say, “If something can’t go on, it won’t.” (“Long Term Budget Deficits,” Conversable Economist, July 24, 2014)

Professional economists are terribly low-key, aren’t they? Here’s the way I see it, in “America’s Financial Crisis Is Now“:

It will not do simply to put an end to the U.S. government’s spending spree; too many State and local governments stand ready to fill the void, and they will do so by raising taxes where they can. As a result, some jurisdictions will fall into California- and Michigan-like death-spirals while jobs and growth migrate to other jurisdictions…. Even if Congress resists the urge to give aid and comfort to profligate States and municipalities at the expense of the taxpayers of fiscally prudent jurisdictions, the high taxes and anti-business regimes of California- and Michigan-like jurisdictions impose deadweight losses on the whole economy….

So, the resistance to economically destructive policies cannot end with efforts to reverse the policies of the federal government. But given the vast destructiveness of those policies — “entitlements” in particular — the resistance must begin there. Every conservative and libertarian voice in the land must be raised in reasoned opposition to the perpetuation of the unsustainable “promises” currently embedded in Social Security, Medicare, and Medicaid — and their expansion through Obamacare. To those voices must be added the voices of “moderates” and “liberals” who see through the proclaimed good intentions of “entitlements” to the economic and libertarian disaster that looms if those “entitlements” are not pared down to their original purpose: providing a safety net for the truly needy.

The alternative to successful resistance is stark: more borrowing, higher interest payments, unsustainable debt, higher taxes, and economic stagnation (at best).

For the gory details about government spending and economic stagnation, see “Estimating the Rahn Curve: Or, How Government Spending Inhibits Economic Growth” and “The True Multiplier.”

Climate Change: More Evidence against the Myth of AGW

There are voices of reason, that is, real scientists doing real science:

Over the 55 years from 1958 to 2012, climate models not only significantly over-predict observed warming in the tropical troposphere, but they represent it in a fundamentally different way than is observed. (Ross McKitrick and Timothy Vogelsang, “Climate models not only significantly over-predict observed warming in the tropical troposphere, but they represent it in a fundamentally different way than is observed,” excerpted at Watts Up With That?, July 24, 2014)

Since the 1980s anthropogenic aerosols have been considerably reduced in Europe and the Mediterranean area. This decrease is often considered as the likely cause of the brightening effect observed over the same period. This phenomenon is however hardly reproduced by global and regional climate models. Here we use an original approach based on reanalysis-driven coupled regional climate system modelling, to show that aerosol changes explain 81 ± 16 per cent of the brightening and 23 ± 5 per cent of the surface warming simulated for the period 1980–2012 over Europe. The direct aerosol effect is found to dominate in the magnitude of the simulated brightening. The comparison between regional simulations and homogenized ground-based observations reveals that observed surface solar radiation, as well as land and sea surface temperature spatio-temporal variations over the Euro-Mediterranean region are only reproduced when simulations include the realistic aerosol variations. (“New paper finds 23% of warming in Europe since 1980 due to clean air laws reducing sulfur dioxide,” The Hockey Schtick, July 23, 2014)

My (somewhat out-of-date but still useful) roundup of related posts and articles is at “AGW: The Death Knell.”

Crime Explained…

…but not by this simplistic item:

Of all of the notions that have motivated the decades-long rise of incarceration in the United States, this is probably the most basic: When we put people behind bars, they can’t commit crime.

The implied corollary: If we let them out, they will….

Crime trends in a few states that have significantly reduced their prison populations, though, contradict this fear. (Emily Badger, “There’s little evidence that fewer prisoners means more crime,” Wonkblog, The Washington Post, July 21, 2014)

Staring at charts doesn’t yield answers to complex, multivariate questions, such as the causes of crime. Ms. Badger should have extended my work of seven years ago (“Crime, Explained”). Had she done so, I’m confident that she would have obtained the same result, namely:

VPC (violent+property crimes per 100,000 persons) =

+346837 BLK (number of blacks as a decimal fraction of the population)

-3040.46 GRO (previous year’s change in real GDP per capita, as a decimal fraction of the base)

-1474741 PRS (the number of inmates in federal and State prisons in December of the previous year, as a decimal fraction of the previous year’s population)

The t-statistics on the intercept and coefficients are 19.017, 21.564, 1.210, and 17.253, respectively; the adjusted R-squared is 0.923; the standard error of the estimate/mean value of VPC = 0.076.

The coefficient and t-statistic for PRS mean that incarceration has a strong, statistically significant, negative effect on the violent-property crime rate. In other words, more prisoners = less crime against persons and their property.
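The intercept term of the fitted equation is not reproduced above (only its t-statistic is reported), so the full prediction can’t be reconstructed here; the marginal effects implied by the reported coefficients can be, though. A minimal sketch (the function name and the 0.001 scenario are illustrative, not from the original analysis):

```python
# Coefficients as reported in the regression above (intercept not reproduced).
B_BLK = 346_837      # blacks, as a decimal fraction of the population
B_GRO = -3_040.46    # prior-year real GDP per capita growth, decimal fraction
B_PRS = -1_474_741   # prisoners, as a decimal fraction of prior-year population

def delta_vpc(d_blk=0.0, d_gro=0.0, d_prs=0.0):
    """Change in violent+property crimes per 100,000 persons implied by
    changes in the regressors (intercept cancels out of a difference)."""
    return B_BLK * d_blk + B_GRO * d_gro + B_PRS * d_prs

# Example: raising the incarceration rate by 0.1 percentage point (0.001)
# implies roughly 1,475 fewer violent+property crimes per 100,000 persons.
print(round(delta_vpc(d_prs=0.001), 1))  # -1474.7
```

Because the effect is expressed as a difference, the missing intercept drops out, which is why only changes in the regressors are computed.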

The Heritability of Intelligence

Strip away the trappings of culture and what do you find? This:

If a chimpanzee appears unusually intelligent, it probably had bright parents. That’s the message from the first study to check if chimp brain power is heritable.

The discovery could help to tease apart the genes that affect chimp intelligence and to see whether those genes in humans also influence intelligence. It might also help to identify additional genetic factors that give humans the intellectual edge over their non-human-primate cousins.

The researchers estimate that, similar to humans, genetic differences account for about 54 per cent of the range seen in “general intelligence” – dubbed “g” – which is measured via a series of cognitive tests. “Our results in chimps are quite consistent with data from humans, and the human heritability in g,” says William Hopkins of the Yerkes National Primate Research Center in Atlanta, Georgia, who heads the team reporting its findings in Current Biology.

“The historical view is that non-genetic factors dominate animal intelligence, and our findings challenge that view,” says Hopkins. (Andy Coghlan, “Chimpanzee brain power is strongly heritable,” New Scientist, July 10, 2014)

Such findings are consistent with Nicholas Wade’s politically incorrect A Troublesome Inheritance: Genes, Race and Human History. For related readings, see “‘Wading’ into Race, Culture, and IQ.” For a summary of scholarly evidence about the heritability of intelligence — and its dire implications — see “Race and Reason: The Achievement Gap — Causes and Implications.” John Derbyshire offers an even darker view: “America in 2034” (American Renaissance, June 9, 2014).

The correlation of race and intelligence is, for me, an objective matter, not an emotional one. For evidence of my racial impartiality, see the final item in “My Moral Profile.”

The Limits of Science, Illustrated by Scientists

Our first clue is the title of a recent article in The Christian Science Monitor: “Why the Universe Isn’t Supposed to Exist.” The article reads, in part:

The universe shouldn’t exist — at least according to a new theory.

Modeling of conditions soon after the Big Bang suggests the universe should have collapsed just microseconds after its explosive birth, the new study suggests.

“During the early universe, we expected cosmic inflation — this is a rapid expansion of the universe right after the Big Bang,” said study co-author Robert Hogan, a doctoral candidate in physics at King’s College in London. “This expansion causes lots of stuff to shake around, and if we shake it too much, we could go into this new energy space, which could cause the universe to collapse.”

Physicists draw that conclusion from a model that accounts for the properties of the newly discovered Higgs boson particle, which is thought to explain how other particles get their mass; faint traces of gravitational waves formed at the universe’s origin also inform the conclusion.

Of course, there must be something missing from these calculations.

“We are here talking about it,” Hogan told Live Science. “That means we have to extend our theories to explain why this didn’t happen.”

No kidding!

So, you think “the science is settled,” do you? Think again, long and hard.

*     *     *

Related posts: Just about everything here. Enjoy.


Verbal Regression Analysis, the “End of History,” and Think-Tanks

There once was a Washington DC careerist with whom I crossed verbal swords. I won; he lost and moved on to another job. I must, however, credit him with at least one accurate observation: Regression analysis is a method of predicting the past with great accuracy.

What did he mean by that? Data about past events may yield robust statistical relationships, but those relationships are meaningless unless they accurately predict future events. The problem is that in the go-go world of DC, where rhetoric takes precedence over reality, analysts usually assume the predictive power of statistical relationships, without waiting to see if they have any bearing on future events.

Francis Fukuyama has just published an article in which he admits that his famous article, “The End of History” (1989), was a kind of verbal regression analysis — a sweeping prediction of the future based on a (loose) verbal analysis of the past.

What is the “end of history”? This, according to Wikipedia:

[A] political and philosophical concept that supposes that a particular political, economic, or social system may develop that would constitute the end-point of humanity’s sociocultural evolution and the final form of human government.

What did Fukuyama say about “the end of history” in 1989? This:

In watching the flow of events over the past decade or so, it is hard to avoid the feeling that something very fundamental has happened in world history….

What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of post-war history, but the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government. This is not to say that there will no longer be events to fill the pages of Foreign Affairs‘s yearly summaries of international relations, for the victory of liberalism has occurred primarily in the realm of ideas or consciousness and is as yet incomplete in the real or material world. But there are powerful reasons for believing that it is the ideal that will govern the material world in the long run. To understand how this is so, we must first consider some theoretical issues concerning the nature of historical change.

What does Fukuyama say now? This:

I argued [in 1989] that History (in the grand philosophical sense) was turning out very differently from what thinkers on the left had imagined. The process of economic and political modernization was leading not to communism, as the Marxists had asserted and the Soviet Union had avowed, but to some form of liberal democracy and a market economy. History, I wrote, appeared to culminate in liberty: elected governments, individual rights, an economic system in which capital and labor circulated with relatively modest state oversight….

Twenty-five years later, the most serious threat to the end-of-history hypothesis isn’t that there is a higher, better model out there that will someday supersede liberal democracy; neither Islamist theocracy nor Chinese capitalism cuts it. Once societies get on the up escalator of industrialization, their social structure begins to change in ways that increase demands for political participation. If political elites accommodate these demands, we arrive at some version of democracy.

The question is whether all countries will inevitably get on that escalator. The problem is the intertwining of politics and economics. Economic growth requires certain minimal institutions such as enforceable contracts and reliable public services before it will take off, but those basic institutions are hard to create in situations of extreme poverty and political division. Historically, societies broke out of this “trap” through accidents of history, in which bad things (like war) often created good things (like modern governments). It is not clear, however, that the stars will necessarily align for everyone….

A second problem that I did not address 25 years ago is that of political decay, which constitutes a down escalator. All institutions can decay over the long run. They are often rigid and conservative; rules responding to the needs of one historical period aren’t necessarily the right ones when external conditions change.

Moreover, modern institutions designed to be impersonal are often captured by powerful political actors over time. The natural human tendency to reward family and friends operates in all political systems, causing liberties to deteriorate into privileges….

As for technological progress, it is fickle in distributing its benefits. Innovations such as information technology spread power because they make information cheap and accessible, but they also undermine low-skill jobs and threaten the existence of a broad middle class.

No one living in an established democracy should be complacent about its survival. But despite the short-term ebb and flow of world politics, the power of the democratic ideal remains immense. We see it in the mass protests that continue to erupt unexpectedly from Tunis to Kiev to Istanbul, where ordinary people demand governments that recognize their equal dignity as human beings. We also see it in the millions of poor people desperate to move each year from places like Guatemala City or Karachi to Los Angeles or London.

Even as we raise questions about how soon everyone will get there, we should have no doubt as to what kind of society lies at the end of History.

And blah, blah, blah, blah, blah.

The “end of history” will be some kind of “democracy,” and it will arrive despite all of the very real obstacles in its way, which include sectional and sectarian conflict, the capture of governmental power by special interests, and economic realities (which are somehow “wrong,” despite the fact that they are just realities). In the end “hope and change” will prevail because, well, they ought to prevail, by golly.

In sum, Fukuyama has substituted a new verbal regression analysis for his old one.

You may have guessed by now that “verbal regression analysis” means “bullshit.” Fukuyama emitted bullshit in 1989, and he’s emitting it 25 years later. Why anyone would pay attention to him and his ilk is beyond me.

But there are organizations — so-called think-tanks — that specialize in converting your tax dollars into bullshit of the kind emitted by Fukuyama. It’s unfortunate that the output of those think-tanks can’t be bagged and used as fertilizer. It would then have real value.

“Settled Science” and the Monty Hall Problem

The so-called 97-percent consensus among climate scientists about anthropogenic global warming (AGW) isn’t evidence of anything but the fact that scientists are only human. Even if there were such a consensus, it certainly wouldn’t prove the inchoate theory of AGW, any more than the early consensus against Einstein’s special theory of relativity disproved that theory.

Actually, in the case of AGW, the so-called consensus is far from a consensus about the extent of warming, its causes, and its implications. (See, for example, this post and this one.) But it’s undeniable that a lot of climate scientists believe in a “strong” version of AGW, and in its supposedly dire consequences for humanity.

Why is that? Well, in a field as inchoate as climate science, it’s easy to let one’s prejudices drive one’s research agenda and findings, even if only subconsciously. And isn’t it more comfortable and financially rewarding to be with the crowd and where the money is than to stand athwart the conventional wisdom? (Lennart Bengtsson certainly found that to be the case.) Moreover, there was, in the temperature records of the late 20th century, a circumstantial case for AGW, which led to the development of theories and models that purport to describe a strong relationship between temperature and CO2. That the theories and models are deeply flawed and lacking in predictive value seems not to matter to the 97 percent (or whatever the number is).

In other words, a lot of climate scientists have abandoned the scientific method, which demands skepticism, in order to be on the “winning” side of the AGW issue. How did it come to be thought of as the “winning” side? Credit vocal so-called scientists who were and are (at least) guilty of making up models to fit their preconceptions, and ignoring evidence that human-generated CO2 is a minor determinant of atmospheric temperature. Credit influential non-scientists (e.g., Al Gore) and various branches of the federal government that have spread the gospel of AGW and bestowed grants on those who can furnish evidence of it. Above all, credit the media, which for the past two decades has pumped out volumes of biased, half-baked stories about AGW, in the service of the “liberal” agenda: greater control of the lives and livelihoods of Americans.

Does this mean that the scientists who are on the AGW bandwagon don’t believe in the correctness of AGW theory? I’m sure that most of them do believe in it — to some degree. They believe it at least to the same extent as a religious convert who zealously proclaims his new religion to prove (mainly to himself) his deep commitment to that religion.

What does all of this have to do with the Monty Hall problem? This:

Making progress in the sciences requires that we reach agreement about answers to questions, and then move on. Endless debate (think of global warming) is fruitless debate. In the Monty Hall case, this social process has actually worked quite well. A consensus has indeed been reached; the mathematical community at large has made up its mind and considers the matter settled. But consensus is not the same as unanimity, and dissenters should not be stifled. The fact is, when it comes to matters like Monty Hall, I’m not sufficiently skeptical. I know what answer I’m supposed to get, and I allow that to bias my thinking. It should be welcome news that a few others are willing to think for themselves and challenge the received doctrine. Even though they’re wrong. (Brian Hayes, “Monty Hall Redux” (a book review), American Scientist, September-October 2008)

The admirable part of Hayes’s statement is its candor: Hayes admits that he may have adopted the “consensus” answer because he wants to go with the crowd.

The dismaying part of Hayes’s statement is his smug admonition to accept “consensus” and move on. As it turns out the “consensus” about the Monty Hall problem isn’t what it’s cracked up to be. A lot of very bright people have solved a tricky probability puzzle, but not the Monty Hall problem. (For the details, see my post, “The Compleat Monty Hall Problem.”)

And the “consensus” about AGW is very far from being the last word, despite the claims of true believers. (See, for example, the relatively short list of recent articles, posts, and presentations given at the end of this post.)

Going with the crowd isn’t the way to do science. It’s certainly not the way to ascertain the contribution of human-generated CO2 to atmospheric warming, or to determine whether the effects of any such warming are dire or beneficial. And it’s most certainly not the way to decide whether AGW theory implies the adoption of policies that would stifle economic growth and hamper the economic betterment of millions of Americans and billions of other human beings — most of whom would love to live as well as the poorest of Americans.

Given the dismal track record of global climate models, with their evident overstatement of the effects of CO2 on temperatures, there should be a lot of doubt as to the causes of rising temperatures in the last quarter of the 20th century, and as to the implications for government action. And even if it could be shown conclusively that human activity will cause temperatures to resume the rising trend of the late 1900s, several important questions remain:

  • To what extent would the temperature rise be harmful and to what extent would it be beneficial?
  • To what extent would mitigation of the harmful effects negate the beneficial effects?
  • What would be the costs of mitigation, and who would bear those costs, both directly and indirectly (e.g., the effects of slower economic growth on the poorer citizens of the world)?
  • If warming does resume gradually, as before, why should government dictate precipitous actions — and perhaps technologically dubious and economically damaging actions — instead of letting households and businesses adapt over time by taking advantage of new technologies that are unavailable today?

Those are not issues to be decided by scientists, politicians, and media outlets that have jumped on the AGW bandwagon because it represents a “consensus.” Those are issues to be decided by free, self-reliant, responsible persons acting cooperatively for their mutual benefit through the mechanism of free markets.

*     *     *

Recent Related Reading:
Roy Spencer, “95% of Climate Models Agree: The Observations Must Be Wrong,” Roy Spencer, Ph.D., February 7, 2014
Roy Spencer, “Top Ten Good Skeptical Arguments,” Roy Spencer, Ph.D., May 1, 2014
Ross McKittrick, “The ‘Pause’ in Global Warming: Climate Policy Implications,” presentation to the Friends of Science, May 13, 2014 (video here)
Patrick Brennan, “Abuse from Climate Scientists Forces One of Their Own to Resign from Skeptic Group after Week: ‘Reminds Me of McCarthy’,” National Review Online, May 14, 2014
Anthony Watts, “In Climate Science, the More Things Change, the More They Stay the Same,” Watts Up With That?, May 17, 2014
Christopher Monckton of Brenchley, “Pseudoscientists’ Eight Climate Claims Debunked,” Watts Up With That?, May 17, 2014
John Hinderaker, “Why Global Warming Alarmism Isn’t Science,” PowerLine, May 17, 2014
Tom Sheahan, “The Specialized Meaning of Words in the ‘Antarctic Ice Shelf Collapse’ and Other Climate Alarm Stories,” Watts Up With That?, May 21, 2014
Anthony Watts, “Unsettled Science: New Study Challenges the Consensus on CO2 Regulation — Modeled CO2 Projections Exaggerated,” Watts Up With That?, May 22, 2014
Daniel B. Botkin, “Written Testimony to the House Subcommittee on Science, Space, and Technology,” May 29, 2014

Related posts:
The Limits of Science
The Thing about Science
Debunking “Scientific Objectivity”
Modeling Is Not Science
The Left and Its Delusions
Demystifying Science
AGW: The Death Knell
Modern Liberalism as Wishful Thinking
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”

The Compleat Monty Hall Problem

Wherein your humble blogger gets to the bottom of the Monty Hall problem, sorts out the conflicting solutions, and declares that the standard solution is the right solution, but not to the Monty Hall problem as it’s usually posed.


The Monty Hall problem, first posed as a statistical puzzle in 1975, has been notorious since 1990, when Marilyn vos Savant wrote about it in Parade. Her solution to the problem, to which I will come, touched off a controversy that has yet to die down. But her solution is now widely accepted as the correct one; I refer to it here as the standard solution.

This is from the Wikipedia entry for the Monty Hall problem:

The Monty Hall problem is a brain teaser, in the form of a probability puzzle (Gruber, Krauss and others), loosely based on the American television game show Let’s Make a Deal and named after its original host, Monty Hall. The problem was originally posed in a letter by Steve Selvin to the American Statistician in 1975 (Selvin 1975a), (Selvin 1975b). It became famous as a question from a reader’s letter quoted in Marilyn vos Savant’s “Ask Marilyn” column in Parade magazine in 1990 (vos Savant 1990a):

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Here’s a complete statement of the problem:

1. A contestant sees three doors. Behind one of the doors is a valuable prize, which I’ll denote as $. Undesirable or worthless items are behind the other two doors; I’ll denote those items as x.

2. The contestant doesn’t know which door conceals $ and which doors conceal x.

3. The contestant chooses a door at random.

4. The host, who knows what’s behind each of the doors, opens one of the doors not chosen by the contestant.

5. The door chosen by the host may not conceal $; it must conceal an x. That is, the host always opens a door to reveal an x.

6. The host then asks the contestant if he wishes to stay with the door he chose initially (“stay”) or switch to the other unopened door (“switch”).

7. The contestant decides whether to stay or switch.

8. The host then opens the door finally chosen by the contestant.

9. If $ is revealed, the contestant wins; if x is revealed the contestant loses.
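As a check on the rules, steps 1-9 can be turned into a one-game simulation. This is a sketch of my own, not part of the original post; the function name and encoding are mine.

```python
import random

def play_game(switch):
    """Play one game under rules 1-9; return True if the contestant wins $."""
    doors = ["$", "x", "x"]
    random.shuffle(doors)                      # rule 1: $ is behind one of three doors
    choice = random.randrange(3)               # rule 3: a random initial choice
    # Rules 4-5: the host opens a door that is neither the contestant's
    # choice nor the door hiding $. When the choice hides $, the host has
    # two x doors to pick from; which one he opens doesn't affect the outcome.
    opened = next(d for d in range(3) if d != choice and doors[d] == "x")
    if switch:                                 # rules 6-7: stay or switch
        choice = next(d for d in range(3) if d not in (choice, opened))
    return doors[choice] == "$"                # rules 8-9: win if $ is revealed
```

A single call settles a single game, one way or the other; no one call reveals anything about long-run frequencies.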

One solution (the standard solution) is to switch doors because there’s a 2/3 probability that $ is hidden behind the unopened door that the contestant didn’t choose initially. In vos Savant’s own words:

Yes; you [the contestant] should switch. The first [initially chosen] door has a 1/3 chance of winning, but the second [other unopened] door has a 2/3 chance.

The other solution (the alternative solution) is indifference. Those who propound this solution maintain that there’s an equal chance of finding $ behind either of the doors that remain unopened after the host has opened a door.

As it turns out, the standard solution doesn’t tell a contestant what to do in a particular game. But the standard solution does point to the right strategy for someone who plays or bets on a large number of games.

The alternative solution accurately captures the unpredictability of any particular game. But indifference is only a break-even strategy for a person who plays or bets on a large number of games.


The contestant may choose among three doors, and there are three possible ways of arranging the items behind the doors: $ x x; x $ x; and x x $. The result is nine possible ways in which a game may unfold:

Equally likely outcomes

Events 1, 5, and 9 each have two branches. But those branches don’t count as separate events. They’re simply subsets of the same event; when the contestant chooses a door that hides $, the host must choose between the two doors that hide x, but he can’t open both of them. And his choice doesn’t affect the outcome of the event.

It’s evident that switching would pay off with a win in 2/3 of the possible events; whereas, staying with the original choice would pay off in only 1/3 of the possible events. The fractions 1/3 and 2/3 are usually referred to as probabilities: a 2/3 probability of winning $ by switching doors, as against a 1/3 probability of winning $ by staying with the initially chosen door.

Accordingly, proponents of the standard solution — who are now legion — advise the individual (theoretical) contestant to switch. The idea is that switching increases one’s chance (probability) of winning.
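The nine events can be enumerated exhaustively rather than simulated. A sketch (the bookkeeping is mine): each pairing of a prize door with an initial choice is one event, and switching wins exactly when the initial choice misses the prize.

```python
from itertools import product

# Three arrangements of $ times three initial choices = nine equally likely events.
# When choice == prize, the host may open either x door, but those branches
# belong to the same event, as noted above.
events = [choice != prize for prize, choice in product(range(3), range(3))]

print(f"switch wins in {sum(events)} of {len(events)} events")  # 6 of 9 = 2/3
```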


There are three problems with the standard solution:

1. It incorporates a subtle shift in perspective. The Monty Hall problem, as posed, asks what a contestant should do. The standard solution, on the other hand, represents the expected (long-run average) outcome of many events, that is, many plays of the game. For reasons I’ll come to, the outcome of a single game can’t be described by a probability.

2.  Lists of possibilities, such as those in the diagram above, fail to reflect the randomness inherent in real events.

3. Probabilities emerge from many repetitions of the kinds of events listed above. It is meaningless to ascribe a probability to a single event. In the case of the Monty Hall problem, many repetitions of the game will yield probabilities approximating those given in the standard solution, but the outcome of each repetition will be unpredictable. It is therefore meaningless to say that a contestant has a 2/3 chance of winning a game if he switches. A 2/3 chance of winning refers to the expected outcome of many repetitions, where the contestant chooses to switch every time. To put it baldly: How does a person win 2/3 of a game? He either wins or doesn’t win.

Regarding points 2 and 3, I turn to Probability, Statistics and Truth (second revised English edition, 1957), by Richard von Mises:

The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations. (p. 11)

*     *     *

In games of dice, the individual event is a single throw of the dice from the box and the attribute is the observation of the number of points shown by the dice. In the game of “heads or tails”, each toss of the coin is an individual event, and the side of the coin which is uppermost is the attribute. (p. 11)

*     *     *

We must now introduce a new term…. This term is “the collective”, and it denotes a sequence of uniform events or processes which differ by certain observable attributes…. All the throws of dice made in the course of a game [of many throws] form a collective wherein the attribute of the single event is the number of points thrown…. The definition of probability which we shall give is concerned with ‘the probability of encountering a single attribute [e.g., winning $ rather than x ] in a given collective [a series of attempts to win $ rather than x ]’. (pp. 11-12)

*     *     *

[A] collective is a mass phenomenon or a repetitive event, or, simply, a long sequence of observations for which there are sufficient reasons to believe that the relative frequency of the observed attribute would tend to a fixed limit if the observations were indefinitely continued. The limit will be called the probability of the attribute considered within the collective [emphasis in the original]. (p. 15)

*     *     *

The result of each calculation … is always … nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? It is important to note that this statement remains valid also if the calculated probability has one of the two extreme values 1 or 0 [emphasis added]. (p. 33)
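Von Mises’s limiting relative frequency can be illustrated with the simplest collective, a sequence of coin tosses. This sketch is mine, using Python’s random module; it shows the frequency of heads settling toward 0.5 only as the sequence grows long:

```python
import random

def relative_frequency(n_tosses, seed=42):
    """Fraction of heads in a sequence of n_tosses fair-coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# Short sequences wander; the fraction approaches the limit (the
# probability, in von Mises's sense) only as observations accumulate.
for n in (10, 100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```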

To bring the point home, here are the results of 50 runs of the Monty Hall problem, where each result represents (i) a random initial choice between Door 1, Door 2, and Door 3; (ii) a random array of $, x, and x behind the three doors; (iii) the opening of a door (other than the one initially chosen) to reveal an x; and (iv) a decision, in every case, to switch from the initially chosen door to the other unopened door:

Results of 50 games

What’s relevant here isn’t the fraction of times that $ appears, which is 3/5 — slightly less than the theoretical value of 2/3.  Just look at the utter randomness of the results. The first three outcomes yield the “expected” ratio of two wins to one loss, though in the real game show the two winners and one loser would have been different persons. The same goes for any sequence, even the final — highly “improbable” (i.e., random) — string of nine straight wins (which would have accrued to nine different contestants). And who knows what would have happened in games 51, 52, etc.

If a person wants to win 2/3 of the time, he must find a game show that allows him to continue playing the game until he has reached his goal. As I’ve found in my simulations, it could take as many as 10, 20, 70, or 300 games before the cumulative fraction of wins per game converges on 2/3.

That’s what it means to win 2/3 of the time. It’s not possible to win a single game 2/3 of the time, which is the “logic” of the standard solution as it’s usually presented.
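The slow convergence described above can be sketched numerically. Because the host’s constrained choice makes switching win exactly when the random initial pick misses $, each game reduces to a one-in-three miss draw; the cumulative win fraction then wanders before settling near 2/3. (The reduction and the code are mine, not the post’s simulation.)

```python
import random

def cumulative_switch_fractions(n_games, seed=7):
    """Cumulative win fraction over n_games, always switching."""
    rng = random.Random(seed)
    wins, fractions = 0, []
    for g in range(1, n_games + 1):
        wins += rng.randrange(3) != 0   # switch wins when the initial pick misses $
        fractions.append(wins / g)
    return fractions

fracs = cumulative_switch_fractions(300)
print(fracs[9], fracs[49], fracs[299])  # fractions after 10, 50, and 300 games
```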


The alternative solution doesn’t offer a winning strategy. In this view of the Monty Hall problem, it doesn’t matter which unopened door a contestant chooses. In effect, the contestant is advised to flip a coin.

As discussed above, the outcome of any particular game is unpredictable, so a coin flip will do just as well as any other way of choosing a door. But randomly selecting an unopened door isn’t a good strategy for repeated plays of the game. Over the long run, random selection means winning about 1/2 of all games, as opposed to 2/3 for the “switch” strategy. (To see that the expected probability of winning through random selection approaches 1/2, return to the earlier diagram; there, you’ll see that $ occurs in 9/18 = 1/2 of the possible outcomes for “stay” and “switch” combined.)
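The three strategies (stay, switch, and a coin flip between the two unopened doors) can be compared over many plays. This is a sketch of mine; the long-run fractions come out near 1/3, 1/2, and 2/3 respectively.

```python
import random

def win_rate(strategy, n_games=30_000, seed=0):
    """Long-run win fraction for 'stay', 'switch', or 'flip'."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_games):
        prize, choice = rng.randrange(3), rng.randrange(3)
        # The host opens a door that hides x and isn't the contestant's choice.
        host = next(d for d in range(3) if d != choice and d != prize)
        other = next(d for d in range(3) if d not in (choice, host))
        if strategy == "switch":
            final = other
        elif strategy == "stay":
            final = choice
        else:                      # "flip": random pick between the two unopened doors
            final = rng.choice([choice, other])
        wins += final == prize
    return wins / n_games

for s in ("stay", "flip", "switch"):
    print(s, round(win_rate(s), 3))
```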

Proponents of the alternative solution overlook the importance of the host’s selection of a door to open. His choice isn’t random. Therein lies the secret of the standard solution — as a long-run strategy.


It’s commonly said by proponents of the standard solution that when the host opens a door, he gives away information that the contestant can use to increase his chance of winning that game. One nonsensical version of this explanation goes like this:

  • There’s a 2/3 probability that $ is behind one of the two doors not chosen initially by the contestant.
  • When the host opens a door to reveal x, that 2/3 “collapses” onto the other door that wasn’t chosen initially. (Ooh … a “collapsing” probability. How exotic. Just like Schrödinger’s cat.)

Of course, the host’s action gives away nothing in the context of a single game, the outcome of which is unpredictable. The host’s action does help in the long run, if you’re in a position to play or bet on a large number of games. Here’s how:

  • The contestant’s initial choice (IC) will be wrong 2/3 of the time. That is, in 2/3 of a large number of games, the $ will be behind one of the other two doors.
  • Because of the rules of the game, the host must open one of those other two doors (HC1 and HC2); he can’t open IC.
  • When IC hides an x (which happens 2/3 of the time), either HC1 or HC2 must conceal the $; the one that doesn’t conceal the $ conceals an x.
  • The rules require the host to open the door that conceals an x.
  • Therefore, about 2/3 of the time the $ will be behind HC1 or HC2, and in those cases it will always be behind the door (HC1 or HC2) that the host doesn’t open.
  • It follows that the contestant, by consistently switching from IC to the remaining unopened door (HC1 or HC2), will win the $ about 2/3 of the time.

The host’s action transforms the probability — the long-run frequency — of choosing the winning door from 1/2 to 2/3. But it does so if and only if the player or bettor always switches from IC to HC1 or HC2 (whichever one remains unopened).

You can visualize the steps outlined above by looking at the earlier diagram of possible outcomes.

That’s all there is. There isn’t any more.

“The Science Is Settled”

Thales (c. 620 – c. 530 BC): The Earth rests on water.

Anaximenes (c. 540 – c. 475 BC): Everything is made of air.

Heraclitus (c. 540 – c. 450 BC): All is fire.

Empedocles (c. 493 – c. 435 BC): There are four elements: earth, air, fire, and water.

Democritus (c. 460 – c. 370 BC): Atoms (basic elements of nature) come in an infinite variety of shapes and sizes.

Aristotle (384 – 322 BC): Heavy objects must fall faster than light ones. The universe is a series of crystalline spheres that carry the sun, moon, planets, and stars around Earth.

Ptolemy (90 – 168 AD): Ditto the Earth-centric universe, with a mathematical description.

Copernicus (1473 – 1543): The planets revolve around the sun in perfectly circular orbits.

Brahe (1546 – 1601): The planets revolve around the sun, but the sun and moon revolve around Earth.

Kepler (1571 – 1630): The planets revolve around the sun in elliptical orbits, and their trajectory is governed by magnetism.

Newton (1642 – 1727): The course of the planets around the sun is determined by gravity, which is a force that acts at a distance. Light consists of corpuscles; ordinary matter is made of larger corpuscles. Space and time are absolute and uniform.

Rutherford (1871 – 1937), Bohr (1885 – 1962), and others: The atom has a center (nucleus), which consists of two elemental particles, the neutron and proton.

Einstein (1879 – 1955): The universe is neither expanding nor shrinking.

That’s just a small fraction of the mistaken and incomplete theories that have held sway in the field of physics. There are many more such mistakes and lacunae in the other natural sciences: biology, chemistry, and earth science — each of which, like physics, has many branches. And in all branches there are many unresolved questions. For example, the Standard Model of particle physics, despite its complexity, is known to be incomplete. And it is thought (by some) to be unduly complex; that is, there may be a simpler underlying structure waiting to be discovered.

Given all of this, it is grossly presumptuous to claim that climate science is “settled” when the phenomena that it encompasses are so varied, complex, often poorly understood, and often given short shrift (e.g., the effects of solar radiation on the intensity of cosmic radiation reaching Earth, which affects low-level cloud formation, which affects atmospheric temperature and precipitation).

Anyone who says that climate science is “settled” is either ignorant, stupid, or freighted with a political agenda.

The Pretence of Knowledge

Friedrich Hayek, in his Nobel Prize lecture of 1974, “The Pretence of Knowledge,” observes that

the great and rapid advance of the physical sciences took place in fields where it proved that explanation and prediction could be based on laws which accounted for the observed phenomena as functions of comparatively few variables.

Hayek’s particular target was the scientism then (and still) rampant in economics. In particular, there was (and is) a quasi-religious belief in the power of central planning (e.g., regulation, “stimulus” spending, control of the money supply) to attain outcomes superior to those that free markets would yield.

But, as Hayek says in closing,

There is danger in the exuberant feeling of ever growing power which the advance of the physical sciences has engendered and which tempts man to try, “dizzy with success” … to subject not only our natural but also our human environment to the control of a human will. The recognition of the insuperable limits to his knowledge ought indeed to teach the student of society a lesson of humility which should guard him against becoming an accomplice in men’s fatal striving to control society – a striving which makes him not only a tyrant over his fellows, but which may well make him the destroyer of a civilization which no brain has designed but which has grown from the free efforts of millions of individuals.

I was reminded of Hayek’s observations by John Cochrane’s post, “Groundhog Day” (The Grumpy Economist, May 11, 2014), wherein Cochrane presents this graph:

The fed's forecasting models are broken

Cochrane adds:

Every serious forecast looked like this — Fed, yes, but also CBO, private forecasters, and the term structure of forward rates. Everyone has expected bounce-back growth and rise in interest rates to start next year, for the last 6 years. And every year it has not happened. Welcome to the slump. Every year, Sonny and Cher wake us up, and it’s still cold, and it’s still grey. But we keep expecting spring tomorrow.

Whether the corrosive effects of government microeconomic and regulatory policy, or a failure of those (unprintable adjectives) Republicans to just vote enough wasted-spending Keynesian stimulus, or a failure of the Fed to buy another $3 trillion of bonds, the question of the day really should be why we have this slump — which, let us be honest, no serious forecaster expected.

(I add the “serious forecaster” qualification on purpose. I don’t want to hear randomly mined quotes from bloviating prognosticators who got lucky once, and don’t offer a methodology or a track record for their forecasts.)

The Fed’s forecasting models are nothing more than sophisticated charlatanism — a term that Hayek applied to pseudo-scientific endeavors like macroeconomic modeling. Nor is charlatanism confined to economics and the other social “sciences.” It’s rampant in climate “science,” as Roy Spencer has shown. Consider, for example, this graph from Spencer’s post, “95% of Climate Models Agree: The Observations Must Be Wrong” (Roy Spencer, Ph.D., February 7, 2014):

95% of climate models agree_the observations must be wrong

Spencer has a lot more to say about the pseudo-scientific aspects of climate “science.” This example is from “Top Ten Good Skeptical Arguments” (May 1, 2014):

1) No Recent Warming. If global warming science is so “settled”, why did global warming stop over 15 years ago (in most temperature datasets), contrary to all “consensus” predictions?

2) Natural or Manmade? If we don’t know how much of the warming in the longer term (say last 50 years) is natural, then how can we know how much is manmade?

3) IPCC Politics and Beliefs. Why does it take a political body (the IPCC) to tell us what scientists “believe”? And when did scientists’ “beliefs” translate into proof? And when was scientific truth determined by a vote…especially when those allowed to vote are from the Global Warming Believers Party?

4) Climate Models Can’t Even Hindcast. How did climate modelers, who already knew the answer, still fail to explain the lack of a significant temperature rise over the last 30+ years? In other words, how do you botch a hindcast?

5) …But We Should Believe Model Forecasts? Why should we believe model predictions of the future, when they can’t even explain the past?

6) Modelers Lie About Their “Physics”. Why do modelers insist their models are based upon established physics, but then hide the fact that the strong warming their models produce is actually based upon very uncertain “fudge factor” tuning?

7) Is Warming Even Bad? Who decided that a small amount of warming is necessarily a bad thing?

8) Is CO2 Bad? How did carbon dioxide, necessary for life on Earth and only 4 parts in 10,000 of our atmosphere, get rebranded as some sort of dangerous gas?

9) Do We Look that Stupid? How do scientists expect to be taken seriously when their “theory” is supported by both floods AND droughts? Too much snow AND too little snow?

10) Selective Pseudo-Explanations. How can scientists claim that the Medieval Warm Period (which lasted hundreds of years), was just a regional fluke…yet claim the single-summer (2003) heat wave in Europe had global significance?

11) (Spinal Tap bonus) Just How Warm is it, Really? Why is it that every subsequent modification/adjustment to the global thermometer data leads to even more warming? What are the chances of that? Either a warmer-still present, or cooling down the past, both of which produce a greater warming trend over time. And none of the adjustments take out a gradual urban heat island (UHI) warming around thermometer sites, which likely exists at virtually all of them — because no one yet knows a good way to do that.

It is no coincidence that leftists believe in the efficacy of central planning and cling tenaciously to a belief in catastrophic anthropogenic global warming. The latter justifies the former, of course. And both beliefs exemplify the left’s penchant for magical thinking, about which I’ve written several times (e.g., here, here, here, here, and here).

Magical thinking is the pretense of knowledge in the nth degree. It conjures “knowledge” from ignorance and hope. And no one better exemplifies magical thinking than our hopey-changey president.

*     *     *

Related posts:
Modeling Is Not Science
The Left and Its Delusions
Economics: A Survey
AGW: The Death Knell
The Keynesian Multiplier: Phony Math
Modern Liberalism as Wishful Thinking

Not Over the Hill

The Washington Post reports on some research about intelligence that is as irrelevant as the candle problem. Specifically:

[R]esearchers at Canada’s Simon Fraser University … have found that measurable declines in cognitive performance begin to occur at age 24. In terms of brainpower, you’re over the hill by your mid-20s.

The researchers measured this by studying the performance of thousands of players of Starcraft 2, a strategy video game….

Even worse news for those of us who are cognitively over-the-hill: the researchers find “no evidence that this decline can be attenuated by expertise.” Yes, we get wiser as we get older. But wisdom doesn’t substitute for speed. At best, older players can only hope to compensate “by employing simpler strategies and using the game’s interface more efficiently than younger players,” the authors say.

So there you have it: scientific evidence that we cognitively peak at age 24. At that point, you should probably abandon any pretense of optimism and accept that your life, henceforth, will be a steady descent into mediocrity, punctuated only by the bitter memories of the once seemingly-endless potential that you so foolishly squandered in your youth. Considering that the average American lives to be 80, you’ll have well over 50 years to do so! (Christopher Ingraham, “Your Brain Is Over the Hill by Age 24,” April 16, 2014)

Happily, Starcraft 2 is far from a representation of the real world. Take science, for example. I went to Wikipedia and obtained the list of all Nobel laureates in physics. It’s a long list, so I sampled it — taking the winners for the first five years (1901-1905), the middle five years (1955-1959) and the most recent five years (2009-2013). Here’s a list of the winners for those 15 years, and the approximate age of each winner at the time he or she did the work for which the prize was awarded:

1901 Wilhelm Röntgen (50)

1902 Hendrik Lorentz (43) and Pieter Zeeman (31)

1903 Henri Becquerel (44), Pierre Curie (37), and Marie Curie (29)

1904 Lord Rayleigh (52)

1955 Willis Lamb (34) and Polykarp Kusch (40)

1956 John Bardeen (39), Walter Houser Brattain (45), and William Shockley (37)

1957 Chen Ning Yang (27) and Tsung-Dao Lee (23)

1958 Pavel Cherenkov (30), Ilya Frank (26), and Igor Tamm (39)

1959 Emilio G. Segrè (50) and Owen Chamberlain (35)

2009 Charles K. Kao (33), Willard S. Boyle (45), and George E. Smith (39)

2010 Andre Geim (46) and Konstantin Novoselov (34)

2011 Saul Perlmutter (39), Adam G. Riess (29), and Brian Schmidt (31)

2012 Serge Haroche (40-50) and David J. Wineland (40-50)

2013 François Englert (32) and Peter W. Higgs (35)

There’s exactly one person within a year of age 24 (Tsung-Dao Lee, 23), and a few others who were still in their (late) 20s. Most of the winners were in their 30s and 40s when they accomplished their prize-worthy scientific feats. And there are at least as many winners who were in their 50s as winners who were in their 20s.

Let’s turn to so-called physical pursuits, which often combine brainpower (anticipation, tactical improvisation, hand-eye coordination) and pure physical skill (strength and speed). Baseball exemplifies such a pursuit. Do ballplayers go sharply downhill after the age of 24? Hardly. On average, they’re just entering their best years at age 24, and they perform at peak level for several years.

I’ll use two charts to illustrate the point about ballplayers. The first depicts normalized batting average vs. age for 86 of the leading hitters in the history of the American League*:

Greatest hitters_BA by age_86 hitters

Because of the complexity of the spreadsheet from which the numbers are taken, I was unable to derive a curve depicting mean batting average vs. age. But the density of the plot lines suggests that the peak age for batting average begins at 24 and extends into the early 30s. Further, with relatively few exceptions, batting performance doesn’t decline sharply until the late 30s.

Among a more select group of players, and by a different measure of performance, the peak years occur at ages 24-28, with a slow decline after 28**:

Offensive average by age_25 leading hitters

The two graphs suggest to me that ballplayers readily compensate for physical decline (such as it is) by applying the knowledge they acquire in the course of playing the game. Such knowledge would include “reading” pitchers to make better guesses about the pitch that’s coming, knowing where to hit a ball in a certain ballpark against certain fielders, judging the right moment to attempt a stolen base against a certain pitcher-catcher combination, hitting to the opposite field on occasion instead of trying to pull the ball every time, and so on.

I strongly suspect that what is true in baseball is true in many walks of life: Wisdom — knowledge culled from experience — compensates for pure brainpower, and continues to do so for a long time. The Framers of the Constitution, who weren’t perfect but who were astute observers of the human condition, knew as much. That’s why they set 35 as the minimum age for election to the presidency. (Subsequent history — notably, the presidencies of TR, JFK, Clinton, and Obama — tells us that the Framers should have made it 50.)

I do grow weary of pseudo-scientific crap like the research reported in the Post. But it does give me something to write about. And most of the pseudo-science is harmless, unlike the statistical lies on which global-warming hysteria is based.

* The numbers are drawn from the analysis described in detail here and here, which is based on statistics derived through the Play Index. The bright red line represents Ty Cobb’s career, which deserves special mention because of Cobb’s unparalleled dominance as a hitter-for-average over a 24-year career, and especially for ages 22-32. I should add that Cobb’s dominance has been cemented by Ichiro Suzuki’s sub-par performance in the three seasons since I posted this, wherein I proclaimed Cobb the American League’s best all-time hitter for average, taking age into account. (There’s no reason to think that the National League has ever hosted Cobb’s equal.)

** This is an index, where 100 represents parity with the league average. I chose the 25 players represented here from a list of career leaders in OPS+ (on-base percentage plus slugging average, normalized for league averages and park factors). Because of significant changes in rules and equipment in the late 1800s and early years of the 1900s (see here, here, and here), players whose careers began before 1907 were eliminated, excepting Cobb, who didn’t become a regular player until 1906. Also eliminated were Barry Bonds and Mark McGwire, whose drug-fueled records don’t merit recognition, and Joey Votto, who has completed only eight seasons. Offensive Average (OA) avoids the double-counting inherent in OPS+, which also (illogically) sums two fractions with different denominators. OA measures a player’s total offensive contribution (TOC) per plate appearance (PA) in a season, normalized by the league average for that season. TOC = singles + doubles x 2 + triples x 3 + home runs x 4 + stolen bases – times caught stealing + walks – times grounded into a double play + sacrifice hits + sacrifice flies. In the graph, Cobb seems to disappear into the (elite) crowd after age 24, but that’s an artifact of Cobb’s preferred approach to the game — slapping hits and getting on base — not his ability to hit the long ball, for which extra credit is given in computing OA. (See this, for example.)
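The note’s formulas can be written out as code. This is a sketch of mine; the dictionary keys are abbreviations I chose to stand for the stats named in the note, not anything from the post or the Play Index.

```python
def total_offensive_contribution(s):
    """TOC as defined in the note above. The dict s maps my stat
    abbreviations to season counts: singles, doubles, triples, hr,
    sb (stolen bases), cs (caught stealing), bb (walks), gidp
    (grounded into double play), sh, sf (sacrifice hits/flies)."""
    return (s["singles"] + 2 * s["doubles"] + 3 * s["triples"] + 4 * s["hr"]
            + s["sb"] - s["cs"] + s["bb"] - s["gidp"] + s["sh"] + s["sf"])

def offensive_average(player, league_toc_per_pa):
    """OA index: the player's TOC per plate appearance, normalized so
    that 100 = parity with the league average for that season."""
    return 100 * (total_offensive_contribution(player) / player["pa"]) / league_toc_per_pa
```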

The Limits of Science (II)

The material of the universe — be it called matter or energy — has three essential properties: essence, emanation, and effect. Essence — what things really “are” — is the most elusive of the properties, and probably unknowable. Emanations are the perceptible aspects of things, such as their detectible motions and electromagnetic properties. Effects are what things “do” to other things, as in the effect that a stream of photons has on paper when the photons are focused through a magnifying glass. (You’ve lived a bland life if you’ve never started a fire that way.)

Science deals in emanations and effects. It seems that these can be described without knowing what matter-energy “really” consists of. But can they?

Take a baseball. Assume, for the sake of argument, that it can’t be opened and separated into constituent parts, which are many. (See the video at this page for details.) Taking the baseball as a fundamental particle, its attributes (seemingly) can be described without knowing what’s inside it. Those attributes include the distance that it will travel when hit by a bat, when the ball and bat (of a certain weight) meet at certain velocities and at certain angles, given the direction and speed of rotation of the ball when it meets the bat, ambient temperature and relative humidity, and so on.

And yet, the baseball can’t be treated as if it were a fundamental particle. The distance that it will travel, everything else being the same, depends on the material at its core, the size of the core, the tightness of the windings of yarn around the core, the types of yarn used in the windings, the tightness of the cover, the flatness of the stitches that hold the cover in place, and probably several other things.

This suggests to me that the emanations and effects of an object depend on its essence — at least in the everyday world of macroscopic objects. If that’s so, why shouldn’t it be the same for the world of objects called sub-atomic particles?

Which leads to some tough questions: Is it really the case that all of the particles now considered elementary are really indivisible? Are there other elementary particles yet to be discovered or hypothesized, and will some of those be constituents of particles now thought to be elementary? And even if all of the truly elementary particles are discovered, won’t scientists still be in the dark as to what those particles really “are”?

The progress of science should be judged by how much scientists know about the universe and its constituents. By that measure — and despite what may seem to be a rapid pace of discovery — it is fair to say that science has a long way to go — probably forever.

Scientists, who tend to be atheists, like to refer to the God of the gaps, a “theological perspective in which gaps in scientific knowledge are taken to be evidence or proof of God’s existence.” The smug assumption implicit in the use of the phrase by atheists is that science will close the gaps, and that there will be no room left for God.

It seems to me that the shoe is really on the other foot. Atheistic scientists assume that the gaps in their knowledge are relatively small ones, and that science will fill them. How wrong they are.

*     *     *

Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
A Theory of Everything, Occam’s Razor, and Baseball
Evolution and Religion
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness

Home-Field Advantage

You know it’s real. Just how real? Consider this:

Home field advantage
Derived from statistics available through the Play Index subscription service.

In the years 1901 through 2013, major league teams won 54 percent of their home games but only 46 percent of their road games, for a home/road (H/R) ratio of 1.181. Only one team has lost more than half of its home games: San Diego, with 1,791 wins against 1,793 losses (which rounds to a W-L record of .500). The Padres have nevertheless done better at home than on the road, with an H/R ratio of 1.171.
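The H/R ratio is simply the home winning percentage divided by the road winning percentage. A minimal sketch of that computation (the function name is illustrative):

```python
def hr_ratio(home_w, home_l, road_w, road_l):
    """Home/road (H/R) ratio: home W-L% divided by road W-L%."""
    home_pct = home_w / (home_w + home_l)
    road_pct = road_w / (road_w + road_l)
    return home_pct / road_pct

# A team that plays identically at home and on the road has H/R = 1.0;
# winning 54 percent at home and 46 percent on the road gives about 1.17.
```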

No team has done better on the road than at home. Two expansion teams — Los Angeles Angels (1961) and New York Mets (1962) — have come closest. But their home records are still significantly better than their road records: H/R ratios of 1.132 and 1.139, respectively.

The Colorado Rockies have the best H/R ratio — 1.381 — mainly because Rockies teams have been tailored to do well at their home park in mile-high Denver. Accordingly, they have done poorly at lower altitudes, where (for example) high fly balls don’t as often become home runs.

The New York Yankees, unsurprisingly, have been the best at home and on the road. Further, the Yankees franchise is the only one with a road record above .500 for the past 113 years.

The importance of playing at home is perhaps best captured by these averages for 1901-2013:

  • The mighty Yankees compiled their enviable home record by outscoring opponents by only 0.89 run per game.
  • The second- and third-best Giants and Red Sox bested visitors by only 0.49 and 0.46 runs per game, respectively.
  • The lopsided Rockies compiled by far the biggest home-minus-road scoring gap: 1.04 runs per game.
  • Eleven of the 30 franchises were outscored at home, but only the Padres had a (barely) losing record at home.
  • Only 4 of 30 franchises — Yankees, Giants, Dodgers, and Cardinals — outscored opponents on the road as well as at home.
  • Every franchise had a better average margin of victory at home than on the road.
  • Home teams (on average) outscored their opponents by only 0.16 runs.

Home-field advantage is a fragile but very real thing.

Not-So-Random Thoughts (IX)

Demystifying Science

In a post with that title, I wrote:

“Science” is an unnecessarily daunting concept to the uninitiated, which is to say, almost everyone. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — often are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

Just how unwarranted is the “authority” that is lent by publication in a scientific journal?

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think. . . .

In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one . . . paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” As he told the quadrennial International Congress on Peer Review and Biomedical Publication, held this September [2013] in Chicago, the problem has not gone away. (The Economist, “Trouble at the Lab,” October 19, 2013)

Tell me again about anthropogenic global warming.

The “Little Ice Age” Redux?

Speaking of AGW, remember the “Little Ice Age” of the 1970s?

George Will does. As do I.

One Sunday morning in January or February of 1977, when I lived in western New York State, I drove to the news stand to pick up my Sunday Times. I had to drive my business van because my car wouldn’t start. (Odd, I thought.) I arrived at the stand around 8:00 a.m. The temperature sign on the bank across the street then read -16 degrees (Fahrenheit). The proprietor informed me that when he opened his shop at 6:00 a.m. the reading was -36 degrees.

That was the nadir of the coldest winter I can remember. The village reservoir froze in January and stayed frozen until March. (The fire department had to pump water from the Genesee River to the village’s water-treatment plant.) Water mains were freezing solid, even though they were 6 feet below the surface. Many homeowners had to keep their faucets open a trickle to ensure that their pipes didn’t freeze. And, for the reasons cited in Will’s article, many scientists — and many Americans — thought that a “little ice age” had arrived and would be with us for a while.

But science is often inconclusive and just as often slanted to serve a political agenda. (Also, see this.) That’s why I’m not ready to sacrifice economic growth and a good portion of humanity on the altar of global warming and other environmental fads.

Well, the “Little Ice Age” may return, soon:

[A] paper published today in Advances in Space Research predicts that if the current lull in solar activity “endures in the 21st century the Sun shall enter a Dalton-like grand minimum. It was a period of global cooling.” (Anthony Watts, “Study Predicts the Sun Is Headed for a Dalton-like Solar Minimum around 2050,” Watts Up With That?, December 2, 2013)

The Dalton Minimum, named after English astronomer John Dalton, lasted from 1790 to 1830.

Bring in your pets and plants, cover your pipes, and dress warmly.

Madison’s Fatal Error

Timothy Gordon writes:

After reading Montesquieu’s most important admonitions in Spirit of the Laws, Madison decided that he could outsmart him. The Montesquieuan admonitions were actually limitations on what a well-functioning republic could allow, and thus, be. And Madison got greedy, not wanting to abide by those limitations.

First, Montesquieu required republican governments to maintain limited geographic scale. Second, Montesquieu required republican governments to preside over a univocal people of one creed and one mind on most matters. A “res publica” is a public thing valued by each citizen, after all. “How could this work when a republic is peopled diversely?” the faithful Montesquieuan asks. (Nowadays in America, for example, half the public values liberty and the other half values equality, its eternal opposite.) Thirdly—and most important—Montesquieu mandated that the three branches of government were to hold three distinct, separate types of power, without overlap.

Before showing just how correct Montesquieu was—and thus, how incorrect Madison was—it must be articulated that in the great ratification contest of 1787-1788, there operated only one faithful band of Montesquieu devotees: the Antifederalists. They publicly pointed out how superficial and misleading were the Federalist appropriations of Montesquieu within the new Constitution and its partisan defenses.

The first two of these Montesquieuan admonitions went together logically: a) limiting a republic’s size to a small confederacy, b) populated by a people of one mind. In his third letter, Antifederalist Cato made the case best:

“whoever seriously considers the immense extent of territory within the limits of the United States, together with the variety of its climates, productions, and number of inhabitants in all; the dissimilitude of interest, morals, and policies, will receive it as an intuitive truth, that a consolidated republican form of government therein, can never form a perfect union.”

Then, to bulwark his claim, Cato goes on to quote two sacred sources of inestimable worth: the Bible… and Montesquieu. Attempting to fit so many creeds and beliefs into such a vast territory, Cato says, would be “like a house divided against itself.” That is, it would not be a res publica, oriented at sameness. Then Cato goes on: “It is natural, says Montesquieu, to a republic to have only a small territory, otherwise it cannot long subsist.”

The teaching Cato references is simple: big countries of diverse peoples cannot be governed locally, qua republics, but rather require a nerve center like Washington D.C. wherefrom all the decisions shall be made. The American Revolution, Cato reminded his contemporaries, was fought over the principle of local rule.

To be fair, Madison honestly—if wrongly—figured that he had dialed up the answer, such that the United States could be both vast and pluralistic, without the consequent troubles forecast by Montesquieu. He viewed the chief danger of this combination to lie in factionalization. One can either “remove the cause [of the problem] or control its effects,” Madison famously prescribed in “Federalist 10″.

The former solution (“remove the cause”) suggests the Montesquieuan way: i.e. remove the plurality of opinion and the vastness of geography. Keep American confederacies small and tightly knit. After all, victory in the War of Independence left the thirteen colonies thirteen small, separate countries, contrary to President Lincoln’s rhetoric four score later. Union, although one possible option, was not logically necessary.

But Madison opted for the latter solution (“control the effects”), viewing union as vitally indispensable and thus, Montesquieu’s teaching as regrettably dispensable: allow size, diversity, and the consequent factionalization. Do so, he suggested, by reducing them to nothing…with hyper-pluralism. Madison deserves credit: for all its oddity, the idea actually seemed to work… for a time. . . . (“James Madison’s Nonsense-Coup Against Montesqieu (and the Classics Too),” The Imaginative Conservative, December 2013)

The rot began with the advent of the Progressive Era in the late 1800s, and it became irreversible with the advent of the New Deal, in the 1930s. As I wrote here, Madison’s

fundamental error can be found in . . . Federalist No. 51. Madison was correct in this:

. . . It is of great importance in a republic not only to guard the society against the oppression of its rulers, but to guard one part of the society against the injustice of the other part. Different interests necessarily exist in different classes of citizens. If a majority be united by a common interest, the rights of the minority will be insecure. . . .

But Madison then made the error of assuming that, under a central government, liberty is guarded by a diversity of interests:

[One method] of providing against this evil [is] . . . by comprehending in the society so many separate descriptions of citizens as will render an unjust combination of a majority of the whole very improbable, if not impracticable. . . . [This] method will be exemplified in the federal republic of the United States. Whilst all authority in it will be derived from and dependent on the society, the society itself will be broken into so many parts, interests, and classes of citizens, that the rights of individuals, or of the minority, will be in little danger from interested combinations of the majority.

In a free government the security for civil rights must be the same as that for religious rights. It consists in the one case in the multiplicity of interests, and in the other in the multiplicity of sects. The degree of security in both cases will depend on the number of interests and sects; and this may be presumed to depend on the extent of country and number of people comprehended under the same government. This view of the subject must particularly recommend a proper federal system to all the sincere and considerate friends of republican government, since it shows that in exact proportion as the territory of the Union may be formed into more circumscribed Confederacies, or States oppressive combinations of a majority will be facilitated: the best security, under the republican forms, for the rights of every class of citizens, will be diminished: and consequently the stability and independence of some member of the government, the only other security, must be proportionately increased. . . .

In fact, as Montesquieu predicted, diversity — in the contemporary meaning of the word — is inimical to civil society and thus to ordered liberty. Exhibit A is a story by Michael Jonas about a study by Harvard political scientist Robert Putnam, “E Pluribus Unum: Diversity and Community in the Twenty-first Century”:

It has become increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.

But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings. . . .

. . . Putnam’s work adds to a growing body of research indicating that more diverse populations seem to extend themselves less on behalf of collective needs and goals.

His findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis in June in the journal Scandinavian Political Studies, he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

“Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk,” wrote conservative commentator Ilana Mercer, in a recent Orange County Register op-ed titled “Greater diversity equals more misery.”. . .

The results of his new study come from a survey Putnam directed among residents in 41 US communities, including Boston. Residents were sorted into the four principal categories used by the US Census: black, white, Hispanic, and Asian. They were asked how much they trusted their neighbors and those of each racial category, and questioned about a long list of civic attitudes and practices, including their views on local government, their involvement in community projects, and their friendships. What emerged in more diverse communities was a bleak picture of civic desolation, affecting everything from political engagement to the state of social ties. . . .

. . . In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.” “People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes. . . . (“The Downside of Diversity,” The Boston Globe, August 5, 2007)

See also my posts, “Liberty and Society,” “The Eclipse of ‘Old America’,” and “Genetic Kinship and Society.” And these: “Caste, Crime, and the Rise of Post-Yankee America” (Theden, November 12, 2013) and “The New Tax Collectors for the Welfare State,” (Handle’s Haus, November 13, 2013).

Libertarian Statism

Finally, I refer you to David Friedman’s “Libertarian Arguments for Income Redistribution” (Ideas, December 6, 2013). Friedman notes that “Matt Zwolinski has recently posted some possible arguments in favor of a guaranteed basic income or something similar.” Friedman then dissects Zwolinski’s arguments.

Been there, done that. See my posts, “Bleeding-Heart Libertarians = Left-Statists” and “Not Guilty of Libertarian Purism,” wherein I tackle the statism of Zwolinski and some of his co-bloggers at Bleeding Heart Libertarians. In the second-linked post, I say that

I was wrong to imply that BHLs [Bleeding Heart Libertarians] are connivers; they (or too many of them) are just arrogant in their judgments about “social justice” and naive when they presume that the state can enact it. It follows that (most) BHLs are not witting left-statists; they are (too often) just unwitting accomplices of left-statism.

Accordingly, if I were to re-title [“Bleeding-Heart Libertarians = Left-Statists”] I would call it “Bleeding-Heart Libertarians: Crypto-Statists or Dupes for Statism?”.

*     *     *

Other posts in this series: I, II, III, IV, V, VI, VII, VIII

Evolution and Race

UPDATED 11/24/13 AND 02/11/14

Have you read about Skull 5, a 1.8-million-year-old fossil? Well, it has been in the news lately. Here’s some of the coverage:

Scientists trying to unravel the origins of humanity mostly study scraps — some ancient teeth here, a damaged bone there. But now a lucky research team has uncovered a fossil superstar: the first complete skull of an early human adult from the distant past.

The 1.8-million-year-old fossil, known as Skull 5, is like nothing seen before. It has a small brain case and a heavy, jutting jaw, as did some of humanity’s older, more apelike ancestors. But other bones linked to Skull 5 show its owner had relatively short arms and long legs, as does our own species, Homo sapiens. Those who’ve studied Skull 5 say it also provides support for the provocative idea that, 1.8 million years ago, only one kind of early human held sway, rather than the throng of different species listed in today’s textbooks….

Paleoanthropologist Susan Antón of New York University, while praising the new analysis, says the Dmanisi team didn’t compare fossil features, such as the anatomy around the front teeth, that differ most starkly between two different species of early humans. So the Dmanisi team’s hypothesis that there was only one lineage is not totally convincing, she says… (Traci Watson, “Skull Discovery Sheds Light on Human Species,” USA Today, October 17, 2013)

Here’s more:

In the eastern European nation of Georgia, a group of researchers has excavated a 1.8 million-year-old skull of an ancient human relative, whose only name right now is Skull 5. They report their findings in the journal Science, and say it belongs to our genus, called Homo.

“This is most complete early Homo skull ever found in the world,” said lead study author David Lordkipanidze, researcher at the Georgian National Museum in Tbilisi….

The variation in physical features among the Dmanisi hominid specimens is comparable to the degree of diversity found in humans today, suggesting that they all belong to one species, Lordkipanidze said….

Now it gets more controversial: Lordkipanidze and colleagues also propose that these individuals are members of a single evolving Homo erectus species, examples of which have been found in Africa and Asia. The similarities between the new skull from Georgia and Homo erectus remains from Java, Indonesia, for example, may mean there was genetic “continuity across large geographic distances,” the study said.

What’s more, the researchers suggest that the fossil record of what have been considered different Homo species from this time period — such as Homo ergaster, Homo rudolfensis and Homo habilis — could actually be variations on a single species, Homo erectus. That defies the current understanding of how early human relatives should be classified….

The Dmanisi fossils are a great find, say anthropology researchers not involved with the excavation. But they’re not sold on the idea that this is the same Homo erectus from both Africa and Asia — or that individual Homo species from this time period are really all one species.

“The specimen is wonderful and an important contribution to the hominin record in a temporal period where there are woefully too few fossils,” said Lee Berger, paleoanthropologist at the University of the Witwatersrand in Johannesburg, in an e-mail.

But the suggestion that these fossils prove an evolving lineage of Homo erectus in Asia and Africa, Berger said, is “taking the available evidence too far.”

…He criticized the authors of the new study for not comparing the fossils at Dmanisi to A. sediba or to more recent fossils found in East Africa…. (Elizabeth Landau, “Skull Sparks Human Evolution Controversy,” CNN, October 19, 2013)

I will go further and say this: Even if 1.8 million years ago there was a single species from which today’s human beings are descended, today’s human beings don’t necessarily belong to a single species or sub-species.

In fact, some reputable scientists have advanced a theory that is consistent with racial divergence:

Gregory Cochran and Henry Harpending begin The 10,000 Year Explosion [link added] with a remark from the paleontologist Stephen J. Gould, who said that “there’s been no biological change in humans for 40,000 or 50,000 years.” They also cite the evolutionist Ernst Mayr, who agrees that “man’s evolution towards manness suddenly came to a halt” in the same epoch. Such claims capture the consensus in anthropology, too, which dates the emergence of “behaviorally modern humans” — beings who acted much more like us than like their predecessors — to about 45,000 years ago.

But is the timeline right? Did human evolution really stop? If not, our sense of who we are — and how we got this way — may be radically altered. Messrs. Cochran and Harpending, both scientists themselves, dismiss the standard view. Far from ending, they say, evolution has accelerated since humans left Africa 40,000 years ago and headed for Europe and Asia.

Evolution proceeds by changing the frequency of genetic variants, known as “alleles.” In the case of natural selection, alleles that enable their bearers to leave behind more offspring will become more common in the next generation. Messrs. Cochran and Harpending claim that the rate of change in the human genome has been increasing in recent millennia, to the point of turmoil. Literally hundreds or thousands of alleles, they say, are under selection, meaning that our social and physical environments are favoring them over other — usually older — alleles. These “new” variants are sweeping the globe and becoming more common.

But genomes don’t just speed up their evolution willy-nilly. So what happened, the authors ask, to keep human evolution going in the “recent” past? Two crucial events, they contend, had to do with food production. As humans learned the techniques of agriculture, they abandoned their diffuse hunter-gatherer ways and established cities and governments. The resulting population density made humans ripe for infectious diseases like smallpox and malaria. Alleles that helped protect against disease proved useful and won out.

The domestication of cattle for milk production also led to genetic change. Among people of northern European descent, lactose intolerance — the inability to digest milk in adulthood — is unusual today. But it was universal before a genetic mutation arose about 8,000 years ago that made lactose tolerance continue beyond childhood. Since you can get milk over and over from a cow, but can get meat from it only once, you can harvest a lot more calories over time for the same effort if you are lactose tolerant. Humans who had this attribute would have displaced those who didn’t, all else being equal. (If your opponent has guns and you don’t, drinking milk won’t save you.)

To make their case for evolution having continued longer than is usually claimed, Messrs. Cochran and Harpending remind us that dramatic changes in human culture appeared about 40,000 years ago, resulting in painting, sculpture, and better tools and weapons. A sudden change in the human genome, they suggest, made for more creative, inventive brains. But how could such a change come about? The authors propose that the humans of 40,000 years ago occasionally mated with Neanderthals living in Europe, before the Neanderthals became extinct. The result was an “introgression” of Neanderthal alleles into the human lineage. Some of those alleles may have improved brain function enough to give their bearers an advantage in the struggle for survival, thus becoming common.

In their final chapter, Messrs. Cochran and Harpending venture into recorded history by observing two interesting facts about Ashkenazi Jews (those who lived in Europe after leaving the Middle East): They are disproportionately found among intellectual high-achievers — Nobel Prize winners, world chess champions, people who score well on IQ tests — and they are victims of rare genetic diseases, like Gaucher’s and Tay-Sachs. The authors hypothesize that these two facts are connected by natural selection.

Just as sickle-cell anemia results from having two copies of an allele that protects you against malaria if you have just one, perhaps each Ashkenazi disease occurs when you have two copies of an allele that brings about something useful when you have just one. That useful thing, according to Messrs. Cochran and Harpending, is higher cognitive ability. They argue that the rare diseases are unfortunate side-effects of natural selection for intelligence, which Messrs. Cochran and Harpending think happened during the Middle Ages in Europe, when Jews rarely intermarried with other Europeans. (Christopher F. Chabris, “Last-Minute Changes,” The Wall Street Journal, February 12, 2009)

It is said that, despite the differences across races, all human beings have in common 96 percent of their genes. Well, if I told you that humans and chimpanzees have about the same percentage of their genes in common, would you consider chimpanzees to be nothing more than superficially different human beings who belong to the same sub-species? Just remember this: The “species problem” remains unsolved.

So what if human beings belong to a variety of different sub-species? A candid scientific admission of that fact would put an end to the nonsense that “we’re all the same under the skin.” We’re not, and it’s long past time to own up to it, and to quit using the power of the state to strive for a kind of equality that is unattainable.

UPDATE (11/24/13):

And there are some who prefer to be sub-human.

UPDATE (02/11/14):

Although there are out-and-out disbelievers and cautious skeptics, some recent research in a field known as epigenetics suggests that behavioral conditioning can yield heritable traits. If true, it means that evolution is shaped by cultural influences, thus reinforcing positive traits (e.g., hard work and law-abidingness) among those people who possess and inculcate such traits, while also reinforcing negative traits (e.g., violence, shiftlessness) among those people who possess and inculcate such traits.

*     *     *

Related reading:
Gregory Gorelik and Todd D. Shackleford, “A Review of Gregory Cochran and Henry Harpending, The 10,000 Year Explosion: How Civilization Accelerated Human Evolution,” Evolutionary Psychology, 2010. 8(1): 113-118
Carl Zimmer, “Christening the Earliest Members of Our Genus,” The New York Times, October 24, 2013

Related posts:
Race and Reason: The Derbyshire Debacle
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications

Do Managers Make a Difference?


The activity of managing ranges from the supervision of one other person in the performance of a menial task to the supervision of the executive branch of the government of the United States. (The latter is a fair description of a president’s constitutional responsibility.) And there are many criteria for judging managers, not all of which are unambiguous or conducive to precise quantification. It may be easy, for example, to determine whether a ditch was dug on time and within budget. But what if the manager’s methods alienated workers, causing some of them to quit when the job was done and requiring the company to recruit and train new workers at some expense?

Or consider the presidency. What determines whether an incumbent is doing a good job? Polls? They are mere opinions, mostly based on impressions and political preferences, not hard facts. The passage by Congress of legislation proposed by the president? By that measure, Obama earns points for the passage of the Affordable Care Act, which if not repealed will make health care less affordable and less available.

Given the impossibility of arriving at a general answer to the title question, I will turn — as is my wont — to the game of baseball. You might think that the plethora of baseball statistics would yield an unambiguous answer with respect to major-league managers. As you’ll see, that’s not so.


Data Source

According to this page, 680 different men have managed teams in the history of major-league baseball, which is considered to have begun in 1871 with the founding of the National Association. Instead of reaching that far back into the past, when the game was primitive by comparison with today’s game, I focus on men whose managing careers began in 1920 or later. It was 1920 that marked the beginning of the truly modern era of baseball, with its emphasis on power hitting. (This modern era actually consists of six sub-eras. See this and this.) In this modern era, which now spans 1920 through 2013, 399 different men have managed major-league teams. That is a sizable sample from which I had hoped to draw firm judgments about whether baseball managers, or some of them, make a difference.

Won-Lost Record

The “difference” in question is a manager’s effect — or lack thereof — on the success of his team, as measured by its won-lost (W-L) record. For the benefit of non-fans, W-L record, usually denoted W-L%, is determined by the following simple equation: W/(W + L), that is, games won divided by games won plus games lost. (The divisor isn’t number of games played because sometimes, though rarely, a baseball game is played to a tie.) Thus a team that wins 81 of its 162 games in a season has a W-L record of .500 for that season. (In baseball statistics, it is customary to omit the “0” before the decimal point, contrary to mathematical convention.)
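The W-L arithmetic described above can be sketched in a few lines of Python (a minimal illustration for non-fans, not code from the original post):

```python
def win_loss_record(wins: int, losses: int) -> float:
    """W-L record: games won divided by decisions (wins plus losses).
    Ties are excluded, which is why the divisor isn't games played."""
    return wins / (wins + losses)

# A team that wins 81 of its 162 games (with no ties) posts a .500 record.
print(f"{win_loss_record(81, 81):.3f}")  # 0.500
```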

Quantifying Effectiveness

I’m about to throw some numbers at you. But I must say more about the samples that I used in my analysis. The aggregate-level analysis described in the next section draws on the records of a subset of the 399 men whose managerial careers are encompassed in the 1920-2013 period. The subset consists of the 281 men who managed at least 162 games, which (perhaps not coincidentally) has been the number of games in a regulation season since the early 1960s. I truncated the sample where I did because the W-L records of managers with 162 or more games are statistically better (significance level of 0.05) than the W-L records of managers with fewer than 162 games. In other words, a manager who makes it through a full season is likely to have passed a basic test of management ability: not losing “too many” games. (I address this subjective assessment later in the post.)

Following the aggregate-level analysis, I turn to an individual-level analysis of the records of those managers who led a team for at least five consecutive seasons. (I allowed into the sample some managers whose fifth full season consisted of a partial season in year 1 and a partial season in year 6, as long as the number of games in the two partial seasons added to the number of games in a full season, or nearly so. I also included a few managers whose service with a particular team was broken by three years or less.) Some managers led more than one team for at least five consecutive seasons, and each such occurrence is counted separately. For reasons that will become evident, the five seasons had to begin no earlier than 1923 and end no later than 2010.  The sample size for this analysis is 63 management tours accomplished by 47 different managers.

Results and Inferences: Aggregate Level

“Just the facts” about the sub-sample of 281 managers:

Number of games managed vs W-L record

The exponential equation, though statistically significant, tells us that W-L record explains only about 21 percent of the variation in number of games managed, which spans 162 to 5,097.

Looking closer, I found that the 28 managers in the top decile of games managed (2,368 to 5,097) have a combined W-L record of .526. But their individual W-L records range from .477 to .615, and eight of the managers compiled a career W-L record below .500. Perhaps the losers did the best they could with the teams they had. Perhaps, but it’s also quite possible that the winners were blessed with teams that made them look good. In any event, the length of a manager’s career may have little to do with his effectiveness as a manager.

Which brings me to the next topic.

Results and Inferences: Individual Level

This view is more complicated.  As mentioned above, I focused on those 47 managers who on 63 separate occasions led their respective teams for at least five consecutive seasons (with minor variations). To get at each manager’s success (or failure) during each management tour, I compared his W-L record during a tour with the W-L record of the same team in the preceding and following three seasons.

My aim in choosing five years for the minimum span of a manager’s tenure with a team was to avoid judging a manager’s performance on the basis of an atypical year or two. My aim in looking three years back and three years ahead was to establish a baseline against which to compare the manager’s performance. I could have chosen other time spans, of course, but a plausible story ensues from the choices that I made.

First, here is a graphical view of the relationship between each of the 63 managerial stints and the respective before-and-after records of the teams involved:

Manager's W-L record vs. baseline

A clue to deciphering the graph: Look at the data point toward the upper-left corner labeled “Sewell SLB 41-46.” The label gives the manager’s last name (Sewell for Luke Sewell, in this case), the team he managed (SLB = St. Louis Browns), and the years of his tenure (1941-46). (In the table below, all names, teams, and dates are spelled out, for all 63 observations.) During Sewell’s tenure, the Browns’ W-L record was .134 points above the average of .378 attained by the Browns in 1938-40 and 1947-49. That’s an impressive performance, and it stands well above the 68-percent confidence interval. (Confidence intervals represent the range within which certain percentages of observations are expected to fall.)
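The before-and-after comparison described above reduces to simple arithmetic. Here is a sketch in Python; the .512 tenure record and the season-by-season .378s are placeholders consistent with the Sewell figures quoted in the text, not actual season splits:

```python
def delta_vs_baseline(tenure_wl, before_wl, after_wl):
    """Difference between a manager's W-L record during his tenure and the
    team's average W-L record in the surrounding seasons (three before,
    three after the tenure)."""
    surrounding = before_wl + after_wl
    baseline = sum(surrounding) / len(surrounding)
    return tenure_wl - baseline

# Luke Sewell's Browns tenure: roughly .512 in 1941-46 against a .378
# baseline for 1938-40 and 1947-49 (illustrative numbers).
print(round(delta_vs_baseline(0.512, [0.378] * 3, [0.378] * 3), 3))  # 0.134
```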

The linear fit (equation in lower-left corner) indicates a statistically significant negative relationship between the change in a team’s fortunes during a manager’s tenure and the team’s baseline performance. The negative relationship means that there is a strong tendency to “regress toward the mean,” that is, toward a record that is consistent with the quality of a team’s players. In other words, the negative relationship indicates that a team’s outstanding or abysmal record may owe nothing (or very little) to a manager’s efforts.

In fact, relatively few managers succeeded in leading their teams significantly far (up or down) from baseline performance. Those managers are indicated by green (good) and red (bad) in the preceding graph.

The following table gives a rank-ordering of all 47 managers in their 63 management stints. The color-coding indicates the standing of a particular performance with respect to the trend (green = above trend, red = below trend). The shading indicates the standing of a particular performance with respect to the confidence intervals: darkest shading = outside the 95-percent confidence interval; medium shading = between the 68-percent and 95-percent confidence intervals; lightest shading = within the 68-percent confidence interval.

Ranking of manager's performances

Of the 63 performances, 4 (6.3 percent) lie outside the 95-percent confidence interval; 13 (20.6 percent) fall between the 68-percent and 95-percent confidence intervals; the other 46 (73.0 percent) are in the middle, and statistically indistinguishable.

Billy Southworth’s tour as manager of the St. Louis Cardinals in 1940-45 (#1) stands alone above the 95-percent confidence interval. Two of Bucky Harris’s four stints rank near the bottom (#61 and #62) just above Ralph Houk’s truly abysmal performance as manager of the Detroit Tigers in 1974-78 (#63).

Southworth’s tenure with the Cardinals is of a piece with his career W-L record (.597), and with his above-average performance as manager of the Boston Braves in 1946-51 (# 18). Harris had a mixed career, as indicated by his overall W-L record of .493 and two above-average tours as manager (#22 and #26). Houk’s abysmal record with the Tigers was foretold by his below-average tour as manager of the Yankees, a broken tenure that spanned 1961-73 (#47).

Speaking of the Yankees, will the real Casey Stengel please stand up? Is he the “genius” with an above-average record as Yankees manager in 1949-60 (#13) or the “bum” with a dismal record as skipper of the Boston Bees/Braves in 1938-42 (#56)? (Stengel’s ludicrous three-and-a-half-year tour as manager of the hapless New York Mets of 1962-65 isn’t on the list because of its brevity. It should be noted, however, that the Mets improved gradually after Stengel’s departure, and won the World Series in 1969.)

Stengel is one of seven managers with a single-season performance below the 68-percent confidence interval. Four of the seven — Harris, Houk, Stengel, and Tom Kelly (late of the Minnesota Twins) — are among the top decile on the games-managed list. The top decile also includes seven managers who turned in performances that rank above the 68-percent confidence interval: Earl Weaver, Bobby Cox, Al Lopez, Joe Torre, Sparky Anderson, Joe McCarthy, and Charlie Grimm (#s 2-4 and 6-9).

I could go on and on about games managed vs. performance, but it boils down to this: If there were a strong correlation between the rank-order of managers’ performances in the preceding table and the number of games they managed in their careers, it would approach -1.00. (Minus because the best performance is ranked #1 and the worst is ranked #63.) But the correlation between rank and number of games managed in a career is only -0.196, a “very weak” correlation in the parlance of statistics.
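That correlation can be computed with nothing more than the Pearson formula. The sketch below uses made-up numbers (a perfectly ordered toy case, not the post’s 63 actual ranks and game counts) just to show the mechanics:

```python
def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# If better ranks (lower numbers) always went with longer careers,
# the correlation would be exactly -1.0:
ranks = [1, 2, 3, 4, 5]
games = [5000, 4000, 3000, 2000, 1000]
print(round(correlation(ranks, games), 3))  # -1.0
```

A value like -0.196 sits far from that ideal, which is the basis for calling the relationship “very weak.”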

In summary, when it comes to specific management stints, Southworth’s performance in 1940-45 was clearly superlative; the performances of Harris (1929-33, 1935-42) and Houk (1974-78) were clearly awful. In between those great and ghastly performances lie a baker’s dozen that probably merit cheers or Bronx cheers. A super-majority of the performances (the 73 percent in the middle) probably have little to do with management skills and a lot to do with other factors, to which I will come.

The Bottom Line

It’s safe to say that the number of games managed is, at best, a poor reflection of managerial ability. What this means is that (a) few managers exert a marked influence on the performance of their teams and (b) managers, for the most part, are dismissed or kept around for reasons other than their actual influence on performance. Both points are supported by the two preceding sections.

More tellingly, both points are consistent with the time-tested observation that “they” couldn’t fire the team, so “they” fired the manager.


The numbers confirm what I saw in 30 years of being managed and 22 (overlapping) years of managing: The selection of managers is at least as random as their influence on what they manage. This is true not only in baseball but wherever there are managers, that is, throughout the world of commerce (including its entertainment sectors), the academy, and government.

There is randomness for several reasons. First, there is the difficulty of specifying managerial objectives that are measurable and consistent. A manager’s basic task might be to attain a specific result (e.g., winning more games than the previous manager, winning at least a certain number of games, turning a loss into a profit). But a manager might also be expected to bring peace and harmony to a fractious workplace. And the manager might also be charged with maintaining a “diverse” workplace and avoiding charges of discrimination. Whatever the tasks, their specification is often arbitrary and, in large organizations, it is often impossible to relate them to an overarching organizational goal (e.g., attaining a profit target).

Who knows if it’s possible to win more games or turn a loss into a profit, given the competition, the quality of the workforce, etc.? Is a harmonious workplace more productive than a fractious one if a fractious one is a sign of productive competitiveness?  How does one square “diversity” and forbearance toward the failings of the “diverse” (to avoid discrimination charges), while also turning a profit?

Given the complexity of management, at which I’ve only hinted, and the difficulty of judging managers, even when their “output” is well-defined (e.g., W-L record), it’s unsurprising that the ranks of managers are riddled with the ineffective and the incompetent. And such traits are often tolerated and even rewarded (e.g., raise, promotion, contract extension). Why? Here are some of the reasons:

  • Unwillingness to admit that it was a mistake to hire or promote a manager
  • A manager’s likeability or popularity
  • A manager’s connections to higher-ups
  • The cost and difficulty of firing a manager (e.g., severance pay, contract termination clauses, possibility of discrimination charges)
  • Inertia — things seem to be going well enough, and no one has an idea of how well they should be going

The good news is that relatively few managers make a big difference. The bad news is that the big difference is just as likely to be negative as it is to be positive. And for the reasons listed above, abysmal managers will not be rooted out until they have done a lot of damage.

So, yes, some managers — though relatively few — make a difference. But that difference is likely to prove disastrous. Just look at the course of the United States over the past 80 years.

A Human Person


The ludicrous and (it seems) increasingly popular assertion that plants have rights should not distract us from the more serious issue of fetal rights. (My position on the issue can be found among these links.) Maverick Philosopher explains how abortion may be opposed for non-religious reasons:

It is often assumed that opposition to abortion can be based only on religious premises. This assumption is plainly false. To show that it is false, one need merely give an anti-abortion argument that does not invoke any religious tenet, for example:

1. Infanticide is morally wrong.
2. There is no morally relevant difference between abortion and infanticide.
3. Abortion is morally wrong.

Whether one accepts this argument or not, it clearly invokes no religious premise. It is therefore manifestly incorrect to say or imply that all opposition to abortion must be religiously-based. Theists and atheists alike could make use of the above argument.

MP then links to a piece by Nat Hentoff, an atheist and Leftist. Hentoff writes, apropos Barack Obama and abortion, that

I admire much of Obama’s record, including what he wrote in “The Audacity of Hope” about the Founders’ “rejection of all forms of absolute authority, whether the king, the theocrat, the general, the oligarch, the dictator, the majority … George Washington declined the crown because of this impulse.”

But on abortion, Obama is an extremist. He has opposed the Supreme Court decision that finally upheld the Partial-Birth Abortion Ban Act against that form of infanticide. Most startlingly, for a professed humanist, Obama — in the Illinois Senate — also voted against the Born Alive Infant Protection Act….

Furthermore, as “National Right to Life News” (April issue) included in its account of Obama’s actual votes on abortion, he “voted to kill a bill that would have required an abortionist to notify at least one parent before performing an abortion on a minor girl from another state.”

These are conspiracies — and that’s the word — by pro-abortion extremists to transport a minor girl across state lines from where she lives, unbeknownst to her parents. This assumes that a minor fully understands the consequences of that irredeemable act. As I was researching this presidential candidate’s views on the unilateral “choice” that takes another’s life, I heard on the radio what Obama said during a Johnstown, Pa., town hall meeting on March 29 as he was discussing the continuing dangers of exposure to HIV/AIDS infections:

“When it comes specifically to HIV/AIDS, the most important prevention is education, which should include — which should include abstinence education and teaching children, you know, that sex is not something casual. But it should also include — it should also include other, you know, information about contraception because, look, I’ve got two daughters, 9 years old and 6 years old. I am going to teach them first of all about values and morals.

“But if they make a mistake,” Obama continued, “I don’t want them punished with a baby.”

Among my children and grandchildren are two daughters and three granddaughters; and when I hear anyone, including a presidential candidate, equate having a baby as punishment, I realize with particular force the impact that the millions of legal abortions in this country have had on respect for human life.

And that’s the crux of the issue: respect for human life.

Thus I turn to Peter Lawler’s “A Human Person, Actually,” in which Lawler reviews Embryo: A Defense of Human Life, by Robert P. George and Christopher Tollefsen:

The embryo, George and Tollefsen argue, is a whole being, possessing the integrated capability to go through all the phases of human development. An embryo has what it takes to be a free, rational, deliberating, and choosing being; it is naturally fitted to develop into a being who can be an “uncaused cause,” a genuinely free agent. Some will object, of course, that the embryo is only potentially human. The more precise version of this objection is that the embryo is human—not a fish or a member of some other species—but not yet a person. A person, in this view, is conscious enough to be a free chooser right now. Rights don’t belong to members of our species but to persons, beings free enough from natural determination to be able to exercise their rights. How could someone have rights if he doesn’t even know that he has them?…

Is the embryo a “who”? It’s true enough that we usually don’t bond with embryos or grieve when they die. Doubtless, that’s partly because of our misperception of who or what an embryo is. But it’s also because we have no personal or loving contact with them. We tend to think of persons as beings with brains and hearts; an embryo has neither. But personal significance can’t be limited to those we happen to know and love ourselves; my powers of knowing and loving other persons are quite limited, and given to the distortions of prejudice. Whether an embryo is by nature a “who” can be determined only by philosophical reflection about what we really know.

The evidence that George and Tollefsen present suggests that there are only two non-arbitrary ways to consider when a “what” naturally becomes a “who.” Either the embryo is incapable of being anything but a “who”; from the moment he or she comes to be, he or she is a unique and particular being capable of exhibiting all the personal attributes associated with knowing, loving, and choosing. Or a human being doesn’t become a “who” until he or she actually acquires the gift of language and starts displaying distinctively personal qualities. Any point in between these two extremes—such as the point at which a fetus starts to look like a human animal or when the baby is removed from the mother’s womb—is perfectly arbitrary. From a purely rational or scientific view, the price of being unable to regard embryos as “whos” is being unable to regard newborn babies as “whos”….

As I say here,

abortion is of a piece with selective breeding and involuntary euthanasia, wherein the state fosters eugenic practices that aren’t far removed from those of the Third Reich. And when those practices become the norm, what and who will be next? Libertarians, of all people, should be alert to such possibilities. Instead of reflexively embracing “choice” they should be asking whether “choice” will end with fetuses.

Most libertarians, alas, mimic “liberals” and “progressives” on the issue of abortion. But there are no valid libertarian arguments for abortion, just wrong-headed ones.

Are You Happy?


Justin Wolfers (Freakonomics blog) has completed a series of six posts about the economics of happiness (here, here, here, here, here, and here). The bottom line, according to Wolfers:

1) Rich people are happier than poor people.
2) Richer countries are happier than poorer countries.
3) As countries get richer, they tend to get happier.

All of which should come as no surprise to anyone, without the benefit of “happiness research.” Regarding which, I agree with Arnold Kling, who says:

My view is that happiness research implies Nothing. Zero. Zilch. Nada. I believe that you do not learn about economic behavior by watching what people say in response to a survey.

You learn about economic behavior by watching what people actually do.

And…you consult your “priors.” It is axiomatic that individuals prefer more to less; that is, more income yields more satisfaction because it affords access to goods and services of greater variety and higher quality. Moreover, income and the wealth that flows from it are valued for their own sake by most individuals. (That they might be valued because they enable philanthropic endeavors is a case in point.)

It is reasonable to conclude, therefore, that the “law” of diminishing marginal utility, which may apply to particular goods and services, does not generally apply to income or wealth in the aggregate. But, in any event, given that Wolfers’s first conclusion is self-evidently true, the second and third conclusions follow. And they follow logically, not from “happiness research.”

“Ensuring America’s Freedom of Movement”: A Review

Ensuring America’s Freedom of Movement: A National Security Imperative to Reduce U.S. Oil Dependence was issued by CNA in October 2011. (CNA, in this case, is a not-for-profit analytical organization located in Alexandria, Virginia, and is not to be mistaken for the Chicago-based insurance and financial services company.) Ensuring America’s Freedom of Movement is a product of CNA’s Military Advisory Board (MAB), and is the fourth report issued by the MAB. Accordingly, I refer to it in the rest of this review as MAB4.

This review may be somewhat out of date in places, though not in its thrust. I began writing it almost two years ago, when Ensuring… was published. I have not been in a hurry to post this review because Ensuring… is an inconsequential bit of fluff and unlikely to influence policy. But post I must, because the existence of the MAB and MAB4 is an affront to the distinguished intellectual heritage claimed by CNA.

*     *     *

A critical reader — someone who is not seeking support for preconceived policy prescriptions — will be disappointed in MAB4. If there are valid arguments for government initiatives to foster the development and use of alternatives to oil, they do not leap out of the pages of MAB4.

The main point of MAB4 is to urge

government … action to promote the use of a more diverse mix of transportation fuels and to drive wider public acceptance of these alternatives. (p. xiv, emphasis added)

And on cue, a day after the issuance of Obama’s plan to combat “climate change,” the MAB released a statement that ends with this:

The CNA MAB supports the President’s plan to act now to address the worst effects of climate change and to improve our nations’ energy posture and competitive advantage in clean energy markets. The CNA MAB continues to identify the security implications of climate change and to protect and enhance our energy, climate and national security today and for our future generations. (June 26, 2013)

Despite token acknowledgement of the power of markets to do the job, the authors consistently invoke the power of government, in the name of “stability.”

There is much pointing-with-alarm at the instability caused by “dependence” on imported oil — with a focus on the Middle East. But the only “hard” estimate of the price of instability is a poorly documented, questionable estimate of the effects of a 30-day closure of the Strait of Hormuz on GDP and the output and employment of the U.S. trucking industry. Empirical estimates of the effects of sudden reductions in oil imports (oil shocks) are available, but the authors of MAB4 did not use them — or perhaps did not know about them.

It would have been instructive to compare the cumulative losses to GDP resulting from actual oil shocks with (a) the costs of maintaining forces in the Middle East to deter overtly hostile shocks (e.g., the closure of the Strait of Hormuz by Iran) and (b) the costs to taxpayers and consumers of government subsidies and edicts to promote the development and require the use of alternative energy sources. But no such comparison is offered, so the critical reader has no idea whether efforts to wean the U.S. from oil — especially imported oil — make economic sense.

Moreover, the authors of MAB4 reject the possibility of drawing down U.S. forces in the Middle East, for “strategic” reasons, which means that (in the authors’ view) taxpayers should continue to foot the bill for Middle East forces while coughing up additional sums to subsidize the development and use of alternative energy sources. I am all in favor of a forward strategy that is aimed at deterring and countering adventurism on the part of America’s enemies and potential enemies. It would be foolish in the extreme to allow our enemies and potential enemies to aggrandize their power by denying America’s access to a vital resource, such as oil. (Iran and China, I am looking at you.) It would be (and is) doubly foolish to throw good money after bad by also succumbing to the lobbying efforts of corn-growers, makers of solar panels, and kindred rent-seekers.

In sum, MAB4 is a piece of advocacy, not objective analysis. True believers in the wisdom and infallibility of government will rejoice in MAB4 and hope, pray, plead, and work for the adoption of its recommendations by the federal government. Critical readers will check their wallets and wonder at the naivete and presumptuousness of the 13 retired flag and general officers who constituted the MAB when CNA extruded MAB4.

*   *   *

I will elaborate on the preceding observations in the rest of this review, which has six main parts:

  • I. Background: CNA and the MAB — This part is for the benefit of those readers — almost all of you, I’m sure — who know nothing of CNA or its Military Advisory Board.
  • II. An Overview of MAB4 — This part outlines the organization of MAB4 and summarizes its findings and recommendations, which come into play throughout the review.
  • III. The Hidden Foundation of MAB4 — MAB4’s findings and recommendations rest on a foundation of hidden assumptions — biases, if you will. Part III articulates those biases.
  • IV. The Analytical Superstructure of MAB4 — This part focuses on the facts and logic of the substantive portions of MAB4, namely, Chapters 1 and 2. They are found wanting.
  • V. MAB4 vs. CNA’s Standards — CNA proclaims itself an organization that upholds a long tradition of high standards and objectivity. Are the MAB and MAB4 consistent with that tradition? Part V answers that question in the negative.
  • VI. Summary Assessment —  A final 534 words, for the benefit of readers who want to skip the gory details.

(The rest of this very long review is below the fold.)