“Settled Science” and the Monty Hall Problem

The so-called 97-percent consensus among climate scientists about anthropogenic global warming (AGW) isn’t evidence of anything but the fact that scientists are only human. Even if there were such a consensus, it certainly wouldn’t prove the inchoate theory of AGW, any more than the early consensus against Einstein’s special theory of relativity disproved that theory.

Actually, in the case of AGW, the so-called consensus is far from a consensus about the extent of warming, its causes, and its implications. (See, for example, this post and this one.) But it’s undeniable that a lot of climate scientists believe in a “strong” version of AGW, and in its supposedly dire consequences for humanity.

Why is that? Well, in a field as inchoate as climate science, it’s easy to let one’s prejudices drive one’s research agenda and findings, even if only subconsciously. And isn’t it more comfortable and financially rewarding to be with the crowd and where the money is than to stand athwart the conventional wisdom? (Lennart Bengtsson certainly found that to be the case.) Moreover, there was, in the temperature records of the late 20th century, a circumstantial case for AGW, which led to the development of theories and models that purport to describe a strong relationship between temperature and CO2. That the theories and models are deeply flawed and lacking in predictive value seems not to matter to the 97 percent (or whatever the number is).

In other words, a lot of climate scientists have abandoned the scientific method, which demands skepticism, in order to be on the “winning” side of the AGW issue. How did it come to be thought of as the “winning” side? Credit vocal so-called scientists who were and are (at least) guilty of making up models to fit their preconceptions, and ignoring evidence that human-generated CO2 is a minor determinant of atmospheric temperature. Credit influential non-scientists (e.g., Al Gore) and various branches of the federal government that have spread the gospel of AGW and bestowed grants on those who can furnish evidence of it. Above all, credit the media, which for the past two decades has pumped out volumes of biased, half-baked stories about AGW, in the service of the “liberal” agenda: greater control of the lives and livelihoods of Americans.

Does this mean that the scientists who are on the AGW bandwagon don’t believe in the correctness of AGW theory? I’m sure that most of them do believe in it — to some degree. They believe it at least to the same extent as a religious convert who zealously proclaims his new religion to prove (mainly to himself) his deep commitment to that religion.

What does all of this have to do with the Monty Hall problem? This:

Making progress in the sciences requires that we reach agreement about answers to questions, and then move on. Endless debate (think of global warming) is fruitless debate. In the Monty Hall case, this social process has actually worked quite well. A consensus has indeed been reached; the mathematical community at large has made up its mind and considers the matter settled. But consensus is not the same as unanimity, and dissenters should not be stifled. The fact is, when it comes to matters like Monty Hall, I’m not sufficiently skeptical. I know what answer I’m supposed to get, and I allow that to bias my thinking. It should be welcome news that a few others are willing to think for themselves and challenge the received doctrine. Even though they’re wrong. (Brian Hayes, “Monty Hall Redux” (a book review), American Scientist, September-October 2008)

The admirable part of Hayes’s statement is its candor: Hayes admits that he may have adopted the “consensus” answer because he wants to go with the crowd.

The dismaying part of Hayes’s statement is his smug admonition to accept “consensus” and move on. As it turns out, the “consensus” about the Monty Hall problem isn’t what it’s cracked up to be. A lot of very bright people have solved a tricky probability puzzle, but not the Monty Hall problem. (For the details, see my post, “The Compleat Monty Hall Problem.”)

And the “consensus” about AGW is very far from being the last word, despite the claims of true believers. (See, for example, the relatively short list of recent articles, posts, and presentations given at the end of this post.)

Going with the crowd isn’t the way to do science. It’s certainly not the way to ascertain the contribution of human-generated CO2 to atmospheric warming, or to determine whether the effects of any such warming are dire or beneficial. And it’s most certainly not the way to decide whether AGW theory implies the adoption of policies that would stifle economic growth and hamper the economic betterment of millions of Americans and billions of other human beings — most of whom would love to live as well as the poorest of Americans.

Given the dismal track record of global climate models, with their evident overstatement of the effects of CO2 on temperatures, there should be a lot of doubt as to the causes of rising temperatures in the last quarter of the 20th century, and as to the implications for government action. And even if it could be shown conclusively that human activity would cause temperatures to resume the rising trend of the late 1900s, several important questions remain:

  • To what extent would the temperature rise be harmful and to what extent would it be beneficial?
  • To what extent would mitigation of the harmful effects negate the beneficial effects?
  • What would be the costs of mitigation, and who would bear those costs, both directly and indirectly (e.g., the effects of slower economic growth on the poorer citizens of the world)?
  • If warming does resume gradually, as before, why should government dictate precipitous actions — and perhaps technologically dubious and economically damaging actions — instead of letting households and businesses adapt over time by taking advantage of new technologies that are unavailable today?

Those are not issues to be decided by scientists, politicians, and media outlets that have jumped on the AGW bandwagon because it represents a “consensus.” Those are issues to be decided by free, self-reliant, responsible persons acting cooperatively for their mutual benefit through the mechanism of free markets.

*     *     *

Recent Related Reading:
Roy Spencer, “95% of Climate Models Agree: The Observations Must Be Wrong,” Roy Spencer, Ph.D., February 7, 2014
Roy Spencer, “Top Ten Good Skeptical Arguments,” Roy Spencer, Ph.D., May 1, 2014
Ross McKitrick, “The ‘Pause’ in Global Warming: Climate Policy Implications,” presentation to the Friends of Science, May 13, 2014 (video here)
Patrick Brennan, “Abuse from Climate Scientists Forces One of Their Own to Resign from Skeptic Group after Week: ‘Reminds Me of McCarthy’,” National Review Online, May 14, 2014
Anthony Watts, “In Climate Science, the More Things Change, the More They Stay the Same,” Watts Up With That?, May 17, 2014
Christopher Monckton of Brenchley, “Pseudoscientists’ Eight Climate Claims Debunked,” Watts Up With That?, May 17, 2014
John Hinderaker, “Why Global Warming Alarmism Isn’t Science,” PowerLine, May 17, 2014
Tom Sheahan, “The Specialized Meaning of Words in the ‘Antarctic Ice Shelf Collapse’ and Other Climate Alarm Stories,” Watts Up With That?, May 21, 2014
Anthony Watts, “Unsettled Science: New Study Challenges the Consensus on CO2 Regulation — Modeled CO2 Projections Exaggerated,” Watts Up With That?, May 22, 2014
Daniel B. Botkin, “Written Testimony to the House Subcommittee on Science, Space, and Technology,” May 29, 2014

Related posts:
The Limits of Science
The Thing about Science
Debunking “Scientific Objectivity”
Modeling Is Not Science
The Left and Its Delusions
Demystifying Science
AGW: The Death Knell
Modern Liberalism as Wishful Thinking
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”

The Compleat Monty Hall Problem

Wherein your humble blogger gets to the bottom of the Monty Hall problem, sorts out the conflicting solutions, and declares that the standard solution is the right solution, but not to the Monty Hall problem as it’s usually posed.


The Monty Hall problem, first posed as a statistical puzzle in 1975, has been notorious since 1990, when Marilyn vos Savant wrote about it in Parade. Her solution to the problem, to which I will come, touched off a controversy that has yet to die down. But her solution is now widely accepted as the correct one; I refer to it here as the standard solution.

This is from the Wikipedia entry for the Monty Hall problem:

The Monty Hall problem is a brain teaser, in the form of a probability puzzle (Gruber, Krauss and others), loosely based on the American television game show Let’s Make a Deal and named after its original host, Monty Hall. The problem was originally posed in a letter by Steve Selvin to the American Statistician in 1975 (Selvin 1975a), (Selvin 1975b). It became famous as a question from a reader’s letter quoted in Marilyn vos Savant’s “Ask Marilyn” column in Parade magazine in 1990 (vos Savant 1990a):

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Here’s a complete statement of the problem:

1. A contestant sees three doors. Behind one of the doors is a valuable prize, which I’ll denote as $. Undesirable or worthless items are behind the other two doors; I’ll denote those items as x.

2. The contestant doesn’t know which door conceals $ and which doors conceal x.

3. The contestant chooses a door at random.

4. The host, who knows what’s behind each of the doors, opens one of the doors not chosen by the contestant.

5. The door chosen by the host may not conceal $; it must conceal an x. That is, the host always opens a door to reveal an x.

6. The host then asks the contestant if he wishes to stay with the door he chose initially (“stay”) or switch to the other unopened door (“switch”).

7. The contestant decides whether to stay or switch.

8. The host then opens the door finally chosen by the contestant.

9. If $ is revealed, the contestant wins; if x is revealed the contestant loses.

One solution (the standard solution) is to switch doors because there’s a 2/3 probability that $ is hidden behind the unopened door that the contestant didn’t choose initially. In vos Savant’s own words:

Yes; you [the contestant] should switch. The first [initially chosen] door has a 1/3 chance of winning, but the second [other unopened] door has a 2/3 chance.

The other solution (the alternative solution) is indifference. Those who propound this solution maintain that there’s an equal chance of finding $ behind either of the doors that remain unopened after the host has opened a door.

As it turns out, the standard solution doesn’t tell a contestant what to do in a particular game. But the standard solution does point to the right strategy for someone who plays or bets on a large number of games.

The alternative solution accurately captures the unpredictability of any particular game. But indifference is only a break-even strategy for a person who plays or bets on a large number of games.


The contestant may choose among three doors, and there are three possible ways of arranging the items behind the doors: $ x x; x $ x; and x x $. The result is nine possible ways in which a game may unfold:

Equally likely outcomes

Events 1, 5, and 9 each have two branches. But those branches don’t count as separate events. They’re simply subsets of the same event; when the contestant chooses a door that hides $, the host must choose between the two doors that hide x, but he can’t open both of them. And his choice doesn’t affect the outcome of the event.

It’s evident that switching would pay off with a win in 2/3 of the possible events, whereas staying with the original choice would pay off in only 1/3 of the possible events. The fractions 1/3 and 2/3 are usually referred to as probabilities: a 2/3 probability of winning $ by switching doors, as against a 1/3 probability of winning $ by staying with the initially chosen door.
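
The 1/3 and 2/3 tallies can be verified by brute force. Here’s a short Python sketch (my own illustration, not drawn from any cited source) that enumerates the nine equally likely combinations of prize placement and initial choice:

```python
from fractions import Fraction

def enumerate_games():
    """Tally wins for 'stay' and 'switch' over the nine equally likely
    (prize door, initial choice) pairs."""
    stay_wins = switch_wins = 0
    for prize in range(3):       # door hiding $
        for choice in range(3):  # contestant's initial pick
            if choice == prize:
                stay_wins += 1   # staying wins; switching loses
            else:
                # The host must open the lone remaining x-door,
                # so switching lands on the $.
                switch_wins += 1
    return Fraction(stay_wins, 9), Fraction(switch_wins, 9)

stay_p, switch_p = enumerate_games()
print(stay_p, switch_p)  # 1/3 2/3
```

Note that this enumeration yields long-run frequencies, not a prediction about any single game.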

Accordingly, proponents of the standard solution — who are now legion — advise the individual (theoretical) contestant to switch. The idea is that switching increases one’s chance (probability) of winning.


There are three problems with the standard solution:

1. It incorporates a subtle shift in perspective. The Monty Hall problem, as posed, asks what a contestant should do. The standard solution, on the other hand, represents the expected (long-run average) outcome of many events, that is, many plays of the game. For reasons I’ll come to, the outcome of a single game can’t be described by a probability.

2.  Lists of possibilities, such as those in the diagram above, fail to reflect the randomness inherent in real events.

3. Probabilities emerge from many repetitions of the kinds of events listed above. It is meaningless to ascribe a probability to a single event. In the case of the Monty Hall problem, many repetitions of the game will yield probabilities approximating those given in the standard solution, but the outcome of each repetition will be unpredictable. It is therefore meaningless to say that a contestant has a 2/3 chance of winning a game if he switches. A 2/3 chance of winning refers to the expected outcome of many repetitions, where the contestant chooses to switch every time. To put it baldly: How does a person win 2/3 of a game? He either wins or doesn’t win.

Regarding points 2 and 3, I turn to Probability, Statistics and Truth (second revised English edition, 1957), by Richard von Mises:

The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations. (p. 11)

*     *     *

In games of dice, the individual event is a single throw of the dice from the box and the attribute is the observation of the number of points shown by the dice. In the game of “heads or tails”, each toss of the coin is an individual event, and the side of the coin which is uppermost is the attribute. (p. 11)

*     *     *

We must now introduce a new term…. This term is “the collective”, and it denotes a sequence of uniform events or processes which differ by certain observable attributes…. All the throws of dice made in the course of a game [of many throws] form a collective wherein the attribute of the single event is the number of points thrown…. The definition of probability which we shall give is concerned with ‘the probability of encountering a single attribute [e.g., winning $ rather than x ] in a given collective [a series of attempts to win $ rather than x ]’. (pp. 11-12)

*     *     *

[A] collective is a mass phenomenon or a repetitive event, or, simply, a long sequence of observations for which there are sufficient reasons to believe that the relative frequency of the observed attribute would tend to a fixed limit if the observations were indefinitely continued. The limit will be called the probability of the attribute considered within the collective [emphasis in the original]. (p. 15)

*     *     *

The result of each calculation … is always … nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? It is important to note that this statement remains valid also if the calculated probability has one of the two extreme values 1 or 0 [emphasis added]. (p. 33)

To bring the point home, here are the results of 50 runs of the Monty Hall problem, where each result represents (i) a random initial choice between Door 1, Door 2, and Door 3; (ii) a random array of $, x, and x behind the three doors; (iii) the opening of a door (other than the one initially chosen) to reveal an x; and (iv) a decision, in every case, to switch from the initially chosen door to the other unopened door:

Results of 50 games

What’s relevant here isn’t the fraction of times that $ appears, which is 3/5 — slightly less than the theoretical value of 2/3. Just look at the utter randomness of the results. The first three outcomes yield the “expected” ratio of two wins to one loss, though in the real game show the two winners and one loser would have been different persons. The same goes for any sequence, even the final — highly “improbable” (i.e., random) — string of nine straight wins (which would have accrued to nine different contestants). And who knows what would have happened in games 51, 52, etc.

If a person wants to win 2/3 of the time, he must find a game show that allows him to continue playing the game until he has reached his goal. As I’ve found in my simulations, it could take as many as 10, 20, 70, or 300 games before the cumulative fraction of wins per game converges on 2/3.
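
To illustrate how slowly the cumulative win fraction settles near 2/3, here’s a simple simulation sketched in Python (my own illustration; the seed and game count are arbitrary choices):

```python
import random

def play_and_switch() -> bool:
    """Play one game in which the contestant always switches; True on a win."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in doors if d not in (pick, prize)])
    # Switch to the remaining unopened door.
    pick = next(d for d in doors if d not in (pick, opened))
    return pick == prize

def cumulative_win_fractions(n_games: int, seed: int = 0) -> list[float]:
    """Cumulative fraction of wins after each of n_games consecutive games."""
    random.seed(seed)
    wins, fracs = 0, []
    for g in range(1, n_games + 1):
        wins += play_and_switch()
        fracs.append(wins / g)
    return fracs

fracs = cumulative_win_fractions(10_000)
print(round(fracs[-1], 3))  # hovers near 0.667 only after many games
```

Plotting `fracs` shows long early excursions away from 2/3; the convergence is a long-run property of the collective, not a guarantee about any particular game.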

That’s what it means to win 2/3 of the time. It’s not possible to win a single game 2/3 of the time, which is the “logic” of the standard solution as it’s usually presented.


The alternative solution doesn’t offer a winning strategy. In this view of the Monty Hall problem, it doesn’t matter which unopened door a contestant chooses. In effect, the contestant is advised to flip a coin.

As discussed above, the outcome of any particular game is unpredictable, so a coin flip will do just as well as any other way of choosing a door. But randomly selecting an unopened door isn’t a good strategy for repeated plays of the game. Over the long run, random selection means winning about 1/2 of all games, as opposed to 2/3 for the “switch” strategy. (To see that the expected probability of winning through random selection approaches 1/2, return to the earlier diagram; there, you’ll see that $ occurs in 9/18 = 1/2 of the possible outcomes for “stay” and “switch” combined.)
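
The long-run difference between the strategies can be seen directly by simulation. This sketch (again my own, with arbitrary door labels and seed) plays many games under the “stay,” coin-flip, and “switch” rules:

```python
import random

def play(strategy: str) -> bool:
    """One game; strategy is 'stay', 'switch', or 'coin' (a random flip
    between the two unopened doors). Returns True on a win."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the pick nor the prize.
    opened = random.choice([d for d in doors if d not in (pick, prize)])
    other = next(d for d in doors if d not in (pick, opened))
    if strategy == "switch" or (strategy == "coin" and random.random() < 0.5):
        pick = other
    return pick == prize

random.seed(1)
n = 20_000
for strategy in ("stay", "coin", "switch"):
    wins = sum(play(strategy) for _ in range(n))
    print(strategy, round(wins / n, 2))  # roughly 1/3, 1/2, and 2/3
```

Over many repetitions the win fractions approach 1/3 for “stay,” 1/2 for the coin flip, and 2/3 for “switch” — which is the sense in which the coin flip is merely a break-even strategy.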

Proponents of the alternative solution overlook the importance of the host’s selection of a door to open. His choice isn’t random. Therein lies the secret of the standard solution — as a long-run strategy.


It’s commonly said by proponents of the standard solution that when the host opens a door, he gives away information that the contestant can use to increase his chance of winning that game. One nonsensical version of this explanation goes like this:

  • There’s a 2/3 probability that $ is behind one of the two doors not chosen initially by the contestant.
  • When the host opens a door to reveal x, that 2/3 “collapses” onto the other door that wasn’t chosen initially. (Ooh … a “collapsing” probability. How exotic. Just like Schrödinger’s cat.)

Of course, the host’s action gives away nothing in the context of a single game, the outcome of which is unpredictable. The host’s action does help in the long run, if you’re in a position to play or bet on a large number of games. Here’s how:

  • The contestant’s initial choice (IC) will be wrong 2/3 of the time. That is, in 2/3 of a large number of games, the $ will be behind one of the other two doors.
  • Because of the rules of the game, the host must open one of those other two doors (HC1 and HC2); he can’t open IC.
  • When IC hides an x (which happens 2/3 of the time), either HC1 or HC2 must conceal the $; the one that doesn’t conceal the $ conceals an x.
  • The rules require the host to open the door that conceals an x.
  • Therefore, about 2/3 of the time the $ will be behind HC1 or HC2, and in those cases it will always be behind the door (HC1 or HC2) that the host doesn’t open.
  • It follows that the contestant, by consistently switching from IC to the remaining unopened door (HC1 or HC2), will win the $ about 2/3 of the time.
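
The steps above can be checked mechanically: whenever the initial choice misses the $, the host’s hand is forced, and the lone remaining door must hide the $. A small Python check (my own illustration, not from the cited sources):

```python
def remaining_door(ic: int, prize: int) -> int:
    """Given an initial choice (ic) that misses the prize, return the door
    left unopened after the host's forced reveal."""
    doors = {0, 1, 2}
    # The host may open neither ic nor the prize door; when ic != prize,
    # exactly one door qualifies, so the host has no choice.
    host_options = doors - {ic, prize}
    assert len(host_options) == 1
    opened = host_options.pop()
    return (doors - {ic, opened}).pop()

# In every case where the initial choice is wrong, switching lands on the $:
for prize in range(3):
    for ic in range(3):
        if ic != prize:
            assert remaining_door(ic, prize) == prize
print("switching wins whenever the initial choice is wrong")
```

Since the initial choice is wrong 2/3 of the time, this forced-reveal logic is exactly why the “always switch” strategy wins about 2/3 of a long run of games.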

The host’s action transforms the probability — the long-run frequency — of choosing the winning door from 1/2 to 2/3. But it does so if and only if the player or bettor always switches from IC to HC1 or HC2 (whichever one remains unopened).

You can visualize the steps outlined above by looking at the earlier diagram of possible outcomes.

That’s all there is. There isn’t any more.

Election 2014: Food for Thought

Will the GOP make big gains in the House and Senate this year? It seems to be the conventional wisdom that big gains will be made. But I don’t think it’s going to be quite the cakewalk that many commentators — and too many Republicans — are expecting. Consider the following graph, which I’ll translate and discuss below:

Obama's daily approval ratings_26 May 2014
Derived from Rasmussen Reports, Daily Presidential Tracking Poll.

First, what do the three lines mean?

The blue line represents the number of likely voters approving Obama’s performance divided by number of likely voters disapproving Obama’s performance. A ratio of 1.00 indicates parity — equal sentiment for and against Obama. A ratio below 1.00 means that likely voters, on balance, disapprove of Obama’s performance.

The black line represents the number of voters strongly approving divided by the number of voters strongly disapproving Obama’s performance. The post-reelection bandwagon aside, Obama has been on the wrong side of this crucial ratio since June 29, 2009.

The red line represents the intensity of disapproval. It’s the ratio of strong disapproval to overall disapproval.

In the election of 2010, when the GOP gained 64 House seats and 6 Senate seats, the trends were strongly anti-Obama. His overall approval/disapproval ratio had hovered around 0.9 for months; his strong approval/disapproval ratio had hovered around 0.6 for months; and the intensity of disapproval had been rising for months.

In 2012, when the GOP lost 8 House seats and 2 Senate seats, Obama’s stock had been on the rise for 3 months. It’s true that the strong disapproval/overall disapproval ratio was rising, but I attribute that to a smaller denominator, that is, a shrinking pool of likely voters who disapproved.

Which brings us to 2014. What’s happening now? Obama’s overall approval/disapproval ratio is higher than it was before the 2010 election, which could be a bad sign for the GOP. But — praise be — Obama’s strong approval/disapproval ratio seems to be a bit lower than it was in the runup to the 2010 election. If that ratio climbs, the GOP will have a fight on its hands, unless the “enthusiasm gap” keeps a lot of Democrats home on November 4.

So, in my view, 2014 isn’t guaranteed to be another 2010. And another 2010 is what’s needed if the GOP is to control both the House and Senate. Sure, the GOP can’t come close to a veto-proof majority in the Senate (and probably not in the House, either). But with control of the Senate, the GOP could stymie Obama’s court nominees. And with control of both houses, the GOP would face less pressure to compromise on defense spending, entitlement spending, and immigration — to name three salient issues. A weakened Obama would have less leverage in any showdown over those and other issues.

But to control Congress, the GOP has to hold the House and make big gains in the Senate. And for that to happen, the GOP must win the battle of enthusiasm; that is, it must take full advantage of disenchantment with Obama and his failed policies: the disaster that is Obamacare, the failure to deal with the looming disaster in entitlement spending, the naive reliance on diplomacy to secure national interests, and the high-handed pursuit of a radical social, economic, and environmental agenda.

“The Science Is Settled”

Thales (c. 620 – c. 530 BC): The Earth rests on water.

Anaximenes (c. 540 – c. 475 BC): Everything is made of air.

Heraclitus (c. 540 – c. 450 BC): All is fire.

Empedocles (c. 493 – c. 435 BC): There are four elements: earth, air, fire, and water.

Democritus (c. 460 – c. 370 BC): Atoms (basic elements of nature) come in an infinite variety of shapes and sizes.

Aristotle (384 – 322 BC): Heavy objects must fall faster than light ones. The universe is a series of crystalline spheres that carry the sun, moon, planets, and stars around Earth.

Ptolemy (90 – 168 AD): Ditto the Earth-centric universe, with a mathematical description.

Copernicus (1473 – 1543): The planets revolve around the sun in perfectly circular orbits.

Brahe (1546 – 1601): The planets revolve around the sun, but the sun and moon revolve around Earth.

Kepler (1571 – 1630): The planets revolve around the sun in elliptical orbits, and their trajectory is governed by magnetism.

Newton (1642 – 1727): The course of the planets around the sun is determined by gravity, which is a force that acts at a distance. Light consists of corpuscles; ordinary matter is made of larger corpuscles. Space and time are absolute and uniform.

Rutherford (1871 – 1937), Bohr (1885 – 1962), and others: The atom has a center (nucleus), which consists of two kinds of particles, neutrons and protons.

Einstein (1879 – 1955): The universe is neither expanding nor shrinking.

That’s just a small fraction of the mistaken and incomplete theories that have held sway in the field of physics. There are many more such mistakes and lacunae in the other natural sciences: biology, chemistry, and earth science — each of which, like physics, has many branches. And in all branches there are many unresolved questions. For example, the Standard Model of particle physics, despite its complexity, is known to be incomplete. And it is thought (by some) to be unduly complex; that is, there may be a simpler underlying structure waiting to be discovered.

Given all of this, it is grossly presumptuous to claim that climate science is “settled” when the phenomena that it encompasses are so varied, complex, often poorly understood, and often given short shrift (e.g., the effects of solar radiation on the intensity of cosmic radiation reaching Earth, which affects low-level cloud formation, which affects atmospheric temperature and precipitation).

Anyone who says that climate science is “settled” is either ignorant, stupid, or freighted with a political agenda.

The Passing of Red-Brick Schoolhouses and a Way of Life

My home town once boasted fifteen schoolhouses that were built between the end of the Civil War and 1899. All but the high school were named for presidents of the United States: Adams, Buchanan, Fillmore, Harrison, Jackson, Jefferson, Madison, Monroe, Pierce, Polk, Taylor, Tyler, Van Buren, and Washington. Another of their ilk came along sometime between 1904 and 1915; Lincoln was its name.

With the Adams School counting for two presidents — the second and sixth — there was a school for every president through Lincoln. Why Lincoln came late is a mystery to me. Lincoln was revered by us Northerners, and his picture was displayed proudly next to Washington’s in schools and municipal offices. We even celebrated Lincoln’s Birthday as a holiday distinct from Washington’s Birthday (a.k.a. President’s Day).

More schools — some named for presidents — followed well into the 20th century, but only the fifteen that I’ve named were built in the style of the classic red-brick schoolhouse: two stories, a center hall with imposing staircase, tall windows, steep roof, and often a tower for the bell that the janitor rang to summon neighborhood children to school. (The Lincoln, as a latecomer, was L-shaped rather than boxy, but it was otherwise a classic red-brick schoolhouse, replete with a prominent bell tower.)

I attended three of the fifteen red-brick schoolhouses. My first was Polk School, where I began kindergarten two days after the formal surrender of Japan on September 2, 1945. (For the benefit of youngsters, that ceremony marked the official end of World War II.)

Here’s the Polk in its heyday:


Kindergarten convened in a ground-floor room at the back of the school, facing what seemed then like a large playground, with room for a softball field. The houses at the far end of the field would have been easy targets for adult players, but it would have been a rare feat for a student to hit one over the fence that separated the playground from the houses.

In those innocent days, students got to school and back home by walking. Here’s the route that I followed as a kindergartener:

Route to Polk School

A kindergartener walking several blocks between home and school, usually alone most of the way? Unheard of today, it seems. But in those days predation was unheard of. And, as a practical matter, most families had only one car, which the working (outside-the-home) parent (then known as the father and head-of-household) used on weekdays for travel to and from his job. Moreover, the exercise of walking as much as a mile each way was considered good for growing children — and it was.

The route between my home and Polk School was 0.6 mile in length, and it crossed one busy street. Along that street were designated crossing points, at which stood Safety Patrol Boys, usually 6th-graders, who ensured that students crossed only when it was safe to do so. They didn’t stand in the street and stop oncoming traffic; they simply judged when students could safely cross, and gave them the “green light” by blowing on a whistle. In the several years of my elementary-school career, I never saw or heard of a close call, let alone an injury or a fatality.

I began at Polk School because the school closest to my home, Madison School, didn’t have kindergarten. I went to Madison for 1st grade. It was a gloomy pile:


Madison was shuttered after my year there, so I returned to Polk for 2nd and 3rd grades. Madison stood empty for a few years, and was razed in the late 1940s or early 1950s. Polk was shuttered sometime in the 1950s, and eventually was razed after being used for many years as a school-district warehouse.

The former site of Madison School now hosts “affordable housing”:

Madison School site

There’s a public playground where Polk School stood:

Polk School site

I spent two more years — 4th and 5th grades — in another red-brick schoolhouse: Tyler School. It’s still there, though it hasn’t been used as a school for many decades. It looked like this in 2006, when it served as a halfway house:

Tyler School_2

It now stands empty and uncared for. It looked like this in 2013:

Tyler School 2013

The only other survivor among the fifteen red-brick schoolhouses is Monroe School, the present use of which I can’t ascertain. It seems to have been cared for, however. This image is from 2013:

Monroe School 2013

Tyler and Monroe Schools are ghosts from America’s past — a past that’s now seemingly irretrievable. It was a time of innocence, when America’s wars were fought to victory; when children could safely roam (large cities excepted, as always, from prevailing mores); when marriage was between man and woman, and usually for life; when deviant behavior was discouraged, not “celebrated”; when a high-school diploma and four-year degree meant something, and were worth something; when the state wasn’t the enemy of the church; when politics didn’t intrude into science; when people resorted to government in desperation, not out of habit; and when people had real friends, not Facebook “friends.”

Social Accounting: A Tool of Social Engineering

Steven Landsburg writes about social accounting here and here. In the first-linked post, Landsburg says:

Economic theory tells us that under quite general hypotheses, the private value of an activity is in synch with its social value. If growing an orange makes you a dollar richer, that’s because growing that orange makes the world a dollar richer. And that’s good, because it encourages people to grow all and only those oranges that are (socially) worth growing.

Here’s my version of the “general hypotheses”: People engage in voluntary exchange if it benefits them. The buyer of an orange is willing to pay the grower $1 for the orange because the benefit derived from the orange is worth (at least) $1 to the buyer. At the same time, the grower is willing to sell oranges for $1 apiece because he expects (at least) to cover his costs if he sells oranges at that price. (His costs include the interest that he could have earned had he put his money into, say, an equally risky corporate bond instead of land, trees, and equipment.)

Now comes the hard part, which Landsburg skips. Does growing an orange and selling it for $1 really make the world a dollar richer? The buyer of the orange is “richer” (i.e., better off) only to the extent that the enjoyment/satisfaction/utility he derives from the orange is greater than the enjoyment/satisfaction/utility that he would have derived from an alternative use of his dollar. The alternatives include giving away the dollar, buying something other than an orange (maybe something less expensive that yields the buyer as much or more enjoyment/satisfaction/utility), and saving the dollar, that is, making it available for investment in, say, an orange grove.

It may be convenient to add the dollar values of final transactions and call the resulting number GDP (or GWP, gross world product). But adding $1 to GDP doesn’t mean that the world (or the U.S.) is $1 richer for it, even in the scenario described by Landsburg. For one thing, there’s no common denominator for enjoyment/satisfaction/utility, which are personal matters. For a second thing, the marginal gain in enjoyment/satisfaction/utility — the difference between first-best (buying an orange for $1) and second-best (e.g., saving $1) — is also a personal matter without a common denominator. (What’s more, there are many scenarios in which the addition of $1 to GDP makes the world poorer; for example: government entices workers into government service by offering above-market compensation, and then has those workers produce economy-stultifying regulations.)

As for the essential meaninglessness of GDP as a measure of anything, I borrow from an old post of mine:

Consider A and B, who discover that, together, they can have more clothing and more food if each specializes: A in the manufacture of clothing, B in the production of food. Through voluntary exchange and bargaining, they find a jointly satisfactory balance of production and consumption. A makes enough clothing to cover himself adequately, to keep some clothing on hand for emergencies, and to trade the balance to B for food. B does likewise with food. Both balance their production and consumption decisions against other considerations (e.g., the desire for leisure).

A and B’s respective decisions and actions are microeconomic; the sum of their decisions, macroeconomic. The microeconomic picture might look like this:

  • A produces 10 units of clothing a week, 5 of which he trades to B for 5 units of food a week, 4 of which he uses each week, and 1 of which he saves for an emergency.
  • B, like A, uses 4 units of clothing each week and saves 1 for an emergency.
  • B produces 10 units of food a week, 5 of which she trades to A for 5 units of clothing a week, 4 of which she consumes each week, and 1 of which she saves for an emergency.
  • A, like B, consumes 4 units of food each week and saves 1 for an emergency.

Given the microeconomic picture, it is trivial to depict the macroeconomic situation:

  • Gross weekly output = 10 units of clothing and 10 units of food
  • Weekly consumption = 8 units of clothing and 8 units of food
  • Weekly saving = 2 units of clothing and 2 units of food
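Given the microeconomic picture, the macro tally is nothing more than addition across persons. A minimal sketch (in Python, purely for illustration, using the post’s own numbers) makes the point that the “macroeconomics” here is trivial bookkeeping:

```python
# Each person's weekly micro decisions, per the post's example:
# shares of production, consumption, and saving of clothing and food.
micro = {
    "A": {"produced": {"clothing": 10, "food": 0},
          "consumed": {"clothing": 4, "food": 4},
          "saved":    {"clothing": 1, "food": 1}},
    "B": {"produced": {"clothing": 0, "food": 10},
          "consumed": {"clothing": 4, "food": 4},
          "saved":    {"clothing": 1, "food": 1}},
}

def aggregate(measure):
    """Sum one measure (produced/consumed/saved) across all persons."""
    return {good: sum(person[measure][good] for person in micro.values())
            for good in ("clothing", "food")}

gross_output = aggregate("produced")   # 10 clothing, 10 food
consumption = aggregate("consumed")    # 8 clothing, 8 food
saving = aggregate("saved")            # 2 clothing, 2 food
```

The aggregation is mere summation of physical units; nothing in it captures the satisfaction A and B derive from what they produce, trade, and consume.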

You will note that the macroeconomic metrics add no useful information; they merely summarize the salient facts of A and B’s economic lives — though not the essential facts of their lives, which include (but are far from limited to) the degree of satisfaction that A and B derive from their consumption of food and clothing.

The customary way of getting around the aggregation problem is to sum the dollar values of microeconomic activity. But this simply masks the aggregation problem by assuming that it is possible to add the marginal valuations (i.e., prices) of disparate products and services being bought and sold at disparate moments in time by disparate individuals and firms for disparate purposes. One might as well add two bananas to two apples and call the result four bapples.

The essential problem is that A and B will derive different kinds and amounts of enjoyment from clothing and food, and that those different kinds and amounts of enjoyment cannot be summed in any meaningful way. If meaningful aggregation is impossible for A and B, how can it be possible for an economy that consists of millions of economic actors and an untold variety of goods and services? And how is it possible when technological change yields results such as this?

GDP, in other words, is nothing more than what it seems to be on the surface: an estimate of the dollar value of economic output. It is not a measure of “social welfare” because there is no such thing.

And yet, Landsburg (among many economists) seems to believe that it’s possible to measure “social welfare,” that is, to measure how much “richer” the world is because of voluntary exchange. (I wouldn’t think of accusing Landsburg or any other economist — Paul Krugman and Brad DeLong excepted — of equating government spending and “social welfare.”)

This isn’t a first for Landsburg. About four years ago he wrote this:

Suppose you live next door to Bill Gates. Bill likes to play loud music at night. You’re a light sleeper. Should he be forced to turn down the volume?

An efficiency analysis would begin, in principle (though it might not be so easy in practice) by asking how much Bill’s music is worth to him (let’s say we somehow know that the answer is $10,000) and how much your sleep is worth to you (let’s say $25). It is important to realize from the outset that no economist thinks those numbers in any way measure Bill’s subjective enjoyment of his music or your subjective annoyance. Only a crazy person would think such a thing, and I’ve never met anybody who’s that crazy in that particular way. Instead, these numbers primarily reflect the fact that Bill is a whole lot richer than you are. Nevertheless, the economist will surely declare it inefficient to take $10,000 worth of enjoyment from Bill in order to give you $25 worth of sleep. We call that a $9,975 deadweight loss.

Landsburg properly denies the commensurability of the two experiences, and then turns around and declares them commensurate. My comment, at the time:

The problem with this kind of thinking should be obvious to anyone with the sense God gave a goose. The value of Bill’s enjoyment of loud music and the value of “your” enjoyment of sleep, whatever they may be, are irrelevant because they are incommensurate. They are separate, variably subjective entities. Bill’s enjoyment (at a moment in time) is Bill’s enjoyment. “Your” enjoyment (at a moment in time) is your enjoyment. There is no way to add, subtract, divide, or multiply the value of those two separate, variably subjective things. Therefore, there is no such thing (in this context) as a deadweight loss because there is no such thing as “social welfare” — a summation of the state of individuals’ enjoyment (or utility, as some would have it).

Prices serve the useful purpose of helping individual persons and firms to move toward maximum utility and maximum profits. (I say “move toward” because the vagaries of life seldom accommodate the attainment of nirvana.) Prices do not — do not — enable the attainment of “efficiency,” that is, the maximization of “social welfare.” They cannot because there is no such thing.

Only a dedicated social engineer could believe that it’s possible to sum degrees of happiness across individuals, or claim that a public project is justified because the benefits (enjoyed by one set of persons) exceed the costs (imposed on a mostly different set of persons).

*     *     *

Related posts:
Socialist Calculation and the Turing Test
Income and Diminishing Marginal Utility
Greed, Cosmic Justice, and Social Welfare
Positive Rights and Cosmic Justice
Utilitarianism, ‘Liberalism,’ and Omniscience
Utilitarianism vs. Liberty
Accountants of the Soul
Rawls Meets Bentham
The Case of the Purblind Economist
Enough of ‘Social Welfare’
Macroeconomics and Microeconomics
Social Justice
Positive Liberty vs. Liberty
More Social Justice
Luck Egalitarianism and Moral Luck
Utilitarianism and Psychopathy

The Pretence of Knowledge

Friedrich Hayek, in his Nobel Prize lecture of 1974, “The Pretence of Knowledge,” observes that

the great and rapid advance of the physical sciences took place in fields where it proved that explanation and prediction could be based on laws which accounted for the observed phenomena as functions of comparatively few variables.

Hayek’s particular target was the scientism then (and still) rampant in economics. In particular, there was (and is) a quasi-religious belief in the power of central planning (e.g., regulation, “stimulus” spending, control of the money supply) to attain outcomes superior to those that free markets would yield.

But, as Hayek says in closing,

There is danger in the exuberant feeling of ever growing power which the advance of the physical sciences has engendered and which tempts man to try, “dizzy with success” … to subject not only our natural but also our human environment to the control of a human will. The recognition of the insuperable limits to his knowledge ought indeed to teach the student of society a lesson of humility which should guard him against becoming an accomplice in men’s fatal striving to control society – a striving which makes him not only a tyrant over his fellows, but which may well make him the destroyer of a civilization which no brain has designed but which has grown from the free efforts of millions of individuals.

I was reminded of Hayek’s observations by John Cochrane’s post, “Groundhog Day” (The Grumpy Economist, May 11, 2014), wherein Cochrane presents this graph:

The fed's forecasting models are broken

Cochrane adds:

Every serious forecast looked like this — Fed, yes, but also CBO, private forecasters, and the term structure of forward rates. Everyone has expected bounce-back growth and rise in interest rates to start next year, for the last 6 years. And every year it has not happened. Welcome to the slump. Every year, Sonny and Cher wake us up, and it’s still cold, and it’s still grey. But we keep expecting spring tomorrow.

Whether the corrosive effects of government microeconomic and regulatory policy, or a failure of those (unprintable adjectives) Republicans to just vote enough wasted-spending Keynesian stimulus, or a failure of the Fed to buy another $3 trillion of bonds, the question of the day really should be why we have this slump — which, let us be honest, no serious forecaster expected.

(I add the “serious forecaster” qualification on purpose. I don’t want to hear randomly mined quotes from bloviating prognosticators who got lucky once, and don’t offer a methodology or a track record for their forecasts.)

The Fed’s forecasting models are nothing more than sophisticated charlatanism — a term that Hayek applied to pseudo-scientific endeavors like macroeconomic modeling. Nor is charlatanism confined to economics and the other social “sciences.” It’s rampant in climate “science,” as Roy Spencer has shown. Consider, for example, this graph from Spencer’s post, “95% of Climate Models Agree: The Observations Must Be Wrong” (Roy Spencer, Ph.D., February 7, 2014):

95% of climate models agree_the observations must be wrong

Spencer has a lot more to say about the pseudo-scientific aspects of climate “science.” This example is from “Top Ten Good Skeptical Arguments” (May 1, 2014):

1) No Recent Warming. If global warming science is so “settled”, why did global warming stop over 15 years ago (in most temperature datasets), contrary to all “consensus” predictions?

2) Natural or Manmade? If we don’t know how much of the warming in the longer term (say last 50 years) is natural, then how can we know how much is manmade?

3) IPCC Politics and Beliefs. Why does it take a political body (the IPCC) to tell us what scientists “believe”? And when did scientists’ “beliefs” translate into proof? And when was scientific truth determined by a vote…especially when those allowed to vote are from the Global Warming Believers Party?

4) Climate Models Can’t Even Hindcast. How did climate modelers, who already knew the answer, still fail to explain the lack of a significant temperature rise over the last 30+ years? In other words, how do you botch a hindcast?

5) …But We Should Believe Model Forecasts? Why should we believe model predictions of the future, when they can’t even explain the past?

6) Modelers Lie About Their “Physics”. Why do modelers insist their models are based upon established physics, but then hide the fact that the strong warming their models produce is actually based upon very uncertain “fudge factor” tuning?

7) Is Warming Even Bad? Who decided that a small amount of warming is necessarily a bad thing?

8) Is CO2 Bad? How did carbon dioxide, necessary for life on Earth and only 4 parts in 10,000 of our atmosphere, get rebranded as some sort of dangerous gas?

9) Do We Look that Stupid? How do scientists expect to be taken seriously when their “theory” is supported by both floods AND droughts? Too much snow AND too little snow?

10) Selective Pseudo-Explanations. How can scientists claim that the Medieval Warm Period (which lasted hundreds of years), was just a regional fluke…yet claim the single-summer (2003) heat wave in Europe had global significance?

11) (Spinal Tap bonus) Just How Warm is it, Really? Why is it that every subsequent modification/adjustment to the global thermometer data leads to even more warming? What are the chances of that? Either a warmer-still present, or cooling down the past, both of which produce a greater warming trend over time. And none of the adjustments take out a gradual urban heat island (UHI) warming around thermometer sites, which likely exists at virtually all of them — because no one yet knows a good way to do that.

It is no coincidence that leftists believe in the efficacy of central planning and cling tenaciously to a belief in catastrophic anthropogenic global warming. The latter justifies the former, of course. And both beliefs exemplify the left’s penchant for magical thinking, about which I’ve written several times (e.g., here, here, here, here, and here).

Magical thinking is the pretense of knowledge in the nth degree. It conjures “knowledge” from ignorance and hope. And no one better exemplifies magical thinking than our hopey-changey president.

*     *     *

Related posts:
Modeling Is Not Science
The Left and Its Delusions
Economics: A Survey
AGW: The Death Knell
The Keynesian Multiplier: Phony Math
Modern Liberalism as Wishful Thinking

Flummoxed by Firefox 29?


I recently — and unhappily — updated to Firefox 29, which is yet another in a long string of software-engineer-friendly “upgrades” by the boys and girls at Mozilla. Now, I have to admit that Firefox, on the whole, is a more user-friendly browser than the several others that I’ve tried: Comodo Dragon, Google Chrome, Internet Explorer, Opera, and Safari — each of which has a serious-to-fatal flaw (e.g., vulnerable to malware, hard to customize, can’t open groups of tabs, can’t import bookmarks).

But being user-friendly is a relative thing, and Firefox seems bent on joining the ranks of its less-friendly peers. Firefox 29, for example, incorporates the page-reload button in the navigation bar (the place where a site’s URL appears). That’s neither a convenient nor intuitive place for the page-reload button. It’s true that one can reload a tab by right-clicking the tab and selecting “Reload Tab” from the pop-up menu. But it’s actually easier to point one’s mouse at a reload button that’s located in a fixed position that’s close to the tab strip — usually on the left.

Speaking of the tab strip, why have tabs if you can’t see them? I exaggerate, but just a bit. In the default mode of Firefox 29, tabs (other than the one that’s currently open) are almost invisible. Navigating from tab to tab involves a lot of squinting. The tabless look may be aesthetic, but it’s worse than useless.

I will say that other than the fixed position of the reload button — which is immovable, even after installing the Classic Theme Restorer add-on — Firefox 29 is more readily customizable than its predecessors. (One exception: It takes some Googling to learn how to put the tab strip back where it belongs, which is just above the page, not at the top of the screen.) But why “upgrade” Firefox to a “look” that many users will immediately try to customize to something more useful? Many (most?) Firefox users cut their teeth on earlier versions of Firefox, and they grew used to the “look” and “feel” of those earlier versions.

What’s wrong with that? Everything, apparently, if you’re a software engineer with fascistic tendencies. Consider this thread from the Firefox non-support forum:

Can I go back to Firefox 28. I have Firefox 29 now. I don’t like it.

4/29/14 6:36 AM

I want Firefox 28 back. How do I do that?


  • Top 10 Contributor
  • Moderator

198 solutions 1862 answers


Is there a particular reason you want to go back to 28? If this is about the new user interface looks then you can restore the way Firefox acted and looked with this add-on

OldRogue 0 solutions 4 answers

Why can’t someone answer the question asked? How do you download version 28 and revert to before 29. Classic Theme Restorer goes about 10% of the way to making FF useful again. Specifically, what needs to be remove from my Profile to get it back.


  • Top 10 Contributor
  • Moderator

198 solutions 1862 answers

Hi OldRogue,

1) We don’t link to old Firefox versions simply because of the latest version’s bug fixes/security patches, etc.
2) See #1 and We don’t HAVE to link to version 28. I’m pretty sure you’re capable of finding a little download link yourself.
3) You should create your own thread as you’re technically hijacking another person’s thread. Please create a new one at /questions/new

OldRogue 0 solutions 4 answers

Chosen Solution

Since the original question was Can I go back to Firefox 28, I don’t see how my response constitutes a hijacking. Everything else in your post is lawyer-speak.

Modified April 30, 2014 11:00:48 AM PDT by OldRogue


  • Top 10 Contributor
  • Moderator

198 solutions 1862 answers

I’m not going to argue with you and waste my time. I’m just going to say this:

  • From the Forum rules and guidelines: For support requests, do not re-use existing threads started by others, even if they are seemingly on the same subject.

I’m not a Mozilla developer or employee so my “lawyer-speak” is all my words. I don’t work for Mozilla in case you haven’t noticed.

Also, to the OP, I’ve already answered their question. They can find the download link on their own. Takes maybe 2 minutes to find it…literally. But just in case someone doesn’t want to take their time and look for it, here it is:

Thread closed as I’ve given the download link!

Modified April 30, 2014 11:22:34 AM PDT by Moses

He may be Moses the lawgiver — with a vengeance — but he’s not the Moses that you want in charge when you’re looking for the promised land of browserdom. What a jerk!

Moses’s protestations to the contrary notwithstanding, he was being legalistic and OldRogue wasn’t hijacking the thread. How “big” of Moses to finally answer the original question. If he’d done that in the first place, he wouldn’t have revealed himself as a first-class a**hole.

Anyway, there’s your answer. If Mozilla slips in a new version of Firefox while you’re not looking, install an earlier version. In fact, take your pick from all of the earlier versions at the Index of pub/mozilla.org/firefox/releases/. If you happen upon a page that leads you to the Index, you’ll probably see something like this: “Warning: Using old versions of Firefox poses a significant security risk.”

Yeah, well, thanks for the warning. But I keep my firewall turned on, and I have a good anti-malware program (Malwarebytes Anti-Malware), and you should, too. When a version of Firefox gets too old, it stops working properly, which is a good sign that you should upgrade to a newer version, though not the newest one.

One last, important thing. Don’t let Mozilla slip in a new version of Firefox while you’re not looking. Go to “Tools” in the menu bar of Firefox (which I display for ease of use, despite Mozilla’s attempt to hide it), select Options, select the “Update” tab, and then choose either “Check for updates, but let me choose when to install them” or “Never check for updates.” If you choose “Check for updates,” read about an update before you install it — look especially for information about the ability to customize the new version. Don’t rely on Mozilla’s pitch; look for reviews on sites that specialize in computing and internet matters (e.g., PCMag.com and C|Net). And look especially for independent reviews of the kind you can find with a search engine; the lone-wolf reviewer is more likely to be critical than the establishment press.

By the way, I tried Firefox 29 on a gamble, and lost. I then rolled back to Firefox 28, which I had already tweaked to my taste.

Happy browsing.

UPDATE (05/07/14)

A reader kindly pointed me to Pale Moon, a Mozilla-based browser that works like Firefox used to. I’m now using Pale Moon, and loving it. (If I encounter glitches, I’ll add updates to this post.)

Why would I (or anyone) want a browser that works just like Firefox, but isn’t Firefox? Well, here’s one reason: the ousting of Mozilla CEO Brendan Eich for having made a donation to the Proposition 8 campaign in California. (See my post, “Surrender? Hell No!” and the articles I link to at the end of the post.)

And what does that have to do with Pale Moon? This from Pale Moon’s FAQ (as of today):

Will Firefox and Pale Moon work together in the future?

Since Mozilla has obviously chosen to follow a different path at the management level, it doesn’t seem likely that Pale Moon and Firefox will ever see a unification or joining of forces….


Read between the lines.

Nor is Pale Moon a mere copy of Firefox. This is from the same FAQ entry:

[T]here have been and are growing conflicts of interests between Pale Moon and Firefox as far as the so-called UX (User eXperience) developments are concerned. This results in a different user interface approach in Pale Moon. For example, less stress is put on minimizing the size of UI elements or saving every pixel possible to benefit the content area – in this day and age of full HD monitors and laptops that seems to be very counter-intuitive. Australis is considered unacceptable, and will not be aimed for – quite the opposite.

In other words, Australis-based Firefox 29 is a step in the wrong direction if you care about users. (Right on!) And Pale Moon isn’t going in that direction. Indeed, when I say that Pale Moon works like Firefox used to, I mean that it works like Firefox 28, to which I had returned after uninstalling Firefox 29.

If you want to try Pale Moon, you can download it here. If you’re currently a Firefox user and want to import your Firefox profile to Pale Moon, select the “don’t import anything” option at the end of the installation. There’s a separate tool for importing Firefox profiles, which you can download here. I used the tool, and it worked perfectly.

Happier browsing.

UPDATE (05/08/14)

My transition from Firefox 28 to Pale Moon has been seamless, as they say. So seamless, in fact, that I’ve made Pale Moon my default browser and unpinned Firefox from my Windows task bar. At this point I can’t see a reason to return to Firefox.

My next step will be to switch from Mozilla Thunderbird to Pale Moon’s FossaMail.

UPDATE (05/12/14)

I am now using FossaMail. After some unsuccessful attempts to copy my Thunderbird profile into FossaMail, I found a migration tool that works perfectly. During the migration, you might get a message saying that a script is taking longer than expected to run. If you do, select “continue” and let it run; it won’t take much longer for the tool to finish the job.

When you’re alerted that migration is complete, FossaMail may not respond immediately. The migration seems to continue in the background. Wait a few minutes, then try to open FossaMail. If your experience is like mine, when FossaMail opens it will contain an exact duplicate of your Thunderbird folders and messages.

Bye-bye, Mozilla.

A Guide to the Pronunciation of General American English

This post, originally published as “Phonetic Spelling: A Modest Proposal,” is drastically different from the original. I am indebted to commenter Jim Hlavac for his criticisms, which are reflected in this version of the post. In the course of revising the post, I made extensive changes to the pronunciation key. As before, comments about this work in progress are welcome.

When you’re in doubt about how to pronounce a word in American English, you may consult a source that relies on the International Phonetic Alphabet (IPA). The IPA is cumbersome, to say the least. It requires one to distinguish among dozens of tiny symbols, and then decode them by going to a rather busy page full of symbols and their translations.

Standard phonetic symbols for American English — the symbols we were supposed to learn in high school — aren’t much better. For example, go to The Free Dictionary and look up phonetic → fə-nĕt′ĭk. Not only is the schwa (ə, an “uh” sound) incorrect (in my view), but to grasp the pronunciation of the word, you must still turn to a separate pronunciation key.

A proper guide to the pronunciation of American English should enable anyone who speaks or understands General American to grasp the proper (or generally accepted) pronunciation of a word simply by looking at a phonetic spelling that consists entirely of letters (e.g., word → werd, refuel → re-few-uhl). And the relationship between the phonetic spellings and the sounds that they represent should be intuitively obvious — again, if you speak or understand General American.
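The letters-only idea can be illustrated with a hypothetical lookup sketch (Python chosen arbitrarily). It uses only the two example spellings given above; the post’s full key covers 50 sounds, and the `respell` function and its entries here are illustrative assumptions, not part of the guide itself:

```python
# Illustrative only: a tiny lookup table built from the post's two
# example spellings. The real key maps 50 sounds, not whole words.
phonetic = {
    "word": "werd",
    "refuel": "re-few-uhl",
}

def respell(word: str) -> str:
    """Return the letters-only phonetic spelling, if the word is in the table."""
    return phonetic.get(word.lower(), "(not in key)")

print(respell("word"))    # werd
print(respell("Refuel"))  # re-few-uhl
```

The point of the scheme is that the output requires no separate decoding table: anyone who speaks General American can read “werd” aloud correctly at sight.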

I emphasize General American (GA) for two reasons. First, GA — in the guise of “television English” — is heard across the country, not just in the Midwest and areas where similar accents dominate (e.g., the Great Plains and West Coast). Second, because most Americans understand “television English” (even if many of them don’t speak it), a guide that is keyed to GA should be useful to (almost) everyone.

In the rest of this post, I propose and demonstrate the application of a pronunciation guide that is based on GA, as I understand it. The guide comprises 50 sounds, which is five more sounds than are given in a standard guide (e.g., here). I’ve added several sounds that the standard guide merges with dissimilar sounds. And I’ve merged (or dropped) a few sounds that the standard guide mistakenly or unnecessarily lists as separate sounds.

In any event, the guide that I propose consists entirely of letters of the alphabet. It is therefore more accessible than guides that rely heavily on symbols.

Here is the key, which for ease of use omits syllabic emphasis:

Phonetic pronunciation key

As noted, the key omits syllabic emphasis. In the following spellings of the 100 most commonly spoken words in English, I indicate emphasis with CAPS:

Phonetic spellings of 100 most common words

And here are ten words chosen from a list of 100 elegant words:

Phonetic spellings of 10 elegant words

The Obama Effect: Disguised Unemployment


By the measure of real unemployment, the Great Recession is still with us. Nor is it likely to end anytime soon, given the anti-business and anti-growth policies and rhetoric of the Obama administration.

Officially, the unemployment rate stands at 4.6 percent, as of November 2016. Unofficially — but in reality — the unemployment rate stands 6.6 percentage points higher at 11.2 percent. While the official unemployment rate has dropped by 5.4 percentage points from its peak in 2009, the real unemployment rate has dropped by only 2.3 percentage points since then.

No amount of “stimulus” or “quantitative easing” will create jobs when employers and entrepreneurs are loath to take the risk of expanding and starting businesses, given Obama’s penchant for regulating against success and taxing it when it is achieved. The job-killing effects of Obamacare will only worsen the situation. And, of course, taxing “the rich” is a sure way to hamper economic growth by stifling productive effort, innovation, and investment.

How can I say that the real unemployment rate is 6.6 percentage points above the official rate? Easily. Just follow this trail of definitions, provided by the official purveyor of unemployment statistics, the Bureau of Labor Statistics:

Unemployed persons (Current Population Survey)
Persons aged 16 years and older who had no employment during the reference week, were available for work, except for temporary illness, and had made specific efforts to find employment sometime during the 4-week period ending with the reference week. Persons who were waiting to be recalled to a job from which they had been laid off need not have been looking for work to be classified as unemployed.

Unemployment rate
The unemployment rate represents the number unemployed as a percent of the labor force.

Labor force (Current Population Survey)
The labor force includes all persons classified as employed or unemployed in accordance with the definitions contained in this glossary.

Labor force participation rate
The labor force as a percent of the civilian noninstitutional population.

Civilian noninstitutional population (Current Population Survey)
Included are persons 16 years of age and older residing in the 50 States and the District of Columbia who are not inmates of institutions (for example, penal and mental facilities, homes for the aged), and who are not on active duty in the Armed Forces.

In short, if you are 16 years of age and older, not confined to an institution or on active duty in the armed forces, but have not recently made specific efforts to find employment, you are not (officially) a member of the labor force. And if you are not (officially) a member of the labor force because you have given up looking for work, you are not (officially) unemployed — according to the BLS. Of course, you are really unemployed, but your unemployment is well disguised by the BLS’s contorted definition of unemployment.

What has happened is this: Since the first four months of 2000, when the labor-force participation rate peaked at 67.3 percent, it has declined to 62.7 percent:

Source: See next graph.

Why the decline, which came to a halt during G.W. Bush’s second term but resumed in late 2008? The slowdown of 2000 (coincident with the bursting of the dot-com bubble) and the shock of 9/11 can account for the decline from 2000 to 2004, as workers chose to withdraw from the labor force when faced with dimmer employment prospects. But what about the sharper decline that began near the end of Bush’s second term?

There we see not only the demoralizing effects of the Great Recession but also the lure of incentives to refrain from work, namely, extended unemployment benefits, the relaxation of welfare rules, the aggressive distribution of food stamps, and “free” healthcare for an expanded Medicaid enrollment base and 20-somethings who live in their parents’ basements.* Need I add that both the prolongation of the Great Recession and the enticements to refrain from work are Obama’s doing? (That’s on the supply side. On the demand side, of course, there are the phony and even negative effects of “stimulus” spending, the chilling effects of regime uncertainty, which has persisted beyond the official end of the Great Recession, and the expansion of government spending.)

If the labor-force participation rate had remained at its peak of 67.3 percent, so that the disguised unemployed were no longer disguised, the official unemployment rate would have reached 13.5 percent in December 2009, as against the nominal peak of 10 percent in October 2009. Further, instead of declining to the phony rate of 4.6 percent in November 2016, the official unemployment rate would have stayed almost constant — hovering between 11 percent and 13.5 percent.
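The adjustment just described can be sketched as follows. The figures below are hypothetical round numbers chosen for illustration (the actual derivation uses the BLS series cited beneath the graph); the point is the method: hold participation at its 67.3-percent peak, count the resulting extra labor-force members as unemployed, and recompute the rate.

```python
# A sketch of the adjustment, with hypothetical figures (thousands of persons).
peak_participation = 0.673     # Jan.-Apr. 2000 peak participation rate
population = 250_000           # civilian noninstitutional population, age 16+
actual_labor_force = 156_750   # implies a participation rate of 62.7 percent
official_unemployed = 7_000

# The labor force as it would be if participation had stayed at its peak.
adjusted_labor_force = peak_participation * population

# Workers who dropped out of the labor force are counted as unemployed.
hidden_unemployed = adjusted_labor_force - actual_labor_force
adjusted_unemployed = official_unemployed + hidden_unemployed

official_rate = 100 * official_unemployed / actual_labor_force
adjusted_rate = 100 * adjusted_unemployed / adjusted_labor_force

print(f"Official rate: {official_rate:.1f}%")  # 4.5% with these figures
print(f"Adjusted rate: {adjusted_rate:.1f}%")  # 11.0% with these figures
```

Even with these made-up inputs, a roughly 4.5-point shortfall in participation translates a 4.5-percent official rate into an adjusted rate of about 11 percent, consistent in magnitude with the disparity described above.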

The growing disparity between the real and nominal unemployment rates is evident in this graph:

Derived from Series LNS12000000, Seasonally Adjusted Employment Level; Series LNS11000000, Seasonally Adjusted Civilian Labor Force Level; and Series LNS11300000, Seasonally Adjusted Civilian labor force participation rate. All are available at BLS, Labor Force Statistics from the Current Population Survey.

* Contrary to some speculation, the labor-force participation rate is not declining because older workers are retiring earlier. The participation rate among workers 55 and older rose steadily from 1994 to 2014. The decline is concentrated among workers under the age of 55, and especially workers in the 16-24 age bracket. (See this table at BLS.gov.) Why? My conjecture: The Great Recession caused a shakeout of marginal (low-skill) workers, many of whom simply dropped out of the labor market. And it became easier for them to drop out because, under Obamacare, many of them became eligible for Medicaid and many others enjoy prolonged coverage (until age 26) under their parents’ health plans.


*     *     *

Related reading:

Randall Holcombe, “Long-Term Unemployment Benefits Expire; Long-Term Unemployment Falls,” Mises Economics Blog, September 10, 2014

Arnold Kling, “The State of the Economy,” askblog, October 12, 2014

Stephen Moore, “Why Are So Many Employers Unable to Fill Jobs?” The Daily Signal, April 6, 2015

Related posts: See the list here.
