The Obama Effect: Disguised Unemployment

Updated here.

Two takeaways:

  • The “official” unemployment rate of 5.6 percent is phony. The real rate is 12 percent, just 1.5 points below the 21st century high-water mark of 13.5 (reached in 2009, 2010, 2011, and 2013).
  • The real unemployment rate is disguised by the decline in the labor-force participation rate, which has accelerated since the onset of Obamanomics. The decline is concentrated among younger workers, and has probably been helped along by Obamacare. (See the final paragraph of the post.)


Sober Reflections on “Charlie Hebdo”

Some of the rabid dogs who brutally murdered 12 persons at the offices of Charlie Hebdo were put down, as they should have been.

But I’m not joining the hysterical cult of “Je suis Charlie.” Why not? I begin with Peter (the good) Hitchens, who writes:

Once again we are ruled by a Dictatorship of Grief. Ever since the death of Princess Diana, we have been subject to these periodic spasms when everyone is supposed to think and say the same thing, or else.

We were told on Friday that ‘politicians from all sides’ had lined up to attack Ukip’s Nigel Farage for supposedly ‘exploiting’ the Paris massacre.

Mr Farage had (quite reasonably) pointed out that the presence of Islamist fanatics in our midst might have something to do with, a) uncontrolled mass migration from the Muslim world, and b) decades of multicultural refusal to integrate them into our laws and customs.

Rather than disputing this with facts and logic (admittedly this would be hard), the three ‘mainstream’ parties joined in screeching condemnation….

The Liberal Democrat Nick Clegg said Mr Farage was ‘making political points’ on the ‘back of bloody murders’.

Well, who wasn’t? A sanctimonious unanimity descended on politics and the media. ‘Je suis Charlie,’ everyone said. It was an issue of liberty, we all said. They can’t silence us, stop us drawing cartoons, etc etc etc.

Great mountains of adjectives piled up on every corner, much like those hills of flowers and teddy bears we like to place at the scenes of tragedies.

You can feel the presence of the snarling conformist mob, waiting for some dissenter on whom they can fall, kicking and biting. So-called social media, in fact an intolerant and largely brainless electronic mob, has made this much worse since the sad death of the Princess.

Speaking of intolerance, that’s the name of Charlie‘s game. It’s a stridently left-wing rag that mocks religion (of all kinds), and anything else deemed too “respectable” for the adolescent tastes of its staff.

What’s most striking about the “Je suis Charlie” movement is its pure hypocrisy. Back to Hitchens:

As for freedom, here’s an interesting thing. The French Leftist newspaper Liberation reported on September 12, 1996, that three stalwarts of Charlie Hebdo (including Stephane ‘Charb’ Charbonnier) had campaigned in their magazine to collect more than 170,000 signatures for a petition calling for a ban on the French National Front party. They did this in the name of the ‘Rights of Man’.

You, like me, may dislike the National Front greatly. But lovers of liberty simply do not seek to ban parties they do not like.

This is a double paradox. The French National Front exists mainly because a perfectly reasonable concern about mass immigration was sneeringly dismissed by the mainstream French parties. Something similar is happening in Germany, where large demonstrations against ‘the Islamisation of the West’ in many cities have been scornfully attacked by that country’s elite.

Yes, the left gets up in arms when some of its members are slaughtered by Muslim pigs (I love that phrase). But this is the same, hypocritical left that condones and promotes censorship. Clarice Feldman nails it:

Count me in the camp with Matthew Continetti, who gives countless examples of liberal hypocrisy about free speech including the following examples:

Do liberals actually believe in the right to offend? Their attitude seems to me to be ambivalent at best. And this equivocation was apparent within hours of the attack, when news outlets censored or refused to publish the images for which the Charlie Hebdo editors were killed. Classifying satire or opinion as “hate speech” subject to regulation is not an aberration. It is commonplace.

Indeed, the outpouring of support for free speech in the aftermath of the Paris attack coincides with, and partially obscures, the degradation of speech rights in the West. Commencement last year was marked by universities revoking appearances by speakers Condoleezza Rice and Ayaan Hirsi Ali for no other reason than that mobs disagreed with the speakers’ points of view. I do not recall liberals rallying behind Condi and Hirsi Ali then.

He adds to the mix of examples, Brendan Eich’s opposition to gay marriage costing him his job, the Chicago Sun Times’ removal of a Kevin D. Williamson article critical of transgender activism, Brandeis University’s unremitting assaults on a student for publicizing another student’s cheering the assassination of police officers, blaming an obscure video for the violent attack in Benghazi. Worse yet, there’s the political and academic efforts to shut off free speech which might offend someone (someone, I observe, who usually just happens to hold the views prevailing among the left-wing professors and administrators)….

Obama can’t even bring himself to speak plainly about the savages whose deeds have sparked millions to rally in the name of Charlie. Scott Johnson is on the case:

President Obama performed the obligatory characterization of the terrorist attack on Charlie Hebdo in Paris last week as “cowardly” and “evil.” “Evil” it certainly was. “Cowardly” would probably be an adjective more appropriate to the Obama administration’s characterization of Islamist terrorism as “violent extremism,” though “stupid” certainly shouldn’t be overlooked either.

President Obama and his administration refuse to identify the ideology that inspires our enemy. They continue to yammer incessantly about “extremism” and “extremists.” Islam is not to be mentioned, unless it is to be appeased and defended. Obama is himself something of an extremist on the subject.

Of course he is, as Clarice Feldman reminds us:

[H]ere’s what Obama said in 2012 after the slaughter in Benghazi: “A crude and disgusting video sparked outrage throughout the Muslim world.  Now, I have made it clear that the United States government had nothing to do with this video, and I believe its message must be rejected by all who respect our common humanity. It is an insult not only to Muslims, but to America as well — for as the city outside these walls makes clear, we are a country that has welcomed people of every race and every faith. We are home to Muslims who worship across our country. We not only respect the freedom of religion, we have laws that protect individuals from being harmed because of how they look or what they believe. We understand why people take offense to this video because millions of our citizens are among them…. The future must not belong to those who slander the prophet of Islam.”

(What does it mean, this phrase: “The future must not belong to those who slander the prophet of Islam”, if not an incitement to attack targets like Hebdo?)

The future must not belong to Islam or its apologists on the left. It must not belong to those who are afraid to speak the truth about the aims of Islam. It must not belong to anti-Semites.

The slaughter at Charlie Hebdo is not a reason for solidarity with the left, but a reason to oppose the left and its clients — especially (but not exclusively) the murderous adherents of Islam.

Solidarity with the left suborns what Victor Davis Hanson rightly calls multicultural suicide:

Multiculturalism is one of those buzzwords that does not mean what it should. The ancient and generic Western study of many cultures is not multiculturalism. Rather, the trendy term promotes non-Western cultures to a status equal with or superior to Western culture largely to fulfill contemporary political agendas….

…In terms of the challenge of radical Islam, multiculturalism manifests itself in the abstract with the notion that Islamists are simply the fundamentalist counterparts to any other religion. Islamic extremists are no different from Christian extremists, as the isolated examples of David Koresh or the Rev. Jim Jones are cited ad nauseam as the morally and numerically equivalent bookends to thousands of radical Islamic terrorist acts that plague the world each month. We are not to assess other religions by any absolute standard, given that such judgmentalism would inevitably be prejudiced by endemic Western privilege….

Most of the millions who today paid lip-service to liberty in the streets of Paris and other cosmopolitan capitals will tomorrow resume their war against liberty, in the name of multiculturalism and other manifestations of political correctness.

*     *     *

Related reading, in addition to the articles and posts quoted and linked above:
John Ransom, “In the Clash of Intolerants, I’m Not Charlie,” Townhall.com, January 10, 2015
Selwyn Duke, “Je ne suis pas Charlie (I’m Sane),” American Thinker, January 12, 2015
Takimag, “The Week That Perished” (first entry), January 12, 2015
Mark Steyn, “Where’s the Lead in the Pencil?,” SteynOnline, January 14, 2015
Jaci Greggs, “Meet the Hypocrites Who Did Attend the Paris Unity Rally,” The Federalist, January 15, 2015
Andrew Napolitano, “What Freedom of Speech?,” The Unz Review, January 15, 2015
Theden, “#JeSuisUsefulIdiot: Western Leaders Exploit the Paris Attacks,” January 25, 2015

*     *     *

Related post: Riots, Culture, and the Final Showdown


Not-So-Random Thoughts (XII)

Links to the other posts in this occasional series may be found at “Favorite Posts,” just below the list of topics.

*     *     *

“Intolerance as Illiberalism” by Kim R. Holmes (The Public Discourse, June 18, 2014) is yet another of innumerable reminders that modern “liberalism” is a most intolerant creed. See my ironically titled “Tolerance on the Left” and its many links.

*     *     *

Speaking of intolerance, it’s hard to top a strident atheist like Richard Dawkins. See John Gray’s “The Closed Mind of Richard Dawkins” (The New Republic, October 2, 2014). Among the several posts in which I challenge the facile atheism of Dawkins and his ilk are “Further Thoughts about Metaphysical Cosmology” and “Scientism, Evolution, and the Meaning of Life.”

*     *     *

Some atheists — Dawkins among them — find a justification for their non-belief in evolution. On that topic, Gertrude Himmelfarb writes:

The fallacy in the ethics of evolution is the equation of the “struggle for existence” with the “survival of the fittest,” and the assumption that “the fittest” is identical with “the best.” But that struggle may favor the worst rather than the best. [“Evolution and Ethics, Revisited,” The New Atlantis, Spring 2014]

As I say in “Some Thoughts about Evolution,”

Survival and reproduction depend on many traits. A particular trait, considered in isolation, may seem to be helpful to the survival and reproduction of a group. But that trait may not be among the particular collection of traits that is most conducive to the group’s survival and reproduction. If that is the case, the trait will become less prevalent. Alternatively, if the trait is an essential member of the collection that is conducive to survival and reproduction, it will survive. But its survival depends on the other traits. The fact that X is a “good trait” does not, in itself, ensure the proliferation of X. And X will become less prevalent if other traits become more important to survival and reproduction.

The same goes for “bad” traits. Evolution is no guarantor of ethical goodness.

*     *     *

It shouldn’t be necessary to remind anyone that men and women are different. But it is. Lewis Wolpert gives it another try in “Yes, It’s Official, Men Are from Mars and Women from Venus, and Here’s the Science to Prove It” (The Telegraph, September 14, 2014). One of my posts on the subject is “The Harmful Myth of Inherent Equality.” I’m talking about general tendencies, of course, not iron-clad rules about “men’s roles” and “women’s roles.” Aside from procreation, I can’t readily name “roles” that fall exclusively to men or women out of biological necessity. There’s no biological reason, for example, that an especially strong and agile woman can’t be a combat soldier. But it is folly to lower the bar just so that more women can qualify as combat soldiers. The same goes for intellectual occupations. Women shouldn’t be discouraged from pursuing graduate degrees and professional careers in math, engineering, and the hard sciences, but the qualifications for entry and advancement in those fields shouldn’t be watered down just for the sake of increasing the representation of women.

*     *     *

Edward Feser, writing in “Nudge Nudge, Wink Wink” at his eponymous blog (October 24, 2014), notes

[Michael] Levin’s claim … that liberal policies cannot, given our cultural circumstances, be neutral concerning homosexuality.  They will inevitably “send a message” of approval rather than mere neutrality or indifference.

Feser then quotes Levin:

[L]egislation “legalizing homosexuality” cannot be neutral because passing it would have an inexpungeable speech-act dimension.  Society cannot grant unaccustomed rights and privileges to homosexuals while remaining neutral about the value of homosexuality.

Levin, who wrote that 30 years ago, gets a 10 out of 10 for prescience. Just read “Abortion, ‘Gay Rights’, and Liberty” for a taste of the illiberalism that accompanies “liberal” causes like same-sex “marriage.”

*     *     *

“Liberalism” has evolved into hard-leftism. Its main adherents are now an elite upper crust and their clients among the hoi polloi. Steve Sailer writes incisively about the socioeconomic divide in “A New Caste Society” (Taki’s Magazine, October 8, 2014). “‘Wading’ into Race, Culture, and IQ” offers a collection of links to related posts and articles.

*     *     *

One of the upper crust’s recent initiatives is so-called libertarian paternalism. Steven Teles skewers it thoroughly in “Nudge or Shove?” (The American Interest, December 10, 2014), a review of Cass Sunstein’s Why Nudge? The Politics of Libertarian Paternalism. I have written numerous times about Sunstein and (faux) libertarian paternalism. The most recent entry, “The Sunstein Effect Is Alive and Well in the White House,” ends with links to two dozen related posts. (See also Don Boudreaux, “Where Nudging Leads,” Cafe Hayek, January 24, 2015.)

*     *     *

Maria Konnikova gives some space to Jonathan Haidt in “Is Social Psychology Biased against Republicans?” (The New Yorker, October 30, 2014). It’s no secret that most academic disciplines other than math and the hard sciences are biased against Republicans, conservatives, libertarians, free markets, and liberty. I have something to say about it in “The Pseudo-Libertarian Temperament,” and in several of the posts listed here.

*     *     *

Keith E. Stanovich makes some good points about the limitations of intelligence in “Rational and Irrational Thought: The Thinking that IQ Tests Miss” (Scientific American, January 1, 2015). Stanovich writes:

The idea that IQ tests do not measure all the key human faculties is not new; critics of intelligence tests have been making that point for years. Robert J. Sternberg of Cornell University and Howard Gardner of Harvard talk about practical intelligence, creative intelligence, interpersonal intelligence, bodily-kinesthetic intelligence, and the like. Yet appending the word “intelligence” to all these other mental, physical and social entities promotes the very assumption the critics want to attack. If you inflate the concept of intelligence, you will inflate its close associates as well. And after 100 years of testing, it is a simple historical fact that the closest associate of the term “intelligence” is “the IQ test part of intelligence.”

I make a similar point in “Intelligence as a Dirty Word,” though I don’t denigrate IQ, which is a rather reliable predictor of performance in a broad range of endeavors.

*     *     *

Bryan Caplan, whose pseudo-libertarianism rankles, tries to defend the concept of altruism in “The Evidence of Altruism” (EconLog, December 30, 2014). Caplan aids his case by using the loaded “selfishness” where he means “self-interest.” He also ignores empathy, which is a key ingredient of the Golden Rule. As for my view of altruism (as a concept), see “Egoism and Altruism.”

Steroids in Baseball: A Counterproductive Side Show or an Offensive Boon?

The widespread use of steroids and other performance-enhancing drugs (PEDs) in recent decades probably led to an increase in extra-base hits. But, paradoxically, the emphasis on power hitting may have led to a decrease in run production. Or maybe not.

I begin with a statistic that I call Slugging+ (abbreviated as SLG+). It measures total bases on hits and walks per plate appearance:

(1) SLG+ = [1B + 2(2B) + 3(3B) + 4(HR) + BB]/PA

where,
1B = singles
2B = doubles
3B = triples
HR = home runs
BB = bases-on-balls (walks)
PA = plate appearances

(I prefer SLG+ to OPS — a popular statistic that combines on-base-average and slugging percentage. OPS is the sum of two fractions with different denominators, PA and AB (at-bats), and it double-counts hits, which are in the numerator of both fractions.)
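For readers who want to check the arithmetic, here is a minimal Python sketch of equation (1). The helper name and the season totals plugged in are hypothetical, for illustration only, not actual league figures.

```python
def slg_plus(singles, doubles, triples, hr, bb, pa):
    """SLG+ per equation (1): total bases on hits, plus walks, per plate appearance."""
    return (singles + 2 * doubles + 3 * triples + 4 * hr + bb) / pa

# Hypothetical season totals, for illustration only (not actual MLB figures).
season = dict(singles=28000, doubles=8200, triples=900, hr=4600, bb=15000, pa=184000)
print(f"SLG+ = {slg_plus(**season):.3f}")
```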

The following graph of SLG+ is suggestive:

[Graph: SLG+, 1901-2014]

The values of SLG+ after 1993 — and especially for the years 1994-2009 — boost the regression line markedly upward.

A similar picture emerges in the next graph, which focuses on the years after 1968, when offensive statistics reached a nadir:

[Graph: SLG+ since 1968]

SLG+ values for 1994-2009 lie above the trend line.

On the evidence of the two preceding graphs I designate 1994-2009 as the era of PEDs. (I know that PEDs didn’t come into use in 1994 and disappear from the scene after 2009. It’s just that their effects are most obvious during 1994-2009.)

So much for the effect of PEDs on power hitting. What about the effect of PEDs on run production? You might expect that the unsurpassed surge in power hitting during the PEDs era resulted in an unsurpassed surge in run production. But it didn’t:

[Graph: MLB team runs per game]

Why did run production in the PEDs era fall short of run production in 1921-1942, the original “lively ball” era? Here’s my hypothesis: The frequency of home runs was on the rise during the original “lively ball” era. But the game in that era was still strongly influenced by the dynamic style of play of the preceding “dead ball” era. Scoring in the lively-ball era still depended heavily on slapping out base hits, taking the extra base on an outfield hit, and the hit-and-run. Those practices dwindled in later years, when scoring became more a matter of waiting for sluggers to deliver home runs, when they weren’t drawing walks or striking out. Some of the differences are evident in this graph:

[Graph: Selected MLB statistics, 1901-2014]

It turns out that the emphasis on power hitting (especially home-run hitting) may be counterproductive. The relationship between runs per game (R) and other significant variables for the period 1921-2014 looks like this:

(2) R = – 0.220 + 18.7(BA) + 0.721(2B) + 1.26(HR) – 0.160(SO) – 0.0540(BatAge)

where,
BA = batting average
2B = doubles per game
HR = home runs per game
SO = strikeouts per game
BatAge = average age of batters

Applying (2) to the actual range of values for each variable, I get:

[Table: Effect of variables on run production, 1921-2014]

“Max” and “min” are the maximum and minimum values for 1921-2014. “Diff” is the difference between the maximum and minimum values. “R diff” represents the number of runs accounted for by “Diff,” based on equation (2). “Pct. of avg. R” is “R diff” as a percentage of the average number of runs per game during 1921-2014.
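Here is a rough Python sketch of how such a table can be generated from equation (2). The coefficients are taken from the equation; the max/min values and the average runs per game below are placeholders, not the actual 1921-2014 figures shown in the table above.

```python
# Coefficients from equation (2): runs per game as a function of per-game statistics.
COEF = {"BA": 18.7, "2B": 0.721, "HR": 1.26, "SO": -0.160, "BatAge": -0.0540}

# Placeholder max/min values for 1921-2014 (illustrative only; see the table above).
RANGES = {"BA": (0.237, 0.296), "2B": (1.2, 2.0), "HR": (0.4, 1.2),
          "SO": (2.8, 7.7), "BatAge": (26.0, 30.0)}

AVG_RUNS = 4.5  # placeholder average runs per game, 1921-2014

for var, (lo, hi) in RANGES.items():
    r_diff = COEF[var] * (hi - lo)   # runs accounted for by the max-min spread
    pct = 100 * r_diff / AVG_RUNS    # as a percentage of average runs per game
    print(f"{var:>6}: diff = {hi - lo:5.3f}, R diff = {r_diff:5.2f}, pct of avg R = {pct:5.1f}%")
```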

Team statistics for 2014 yield a somewhat more detailed but similar result:

(3) R = – 0.153 + 16.8(BA) + 0.00269(2B) + 0.00675(3B) + 0.00488(HR) – 0.000428(SO) + 0.00198(BB) – 0.00408(GDP) – 0.00314(SH)

where,
BA = team batting average
2B = doubles hit by team
3B = triples hit by team
HR = home runs hit by team
SO = team strikeouts
BB = team walks
GDP = number of times a team grounds into double plays
SH = number of times a team executes a sacrifice hit (a bunt that advances a base runner)

Applying (3) to the actual range of values for each variable, I get:

[Table: Effect of variables on run production, 2014]

For a third look, I analyzed the offensive records of the 560 players with at least 3,000 plate appearances whose careers started no sooner than 1946 and ended no later than 1993. I computed, for each player, a measure of his career run-scoring potential (R*):

(4) R* = [1B + 2(2B) + 3(3B) + 4(HR) + BB + HBP + SH + SF – GDP + SB – CS]/PA

where,
1B = singles
2B = doubles
3B = triples
HR = home runs
BB = bases on balls
HBP = hit by pitcher
SH = sacrifice hits
SF = sacrifice flies
GDP = grounded into double plays
SB = stolen bases
CS = caught stealing
PA = plate appearances

This regression equation explains R*:

(5) R* = 0.0521 + 0.796(1B) + 1.794(2B) + 3.29(3B) + 3.68(HR) + 0.998(BB) – 0.0450(SO) + 1.18(SB – CS)

(The explanatory variables are career totals divided by total number of plate appearances. The equation has an r-squared of 0.985, with extremely significant F- and p-values.)
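As a side note on mechanics: one common way to derive elasticities from a linear fit like (5) is to scale each coefficient by the ratio of that variable’s sample mean to the mean of the dependent variable. The sketch below uses that convention, which may differ in detail from the method used here; the coefficients come from equation (5), but the sample means and the mean of R* are placeholders, not the actual values for the 560 players.

```python
# Coefficients from equation (5); each explanatory variable is a career total per PA.
COEF = {"1B": 0.796, "2B": 1.794, "3B": 3.29, "HR": 3.68,
        "BB": 0.998, "SO": -0.0450, "SB-CS": 1.18}

# Placeholder sample means across the 560 players (illustrative only).
MEAN_X = {"1B": 0.155, "2B": 0.042, "3B": 0.006, "HR": 0.025,
          "BB": 0.085, "SO": 0.130, "SB-CS": 0.008}
MEAN_R_STAR = 0.46  # placeholder mean of R*

for var, b in COEF.items():
    elasticity = b * MEAN_X[var] / MEAN_R_STAR  # % change in R* per 1% change in the variable
    print(f"{var:>6}: elasticity = {elasticity:6.3f}")
```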

I derived the following table of elasticities from (5):

[Table: Elasticities of variables]

Elasticity measures the responsiveness of R* to a change in the value of each variable. Thus, for example, a 1-percent increase in 1B/PA would cause R* to increase by 0.135 percent, and so on. The elasticities suggest that singles hitters generate more scoring opportunities than home-run hitters, on the whole. Case closed?

Not at all. Look at this table of cross-correlations:

[Table: Cross-correlations]

Even though there’s a strong, positive correlation between HR/PA and SO/PA, the elasticity on SO/PA is relatively small. Further, the elasticity on BB/PA is relatively high, and BB/PA is strongly and negatively correlated with 1B/PA — and less strongly but positively correlated with HR/PA. This leads me to suspect that the elasticities on 1B/PA and HR/PA overstate the contributions of singles hitters and understate the contributions of home-run hitters.
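A quick way to check for this kind of entanglement is to compute the correlation matrix of the per-PA rates directly, as in the sketch below. The rates shown are hypothetical, for illustration only, not drawn from the 560-player sample.

```python
import pandas as pd

# Hypothetical per-plate-appearance rates for a handful of players (illustration only).
players = pd.DataFrame({
    "1B/PA": [0.16, 0.14, 0.15, 0.13, 0.17, 0.12],
    "HR/PA": [0.02, 0.05, 0.03, 0.06, 0.01, 0.07],
    "BB/PA": [0.07, 0.10, 0.08, 0.11, 0.06, 0.12],
    "SO/PA": [0.10, 0.18, 0.13, 0.20, 0.09, 0.22],
})

# Pairwise Pearson correlations among the explanatory rates.
print(players.corr().round(2))
```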

I forced a regression in which the only explanatory variables are 1B, 2B, 3B, and HR. The resulting equation yields these elasticities:

[Table: Elasticities of variables, restricted regression]

(I obtained similar results when I revisited the statistics for 1921-2014 and the 2014 season.)

This is a less-than-satisfactory result because the underlying equation omits several explanatory variables. But it hints at the value of hitters with extra-base power, especially home-run hitters. Issues of health and integrity aside, it seems that a “juiced” hitter can do his team a lot of good — if he doesn’t strike out a lot more or walk a lot less than usual in his pursuit of more home runs.

All of this uncertainty reminds me of “Baseball Statistics and the Consumer Price Index,” where I say this:

There are many variations in the conditions of play that have resulted in significant changes in offensive statistics. Among those changes are the use of cleaner and more tightly wound baseballs, the advent of night baseball, better lighting for night games, bigger gloves, lighter bats, bigger and stronger players, the expansion of the major leagues in fits and starts, the size of the strike zone, the height of the pitching mound, and — last but far from least in this list — the integration of black and Hispanic players into major league baseball. In addition to these structural variations, there are others that militate against the commensurability of statistics over time; for example, the rise and decline of each player’s skills, the skills of teammates (which can boost or depress a player’s performance), the characteristics of a player’s home ballpark (where players generally play half their games), and the skills of the opposing players who are encountered over the course of a career.

Despite all of these obstacles to commensurability, the urge to evaluate the relative performance of players from different teams, leagues, seasons, and eras is irrepressible. Baseball-Reference.com is rife with such evaluations; the Society for American Baseball Research (SABR) revels in them; many books offer them (e.g., this one); and I have succumbed to the urge more than once.

It is one thing to have fun with numbers. It is quite another thing to ascribe meanings to them that they cannot support.

All I can safely say about the effect of PEDs on run-scoring is that the PEDs era saw more of it than the preceding era (see the third graph, “MLB Team Runs per Game”).

In the end, the seemingly large effect of PEDs may be illusory:

…In congressional testimony in 2005, [Sandy] Alderson [former general manager of the Oakland A’s] said that during the 1990s, other factors “obscured a steroid problem”:

Home runs and run production were increasing during the time but not always year to year. At the same time, strength programs were in vogue across baseball. Hitter-friendly ballparks were being built. Expansion had occurred in 1993 and again in 1998. Two seasons, ’94 and ’95, had been shortened by a players’ strike. Bat design had changed and there was an emphasis with many clubs on having more offensive players even at traditionally defensive positions. [From pp. 62-3 of the Mitchell report, listed in “Related reading.”]

The factors cited by Alderson probably boosted the rate at which batters were pumping out extra-base hits. Another significant factor is the size of the strike zone, which had been shrinking for years before it began to expand around 2009-10. Has power hitting declined because of the growing strike zone or because fewer players are using PEDs? The right answer is “yes.”

Uncertainty rears its head again.

*     *     *

Acknowledgement: This analysis draws on statistics provided by Baseball-Reference.com.

*     *     *

Related reading:
Mitchell Grossman et al., “Steroids in Major League Baseball,” undated
Baseball Prospectus, “Baseball between the Numbers: What Do Statistics Tell Us About Steroids?,” March 9, 2006
George J. Mitchell, “Report to the Commissioner of Baseball of an Independent Investigation into the Illegal Use of Steroids and Other Performance Enhancing Substances by Players in Major League Baseball,” December 13, 2007
Zachary D. Rymer, “Proof That the Steroids-Era Power Surge in Baseball Has Been Stopped,” Bleacher Report, May 22, 2013
Brian M. Mills, “Expert Workers, Performance Standards, and On-the-Job Training: Evaluating Major League Baseball Umpires,” Journal of Economic Literature, August 27, 2014
Jon Roegele, “Baseball’s Strike Zone Expansion Is Out of Control,” Slate, October 15, 2014


Let’s Not Get Too Excited about Recent GDP Growth

According to the U.S. Department of Commerce’s Bureau of Economic Analysis,

Real gross domestic product — the value of the production of goods and services in the United States, adjusted for price changes — increased at an annual rate of 5.0 percent in the third quarter of 2014, according to the “third” estimate released by the Bureau of Economic Analysis. In the second quarter, real GDP increased 4.6 percent.

Sounds great, but let’s put recent quarter-to-quarter and year-over-year changes in context:

[Graph: Quarterly vs. annual changes in real GDP]

Despite the recent gains, welcome as they are, GDP remains in the doldrums:

[Graph: Real GDP, 1947Q1-2014Q3]

Here’s another depiction, which emphasizes the declining rate of growth:

[Graph: Real GDP, 1947-2014]

For more, see “The Rahn Curve Revisited” and the list of posts at the bottom.


Some Thoughts about Probability

This is the final version of a post that was accidentally published in draft form on December 2, 2014.

This post is prompted by a reader’s comments about “The Compleat Monty Hall Problem.” I open with a discussion of probability and its inapplicability to single games of chance (e.g., one toss of a coin). With that as background, I then address the reader’s specific comments. I close with a discussion of the debasement of the meaning of probability.

INTRODUCTORY REMARKS

What is probability? Is it a property of a thing (e.g., a coin), a property of an event involving a thing (e.g., a toss of the coin), or a description of the average outcome of a large number of such events (e.g., “heads” and “tails” will come up about the same number of times)? I take the third view.

What does it mean to say, for example, that there’s a probability of 0.5 (50 percent) that a tossed coin will come up “heads” (H), and a probability of 0.5 that it will come up “tails” (T)? Does such a statement have any bearing on the outcome of a single toss of a coin? No, it doesn’t. The statement is only a short way of saying that in a sufficiently large number of tosses, approximately half will come up H and half will come up T. The result of each toss, however, is a random event — it has no probability.

That is the standard, frequentist interpretation of probability, to which I subscribe. It replaced the classical interpretation, which is problematic:

If a random experiment can result in N mutually exclusive and equally likely outcomes and if NA of these outcomes result in the occurrence of the event A, the probability of A is defined by

P(A) = N_A/N.

There are two clear limitations to the classical definition.[16] Firstly, it is applicable only to situations in which there is only a ‘finite’ number of possible outcomes. But some important random experiments, such as tossing a coin until it rises heads, give rise to an infinite set of outcomes. And secondly, you need to determine in advance that all the possible outcomes are equally likely without relying on the notion of probability to avoid circularity….

A similar charge has been laid against frequentism:

It is of course impossible to actually perform an infinity of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials. If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge the fact that we only can measure a probability with some error of measurement attached, we still get into problems as the error of measurement can only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular.

Not so:

  • There is no “real probability.” If there were, the classical theory would measure it, but the classical theory is circular, as explained above.
  • It is therefore meaningless to refer to “error of measurement.” Estimates of probability may well vary from one series of trials to another. But they will “tend to a fixed limit” over many trials (see below).

There are other approaches to probability. (See, for example, this, this, and this.) One approach is known as propensity probability:

Propensities are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate a given outcome type at a persistent rate. A central aspect of this explanation is the law of large numbers. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will (with high probability) be close to the probability of heads on each single toss. This law suggests that stable long-run frequencies are a manifestation of invariant single-case probabilities.

This is circular. You observe the relative frequencies of outcomes and, lo and behold, you have found the “propensity” that yields those relative frequencies.

Another approach is Bayesian probability:

Bayesian probability represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials.

An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.”

Or consider this account:

The Bayesian approach to learning is based on the subjective interpretation of probability.   The value of the proportion p is unknown, and a person expresses his or her opinion about the uncertainty in the proportion by means of a probability distribution placed on a set of possible values of p….

“Level of certainty” and “subjective interpretation” mean “guess.” The guess may be “educated.” It’s well known, for example, that a balanced coin will come up heads about half the time, in the long run. But to say that “I’m 50-percent confident that the coin will come up heads” is to say nothing meaningful about the outcome of a single coin toss. There are as many probable outcomes of a coin toss as there are bystanders who are willing to make a statement like “I’m x-percent confident that the coin will come up heads.” Which means that a single toss doesn’t have a probability, though it can be the subject of many opinions as to the outcome.

Returning to reality, Richard von Mises eloquently explains frequentism in Probability, Statistics and Truth (second revised English edition, 1957). Here are some excerpts:

The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations. [P. 11]

*     *     *

In games of dice, the individual event is a single throw of the dice from the box and the attribute is the observation of the number of points shown by the dice. In the game of “heads or tails”, each toss of the coin is an individual event, and the side of the coin which is uppermost is the attribute. [P. 11]

*     *     *

We must now introduce a new term…. This term is “the collective”, and it denotes a sequence of uniform events or processes which differ by certain observable attributes…. All the throws of dice made in the course of a game [of many throws] form a collective wherein the attribute of the single event is the number of points thrown…. The definition of probability which we shall give is concerned with ‘the probability of encountering a single attribute in a given collective’. [Pp. 11-12]

*     *     *

[A] collective is a mass phenomenon or a repetitive event, or, simply, a long sequence of observations for which there are sufficient reasons to believe that the relative frequency of the observed attribute would tend to a fixed limit if the observations were indefinitely continued. The limit will be called the probability of the attribute considered within the collective. [P. 15, emphasis in the original]

*     *     *

The result of each calculation … is always … nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? [P. 33, emphasis added]

As stated earlier, it is simply meaningless to say that the probability of H or T coming up in a single toss is 0.5. Here’s the proper way of putting it: There is no reason to expect a single coin toss to have a particular outcome (H or T), given that the coin is balanced, the toss isn’t made in such a way as to favor H or T, and there are no other factors that might push the outcome toward H or T. But to say that P(H) is 0.5 for a single toss is to misrepresent the meaning of probability, and to assert something meaningless about a single toss.

If you believe that probabilities attach to a single event, you must also believe that a single event has an expected value. Let’s say, for example, that you’re invited to toss a coin once, for money. You get $1 if H comes up; you pay $1 if T comes up. As a believer in single-event probabilities, you “know” that you have a “50-50 chance” of winning or losing. Would you play a single game, which has an expected value of $0? If you would, it wouldn’t be because of the expected value of the game; it would be because you might win $1, and because losing $1 would mean little to you.

Now, change the bet from $1 to $1,000. The “expected value” of the single game remains the same: $0. But the size of the stake wonderfully concentrates your mind. You suddenly see through the “expected value” of the game. You are struck by the unavoidable fact that what really matters is the prospect of winning $1,000 or losing $1,000, because those are the only possible outcomes.

Your decision about playing a single game for $1,000 will depend on your finances (e.g., you may be very wealthy or very desperate for money) and your tolerance for risk (e.g., you may be averse to risk-taking or addicted to it). But — if you are rational — you will not make your decision on the basis of the fictional expected value of a single game, which derives from the fictional single-game probabilities of H and T. You will decide whether you’re willing and able to risk the loss of $1,000.
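A short simulation illustrates the point (this is a sketch of my own, not part of the original argument): a single $1,000 game pays only +$1,000 or -$1,000, and only the average over a long run of games settles near the so-called expected value of $0.

```python
import random

STAKE = 1_000
random.seed(1)

def one_game() -> int:
    """One toss for $1,000: the only possible payoffs are +1000 and -1000."""
    return STAKE if random.random() < 0.5 else -STAKE

print("single game payoff:", one_game())  # never the "expected value" of $0

# The average payoff settles near $0 only over a long run of games.
for n in (10, 1_000, 100_000):
    avg = sum(one_game() for _ in range(n)) / n
    print(f"average over {n:>7,} games: ${avg:8.2f}")
```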

Do I mean to say that probability is irrelevant to a single play of the Monty Hall problem, or to a choice between games of chance? If you’re a proponent of propensity, you might say that in the Monty Hall game the prize has a propensity to be behind the other unopened door (i.e., the door not chosen by you and not opened by the host). But does that tell you anything about the actual location of the prize in a particular game? No, because the “propensity” merely reflects the outcomes of many games; it says nothing about a single game, which (like Schrödinger’s cat) can have only a single outcome (prize or no prize), not 2/3 of one.

If you’re a proponent of Bayesian probability, you might say that you’re confident with “probability” 2/3 that the prize is behind the other unopened door. But that’s just another way of saying that contestants win 2/3 of the time if they always switch doors. That’s the background knowledge that you bring to your statement of confidence. But someone who’s ignorant of the Monty Hall problem might be confident with 1/2 “probability” that the prize is behind the other unopened door. And he could be right about a particular game, despite his lower level of confidence.

So, yes, I do mean to say that there’s no such thing as a single-case probability. You may have an opinion (or a hunch or a guess) about the outcome of a single game, but it’s only your opinion (hunch, guess). In the end, you have to bet on a discrete outcome. If it gives you comfort to switch to the unopened door because that’s the winning door 2/3 of the time (according to classical probability) and about 2/3 of the time (according to the frequentist interpretation), be my guest. I might do the same thing, for the same reason: to be comfortable about my guess. But I’d be able to separate my psychological need for comfort from the reality of the situation:

A single game is just one event in the long series of events from which probabilities emerge. I can win the Monty Hall game about 2/3 of the time in repeated plays if I always switch doors. But that probability has nothing to do with a single game, the outcome of which is a random occurrence.
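A simulation of the standard Monty Hall game makes the frequentist reading concrete: the 2/3 figure emerges as a relative frequency over many plays, while any single play ends simply in a win or a loss. A minimal sketch:

```python
import random

def play(switch: bool) -> bool:
    """One Monty Hall game; True if the contestant wins the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides no prize and wasn't picked.
    opened = random.choice([d for d in doors if d not in (pick, prize)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == prize

random.seed(0)
N = 100_000
wins = sum(play(switch=True) for _ in range(N))
print(f"switching wins {wins / N:.3f} of {N:,} games")   # about 2/3
print("one game (switching):", "win" if play(switch=True) else "loss")
```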

REPLIES TO A READER’S COMMENTS

I now turn to the reader’s specific comments, which refer to “The Compleat Monty Hall Problem.” (You should read it before continuing with this post if you’re unfamiliar with the Monty Hall problem or my analysis of it.) The reader’s comments — which I’ve rearranged slightly — are in italic type. (Here and there, I’ve elaborated on the reader’s comments; my elaborations are placed in brackets and set in roman type.) My replies are in bold type.

I find puzzling your statement that a probability cannot “describe” a single instance, eg one round of the Monty Hall problem.

See my introductory remarks.

While the long run result serves to prove the probability of a particular outcome, that does not mean that that probability may not be assigned to a smaller number of instances. That is the beauty of probability.

The long-run result doesn’t “prove” the probability of a particular outcome; it determines the relative frequency of occurrence of that outcome — and nothing more. There is no probability associated with a “smaller number of instances,” certainly not 1 instance. Again, see my introductory remarks.

If the [Monty Hall] game is played once [and I don’t switch doors], I should budget for one car [the prize that’s usually cited in discussions of the Monty Hall problem], and if it is played 100 times [and I never switch doors], I budget for 33….

“Budget” seems to refer to the expected number of cars won, given the number of plays of the game and a strategy of never switching doors. The reader contradicts himself by “budgeting” for 1 car in a single play of the Monty Hall problem. In doing so, he is being unfaithful to his earlier statement: “While the long run result serves to prove the probability of a particular outcome, that does not mean that that probability may not be assigned to a smaller number of instances.” Removing the double negatives, we get “probability may be assigned to a smaller number of instances.” Given that 1 is a smaller number than 100, it follows, by the reader’s logic, that his “budget” for a single game should be 1/3 car (assuming, as he does, a strategy of not switching doors). The reader’s problem here is his insistence that a probability expresses something other than the long-run relative frequency of a particular outcome.

To justify your contrary view, you ask how you can win 2/3 of a car [the long-run average if the contestant plays many games and always switches doors]; you can win or you can not win, you say, you cannot partly win. Is this not sophistry or a straw man, sloppy reasoning at best, to convince uncritical thinkers who agree that you cannot drive 2/3 of a car?

My “contrary view” of what? My view of statistics isn’t “contrary.” Rather, it’s in line with the standard, frequentist interpretation.

It’s a simple statement of obvious fact that you can’t win 2/3 of a car. There’s no “sophistry” or “straw man” about it. If you can’t win 2/3 of a car, what does it mean to assign a probability of 2/3 to winning a car by adopting the switching strategy? As discussed above, it means only one thing: A long series of games will be won about 2/3 of the time if all contestants adopt the switching strategy.

On what basis other than an understanding of probability would you be optimistic at the prospect of being offered one chance of picking a single Golden Ball worth $1m from a bag of just three balls and pessimistic about your prospects of picking the sole Golden Ball from a barrel of 10,000 balls?

The only difference between the two games is that on the one hand you have a decent (33%) chance of winning and on the other hand you have a lousy (0.01%) chance. Isn’t it these disparate probabilities that give you cause for optimism or pessimism, as the case may be?

“Optimism” and “pessimism” — like “comfort” — are subjective terms for ill-defined states of mind. There are persons who will be “optimistic” about a given situation, and persons who will be “pessimistic” about the same situation. For example: There are hundreds of millions of persons who are “optimistic” about winning various lotteries, even though they know that the grand prize in each lottery will be assigned to only one of millions of possible numbers. By the same token, there are hundreds of millions of persons who, knowing the same facts, refuse to buy lottery tickets because they are “pessimistic” about the likely outcome of doing so. But “optimism” and “pessimism” — like “comfort” — have nothing to do with probability, which isn’t an attribute of a single game.

If probability cannot describe the chances of each of the two one-off “games”, does that mean I could not provide a mathematical basis for my advice that you play the game with 3 balls (because you have a one-in-three chance of winning) rather than the ball in the barrel game which offers a one in ten thousand chance of winning?

You can provide a mathematical basis for preferring the game with 3 balls. But you must, in honesty, state that the mathematical basis applies only to many games, and that the outcome of a single game is unpredictable.

It might be that probability cannot reliably describe the actual outcome of a single event because the sample size of 1 game is too small to reflect the long-run average that proves the probability. However, comparing the probabilities for winning the two games describes the relative likelihood of winning each game and informs us as to which game will more likely provide the prize.

If not by comparing the probability of winning each game, how do we know which of the two games has a better chance of delivering a win? One cannot compare the probability of selecting the Golden Ball from each of the two games unless the probability of each game can be expressed, or described, as you say.

Here, the reader comes close to admitting that a probability can’t describe the (expected) outcome of a single event (“reliably” is superfluous). But he goes off course when he says that “comparing the probabilities for the two games … informs us as to which game will more likely provide the prize.” That statement is true only for many plays of the two ball games. It has nothing to do with a single play of either ball game. The choice there must be based on subjective considerations: “optimism,” “pessimism,” “comfort,” a guess, a hunch, etc.

Can I not tell a smoker that their lifetime risk of developing lung cancer is 23% even though smokers either get lung cancer or they do not? No one gets 23% cancer. Did someone say they did? No one has 0.2 of a child either but, on average, every family in a census did at one stage have 2.2 children.

No, the reader may not (honestly) tell a smoker that his lifetime risk of developing lung cancer is 23 percent, or any specific percentage. The smoker has one life to live; he will either get lung cancer or he will not. What the reader may honestly tell the smoker is that statistics based on the fates of a large number of smokers over many decades indicate that a certain percentage of those smokers contracted lung cancer. The reader should also tell the smoker that the frequency of the incidence of lung cancer in a large population varies according to the number of cigarettes smoked daily. (According to Wikipedia: “For every 3–4 million cigarettes smoked, one lung cancer death occurs.[1][132]“) Further, the reader should note that the incidence of lung cancer also varies with the duration of smoking at various rates, and with genetic and environmental factors that vary from person to person.

As for family size, given that the census counts only post-natal children (who come in integer values), how could “every family in a census … at one stage have 2.2 children”? The average number of children across a large number of families may be 2.2, but surely the reader knows that “every family” did not somehow have 2.2 children “at one stage.” And surely the reader knows that average family size isn’t a probabilistic value, one that measures the relative frequency of an event (e.g., “heads”) given many repetitions of the same trial (e.g., tossing a fair coin), under the same conditions (e.g., no wind blowing). Each event is a random occurrence within the long string of repetitions. The reader may have noticed that family size is in fact strongly determined (especially in Western countries) by non-random events (e.g., deliberate decisions by couples to reproduce, or not). In sum, probabilities may represent averages, but not all (or very many) averages represent probabilities.

If not [by comparing probabilities], how do we make a rational recommendation and justify it in terms the board of a think-tank would accept? [This seems to be a reference to my erstwhile position as an officer of a defense think-tank.]

Here, the reader extends an inappropriate single-event view of probability to an inappropriate unique-event view. I would not have gone before the board and recommended a course of action — such as bidding on a contract for a new line of work — based on a “probability of success.” That would be an absurd statement to make about an event that is defined by unique circumstances (e.g., the composition of the think-tank’s staff at that time, the particular kind of work to be done, the qualifications of prospective competitors’ staffs). I would simply have spelled out the facts and the uncertainties. And if I had a hunch about the likely success or failure of the venture, I would have recommended for or against it, giving specific reasons for my hunch (e.g., the relative expertise of our staff and competitors’ staffs). But it would have been nothing more than a hunch; it wouldn’t have been my (impossible) assessment of the probability of a unique event.

Boards (and executives) don’t base decisions on (non-existent) probabilities; they base decisions on unique sets of facts, and on hunches (preferably hunches rooted in knowledge and experience). Those hunches may sometimes be stated as probabilities, as in “We’ve got a 50-50 chance of winning the contract.” (Though I would never say such a thing.) But such statements are only idiomatic, and have nothing to do with probability as it is properly understood.

CLOSING THOUGHTS

The reader’s comments reflect the popular debasement of the meaning of probability. The word has been adapted to many inappropriate uses: the probability of precipitation (a quasi-subjective concept), the probability of success in a business venture (a concept that requires the repetition of unrepeatable events), the probability that a batter will get a hit in his next at-bat (ditto, given the many unique conditions that attend every at-bat), and on and on. The effect of all such uses (and, often, the purpose of such uses) is to make a guess seem like a “scientific” prediction.


The Many-Sided Curse of Very Old Age

Here’s a tale that will be familiar to millions of the Greatest generation and their children of the Silent and Baby Boomer generations:

My grandparents and my wife’s grandparents were born in the years from 1875 to 1899. Their ages at death ranged from 25 to 96, for an average of 62.

My wife’s parents are still living, at the ages of 95 and 94. My father died at the age of 72, but my mother lives on at almost-99. That’s an average age of 90 — and rising.

My wife’s father was 6 when his father died and 56 when his mother died.

My wife’s mother was 14 when her father died and 52 when her mother died.

My mother was 25 when her father died and 61 when her mother died.

My father was 7 when his mother died and 35 when his father died.

My wife and I are 70-plus, and our three remaining parents are still going on … and on …

A long-lived parent is a mixed blessing. If you’re close to a parent, that parent’s growing dependence on you becomes a labor of love. If you’re not close to a parent, his or her long life simply imposes a labor of duty. In either case, the labor comes at an age when the child is on the downslope of energy.

What’s worse is that the rather competent, energetic, and often engaging parent of one’s earlier years is replaced by the addled, hobbled, and dull parent of one’s “golden years.” Financial prudence becomes miserliness; gregariousness turns into the retelling of stories and jokes for the umpteenth time; admirable determination gives way to pointless stubbornness.

Very old age is a test of character, and it’s a great disappointment when a parent fails the test. Too often, the facade of good cheer crumbles, to reveal extreme egoism, irresponsibility, and rancid bigotry.

I blame Medicare, in good part, for the miseries that very old age inflicts on the very old and on their children. I also blame Medicare for the miseries that it has inflicted and will continue to inflict on American taxpayers and workers.

The idea of ensuring access to health care is laudable, but Medicare — like so many government programs — has baleful side effects. To begin with, Medicare is yet another instance of the presumptuousness of the powerful. Big Brother presumes to know better than his subjects how they should spend their money and arrange their lives.

Medicare wasn’t sold as a subsidy, but it is one, just like Social Security and Medicaid. As with Social Security, the Medicare payroll tax doesn’t finance the full cost of the program. But because there are Medicare (and Social Security) taxes, most retirees believe (wrongly) that they have paid for their Medicare (and Social Security) benefits. They then consume their Medicare benefits with abandon because those benefits are almost free to them.

Instead of helping the truly penurious, Medicare (like Social Security) has become a middle-class welfare program. It relieves millions of workers from the responsibility of saving enough to pay for (or insure) their health care in old age. Thus the disincentivizing effects of Medicare (and Social Security) have caused and will continue to cause hundreds of millions of workers to produce far less than they would otherwise have produced. But drone-like politicians don’t understand such things.

The ever-growing cost of Medicare (with Social Security and Medicaid) threatens the well-being of future generations. Our progeny will be saddled with the exorbitant cost of caring for and subsidizing a burgeoning population of long-lived retirees — so many of whom know that they have lived too long and pray nightly “to shuffle[] off this mortal coil.”

We and our progeny will be hard-pressed to bear the exorbitant cost of Medicare and its socialistic ilk. The economy will sink deeper into the doldrums as resources are diverted from capital investments to subsidize oldsters who, for the most part, could have saved enough to pay for their own care and feeding had they not been discouraged from doing so.

The perfect storm of rising costs and slower economic growth means that future generations of Americans will be less well-fed, less well-clothed, less well-sheltered, and less healthy than the Greatest, the Silent, and the Boomer generations. There is great irony in the naming of Medicare, Medicaid, and Social Security.

*     *     *

Related reading:
Life Expectancy Graphs at Mapping History
Steve Calfo et al., “Last Year of Life Study,” Centers for Medicare & Medicaid Services, Office of the Actuary (undated)
Frank R. Lichtenberg, “The Effects of Medicare on Health Care Utilization and Outcomes,” Frontiers in Health Policy Research (Volume 5), MIT Press, January 2002
Kenneth Y. Chay et al., “Medicare, Hospital Utilization and Mortality: Evidence from the Program’s Origins,” NBER conference paper, February 2010
Theodore Dalrymple, “The Demand for Perfection,” Taki’s Magazine, December 14, 2014
Timothy Taylor, “How Medicare Spending Rises with Age,” The Conversable Economist, January 15, 2015

*     *     *

Related posts:
As Goes Greece
The Commandeered Economy
America’s Financial Crisis Is Now
The Rahn Curve Revisited


Presidential Trivia: Recurring First Names

Among the 44 Presidents of the United States, there are eight first names that occur more than once. Do you know the eight names? Do you know the middle names (if any) and last names that go with the first names? Try to answer those questions without peeking at a list of presidents, then go to the bottom of this page for the answers.

The Rahn Curve Revisited

The theory behind the Rahn Curve is simple — but not simplistic. A relatively small government with powers limited mainly to the protection of citizens and their property is worth more than its cost to taxpayers because it fosters productive economic activity (not to mention liberty). But additional government spending hinders productive activity in many ways, which are discussed in Daniel Mitchell’s paper, “The Impact of Government Spending on Economic Growth.” (I would add to Mitchell’s list the burden of regulatory activity, which accumulates with the size of government.)

What does the Rahn Curve look like? Daniel Mitchell estimates this relationship between government spending and economic growth:

Rahn curve

The curve is dashed rather than solid at low values of government spending because it has been decades since the governments of developed nations have spent as little as 20 percent of GDP. But as Mitchell and others note, the combined spending of governments in the U.S. was 10 percent of GDP (and less) until the eve of the Great Depression. And it was in the low-spending, laissez-faire era from the end of the Civil War to the early 1900s that the U.S. enjoyed its highest sustained rate of economic growth.

In an earlier post, I ventured an estimate of the Rahn curve that spanned most of the history of the United States. I came up with this relationship:

Real rate of growth = -0.066(G/GDP) + 0.054

To be precise, it’s the annualized rate of growth over the most recent 10-year span, as a function of G/GDP (the fraction of GDP spent by governments at all levels) in the preceding 10 years. I used a lagged relationship because it takes time for government spending (and related regulatory activity) to work its counterproductive effects on economic activity. Also, I included transfer payments (e.g., Social Security) in my measure of G because there’s no essential difference between transfer payments and many other kinds of government spending, which also take money from those who produce and give it to those who don’t (e.g., government employees engaged in paper-shuffling, unproductive social-engineering schemes, and counterproductive regulatory activities).
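
Here, for the sake of concreteness, is a minimal sketch of how such lagged, 10-year variables can be constructed and fitted. The series below are placeholders (a flat 3-percent growth trend and a rising G/GDP share), not the historical data behind my estimate, so the fitted numbers are meaningless; the point is the mechanics.

```python
# A minimal sketch of the lagged construction described above.
# Placeholder series, not the historical data behind the estimate.
import numpy as np

years = np.arange(1900, 2014)
real_gdp = 100 * 1.03 ** np.arange(len(years))                 # placeholder: flat 3% trend
gov_spending = real_gdp * np.linspace(0.08, 0.40, len(years))  # placeholder: rising G/GDP

def ten_year_growth(gdp, i):
    """Annualized real growth over the 10 years ending at index i."""
    return (gdp[i] / gdp[i - 10]) ** 0.1 - 1

def trailing_g_share(gov, gdp, i):
    """Average G/GDP over the 10 years preceding the growth window."""
    return np.mean(gov[i - 20:i - 10] / gdp[i - 20:i - 10])

# Each observation pairs growth over years i-10..i with G/GDP over years i-20..i-10.
pairs = [(trailing_g_share(gov_spending, real_gdp, i),
          ten_year_growth(real_gdp, i))
         for i in range(20, len(years))]
x, y = (np.array(v) for v in zip(*pairs))

slope, intercept = np.polyfit(x, y, 1)   # ordinary least squares: growth = a + b*(G/GDP)
print(f"growth = {intercept:.3f} + {slope:.3f} * (G/GDP)")
```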

Because of the marked decline in the rate of growth since World War II, I’ve taken a closer look at the post-war numbers. With this result:

Real rate of growth = -0.372(G/GDP) + 0.067(BA/GDP) + 0.080

Again, it’s the annualized rate of growth over a 10-year span, as a function of G/GDP (fraction of GDP spent by governments at all levels) in the preceding 10 years. The new term, BA/GDP, represents the constant-dollar value of private nonresidential assets (i.e., business assets) as a fraction of GDP, averaged over the preceding 10 years. The idea is to capture the effect of capital accumulation on economic growth, which I didn’t do in the earlier analysis.

The equation has a good r-squared (0.66) and is highly significant (p-value of the F-statistic = 2.84E-11). The coefficients and intercept are also highly significant (p-values of 1.812E-09, 1.059E-11, and 9.192E-06). The standard error of the estimate is 0.0055, that is, about one-half of 1 percentage point. I found no other intuitively appealing variables that add to the explanatory power of the equation.
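
Applying the fitted equation is just arithmetic. In the sketch below the inputs are assumptions of mine, chosen only to illustrate the calculation (they happen to land near the estimate discussed next); they are not the actual 2004-2013 averages.

```python
# A minimal sketch of applying the post-war equation quoted above.
def projected_growth(g_share, ba_share):
    """Annualized 10-year growth = -0.372*(G/GDP) + 0.067*(BA/GDP) + 0.080."""
    return -0.372 * g_share + 0.067 * ba_share + 0.080

# Hypothetical trailing 10-year averages: G/GDP = 0.38, BA/GDP = 1.2.
print(f"projected annualized growth: {projected_growth(0.38, 1.2):.1%}")
```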

What does it mean for the next 10 years? Based on G/GDP and BA/GDP for the most recent 10-year period (2004-2013), the real rate of growth will be about 1.9 percent. The earlier equation yields an estimate of 2.9 percent. The new equation wins the reality test:

Year-over-year changes in real GDP

Stagnation is upon us.

*    *     *

Related reading:
Arnold Kling, “Business Births and Deaths,” askblog, January 2, 2014
Sean Davis, “No, Demographics Are Not the Reason for Labor Force Dropouts,” The Federalist, January 13, 2014
James Pethokoukis, “3 Disturbing Charts Showing the Alarming Decline of US Economic Dynamism,” AEI.org, May 5, 2014
Ryan McCarthy, “The Story of the American Recovery in 15 Charts,” Wonkblog (The Washington Post), May 8, 2014 (See especially chart 4.4.2, which is consistent with my estimate and suggests that slow growth will be the norm among “social democracies.”)
James Pethokoukis, “Declining Business Dynamism … ,” AEI.org, May 22, 2014
James Pethokoukis, “JP Morgan: Less ‘Creative Destruction’ Is Hurting the US Economy,” AEI.org, June 4, 2014
James Pethokoukis, “Where Are All the Startups? More on America’s Economic Calcification,” AEI.org, November 19, 2014
James Pethokoukis, “America Needs More Startups. Here’s How to Ignite an Explosion in US Entrepreneurship,” AEI.org, December 26, 2014

*     *     *

Related posts:
The Laffer Curve, “Fiscal Responsibility,” and Economic Growth
The Causes of Economic Growth
In the Long Run We Are All Poorer
A Short Course in Economics
Addendum to a Short Course in Economics
As Goes Greece
Ricardian Equivalence Reconsidered
The Real Burden of Government
The Illusion of Prosperity and Stability
Taxing the Rich
More about Taxing the Rich
The Keynesian Fallacy and Regime Uncertainty
Why the “Stimulus” Failed to Stimulate
The “Jobs Speech” That Obama Should Have Given
Say’s Law, Government, and Unemployment
Regime Uncertainty and the Great Recession
Regulation as Wishful Thinking
The Commandeered Economy
We Owe It to Ourselves
In Defense of the 1%
Lay My (Regulatory) Burden Down
The Burden of Government
Economic Growth Since World War II
Government in Macroeconomic Perspective
Keynesianism: Upside-Down Economics in the Collectivist Cause
Economics: A Survey (also here)
Why Are Interest Rates So Low?
Vulgar Keynesianism and Capitalism
Estimating the Rahn Curve: Or, How Government Spending Inhibits Economic Growth
America’s Financial Crisis Is Now
The Keynesian Multiplier: Phony Math
The True Multiplier
Obamanomics: A Report Card
The Obama Effect: Disguised Unemployment
Income Inequality and Economic Growth

Signature

McCloskey on Piketty

UPDATED 01/07/15

The left loves Thomas Piketty‘s Capital in the Twenty-First Century because it lends pseudo-scientific backing to some of the left’s favorite economic postulates; to wit:

  • Income inequality is bad, even if real incomes are rising across the board. Why is it bad? It just is, if you’re an envious Marxist. But also…
  • Income inequality is bad because wealth usually derives from income. The rich get richer, as the old song goes. And the rich — the super-rich, in today’s parlance — acquire inordinate political power because of their great wealth. (Leftists conveniently overlook the fact that the penchant for statist schemes — redistributionism among them — is positively correlated with income. This is a morally confused stance, based on guilt-feelings and indoctrination at the hands of leftist “educators,” “journalists,” and “entertainers.”)
  • Further, inequality yields slower economic growth because persons with high incomes consume a smaller fraction of their incomes than do persons with low incomes. The result, according to the Keynesian consumption-based model, is a reduction in GDP, other things being the same. And other things being the same, slower growth means that it is harder for low-income persons to rise from poverty or near-poverty. (This is a logically and empirically backward view of reality, which is that economic growth requires more investment and less consumption. And investment spending is stifled when redistribution takes money from high earners and gives it to low earners, that is, persons with a high propensity to consume. Investment is also stifled by progressive taxation, which penalizes success, and burdensome regulation, which deters entrepreneurship and job creation.)

Deirdre McCloskey‘s forthcoming review of Piketty’s book praises it and damns it. First the praise:

Piketty gives a fine example of how to do it [economic history]. He does not get entangled as so many economists do in the sole empirical tool they are taught, namely, regression analysis on someone else’s “data”…. Therefore he does not commit one of the two sins of modern economics, the use of meaningless “tests” of statistical significance…. Piketty constructs or uses statistics of aggregate capital and of inequality and then plots them out for inspection, which is what physicists, for example, also do in dealing with their experiments and observations. Nor does he commit the other sin, which is to waste scientific time on existence theorems. Physicists, again, don’t. If we economists are going to persist in physics envy let’s at least learn what physicists actually do. Piketty stays close to the facts, and does not, say, wander into the pointless worlds of non-cooperative game theory, long demolished by experimental economics. He also does not have recourse to non-computable general equilibrium, which never was of use for quantitative economic science, being a branch of philosophy, and a futile one at that. On both points, bravissimo.

His book furthermore is clearly and unpretentiously, if dourly, written….

That comes early in McCloskey’s long review (50 double-spaced pages in the .pdf version). But she ends with blistering damnation:

On the next to last page of his book Piketty writes, “It is possible, and even indispensable, to have an approach that is at once economic and political, social and cultural, and concerned with wages and wealth.” One can only agree. But he has not achieved it. His gestures to cultural matters consist chiefly of a few naively used references to novels he has read superficially—for which on the left he has been embarrassingly praised. His social theme is a narrow ethic of envy. His politics assumes that governments can do anything they propose to do. And his economics is flawed from start to finish.

It is a brave book. But it is mistaken.

There is much in between to justify McCloskey’s conclusion. Here she puts Piketty’s work in context:

[T]he left in its worrying routinely forgets this most important secular event since the invention of agriculture—the Great Enrichment of the last two centuries—and goes on worrying and worrying, like the little dog worrying about his bone in the Traveler’s insurance company advertisement on TV, in a new version every half generation or so.

Here is a partial list of the worrying pessimisms, which each has had its day of fashion since the time, as the historian of economic thought Anthony Waterman put it, “Malthus’ first [1798] Essay made land scarcity central. And so began a century-long mutation of ‘political economy,’ the optimistic science of wealth, to ‘economics,’ the pessimistic science of scarcity.” Malthus worried that workers would proliferate and Ricardo worried that the owners of land would engorge the national product. Marx worried, or celebrated, depending on how one views historical materialism, that owners of capital would at least make a brave attempt to engorge it…. Mill worried, or celebrated, depending on how one views the sick hurry of modern life, that the stationary state was around the corner. Then the economists, many on the left but some on the right, in quick succession 1880 to the present—at the same time that trade-tested betterment was driving real wages up and up and up—commenced worrying about, to name a few of the grounds for pessimisms they discerned concerning ”capitalism”: greed, alienation, racial impurity, workers’ lack of bargaining strength, women working, workers’ bad taste in consumption, immigration of lesser breeds, monopoly, unemployment, business cycles, increasing returns, externalities, under-consumption, monopolistic competition, separation of ownership from control, lack of planning, post-War stagnation, investment spillovers, unbalanced growth, dual labor markets, capital insufficiency … , peasant irrationality, capital-market imperfections, public choice, missing markets, informational asymmetry, third-world exploitation, advertising, regulatory capture, free riding, low-level traps, middle-level traps, path dependency, lack of competitiveness, consumerism, consumption externalities, irrationality, hyperbolic discounting, too big to fail, environmental degradation, underpaying of care, overpayment of CEOs, slower growth, and more.

One can line up the later items in the list, and some of the earlier ones revived à la Piketty or Krugman…. I will not name here the men … , but can reveal their formula: first, discover or rediscover a necessary condition for perfect competition or a perfect world (in Piketty’s case, for example, a more perfect equality of income). Then assert without evidence (here Piketty does a great deal better than the usual practice) but with suitable mathematical ornamentation (thus Jean Tirole, Nobel 2014) that the condition might be imperfectly realized or the world might not develop in a perfect way. Then conclude with a flourish (here however Piketty falls in with the usual low scientific standard) that “capitalism” is doomed unless experts intervene with a sweet use of the monopoly of violence in government to implement anti-trust against malefactors of great wealth or subsidies to diminishing-returns industries or foreign aid to perfectly honest governments or money for obviously infant industries or the nudging of sadly childlike consumers or, Piketty says, a tax on inequality-causing capital worldwide. A feature of this odd history of fault-finding and the proposed statist corrections, is that seldom does the economic thinker feel it necessary to offer evidence that his … proposed state intervention will work as it is supposed to, and almost never does he feel it necessary to offer evidence that the imperfectly attained necessary condition for perfection before intervention is large enough to have reduced much the performance of the economy in aggregate.

I heartily agree with McCloskey’s diagnosis of the causes of leftist worrying:

One begins to suspect that the typical leftist … starts with a root conviction that capitalism is seriously defective. The conviction is acquired at age 16 years when the proto-leftist discovers poverty but has no intellectual tools to understand its source. I followed this pattern, and therefore became for a time a Joan-Baez socialist. Then the lifelong “good social democrat,” as he describes himself (and as I for a while described myself), when he has become a professional economist, in order to support the now deep-rooted conviction, looks around for any qualitative indication that in some imagined world the conviction would be true, without bothering to ascertain numbers drawn from our own world…. It is the utopianism of good-hearted leftward folk who say, “Surely this wretched society, in which some people are richer and more powerful than others, can be greatly improved. We can do much, much better!”

Piketty’s typically leftist blend of pessimism and utopianism is hitched to bad economic reasoning, ignorance of economic history, and a misreading of his own statistics:

Piketty’s (and Aristotle’s) theory is that the yield on capital usually exceeds the growth rate of the economy, and so the share of capital’s returns in national income will steadily increase, simply because interest income—what the presumably rich capitalists get and supposedly manage to cling to and supposedly reinvest—is growing faster than the income the whole society is getting.

Aristotle and his followers, such as Aquinas and Marx and Piketty, were much concerned with such “unlimited” gain. The argument is, you see, very old, and very simple. Piketty ornaments it a bit with some portentous accounting about capital-output ratios and the like, producing his central inequality about inequality: so long as r > g, where r is the return on capital and g is the growth rate of the economy, we are doomed to ever increasing rewards to rich capitalists while the rest of us poor suckers fall relatively behind. The merely verbal argument I just gave, however, is conclusive, so long as its factual assumptions are true: namely, only rich people have capital; human capital doesn’t exist; the rich reinvest their return; they never lose it to sloth or someone else’s creative destruction; inheritance is the main mechanism, not a creativity that raises g for the rest of us just when it results in an r shared by us all; and we care ethically only about the Gini coefficient, not the condition of the working class. Notice one aspect of that last: in Piketty’s tale the rest of us fall only relatively behind the ravenous capitalists. The focus on relative wealth or income or consumption is one serious problem in the book. Piketty’s vision of a “Ricardian Apocalypse,” as he calls it, leaves room for the rest of us to do very well indeed, most non-apocalyptically, as in fact since 1800 we have. What is worrying Piketty is that the rich might possibly get richer, even though the poor get richer, too. His worry, in other words, is purely about difference, about the Gini coefficient, about a vague feeling of envy raised to a theoretical and ethical proposition.

Another serious problem is that r will almost always exceed g, as anyone can tell you who knows about the rough level of interest rates on invested capital and about the rate at which most economies have grown (excepting only China recently, where contrary to Piketty’s prediction, inequality has increased). If his simple logic is true, then the Ricardian Apocalypse looms, always. Let us therefore bring in the sweet and blameless and omni-competent government—or, even less plausibly, a world government, or the Gallactic Empire—to implement “a progressive global tax on capital” (p. 27) to tax the rich. It is our only hope…. In other words, Piketty’s fears were not confirmed anywhere 1910 to 1980, nor anywhere in the long run at any time before 1800, nor anywhere in Continental Europe and Japan since World War II, and only recently, a little, in the United States, the United Kingdom, and Canada (Canada, by the way, is never brought into his tests).

That is a very great puzzle if money tends to reproduce itself, always, evermore, as a general law governed by the Ricardo-plus-Marx inequality at the rates of r and g actually observed in world history. Yet inequality in fact goes up and down in great waves, for which we have evidence from many centuries ago down to the present, which also doesn’t figure in such a tale (Piketty barely mentions the work of the economic historians Peter Lindert and Jeffrey Williamson documenting the inconvenient fact). According to his logic, once a Piketty wave starts—as it would at any time you care to mention if an economy satisfied the almost-always-satisfied condition of the interest rate exceeding the growth rate of income—it would never stop. Such an inexorable logic means we should have been overwhelmed by an inequality-tsunami in 1800 CE or in 1000 CE or for that matter in 2000 BCE. At one point Piketty says just that: “r > g will again become the norm in the twenty-first century, as it had been throughout history until the eve of World War I” (… one wonders what he does with historically low interest rates right now, or the negative real interest rates in the inflation of the 1970s and 1980s). Why then did the share of the rich not rise anciently to 100 percent?
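
To see just how much the bare r > g logic proves, here is a toy calculation of my own (not Piketty’s model, and not McCloskey’s), using assumed values of r, g, and a starting capital-output ratio:

```python
# A toy illustration of the naive r > g mechanics: capital earning r with all
# returns reinvested, while national income grows at g. All values are assumed.
r, g = 0.05, 0.02            # assumed return on capital and growth rate of income
capital, income = 4.0, 1.0   # assumed starting capital-output ratio of 4

for years in (50, 100, 200):
    k = capital * (1 + r) ** years
    y = income * (1 + g) ** years
    print(f"after {years} years: K/Y = {k / y:7.1f}, naive capital share = {r * k / y:.0%}")
```

Within a century the “capital share” implied by this arithmetic blows past 100 percent, which is impossible, and that is McCloskey’s point: the mechanical story ignores consumption by heirs, dissipation, entry, and the rest, and would have produced an inequality tsunami millennia ago.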

McCloskey gets it right:

With a bigger pie, someone has to get more. In the event what rose were wages on raw labor and especially a great accumulation of human capital, but capital owned by the laborers, not by the truly rich. The return to physical capital was higher than a riskless return on British or American government bonds, in order to compensate for the risk in holding capital (such as being made obsolete by betterment—think of your computer, obsolete in four years). But the return on physical capital, and on human capital, was anyway held down to its level of very roughly 5 to 10 percent by competition among the proliferating capitalists. Imagine our immiserization if the income of workers, because they did not accumulate human capital, and their societies had not adopted the accumulation of ingenuities since 1800, had experienced the history of stagnation since 1800 that the per-unit return to capital has. It is not hard to imagine, because such miserable income of workers exists even now in places like Somalia and North Korea. Instead, since 1800 in the average rich country the income of the workers per person increased by a factor of about 30 (2,900 percent, if you please) and even in the world as a whole, including the still poor countries, by a factor of 10 (900 percent), while the rate of return to physical capital stagnated.

Piketty does not acknowledge that each wave of inventors, of entrepreneurs, and even of routine capitalists find their rewards taken from them by entry, which is an economic concept he does not appear to grasp. Look at the history of fortunes in department stores. The income from department stores in the late nineteenth century, in Le Bon Marché, Marshall Fields, and Selfridge’s, was entrepreneurial. The model was then copied all over the rich world, and was the basis for little fortunes in Cedar Rapids, Iowa and Benton Harbor, Michigan. Then in the late twentieth century the model was challenged by a wave of discounters, and they then in turn by the internet. The original accumulation slowly or quickly dissipates. In other words, the profit going to the profiteers is more or less quickly undermined by outward-shifting supply, if governmental monopolies and protectionisms of the sort Matt Ridley noted in recent British history do not intervene. The economist William Nordhaus has calculated that the inventors and entrepreneurs nowadays earn in profit only 2 percent of the social value of their inventions. If you are Sam Walton the 2 percent gives you personally a great deal of money from introducing bar codes into stocking of supermarket shelves. But 98 percent at the cost of 2 percent is nonetheless a pretty good deal for the rest of us. The gain from macadamized roads or vulcanized rubber, then modern universities, structural concrete, and the airplane, has enriched even the poorest among us.

But Piketty doesn’t see this because he’s a poor economist and a knee-jerk socialist:

Piketty, who does not believe in supply responses [as discussed below], focuses instead on the great evil of very rich people having seven Rolex watches by mere inheritance. Liliane Bettencourt, heiress to the L’Oréal fortune (p. 440), the third richest woman in the world, who “has never worked a day in her life, saw her fortune grow exactly as fast as that of [the admittedly bettering] Bill Gates.” Ugh, Piketty says, which is his ethical philosophy in full.

*     *     *

[T]he effect of inherited wealth on children is commonly to remove their ambition, as one can witness daily on Rodeo Drive. Laziness—or for that matter regression to the mean of ability—is a powerful equalizer. “There always comes a time,” Piketty writes against his own argument, “when a prodigal child squanders the family fortune” (p. 451), which was the point of the centuries-long struggle in English law for and against entailed estates.

*     *     *

Because Piketty is obsessed with inheritance, moreover, he wants to downplay entrepreneurial profit, the trade-tested betterment that has made the poor rich. It is again Aristotle’s claim that money is sterile and interest is therefore unnatural. Aristotle was on this matter mistaken. It is commonly the case, contrary to Piketty, and setting aside the cheapening of our goods produced by the investments of their wealth by the rich, that the people with more money got their more by being more ingeniously productive, for the benefit of us all—getting that Ph.D., for example, or being excellent makers of automobiles or excellent writers of horror novels or excellent throwers of touchdown passes or excellent providers of cell phones, such as Carlos Slim of Mexico, the richest man in the world (with a little boost, it may be, from corrupting the Mexican parliament). That Frank Sinatra became richer than most of his fans was not an ethical scandal. The “Wilt Chamberlain” example devised by the philosopher Robert Nozick (Piketty mentions John Rawls, but not Nozick, Rawls’ nemesis) says that if we pay voluntarily to get the benefit of clever CEOs or gifted athletes there is no further ethical issue. The unusually high rewards to the Frank Sinatras and Jamie Dimons and Wilt Chamberlains come from the much wider markets of the age of globalization and mechanical reproduction, not from theft. Wage inequality in the rich countries experiencing an enlarging gap of rich vs. poor, few though the countries are (Piketty’s finding, remember: Canada, U.S.A., U.K.), is mainly, he reports, caused by “the emergence of extremely high remunerations at the summit of the wage hierarchy, particularly among top managers of large firms.” The emergence, note, has nothing to do with r > g.

How poor an economist? Consider:

Piketty’s definition of wealth does not include human capital, owned by the workers, which has grown in rich countries to be the main source of income, when it is combined with the immense accumulation since 1800 of capital in knowledge and social habits, owned by everyone with access to them. Therefore his laboriously assembled charts of the (merely physical and private) capital/output ratio are erroneous. They have excluded one of the main forms of capital in the modern world. More to the point, by insisting on defining capital as something owned nearly always by rich people, Piketty mistakes the source of income, which is chiefly embodied human ingenuity, not accumulated machines or appropriated land.

*    *     *

The fundamental technical problem in the book, however, is that Piketty the economist does not understand supply responses….

Startling evidence of Piketty’s miseducation occurs as early as page 6. He begins by seeming to concede to his neoclassical opponents…. “To be sure, there exists in principle a quite simple economic mechanism that should restore equilibrium to the process [in this case the process of rising prices of oil or urban land leading to a Ricardian Apocalypse]: the mechanism of supply and demand. If the supply of any good is insufficient, and its price is too high, then demand for that good should decrease, which would lead to a decline in its price.” [This] clearly mix[es] up movement along a demand curve with movement of the entire curve, a first-term error at university. The correct analysis (we tell our first-year, first-term students at about week four) is that if the price is “too high” it is not the whole demand curve that “restores equilibrium” … , but an eventually outward-moving supply curve. The supply curve moves out because entry is induced by the smell of super-normal profits, in the medium and long run (which is the Marshallian definition of the terms). New oil deposits are discovered, new refineries are built, new suburbs are settled, new high-rises saving urban land are constructed, as has in fact happened massively since, say, 1973, unless government has restricted oil exploitation (usually on environmental grounds) or the building of high-rises (usually on corrupt grounds). Piketty goes on—remember: it does not occur to him that high prices cause after a while the supply curve to move out; he thinks the high price will cause the demand curve to move in, leading to “a decline in price” (of the scarce item, oil’s or urban land)—“such adjustments might be unpleasant or complicated.” To show his contempt for the ordinary working of the price system he imagines comically that “people should . . . take to traveling about by bicycle.” The substitutions along a given demand curve, or one mysteriously moving in, without any supply response “might also take decades, during which landlords and oil well owners might well accumulate claims on the rest of the population” (now he has the demand curve moving out, for some reason faster than the supply curve moves out) “so extensive that they could easily [on grounds not argued] come to own everything that can be owned, including” in one more use of the comical alternative, “bicycles, once and for all.” Having butchered the elementary analysis of entry and of substitute supplies, which after all is the economic history of the world, he speaks of “the emir of Qatar” as a future owner of those bicycles, once and for all. The phrase must have been written before the recent and gigantic expansion of oil and gas exploitation in Canada and the United States….

Piketty, it would seem, has not read with understanding the theory of supply and demand that he disparages, such as Smith (one sneering remark on p. 9), Say (ditto, mentioned in a footnote with Smith as optimistic), Bastiat (no mention), Walras (no mention), Menger (no mention), Marshall (no mention), Mises (no mention), Hayek (one footnote citation on another matter), Friedman (pp. 548-549, but only on monetarism, not the price system). He is in short not qualified to sneer at self-regulated markets (for example on p. 572), because he has no idea how they work. It would be like someone attacking the theory of evolution (which is identical to the theory the economists use of entry and exit in self-regulating markets—the supply response, an early version of which inspired Darwin) without understanding natural selection or the Galton-Watson process or modern genetics.

McCloskey continues:

Beyond technical matters in economics, the fundamental ethical problem in the book is that Piketty has not reflected on why inequality by itself would be bad…. The motive of the true Liberal … should not be equality but [says Joshua Monk, a character in Anthony Trollope’s novel, Phineas] “the wish of every honest [that is, honorable] man . . . to assist in lifting up those below him.” Such an ethical goal was to be achieved, says Monk the libertarian liberal (as Richard Cobden and John Bright and John Stuart Mill were, and Bastiat in France at the time, and in our times Hayek and Friedman, or for that matter M’Cluskie), not by direct programs of redistribution, nor by regulation, nor by trade unions, but by free trade and tax-supported compulsory education and property rights for women—and in the event by the Great Enrichment, which finally in the late nineteenth century started sending real wages sharply up, Europe-wide, and then world-wide.

The absolute condition of the poor has been raised overwhelmingly more by the Great Enrichment than by redistribution. The economic historians Ian Gazeley and Andrew Newell noted in 2010 “the reduction, almost to elimination, of absolute poverty among working households in Britain between 1904 and 1937.” “The elimination of grinding poverty among working families,” they show, “was almost complete by the late thirties, well before the Welfare State.” Their Chart 2 exhibits income distributions in 1886 prices at 1886, 1906, 1938, and 1960, showing the disappearance of the classic line of misery for British workers, “round about a pound a week.”

And it didn’t stop there:

In 2013 the economists Donald Boudreaux and Mark Perry noted that “according to the Bureau of Economic Analysis, spending by households on many of modern life’s ‘basics’—food at home, automobiles, clothing and footwear, household furnishings and equipment, and housing and utilities—fell from 53 percent of disposable income in 1950 to 44 percent in 1970 to 32 percent today.” It is a point which the economic historian Robert Fogel had made in 1999 for a longer span. The economist Steven Horwitz summarizes the facts on labor hours required to buy a color TV or an automobile, and notes that “these data do not capture . . . the change in quality . . . . The 1973 TV was at most 25 inches, with poor resolution, probably no remote control, weak sound, and generally nothing like its 2013 descendant. . . . Getting 100,000 miles out of a car in the 1970s was cause for celebration. Not getting 100,000 miles out of a car today is cause to think you bought a lemon.”

Nor in the United States are the poor getting poorer. Horwitz observes that “looking at various data on consumption, from Census Bureau surveys of what the poor have in their homes to the labor time required to purchase a variety of consumer goods, makes clear that poor Americans are living better now than ever before. In fact, poor Americans today live better, by these measures, than did their middle class counterparts in the 1970s.” In the summer of 1976 an associate professor of economics at the University of Chicago had no air conditioning in his apartment. Nowadays many quite poor Chicagoans have it. The terrible heat wave in Chicago of July 1995 killed over 700 people, mainly low-income. Yet earlier heat waves in 1936 and 1948, before air-conditioning was at all common, had probably killed many more.

There is one point at which McCloskey almost veers off course, but she recovers nicely:

To be sure, it’s irritating that a super rich woman buys a $40,000 watch. The purchase is ethically objectionable. She really should be ashamed. She should be giving her income in excess of an ample level of comfort—two cars, say, not twenty, two houses, not seven, one yacht, not five—to effective charities…. But that many rich people act in a disgraceful fashion does not automatically imply that the government should intervene to stop it. People act disgracefully in all sorts of ways. If our rulers were assigned the task in a fallen world of keeping us all wholly ethical, the government would bring all our lives under its fatherly tutelage, a nightmare achieved approximately before 1989 in East Germany and now in North Korea.

And that is the key point, to my mind. Perfection always eludes the human race, even where its members have managed to rise from the primordial scramble for sustenance and above Hobbes’s “warre, as is of every man, against every man.” Economic progress without economic inequality is impossible, and efforts to reduce inequality by punishing economic success must inevitably hinder progress, which is built on the striving of entrepreneurs. Further, the methods used to punish economic success are anti-libertarian — whether they are the police-state methods of the Soviet Union or the “soft despotism” of the American regulatory-welfare state.

So what if an entrepreneur — an Edison, Rockefeller, Ford, Gates, or Jobs — produces something of great value to his fellow men, and thus becomes rich and adorns his spouse with a $40,000 watch, owns several homes, and so on? So what if that same entrepreneur is driven (in part, at least) by a desire to bestow great wealth upon his children? So what if that same entrepreneur chooses to live among and associate with other persons of great wealth? He has no obligation to “give back”; he has already given by providing his fellow men with something that they value enough to make him rich. (Similarly, the super-star athlete and actor.)

McCloskey gets the penultimate word:

Supposing our common purpose on the left and on the right, then, is to help the poor, … the advocacy by the learned cadres of the left for equalizing restrictions and redistributions and regulations can be viewed at best as thoughtless. Perhaps, considering what economic historians now know about the Great Enrichment, but which the left clerisy, and many of the right, stoutly refuse to learn, it can even be considered unethical. The left clerisy such as Tony Judt or Paul Krugman or Thomas Piketty, who are quite sure that they themselves are taking the ethical high road against the wicked selfishness of Tories or Republicans or La Union pour un Mouvement Populaire, might on such evidence be considered dubiously ethical. They are obsessed with first-[order] changes that cannot much help the poor, and often can be shown to damage them, and are obsessed with angry envy at the consumption of the uncharitable rich, of whom they personally are often examples (what will you do with your royalties, Professor Piketty?), and the ending of which would do very little to improve the position of the poor. They are very willing to stifle through taxing the rich the trade-tested betterments which in the long run have gigantically helped the poor, who were the ancestors of most of the rest of us.

I added the emphasis to underscore what seems to me to be the left’s greatest ethical offense in the matter of inequality, as it is in the matter of race relations: hypocrisy. Hypocritical leftists like Judt, Krugman, and Piketty (to name only a few of their ilk) aren’t merely wrong in their views about how to help the (relatively) poor; they make money (and a lot of it) by espousing their erroneous views. They obviously see nothing wrong with making a lot of money. So why is it all right for them to make a lot of money — more than 99.9 percent of the world’s population, say — but not all right for other persons to make even more money? The dividing line between deservingness and greed seems always to lie somewhere above their munificent earnings.

UPDATE:

As John Cochrane notes,

Most Piketty commentary … focuses on the theory, r>g, and so on. After all, that’s easy and you don’t have to read hundreds of pages.

Cochrane then points to a paper by Philip W. Magness and Robert P. Murphy (listed below in “Related reading”) that focuses on the statistics that Piketty compiled and relied on to advance his case for global redistribution of income and wealth. Here is the abstract of the paper:

Thomas Piketty’s Capital in the 21st Century has been widely debated on theoretical grounds, yet continues to attract acclaim for its historically-infused data analysis. In this study we conduct a closer scrutiny of Piketty’s empirics than has appeared thus far, focusing upon his treatment of the United States. We find evidence of pervasive errors of historical fact, opaque methodological choices, and the cherry-picking of sources to construct favorable patterns from ambiguous data. Additional evidence suggests that Piketty used a highly distortive data assumption from the Soviet Union to accentuate one of his main historical claims about global “capitalism” in the 20th century. Taken together, these problems suggest that Piketty’s highly praised and historically-driven empirical work may actually be the book’s greatest weakness.

I’ve quickly read Magness and Murphy’s paper. It seems to live up to the abstract. Piketty (and friends) may challenge Magness and Murphy, but Piketty bears the burden of showing that he hasn’t stacked the empirical deck in favor of his redistributionist message. Not that the empirical flaws should matter, given the theoretical flaws exposed by McCloskey and others, but Piketty’s trove of spurious statistics should be discredited before it becomes a standard reference.

*     *     *

Related reading:
David R. Henderson, “An Unintended Case for More Capitalism,” Regulation, Fall 2014
John Cochrane, “Why and How We Care about Inequality,” The Grumpy Economist, September 29, 2014
John Cochrane, “Envy and Excess,” The Grumpy Economist, October 1, 2014
Mark J. Perry, “New CBO Study Shows That ‘The Rich’ Don’t Just Pay Their ‘Fair Share,’ They Pay Almost Everybody’s Share,” Carpe Diem, November 15, 2014
Mark J. Perry, “IRS Data Show That the Vast Majority of Taxpayers in the ‘Fortunate 400′ Are Only There for One Year,” Carpe Diem, November 25, 2014
Robert Higgs, “Income Inequality Is a Statistical Artifact,” The Beacon (Independent Institute), December 1, 2014
John Cochrane, “McCloskey on Piketty and Friends,” The Grumpy Economist, December 2, 2014
James Pethokoukis, “IMF Study: No Evidence ‘High-End’ Income Inequality Hurts Economic Growth,” AEI.org, December 9, 2014
Philip W. Magness and Robert P. Murphy, “Challenging the Empirical Contribution of Thomas Piketty’s Capital in the 21st Century,” Journal of Private Enterprise, forthcoming

Related posts:
Taxing the Rich
More about Taxing the Rich
The Keynesian Fallacy and Regime Uncertainty
Creative Destruction, Reification, and Social Welfare
Why the “Stimulus” Failed to Stimulate
Regime Uncertainty and the Great Recession
Regulation as Wishful Thinking
In Defense of the 1%
Lay My (Regulatory) Burden Down
Economic Growth Since World War II
Government in Macroeconomic Perspective
Keynesianism: Upside-Down Economics in the Collectivist Cause
How High Should Taxes Be?
The 80-20 Rule, Illustrated
Economics: A Survey
Estimating the Rahn Curve: Or, How Government Spending Inhibits Economic Growth
The Keynesian Multiplier: Phony Math
The True Multiplier
Some Inconvenient Facts about Income Inequality
Mass (Economic) Hysteria: Income Inequality and Related Themes
Social Accounting: A Tool of Social Engineering
Income Inequality and Inherited Wealth: So What?
Income Inequality and Economic Growth
A Case for Redistribution, Not Made

Signature

Getting “Equal Protection” Right

More than nine years ago, I wrote:

What “equal protection” really means is this:

Any law that is otherwise constitutional is a valid law, which must be applied equally to all persons.

As long as that law is applied equally to all persons, it is irrelevant if the application of the law happens to lead to unequal outcomes for various identifiable groups of persons….

Four years later, I added this about the decision of federal district judge Vaughn Walker in the case of Perry v. Schwarzenegger (later Hollingsworth v. Perry):

Judge Walker goes on to address equal protection:

The Equal Protection Clause of the Fourteenth Amendment provides that no state shall “deny to any person within its jurisdiction the equal protection of the laws.”…

Proposition 8 targets gays and lesbians in a manner specific to their sexual orientation and, because of their relationship to one another, Proposition 8 targets them specifically due to sex. Having considered the evidence, the relationship between sex and sexual orientation and the fact that Proposition 8 eliminates a right only a gay man or a lesbian would exercise, the court determines that plaintiffs’ equal protection claim is based on sexual orientation, but this claim is equivalent to a claim of discrimination based on sex.

The circularity of Judge Walker’s reasoning with respect to equal protection begins much earlier in his decision, where he writes that

The right to marry has been historically and remains the right to choose a spouse and, with mutual consent, join together and form a household. Race and gender restrictions shaped marriage during eras of race and gender inequality, but such restrictions were never part of the historical core of the institution of marriage. Today, gender is not relevant to the state in determining spouses’ obligations to each other and to their dependents. Relative gender composition aside, same-sex couples are situated identically to opposite-sex couples in terms of their ability to perform the rights and obligations of marriage under California law. Gender no longer forms an essential part of marriage; marriage under law is a union of equals.

But the right to marry, historically, has been the right to choose a spouse of the opposite sex, not merely to choose a spouse. Judge Walker even acknowledges that fact, inadvertently, when he puts aside “relative gender composition,” as if it were a mere trifle rather than the core of a social tradition that dates back millennia, a tradition not to be swept aside casually by a judge who finds it “irrational” on the basis of spurious social science. Walker then says that “gender is not relevant,” thus circularly assuming that which is to be proved. As if in support of that assertion, he asserts, laughably, that “gender restrictions … were never part of the historical core of the institution of marriage.”

In sum, Judge Walker approaches the constitutional matter of equal protection by assuming that gays have the right to marry. Given that assumption, it is easy to assert that Proposition 8 amounts to a denial of equal protection for gays who seek to marry….

I have never doubted the correctness of my interpretation of “equal protection,” but I’m glad to see it supported by a constitutional scholar. This is from Andrew Hyman’s post, “A Comment in Response to Dale Carpenter Regarding Equal Protection,” at The Originalism Blog (November 23, 2014):

Mike Ramsey recently quoted Professor Dale Carpenter as follows: “The Equal Protection Clause is a self-conscious repudiation of exclusion and hierarchy supported by nothing more than ancient practice.”  Perhaps it would have been wise if the clause really said that, but I don’t think it was written that way…. As I understand it, this clause of the Constitution does not endorse unreasonable exclusion and hierarchy, but neither does it authorize the federal judiciary to make such reasonableness determinations all by itself….

[L]et us consider what a few legal luminaries have had to say about the original meaning of this clause of our Constitution….  Professor Laurence Tribe says that “the Constitution lacks a textual basis for much of what is commonly attributed to the very notion of ‘the equal protection of the laws’….[which] was taken to mean less than ‘the protection of equal laws.’”  As far as I am aware, Professor Steven Calabresi has not altered his view that, “the Equal Protection Clause says nothing about equality in the making or implementing of equal laws.” According to Professor Kermit Roosevelt, “the most natural reading of ‘equal protection of the laws’ probably takes it to be about application or enforcement, rather than content.”… Others could be added to the list, which should at least give pause to anyone who suggests, as Professor Carpenter does, that the U.S. Supreme Court was actually given power in 1868 to strike down whatever governmental classifications that it deems unreasonable and/or hierarchical….

So where did Professor Carpenter’s notion come from?  It is certainly not original to him, so where did it originate?  As best I can tell, the historical source most commonly cited for this idea is the speech of Senator Jacob Howard introducing the Fourteenth Amendment in the Senate, in 1866.  According to the Congressional Globe, he said: “This abolishes all class legislation in the states, and does away with the injustice of subjecting one caste of persons to a code not applicable to another.”  Don’t get me wrong, these are excellent sentiments to guide legislative action, but if Howard was correct then the Supreme Court could legitimately (though unwisely) characterize virtually any legislative classification as verboten, whether it be a law that imposes special burdens or disabilities upon kleptomaniacs, or children, or police officers, or what have you….

Exactly. By the “logic” of Dale Carpenter, Judge Vaughn Walker, and their legalistic ilk, it is unconstitutional to discriminate on any basis. Thus no one should be found unfit for a particular job (that saves Carpenter and Walker); no one should be found unfit for admission to a university; there should be no minimum age at which one is permitted to drink, drive, wed, or join the armed forces; there should be no prohibition of marriage between siblings; churches should be required to ordain atheists; and on and on.

Above all — by the same “logic” — the laws should not have any basis in morality. Because the imposition of morality results in “discrimination” against persons who cheat, beat, steal from, rape, and murder other persons.

*     *     *

Related posts:
“Equal Protection” and Homosexual “Marriage”
Perry v. Schwarzenegger, Due Process, and Equal Protection
Murder Is Constitutional
Posner the Fatuous

Signature

Some Thoughts about Evolution

I came across this in a post about a psychological trait known as affective empathy*:

[I]f affective empathy helps people to survive and reproduce, there will be more and more of it in succeeding generations. If not, there will be less and less.

I’ve run across similar assertions about other traits in my occasional reading about evolution. But is it true that if X helps people to survive and reproduce, X will proliferate?

Survival and reproduction depend on many traits. A particular trait, considered in isolation, may seem to be helpful to the survival and reproduction of a group. But that trait may not be among the particular collection of traits that is most conducive to the group’s survival and reproduction. If that is the case, the trait will become less prevalent.

Alternatively, if the trait is an essential member of the collection that is conducive to survival and reproduction, it will survive. But its survival depends on the other traits. The fact that X is a “good trait” does not, in itself, ensure the proliferation of X. And X will become less prevalent if other traits become more important to survival and reproduction.
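
A toy simulation makes the point concrete. Everything in it is hypothetical: a trait X that raises fitness a bit on its own, a trait Y whose fittest combination happens to exclude X, and simple fitness-proportional reproduction.

```python
# A toy, purely hypothetical illustration: trait X helps in isolation, yet it
# declines because the fittest combination of traits happens to exclude it.
import random

random.seed(0)

def fitness(has_x, has_y):
    # Hypothetical fitness values.
    return {(False, False): 1.0, (True, False): 1.1,
            (False, True): 1.5, (True, True): 1.2}[(has_x, has_y)]

# Start with X common (90%) and Y rare (10%), assigned independently.
population = [(random.random() < 0.9, random.random() < 0.1) for _ in range(10_000)]

for _ in range(30):  # thirty generations of fitness-proportional reproduction
    weights = [fitness(x, y) for x, y in population]
    population = random.choices(population, weights=weights, k=len(population))

share_x = sum(x for x, _ in population) / len(population)
print(f"share with trait X after 30 generations: {share_x:.0%}")
```

Trait X is “helpful” on its own (fitness 1.1 versus 1.0), yet it nearly disappears, because survival and reproduction select whole combinations of traits, not traits in isolation.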

In any event, it is my view that genetic fitness for survival has become almost irrelevant in places like the United States. The rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.

I’m only making an observation. Eugenics is the province of the left.

__________
* Affective empathy is defined by the same writer as the

capacity not only to understand how another person feels but also to experience those feelings involuntarily and to respond appropriately.

*     *     *

Related Posts:
Luck-Egalitarianism and Moral Luck
Empathy Is Overrated
IQ, Political Correctness, and America’s Present Condition
Evolution and Race
Alienation
Income Inequality and Economic Growth
Egoism and Altruism
Evolution, Culture, and “Diversity”
A Case for Redistribution, Not Made
Greed, Conscience, and Big Government
Ruminations on the Left in America
The Harmful Myth of Inherent Equality
Not-So-Random Thoughts (XI) (first entry)

Signature

Crime Revisited

I last took a comprehensive look at crime seven years ago. That analysis drew on statistics for 1960-2004. The results reported in this post are based on statistics for 1960-2009. The newer analysis doesn’t contradict the older one, but it does add some explanatory power.

Cutting to the chase, the following equation explains the rate of violent and property crimes (VPC) as a function of:

BLK — the number of blacks as a decimal fraction of the population

GRO — the change in the rate of growth of real GDP per capita in the previous year, where the rate is expressed as a decimal fraction

PSQ — the square of the decimal fraction representing the proportion of the population in federal and State prisons

ORA — the number of persons of other races (not black or white) as a decimal fraction of the population.

The equation is highly significant (p-value of the F-statistic = 1.44179E-31), as are the intercept and the coefficients (p-values in parentheses):

VPC =
- 333768 (3.30579E-28)
+ 339535 BLK (1.06615E-29)
- 6133 GRO (0.00065)
- 174136761 PSQ (1.00729E-15)
- 27614 ORA (0.0018)

(Values are rounded to the nearest whole number. The adjusted r-squared is 0.96, and the standard error of the estimate is 5.3 percent of the mean value of VPC.)
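
For readers who want to see the mechanics, here is a minimal sketch of how a regression of this form can be estimated by ordinary least squares. The series are placeholders, not the data behind the estimates above, so the printed coefficients mean nothing; only the method is of interest.

```python
# A minimal sketch of fitting VPC on BLK, GRO, PSQ, and ORA by least squares.
# Placeholder series only; not the data underlying the estimates above.
import numpy as np

rng = np.random.default_rng(0)
n = 50  # hypothetical number of annual observations

BLK = rng.uniform(0.10, 0.13, n)            # black share of population
GRO = rng.normal(0.0, 0.02, n)              # lagged change in per-capita growth rate
PSQ = rng.uniform(0.001, 0.005, n) ** 2     # squared prison share of population
ORA = rng.uniform(0.01, 0.08, n)            # other-races share of population
VPC = rng.uniform(2000, 6000, n)            # placeholder crime-rate series

X = np.column_stack([np.ones(n), BLK, GRO, PSQ, ORA])
coefs, *_ = np.linalg.lstsq(X, VPC, rcond=None)
print("intercept and coefficients (BLK, GRO, PSQ, ORA):", np.round(coefs, 1))
```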

Here are the actual and estimated values of VPC:

Crime rates (actual vs. estimated)

I’m satisfied with the equation, including the negative sign on the coefficient of ORA, which mainly represents Hispanics and Asians. But I will reassess the equation if actual and estimated VPC continue to diverge as they have since 2007, when the two were almost identical.

*     *     *

Related reading: Greg Allmain, “Another Study Links Violence to the Presence of Specific Genes,” Theden, November 11, 2014

Related posts:
Lock ‘Em Up
Estimating the Rahn Curve: Or, How Government Spending Inhibits Economic Growth
Race and Reason: The Achievement Gap — Causes and Implications
Hispanics and Crime
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Poverty, Crime, and Big Government

Signature

On Writing: Part Four

Part One gives excerpts of W. Somerset Maugham’s candid insights about the craft of writing. Part Two gives my advice to writers of non-fiction works. Part Three recommends some writings about writing, some writers to emulate, and a short list of reference works. This part delivers some sermons about practices to follow if you wish to communicate effectively, be taken seriously, and not be thought of as a semi-literate, self-indulgent, faddish dilettante. (In Part Three, I promised sermonettes, but they grew into sermons as I wrote.)

The first section, “Stasis, Progress, Regress, and Language,” comes around to a defense of prescriptivism in language. The second section, “Illegitimi Non Carborundum Lingo” (mock-Latin for “Don’t Let the Bastards Wear Down the Language”), counsels steadfastness in the face of political correctness and various sloppy usages.

STASIS, PROGRESS, REGRESS, AND LANGUAGE

To every thing there is a season, and a time to every purpose under the heaven….

Ecclesiastes 3:1 (King James Bible)

Nothing man-made is permanent; consider, for example, the list of empires here. In spite of the history of empires — and other institutions and artifacts of human endeavor — most people seem to believe that the future will be much like the present. And if the present embodies progress of some kind, most people seem to expect that progress to continue.

Things do not simply go on as they have been without the expenditure of requisite effort. Take the Constitution’s broken promises of liberty, about which I have written so much. Take the resurgence of Russia as a rival for international influence. This has been in the works for about 20 years, but didn’t register on most Americans until the recent Crimean crisis and related events in Ukraine. What did Americans expect? That the U.S. could remain the unchallenged superpower while reducing its armed forces to the point that they were strained by relatively small wars in Afghanistan and Iraq? That Vladimir Putin would be cowed by an American president who had so blatantly advertised his hopey-changey attitude toward Iran and Islam, while snubbing traditional allies like Poland and Israel?

Turning to naïveté about progress, I offer Steven Pinker’s fatuous The Better Angels of Our Nature: Why Violence Has Declined. Pinker tries to show that human beings are becoming kinder and gentler. I have much to say in another post about Pinker’s thesis. One of my sources is Robert Epstein’s review of Pinker’s book. This passage is especially apt:

The biggest problem with the book … is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Less important in the grand scheme, but no less wrong-headed, is the idea of limitless progress in the arts. To quote myself:

In the early decades of the twentieth century, the visual, auditory, and verbal arts became an “inside game.” Painters, sculptors, composers (of “serious” music), choreographers, and writers of fiction began to create works not for the enjoyment of audiences but for the sake of exploring “new” forms. Given that the various arts had been perfected by the early 1900s, the only way to explore “new” forms was to regress toward primitive ones — toward a lack of structure…. Aside from its baneful influence on many true artists, the regression toward the primitive has enabled persons of inferior talent (and none) to call themselves “artists.” Thus modernism is banal when it is not ugly.

Painters, sculptors, etc., have been encouraged in their efforts to explore “new” forms by critics, by advocates of change and rebellion for its own sake (e.g., “liberals” and “bohemians”), and by undiscriminating patrons, anxious to be au courant. Critics have a special stake in modernism because they are needed to “explain” its incomprehensibility and ugliness to the unwashed.

The unwashed have nevertheless rebelled against modernism, and so its practitioners and defenders have responded with condescension, one form of which is the challenge to be “open minded” (i.e., to tolerate the second-rate and nonsensical). A good example of condescension is heard on Composers Datebook, a syndicated feature that runs on some NPR stations. Every Composers Datebook program closes by “reminding you that all music was once new.” As if to lump Arnold Schoenberg and John Cage with Johann Sebastian Bach and Ludwig van Beethoven.

All music, painting, sculpture, dance, and literature was once new, but not all of it is good. Much (most?) of what has been produced since 1900 is inferior, self-indulgent crap.

And most of the ticket-buying public knows it. Take opera, for example. A recent article purports to show that “Opera is dead, in one chart” (Christopher Ingraham, The Washington Post, October 31, 2014). Here’s the chart and the writer’s interpretation of it:

The chart shows that opera ceased to exist as a contemporary art form roughly around 1970. It’s from a blog post by composer and programmer Suby Raman, who scraped the Met’s public database of performances going back to the 19th century. As Raman notes, 50 years is an insanely low bar for measuring the “contemporary” – in pop music terms, it would be like considering The Beatles’ I Wanna Hold Your Hand as cutting-edge.

Back at the beginning of the 20th century, anywhere from 60 to 80 percent of Met performances were of operas composed some time in the 50 years prior. But since 1980, the share of contemporary performances has surpassed 10 percent only once.

Opera, as a genre, is essentially frozen in amber – Raman found that the median year of composition of pieces performed at the Met has always been right around 1870. In other words, the Met is essentially performing the exact same pieces now that it was 100 years ago….

Contrary to Ingraham, opera isn’t dead; for example, there are more than 220 active opera companies in the U.S. It’s just that there’s little demand for operatic works written after the late 1800s. Why? Because most opera-lovers don’t want to hear the strident, discordant, unmelodic trash that came later. Giacomo Puccini, who wrote melodic crowd-pleasers until his death in 1924, is an exception that proves the rule.

It occurred to me recently that language is in the same parlous state as the arts. Written and spoken English improved steadily as Americans became more educated — and as long as that education included courses which prescribed rules of grammar and usage. By “improved” I mean that communication became easier and more effective; specifically:

  • A larger fraction of Americans followed the same rules in formal communications (e.g., speeches, business documents, newspapers, magazines, and books).
  • Movies and radio and TV shows also tended to follow those rules, thereby reaching vast numbers of Americans who did little or no serious reading.
  • There was a “trickle down” effect on Americans’ written and spoken discourse, especially where it involved mere acquaintances or strangers. Standard American English became a kind of lingua franca, which enabled the speaker or writer to be understood and taken seriously.

I call that progress.

There is, however, an (unfortunately) influential attitude toward language known as descriptivism. It is distinct from (and often opposed to) rule-setting (prescriptivism). Consider this passage from the first chapter of an online text:

Prescriptive grammar is based on the idea that there is a single right way to do things. When there is more than one way of saying something, prescriptive grammar is generally concerned with declaring one (and only one) of the variants to be correct. The favored variant is usually justified as being better (whether more logical, more euphonious, or more desirable on some other grounds) than the deprecated variant. In the same situation of linguistic variability, descriptive grammar is content simply to document the variants – without passing judgment on them.

This misrepresents the role of prescriptive grammar. It’s widely understood that there’s more than one way of saying something, and more than one way that’s understandable to others. The rules of prescriptive grammar, when followed, improve understanding in two ways. First, by avoiding utterances that would be incomprehensible or, at least, very hard to understand. Second, by ensuring that utterances aren’t simply ignored or rejected out of hand because their form indicates that the writer or speaker is either ill-educated or stupid.

What, then, is the role of descriptive grammar? The authors offer this:

[R]ules of descriptive grammar have the status of scientific observations, and they are intended as insightful generalizations about the way that speakers use language in fact, rather than about the way that they ought to use it. Descriptive rules are more general and more fundamental than prescriptive rules in the sense that all sentences of a language are formed in accordance with them, not just a more or less arbitrary subset of shibboleth sentences. A useful way to think about the descriptive rules of a language … is that they produce, or generate, all the sentences of a language. The prescriptive rules can then be thought of as filtering out some (relatively minute) portion of the entire output of the descriptive rules as socially unacceptable.

Let’s consider the assertion that descriptive rules produce all the sentences of a language. What does that mean? It seems to mean that the actual rules of a language can be inferred by examining sentences uttered or written by users of the language. But which users? Native users? Adults? Adults who have graduated from high school? Users with IQs of at least 85?

Pushing on, let’s take a closer look at descriptive rules and their utility. The authors say that

we adopt a resolutely descriptive perspective concerning language. In particular, when linguists say that a sentence is grammatical, we don’t mean that it is correct from a prescriptive point of view, but rather that it conforms to descriptive rules….

The descriptive rules amount to this: they conform to the practices that speakers and writers actually use in an attempt to convey ideas, whether or not those practices state the ideas clearly and concisely. Thus the authors approve of these sentences because they’re of a type that might well occur in colloquial speech:

Over there is the guy who I went to the party with.

Over there is the guy with whom I went to the party.

(Both are clumsy ways of saying “I went to the party with that person.”)

Bill and me went to the store.

(“Bill and I went to the store.” or “Bill went to the store with me.” or “I went to the store with Bill.” Aha! Three ways to say it correctly, not just one way.)

But the authors label the following sentences as ungrammatical because they don’t comport with colloquial speech:

Over there is guy the who I went to party the with.

Over there is the who I went to the party with guy.

Bill and me the store to went.

In other words, the authors accept as grammatical anything that a speaker or writer is likely to say, according to the “rules” that can be inferred from colloquial speech and writing. It follows that whatever is is right, even “Bill and me to the store went” or “Went to the store Bill and me,” which aren’t far-fetched variations on “Bill and me went to the store.” (Yoda-isms they read like.) They’re understandable, but only with effort. And further evolution would obliterate their meaning.

The fact is that the authors of the online text — like descriptivists generally — don’t follow their own anarchistic prescription. Wilson Follett puts it this way in Modern American Usage: A Guide:

It is … one of the striking features of the libertarian position [with respect to language] that it preaches an unbuttoned grammar in a prose style that is fashioned with the utmost grammatical rigor. H.L. Mencken’s two thousand pages on the vagaries of the American language are written in the fastidious syntax of a precisian. If we go by what these men do instead of by what they say, we conclude that they all believe in conventional grammar, practice it against their own preaching, and continue to cultivate the elegance they despise in theory….

[T]he artist and the user of language for practical ends share an obligation to preserve against confusion and dissipation the powers that over the centuries the mother tongue has acquired. It is a duty to maintain the continuity of speech that makes the thought of our ancestors easily understood, to conquer Babel every day against the illiterate and the heedless, and to resist the pernicious and lulling dogma that in language … whatever is is right and doing nothing is for the best (pp. 30-1).

Follett also states the true purpose of prescriptivism, which isn’t to prescribe rules for their own sake:

[This book] accept[s] the long-established conventions of prescriptive grammar … on the theory that freedom from confusion is more desirable than freedom from rule…. (op. cit., p. 243).

E.B. White puts it more colorfully in his introduction to The Elements of Style. Writing about William Strunk Jr., author of the original version of the book, White says:

All through The Elements of Style one finds evidence of the author’s deep sympathy for the reader. Will felt that the reader was in serious trouble most of the time, a man floundering in a swamp, and that it was the duty of anyone attempting to write English to drain this swamp quickly and get his man up on dry ground, or at least throw him a rope. In revising the text, I have tried to hold steadily in mind this belief of his, this concern for the bewildered reader (p. xvi, Third Edition).

Descriptivists would let readers founder in the swamp of incomprehensibility. If descriptivists had their way — or what they claim to be their way — American English would, like the arts, recede into formless primitivism.

Eternal vigilance about language is the price of comprehensibility.

ILLEGITIMI NON CARBORUNDUM LINGO

The vigilant are sorely tried these days. What follows are several restrained rants about some practices that should be resisted and repudiated.

Eliminate Filler Words

When I was a child, most parents and all teachers promptly ordered children to desist from saying “uh” between words. “Uh” was then the filler word favored by children, adolescents, and even adults. The resort to “uh” meant that the speaker was stalling because he had opened his mouth without having given enough thought to what he meant to say.

Next came “you know.” It has been displaced, in the main, by “like,” where it hasn’t been joined to “like” in the formation “like, you know.”

The need of a filler word (or phrase) seems ineradicable. Too many people insist on opening their mouths before thinking about what they’re about to say. Given that, I urge Americans in need of a filler word to use “uh” and eschew “like” and “like, you know.” “Uh” is far less distracting and irritating than the rat-a-tat of “like-like-like-like.”

Of course, it may be impossible to return to “uh.” Its brevity may not give the users of “like” enough time to organize their TV-smart-phone-video-game-addled brains and deliver coherent speech.

In any event, speech influences writing. Sloppy speech begets sloppy writing, as I know too well. I have spent the past 50 years of my life trying to undo habits of speech acquired in my childhood and adolescence — habits that still creep into my writing if I drop my guard.

Don’t Abuse Words

How am I supposed to know what you mean if you abuse perfectly good words? Here I discuss four prominent examples of abuse.

Anniversary

Too many times in recent years I’ve heard or read something like this: “Sally and me are celebrating our one-year anniversary.” The “me” is bad enough; “one-year anniversary” (or any variation of it) is truly egregious.

The word “anniversary” means “the annually recurring date of a past event.” To write or say “x-year anniversary” is redundant as well as graceless. Just write or say “first anniversary,” “two-hundred fiftieth anniversary,” etc., as befits the occasion.

To write or say “x-month anniversary” is nonsensical. Something that happened less than a year ago can’t have an anniversary. What is meant is that such-and-such happened “x” months ago. Just say it.

Data

A person who writes or says “data is” is at best an ignoramus and at worst a Philistine.

Language, above all else, should be used to make one’s thoughts clear to others. The pairing of a plural noun and a singular verb form is distracting, if not confusing. Even though datum is seldom used by Americans, it remains the singular foundation of data, which is the plural form. Data, therefore, never “is”; data always “are.”

H.W. Fowler says:

Latin plurals sometimes become singular English words (e.g., agenda, stamina) and data is often so treated in U.S.; in Britain this is still considered a solecism… (A Dictionary of Modern English Usage, Second Edition, p.119).

But Wilson Follett is better on the subject:

Those who treat data as a singular doubtless think of it as a generic noun, comparable to knowledge or information… [TEA: a generous interpretation]. The rationale of agenda as a singular is its use to mean a collective program of action, rather than separate items to be acted on. But there is as yet no obligation to change the number of data under the influence of error mixed with innovation (op. cit., pp. 130-1).

Hopefully and Its Brethren

Mark Liberman of Language Log discusses

the AP Style Guide’s decision to allow the use of hopefully as a sentence adverb, announced on Twitter at 6:22 a.m. on 17 April 2012:

Hopefully, you will appreciate this style update, announced at #aces2012. We now support the modern usage of hopefully: it’s hoped, we hope.

Liberman, who is a descriptivist, defends AP’s egregious decision. His defense consists mainly of citing noted writers who have used “hopefully” where they meant “it is to be hoped.” I suppose that if those same noted writers had chosen to endanger others by driving on the wrong side of the road, Liberman would praise them for their “enlightened” approach to driving.

Geoff Nunberg also defends “hopefully” in “The Word ‘Hopefully’ Is Here to Stay, Hopefully,” which appears at npr.org. Nunberg (or the headline writer) may be right in saying that “hopefully” is here to stay. But that does not excuse the widespread use of the word in ways that are imprecise and meaningless.

The crux of Nunberg’s defense is that “hopefully” conveys a nuance that “language snobs” (like me) are unable to grasp:

Some critics object that [“hopefully” is] a free-floating modifier (a Flying Dutchman adverb, James Kilpatrick called it) that isn’t attached to the verb of the sentence but rather describes the speaker’s attitude. But floating modifiers are mother’s milk to English grammar — nobody objects to using “sadly,” “mercifully,” “thankfully” or “frankly” in exactly the same way.

Or people complain that “hopefully” doesn’t specifically indicate who’s doing the hoping. But neither does “It is to be hoped that,” which is the phrase that critics like Wilson Follett offer as a “natural” substitute. That’s what usage fetishism can drive you to — you cross out an adverb and replace it with a six-word impersonal passive construction, and you tell yourself you’ve improved your writing.

But the real problem with these objections is their tone-deafness. People get so worked up about the word that they can’t hear what it’s really saying. The fact is that “I hope that” doesn’t mean the same thing that “hopefully” does. The first just expresses a desire; the second makes a hopeful prediction. I’m comfortable saying, “I hope I survive to 105” — it isn’t likely, but hey, you never know. But it would be pushing my luck to say, “Hopefully, I’ll survive to 105,” since that suggests it might actually be in the cards.

Floating modifiers may be common in English, but that does not excuse them. Given Nunberg’s evident attachment to them, I am unsurprised by his assertion that “nobody objects to using ‘sadly,’ ‘mercifully,’ ‘thankfully’ or ‘frankly’ in exactly the same way.”

Nobody, Mr. Nunberg? Hardly. Anyone who cares about clarity and precision in the expression of ideas will object to such usages. A good editor would rewrite any sentence that begins with a free-floating modifier — no matter which one of them it is.

Nunberg’s defense against such rewriting is that Wilson Follett offers “It is to be hoped that” as a cumbersome, wordy substitute for “hopefully.” I assume that Nunberg refers to Follett’s discussion of “hopefully” in Modern American Usage. If so, Nunberg once again proves himself an adherent of imprecision, for this is what Follett actually says about “hopefully”:

The German language is blessed with an adverb, hoffentlich, that affirms the desirability of an occurrence that may or may not come to pass. It is generally to be translated by some such periphrasis as it is to be hoped that; but hack translators and persons more at home in German than in English persistently render it as hopefully. Now, hopefully and hopeful can indeed apply to either persons or affairs. A man in difficulty is hopeful of the outcome, or a situation looks hopeful; we face the future hopefully, or events develop hopefully. What hopefully refuses to convey in idiomatic English is the desirability of the hoped-for event. College, we read, is a place for the development of habits of inquiry, the acquisition of knowledge and, hopefully, the establishment of foundations of wisdom. Such a hopefully is un-English and eccentric; it is to be hoped is the natural way to express what is meant. The underlying mentality is the same—and, hopefully, the prescription for cure is the same (let us hope) / With its enlarged circulation–and hopefully also increased readership–[a periodical] will seek to … (we hope) / Party leaders had looked confidently to Senator L. to win . . . by a wide margin and thus, hopefully, to lead the way to victory for. . . the Presidential ticket (they hoped) / Unfortunately–or hopefully, as you prefer it–it is none too soon to formulate the problems as swiftly as we can foresee them. In the last example, hopefully needs replacing by one of the true antonyms of unfortunately–e.g. providentially.

The special badness of hopefully is not alone that it strains the sense of -ly to the breaking point, but that it appeals to speakers and writers who do not think about what they are saying and pick up VOGUE WORDS [another entry in Modern American Usage] by reflex action. This peculiar charm of hopefully accounts for its tiresome frequency. How readily the rotten apple will corrupt the barrel is seen in the similar use of transferred meaning in other adverbs denoting an attitude of mind. For example: Sorrowfully (regrettably), the officials charged with wording such propositions for ballot presentation don’t say it that way / the “suicide needle” which–thankfully–he didn’t see fit to use (we are thankful to say). Adverbs so used lack point of view; they fail to tell us who does the hoping, the sorrowing, or the being thankful. Writers who feel the insistent need of an English equivalent for hoffentlich might try to popularize hopingly, but must attach it to a subject capable of hoping (op. cit., pp. 178-9).

Follett, contrary to Nunberg’s assertion, does not offer “It is to be hoped that” as a substitute for “hopefully,” which would “cross out an adverb and replace it with a six-word impersonal passive construction.” Follett gives “it is to be hoped that” as the sense of “hopefully.” But, as the preceding quotation attests, Follett is able to replace “hopefully” (where it is misused) with a few short words that take no longer to write or say than “hopefully,” and which convey the writer’s or speaker’s intended meaning more clearly. And if it does take a few extra words to say something clearly, why begrudge those words?

What about the other floating modifiers — such as “sadly,” “mercifully,” “thankfully” and “frankly” — which Nunberg defends with much passion and no logic? Follett addresses those others in the third paragraph quoted above, but he does not dispose of them properly. For example, I would not simply substitute “regrettably” for “sorrowfully”; neither is adequate. What is wanted is something like this: “The officials who write propositions for ballots should not have said … , which is misleading (vague/ambiguous).” More words? Yes, but so what? (See above.)

In any event, a writer or speaker who is serious about expressing himself clearly to an audience will never say things like “Sadly (regrettably), the old man died,” when he means either “I am (we are/they are/everyone who knew him is) saddened by (regrets) the old man’s dying,” or (less probably) “The old man grew sad as he died” or “The old man regretted dying.” I leave “mercifully,” “thankfully,” “frankly” and the rest of the overused “-ly” words as an exercise for the reader.

The aims of a writer or speaker ought to be clarity and precision, not a stubborn, pseudo-logical insistence on using a word or phrase merely because it is in vogue or (more likely) because it irritates so-called language snobs. I doubt that even the pseudo-logical “language slobs” of Nunberg’s ilk condone “like” and “you know” as interjections. But, by Nunberg’s “logic,” those interjections should be condoned — nay, encouraged — because “everyone” knows what someone who uses them is “really saying,” namely, “I am too stupid or lazy to express myself clearly and precisely.”

Literally

This is from Dana Coleman’s article “According to the Dictionary, ‘Literally’ Also Now Means ‘Figuratively’,” (Salon, August 22, 2013):

Literally, of course, means something that is actually true: “Literally every pair of shoes I own was ruined when my apartment flooded.”

When we use words not in their normal literal meaning but in a way that makes a description more impressive or interesting, the correct word, of course, is “figuratively.”

But people increasingly use “literally” to give extreme emphasis to a statement that cannot be true, as in: “My head literally exploded when I read Merriam-Webster, among others, is now sanctioning the use of literally to mean just the opposite.”

Indeed, Ragan’s PR Daily reported last week that Webster, Macmillan Dictionary and Google have added this latter informal use of “literally” as part of the word’s official definition. The Cambridge Dictionary has also jumped on board….

Webster’s first definition of literally is, “in a literal sense or manner; actually.” Its second definition is, “in effect; virtually.” In addressing this seeming contradiction, its authors comment:

“Since some people take sense 2 to be the opposite of sense 1, it has been frequently criticized as a misuse. Instead, the use is pure hyperbole intended to gain emphasis, but it often appears in contexts where no additional emphasis is necessary.”…

The problem is that a lot of people use “literally” when they mean “figuratively” because they don’t know better. It’s literally* incomprehensible to me that the editors of dictionaries would suborn linguistic anarchy. Hopefully,** they’ll rethink their rashness.
_________
* “Literally” is used correctly, though it’s superfluous here.
** “Hopefully” is used incorrectly, but in the spirit of the times.

Punctuate Properly

I can’t compete with Lynne Truss’s Eats, Shoots & Leaves: The Zero-Tolerance Approach to Punctuation (discussed in Part Three), so I won’t try. Just read it and heed it.

But I must address the use of the hyphen in compound adjectives, and the serial comma.

Regarding the hyphen, David Bernstein of The Volokh Conspiracy writes:

I frequently have disputes with law review editors over the use of dashes. Unlike co-conspirator Eugene, I’m not a grammatical expert, or even someone who has much of an interest in the subject.

But I do feel strongly that I shouldn’t use a dash between words that constitute a phrase, as in “hired gun problem”, “forensic science system”, or “toxic tort litigation.” Law review editors seem to generally want to change these to “hired-gun problem”, “forensic-science system”, and “toxic-tort litigation.” My view is that “hired” doesn’t modify “gun”; rather “hired gun” is a self-contained phrase. The same with “forensic science” and “toxic tort.”

Most of the commenters are right to advise Bernstein that the “dashes” — he means hyphens — are necessary. Why? To avoid confusion as to what is modifying the noun “problem.”

In “hired gun,” for example, “hired” (adjective) modifies “gun” (noun, meaning “gunslinger” or the like). But in “hired-gun problem,” “hired-gun” is a compound adjective which requires both of its parts to modify “problem.” It’s not a “hired problem” or a “gun problem,” it’s a “hired-gun problem.” The function of the hyphen is to indicate that “hired” and “gun,” taken separately, are meaningless as modifiers of “problem,” that is, to ensure that the meaning of the adjective-noun phrase is not misread.

A hyphen isn’t always strictly necessary in such constructions. But using it consistently avoids confusion and the possibility of misinterpretation.

The consistent use of the hyphen to form a compound adjective has a counterpart in the consistent use of the serial comma, which is the comma that precedes the last item in a list of three or more items (e.g., the red, white, and blue). Newspapers (among other sinners) eschew the serial comma for reasons too arcane to pursue here. Thoughtful counselors advise its use. (See, for example, Follett at pp. 422-3.) Why? Because the serial comma, like the hyphen in a compound adjective, averts ambiguity. It isn’t always necessary, but if it is used consistently, ambiguity can be avoided. (Here’s a great example, from the Wikipedia article linked to in the first sentence of this paragraph: “To my parents, Ayn Rand and God.” The writer means, of course, “To my parents, Ayn Rand, and God.”)

A little punctuation goes a long way.

Stand Fast against Political Correctness

As a result of political correctness, some words and phrases have gone out of favor, needlessly. Others are cluttering the language, needlessly. Political correctness manifests itself in euphemisms, verboten words, and what I call gender preciousness.

Euphemisms

These are much-favored by persons of the left, who seem to have an aversion to reality. Thus, for example:

  • “Crippled” became “handicapped,” which became “disabled” and then “differently abled” or “something-challenged.”
  • “Stupid” became “learning disabled,” which became “special needs” (a euphemistic category that houses more than the stupid).
  • “Poor” became “underprivileged,” which became “economically disadvantaged,” which became “entitled” (to other people’s money), in fact if not in word.
  • Colored persons became Negroes, who became blacks, then African-Americans, and now (often) persons of color.

How these linguistic contortions have helped the crippled, stupid, poor, and colored is a mystery to me. Tact is admirable, but euphemisms aren’t tactful. They’re insulting because they’re condescending.

Verboten Words

The list is long; see this and this, for example. Words become verboten for the same reason that euphemisms arise: to avoid giving offense, even where offense wouldn’t or shouldn’t be taken.

David Bernstein, writing at TCS Daily several years ago, recounted some tales about political correctness. This one struck close to home:

One especially merit-less [hostile work environment] claim that led to a six-figure verdict involved Allen Fruge, a white Department of Energy employee based in Texas. Fruge unwittingly spawned a harassment suit when he followed up a southeast Texas training session with a bit of self-deprecating humor. He sent several of his colleagues who had attended the session with him gag certificates anointing each of them as an honorary Coon Ass — usually spelled coonass — a mildly derogatory slang term for a Cajun. The certificate stated that [y]ou are to sing, dance, and tell jokes and eat boudin, cracklins, gumbo, crawfish etouffe and just about anything else. The joke stemmed from the fact that southeast Texas, the training session location, has a large Cajun population, including Fruge himself.

An African American recipient of the certificate, Sherry Reid, chief of the Nuclear and Fossil Branch of the DOE in Washington, D.C., apparently missed the joke and complained to her supervisors that Fruge had called her a coon. Fruge sent Reid a formal (and humble) letter of apology for the inadvertent offense, and explained what Coon Ass actually meant. Reid nevertheless remained convinced that Coon Ass was a racial pejorative, and demanded that Fruge be fired. DOE supervisors declined to fire Fruge, but they did send him to diversity training. They also reminded Reid that the certificate had been meant as a joke, that Fruge had meant no offense, that Coon Ass was slang for Cajun, and that Fruge sent the certificates to people of various races and ethnicities, so he clearly was not targeting African Americans. Reid nevertheless sued the DOE, claiming that she had been subjected to a racial epithet that had created a hostile environment, a situation made worse by the DOE’s failure to fire Fruge.

Reid’s case was seemingly frivolous. The linguistics expert her attorney hired was unable to present evidence that Coon Ass meant anything but Cajun, or that the phrase had racist origins, and Reid presented no evidence that Fruge had any discriminatory intent when he sent the certificate to her. Moreover, even if Coon Ass had been a racial epithet, a single instance of being given a joke certificate, even one containing a racial epithet, by a non-supervisory colleague who works 1,200 miles away does not seem to remotely satisfy the legal requirement that harassment must be severe and pervasive for it to create hostile environment liability. Nevertheless, a federal district court allowed the case to go to trial, and the jury awarded Reid $120,000, plus another $100,000 in attorneys’ fees. The DOE settled the case before its appeal could be heard for a sum very close to the jury award.

I had a similar though less costly experience some years ago, when I was chief financial and administrative officer of a defense think-tank. In the course of discussing the company’s budget during a meeting with employees from across the company, I uttered “niggardly” (meaning stingy or penny-pinching). The next day a fellow vice president informed me that some of the black employees from her division had been offended by “niggardly.” I suggested that she give her employees remedial training in English vocabulary. That should have been the verdict in the Reid case.

Gender Preciousness

It has become fashionable for academicians and pseudo-serious writers to use “she” where “he” long served as the generic (and sexless) reference to a singular third person. Here is an especially grating passage from an article by Oliver Cussen:

What is a historian of ideas to do? A pessimist would say she is faced with two options. She could continue to research the Enlightenment on its own terms, and wait for those who fight over its legacy—who are somehow confident in their definitions of what “it” was—to take notice. Or, as [Jonathan] Israel has done, she could pick a side, and mobilise an immense archive for the cause of liberal modernity or for the cause of its enemies. In other words, she could join Moses Herzog, with his letters that never get read and his questions that never get answered, or she could join Sandor Himmelstein and the loud, ignorant bastards (“The Trouble with the Enlightenment,” Prospect, May 5, 2013).

I don’t know about you, but I’m distracted by the use of the generic “she,” especially by a male. First, it’s not the norm (or wasn’t the norm until the thought police made it so). Thus my first reaction to reading it in place of “he” is to wonder who this “she” is, whereas the function of “he” as a stand-in for anyone (regardless of gender) was always well understood. Second, the usage is so obviously meant to mark the writer as “sensitive” and “right thinking” that it calls into question his sincerity and objectivity.

I could go on about the use of “he or she” in place of “he” or “she.” But it should be enough to call it what it is: verbal clutter.

Then there is “man,” which for ages was well understood (in the proper context) as referring to persons in general, not to male persons in particular. (“Mankind” merely adds a superfluous syllable.)

The short, serviceable “man” has been replaced, for the most part, by “humankind.” I am baffled by the need to replace one syllable with three. I am baffled further by the persistence of “man” — the supposedly sexist term — in the three-syllable substitute. But it gets worse when writers strain to avoid the solo use of “man” by resorting to “human beings” and the “human species.” These are longer than “humankind,” and both retain the accursed “man.”

Don’t Split Infinitives

Just don’t do it, regardless of the pleadings of descriptivists. Even Follett counsels the splitting of infinitives, when the occasion demands it. I part ways with Follett in this matter, and stand ready to be rebuked for it.

Consider the case of Eugene Volokh, a known grammatical relativist, who scoffs at “to increase dramatically” — as if “to dramatically increase” would be better. The meaning of “to increase dramatically” is clear. The only reason to write “to dramatically increase” would be to avoid the appearance of stuffiness; that is, to pander to the least cultivated of one’s readers.

Seeming unstuffy (i.e., without standards) is neither a necessary nor a sufficient reason to split an infinitive. The rule against splitting infinitives, like most other grammatical rules, serves the valid and useful purpose of keeping English from sliding any further down the slippery slope of incomprehensibility than it already has.

If an unsplit infinitive makes a clause or sentence seem awkward, the clause or sentence should be recast to avoid the awkwardness. Better that than make an exception that leads to further exceptions — and thence to Babel.

A Dictionary of Modern English Usage (a.k.a. Fowler’s Modern English Usage) counsels splitting an infinitive where recasting doesn’t seem to work:

We admit that separation of to from its infinitive is not in itself desirable, and we shall not gratuitously say either ‘to mortally wound’ or ‘to mortally be wounded’…. We maintain, however, that a real [split infinitive], though not desirable in itself, is preferable to either of two things, to real ambiguity, and to patent artificiality…. We will split infinitives sooner than be ambiguous or artificial; more than that, we will freely admit that sufficient recasting will get rid of any [split infinitive] without involving either of those faults, and yet reserve to ourselves the right of deciding in each case whether recasting is worth while. Let us take an example: ‘In these circumstances, the Commission … has been feeling its way to modifications intended to better equip successful candidates for careers in India and at the same time to meet reasonable Indian demands.’… What then of recasting? ‘intended to make successful candidates fitter for’ is the best we can do if the exact sense is to be kept… (p. 581, Second Edition).

Good try, but not good enough. This would do: “In these circumstances, the Commission … has been considering modifications that would better equip successful candidates for careers in India and at the same time meet reasonable Indian demands.”

Enough said? I think so.

*     *     *

Some readers may conclude that I prefer stodginess to liveliness. That’s not true, as any discerning reader of this blog will know. I love new words and new ways of using words, and I try to engage readers while informing and persuading them. But I do those things within the expansive boundaries of prescriptive grammar and usage. Those boundaries will change with time, as they have in the past. But they should change only when change serves understanding, not when it serves the whims of illiterates and language anarchists.

Signature

Election 2014: The Outlook on E-Day

UPDATED BELOW

Here it is, the day that anti-leftists have been waiting for. The portents remain favorable for GOP control of Congress.

As of this moment, the “poll of polls” at RealClearPolitics.com has the GOP gaining 7 Senate seats, for a 52-48 majority (assuming that 3 independents caucus with Democrats), and winning at least 226 House seats (241 if the tossups divide evenly). Henry Olsen also predicts that the GOP will pick up 7 Senate seats. And he sees the GOP taking 245 House seats.

The projected outcome in the House is close to my own estimate, which doesn’t rely on polls. In any event, the GOP is certain to retain its House majority, and almost certain to increase it — perhaps winning more seats than in any election since World War II. But don’t expect to wake up tomorrow morning with a GOP Senate majority in the bag. It may not be secured until December 6, with a runoff between Mary Landrieu (D) and Bill Cassidy (R), or until January 6, with a runoff between Michelle Nunn (D) and David Perdue (R).

The GOP’s resurgence has a lot (perhaps everything) to do with the continuing unpopularity of Obama and Obamacare. Both are less popular now than they were four years ago, when the GOP gained 6 Senate seats and won 242 House seats:

[Chart: Election indicators, 2014 vs. 2010]

The indicators are drawn from the Obama Approval Index History published at Rasmussen Reports, and Rasmussen’s sporadic polling of likely voters about Obamacare (latest report here).

The first indicator (blue lines) measures Obama’s overall rating with likely voters. This indicator is a measure of superficial support for Obama. On that score, he’s just as unpopular now as he was four years ago. A plus for the GOP.

The second indicator (black lines) measures Obama’s rating with likely voters who express strong approval or disapproval of him. Obama’s strong-approval rating remains well below the pace of four years ago. A big plus for the GOP.

The third indicator (red lines) represents Obama’s strong-approval quotient (fraction of likely voters who strongly approve/fraction of likely voters who approve) divided by his strong-disapproval quotient (fraction of likely voters who strongly disapprove/fraction of likely voters who disapprove). I call this the “enthusiasm” indicator. Higher values represent greater enthusiasm for Obama; lower values, less enthusiasm. This is perhaps the best measure of support for Obama — and it looks a lot worse (for Democrats) than it did in 2010. Another big plus for the GOP.

The green points (connected by lines) are plots of Obamacare’s standing, as measured by the ratio of strong approval to strong disapproval among likely voters. Obamacare is faring much worse in 2014 than it did in 2010. Yet another big plus for the GOP.
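For readers who want to see the arithmetic, here is a minimal sketch in Python of the “enthusiasm” indicator described above. The function name and the poll shares are hypothetical, invented purely for illustration; they are not Rasmussen’s actual figures.

def enthusiasm_indicator(strong_approve, approve, strong_disapprove, disapprove):
    # Ratio of the strong-approval quotient to the strong-disapproval quotient,
    # where each quotient is the "strong" share divided by the overall share.
    approval_quotient = strong_approve / approve
    disapproval_quotient = strong_disapprove / disapprove
    return approval_quotient / disapproval_quotient

# Hypothetical shares of likely voters: 25% strongly approve out of 47% who approve;
# 40% strongly disapprove out of 52% who disapprove.
print(round(enthusiasm_indicator(0.25, 0.47, 0.40, 0.52), 2))  # prints 0.69

A value below 1 means that the president’s detractors are more fervent than his supporters; the lower the value, the worse the news for the president’s party.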

UPDATE (11/06/14)

The indicators were on target.

With 52 Senate seats in the bag, the Republican candidate leading the Democrat incumbent in Alaska, and a pending runoff in Louisiana that’s almost certain to result in another GOP gain, it looks like the Republicans will end up with 54 seats. That would be a gain of 9 seats, as against 6 in 2010.

The GOP has already won 243 House seats, and it looks like another 5 will go Republican. A total of 248 would give the GOP its largest House majority since World War II.

UPDATE (12/17/14)

The final results are in. The GOP gained 9 Senate seats, for a 54-46 majority, and 14 House seats, for a 247-188 majority. That’s a benchmark that I’ll use in future projections of congressional elections.

Signature

On Writing: Part Three

Part One gives excerpts of W. Somerset Maugham’s candid insights about the craft of writing. Part Two gives my advice to writers of non-fiction works. This part recommends some writings about writing, some writers to emulate, and a short list of reference works. Part Four will deliver some sermonettes about practices to follow if you wish to be taken seriously and not thought of as a semi-literate, self-indulgent, faddish dilettante.

WRITINGS ABOUT WRITING

See Part One for excerpts of Maugham‘s memoir, The Summing Up. Follow the link to order a copy of the book. It’s personal, candid, and insightful. And it bears re-reading at intervals because it’s so densely packed with wisdom.

Read Steven Pinker‘s essay, “Why Academic Writing Stinks” (The Chronicle Review, September 26, 2014). You may not be an academic, but I’ll bet that you sometimes lapse into academese. (I know that I sometimes do.) Pinker’s essay will help you to recognize academese, and to understand why it’s to be avoided.

Pinker’s essay also appears in a booklet, “Why Academics Stink at Writing–and How to Fix It,” which is available here in exchange for your name, your job title, the name of your organization, and your e-mail address. (Whether you wish to give true information is up to you.) Of the four essays that follow Pinker’s, I prefer the one by Michael Munger.

Beyond that, pick and choose by searching on “writers on writing.” Google gave me 193,000 hits. Hidden among the dross, I found this, which led me to this gem: “George Orwell on Writing, How to Counter the Mindless Momentum of Language, and the Four Questions a Great Writer Must Ask Herself.” (“Herself”? I’ll deliver a sermonette about gender in Part Four.)

Those of you who know (or know of) The Elements of Style may wonder why I haven’t mentioned E.B. White. I’m saving him for the next two sections.

WRITERS TO EMULATE

Study Maugham’s The Summing Up for its straightforward style. Consider these opening sentences of a paragraph, for example:

Another cause of obscurity is that the writer is himself not quite sure of his meaning. He has a vague impression of what he wants to say, but has not, either from lack of mental power or from laziness, exactly formulated it in his mind and it is natural enough that he should not find a precise expression for a confused idea. This is due largely to the fact that many writers think, not before, but as they write. The pen originates the thought.

This is a classic example of good writing. The first sentence states the topic of the paragraph. The following sentences elaborate it. Each sentence is just long enough to convey a single, complete thought. Because of that, even the rather long second sentence should be readily understood by a high-school graduate (a graduate of a small-city high school in the 1950s, at least).

I offer the great mathematician, G.H. Hardy, as a second exemplar. In particular, I recommend Hardy’s A Mathematician’s Apology. (It’s an apology in the sense of “a formal written defense of something you believe in strongly,” where the something is the pursuit of pure mathematics.) The introduction by C.P. Snow is better than Hardy’s long essay, but Snow was a published novelist as well as a trained scientist. Hardy’s publications, other than the essay, are mathematical. The essay is notable for its accessibility, even to non-mathematicians. Of its 90 pages, only 23 (clustered near the middle) require a reader to cope with mathematics, but it’s mathematics that shouldn’t daunt a person who has taken and passed high-school algebra.

Hardy’s prose is flawed, to be sure. He overuses shudder quotes, and occasionally gets tangled in a too-long sentence. But I’m taken by his exposition of the art of doing higher mathematics, and the beauty of doing it well. Hardy, in other words, sets an example to be followed by writers who wish to capture the essence of a technical subject and convey that essence to intelligent laymen.

Here are some samples:

There are many highly respectable motives which may lead men to prosecute research, but three which are much more important than the rest. The first (without which the rest must come to nothing) is intellectual curiosity, desire to know the truth. Then, professional pride, anxiety to be satisfied with one’s performance, the shame that overcomes any self-respecting craftsman when his work is unworthy of his talent. Finally, ambition, desire for reputation, and the position, even the power or the money, which it brings. It may be fine to feel, when you have done your work, that you have added to the happiness or alleviated the sufferings of others, but that will not be why you did it. So if a mathematician, or a chemist, or even a physiologist, were to tell me that the driving force in his work had been the desire to benefit humanity, then I should not believe him (nor should I think any better of him if I did). His dominant motives have been those which I have stated and in which, surely, there is nothing of which any decent man need be ashamed.

*     *     *

A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas. A painter makes patterns with shapes and colors, a poet with words. A painting may embody an ‘idea’, but the idea is usually commonplace and unimportant. In poetry, ideas count for a good deal more; but, as Housman insisted, the importance of ideas in poetry is habitually exaggerated…

…A mathematician, on the other hand, has no material to work with but ideas, and his patterns are likely to last longer, since ideas wear less with time than words.

A third exemplar is E.B. White, a successful writer of fiction who is probably best known for The Elements of Style. (It’s usually called “Strunk & White” or “the little book.”) It’s an outgrowth of a slimmer volume of the same name by William Strunk Jr. (Strunk had been dead for 13 years when White produced the first edition of Strunk & White.)

I’ll address the little book’s authoritativeness in the next section. Here, I’ll highlight White’s style of writing. This is from the introduction to the third edition (the last one edited by White):

The Elements of Style, when I re-examined it in 1957, seemed to me to contain rich deposits of gold. It was Will Strunk’s parvum opus, his attempt to cut the vast tangle of English rhetoric down to size and write its rules and principles on the head of a pin. Will himself had hung the tag “little” on the book; he referred to it sardonically and with secret pride as “the little book,” always giving the word “little” a special twist, as though he were putting a spin on a ball. In its original form, it was a forty-three-page summation of the case for cleanliness, accuracy, and brevity in the use of English.

Vivid, direct, and engaging. And the whole book reads like that.

REFERENCE WORKS

If you could have only one book to help you write better, it would be The Elements of Style. (There’s now a fourth edition, for which I can’t vouch, but which seems to cover the same ground as my trusty third edition.) Admittedly, Strunk & White has a vociferous critic, one Geoffrey K. Pullum. But Pullum documents only one substantive flaw: an apparent mischaracterization of what constitutes the passive voice. What Pullum doesn’t say is that the book correctly flays the kind of writing that it calls passive (correctly or not). Further, Pullum derides the book’s many banal headings, while ignoring what follows them: sound advice, backed by concrete examples. (There’s a nice rebuttal of Pullum here.) It’s evident that the book’s real sin — in Pullum’s view — is “bossiness” (prescriptivism), which is no sin at all, as I’ll explain in Part Four.

There are so many good writing tips in Strunk & White that it was hard for me to choose a sample. I randomly chose “Omit Needless Words” (one of the headings derided by Pullum), which opens with a statement of principles:

Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all of his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell.

That would be empty rhetoric, were it not followed by further discussion and 17 specific examples. Here are a few:

the question as to whether should be replaced by whether or the question whether

the reason why is that should be replaced by because

I was unaware of the fact that should be replaced by I was unaware that or I did not know that

His brother, who is a member of the same firm should be replaced by His brother, a member of the same firm

There’s much more than that to Strunk & White, of course. (Go here to see the table of contents.) You’ll become a better writer — perhaps an excellent one — if you carefully read Strunk & White, re-read it occasionally, and apply the principles that it espouses and illustrates.

After Strunk & White, my favorite instructional work is Lynne Truss‘s Eats, Shoots & Leaves: The Zero-Tolerance Approach to Punctuation. I vouch for the accuracy of this description of the book (Publishers Weekly via Amazon.com):

Who would have thought a book about punctuation could cause such a sensation? Certainly not its modest if indignant author, who began her surprise hit motivated by “horror” and “despair” at the current state of British usage: ungrammatical signs (“BOB,S PETS”), headlines (“DEAD SONS PHOTOS MAY BE RELEASED”) and band names (“Hear’Say”) drove journalist and novelist Truss absolutely batty. But this spirited and wittily instructional little volume, which was a U.K. #1 bestseller, is not a grammar book, Truss insists; like a self-help volume, it “gives you permission to love punctuation.” Her approach falls between the descriptive and prescriptive schools of grammar study, but is closer, perhaps, to the latter. (A self-professed “stickler,” Truss recommends that anyone putting an apostrophe in a possessive “its”-as in “the dog chewed it’s bone”-should be struck by lightning and chopped to bits.) Employing a chatty tone that ranges from pleasant rant to gentle lecture to bemused dismay, Truss dissects common errors that grammar mavens have long deplored (often, as she readily points out, in isolation) and makes elegant arguments for increased attention to punctuation correctness: “without it there is no reliable way of communicating meaning.” Interspersing her lessons with bits of history (the apostrophe dates from the 16th century; the first semicolon appeared in 1494) and plenty of wit, Truss serves up a delightful, unabashedly strict and sometimes snobby little book, with cheery Britishisms (“Lawks-a-mussy!”) dotting pages that express a more international righteous indignation.

Next up is Wilson Follett’s Modern American Usage. The link points to a newer edition than the one that I’ve relied on for more than 40 years. Reviews of the newer edition, edited by one Erik Wensberg, are mixed but generally favorable. However, the newer edition seems to lack Follett’s “Introductory,” which is divided into “Usage, Purism, and Pedantry” and “The Need of an Orderly Mind.” If that is so, the newer edition is likely to be less uncompromising toward language relativists like Geoffrey Pullum. The following quotations from Follett’s “Introductory” (one from each section) will give you an idea of Follett’s stand on relativism:

[F]atalism about language cannot be the philosophy of those who care about language; it is the illogical philosophy of their opponents. Surely the notion that, because usage is ultimately what everybody does to words, nobody can or should do anything about them is self-contradictory. Somebody, by definition, does something, and this something is best done by those with convictions and a stake in the outcome, whether the stake of private pleasure or of professional duty or both does not matter. Resistance always begins with individuals.

*     *     *

A great deal of our language is so automatic that even the thoughtful never think about it, and this mere not-thinking is the gate through which solecisms or inferior locutions slip in. Some part, greater or smaller, of every thousand words is inevitably parroted, even by the least parrotlike.

(A reprint of the original edition is available here.)

I have one more book to recommend: The Chicago Manual of Style. Though the book is a must-have for editors, serious writers should also own a copy and consult it often. If you’re unfamiliar with the book, you can get an idea of its vast range and depth of coverage by following the preceding link, clicking on “Look inside,” and perusing the table of contents, first pages, and index.

Every writer should have a good dictionary and thesaurus at hand. I use The Free Dictionary, and am seldom disappointed by it. These also look promising: Dictionary.com and Merriam-Webster. I suggest, you decide (or offer alternatives).

Signature

On Writing: Part Two

In Part One of this series, I sampled the insights of W. Somerset Maugham (English, 1874-1965), a prolific and popular playwright, novelist, short-story writer, and author of non-fiction works. I chose to begin with Maugham — in particular, with excerpts of his memoir, The Summing Up — because of his unquestioned success as a writer and his candid assessment of writers, himself included.

Maugham’s advice to “write lucidly, simply, euphoniously and yet with liveliness” is well-supported by examples and analysis. But Maugham focuses on literary fiction and does not delve into the mechanics of non-fiction writing. Thus this post, which distills lessons learned in my 51 years as a writer, critic, and publisher of non-fiction material, much of it technical.

THE FIRST DRAFT

1. Decide — before you begin to write — on your main point and your purpose for making it.

Can you state your main point in a sentence? If you can’t, you’re not ready to write, unless writing is (for you) a form of therapy or catharsis. If it is, record your thoughts in a private journal and spare the serious readers of the world.

Your purpose may be descriptive, explanatory, or persuasive. An economist may, for example, begin an article by describing the state of the economy, as measured by Gross Domestic Product (GDP). He may then explain that the rate of growth in GDP has receded since the end of World War II, because of greater government spending and the cumulative effect of regulatory activity. He is then poised to make a case for less spending and for the cancellation of regulations that impede economic growth.

2. Avoid wandering from your main point and purpose; use an outline.

You can get by with a bare outline, unless you’re writing a book, a manual, or a long article. Fill the outline as you go. Change the outline if you see that you’ve omitted a step or put some steps in the wrong order. But always work to an outline, however sketchy and malleable it may be.

3. Start by writing an introductory paragraph that summarizes your “story line.”

The introductory paragraph in a news story is known as “the lead” or “the lede” (a spelling that’s meant to convey the correct pronunciation). A classic lead gives the reader the who, what, why, when, where, and how of the story. As noted in Wikipedia, leads aren’t just for journalists:

Leads in essays summarize the outline of the argument and conclusion that follows in the main body of the essay. Encyclopedia leads tend to define the subject matter as well as emphasize the interesting points of the article. Features and general articles in magazines tend to be somewhere between journalistic and encyclopedian in style and often lack a distinct lead paragraph entirely. Leads or introductions in books vary enormously in length, intent and content.

Think of the lead as a target toward which you aim your writing. You should begin your first draft with a lead, even if you later decide to eliminate or radically prune the lead.

4. Lay out a straight path for the reader.

You needn’t fill your outline sequentially, but the outline should trace a linear progression from statement of purpose to conclusion or call for action. Flashbacks and detours can be effective literary devices in the hands of a skilled writer of fiction. But you’re not writing fiction, let alone mystery fiction. So just proceed in a straight line, from beginning to end.

Quips, asides, and anecdotes should be used sparingly, and only if they reinforce your message and don’t distract the reader’s attention from it.

5. Know your audience, and write for it.

I aim at readers who can grasp complex concepts and detailed arguments. But if you’re writing something like a policy manual for employees at all levels of your company, you’ll want to keep it simple and well-marked: short words, short sentences, short paragraphs, numbered sections and sub-sections, and so on.

6. Facts are your friends — unless you’re trying to sell a lie, of course.

Unsupported generalities will defeat your purpose, unless you’re writing for a gullible, uneducated audience. Give concrete examples and cite authoritative references. If your work is technical, show your data and calculations, even if you must put the details in footnotes or appendices to avoid interrupting the flow of your argument. Supplement your words with tables and graphs, if possible, but make them as simple as you can without distorting the underlying facts.

7. Momentum is your best friend.

Write a first draft quickly, even if you must leave holes to be filled later. I’ve always found it easier to polish a rough draft that spans the entire outline than to work from a well-honed but unaccompanied introductory section.

FROM FIRST DRAFT TO FINAL VERSION

8. Your first draft is only that — a draft.

Unless you’re a prodigy, you’ll have to do some polishing (probably a lot) before you have something that a reader can follow with ease.

9. Where to begin? Stand back and look at the big picture.

Is your “story line” clear? Are your points logically connected? Have you omitted key steps or important facts? If you find problems, fix them before you start nit-picking your grammar, syntax, and usage.

10. Nit-picking is important.

Errors of grammar, syntax, and usage can (and probably will) undermine your credibility. Thus, for example, subject and verb must agree (“he says” not “he say”); number must be handled correctly (“there are two” not “there is two”); tense must make sense (“the shirt shrank” not “the shirt shrunk”); usage must be correct (“its” is the possessive pronoun, “it’s” is the contraction for “it is”).

11. Critics are necessary, even if not mandatory.

Unless you’re a first-rate editor and objective self-critic, steps 9 and 10 should be handed off to another person or persons — even if you’re an independent writer without a boss or editor to look over your shoulder. If your work must be reviewed by a boss or editor, count yourself lucky. Your boss is responsible for the quality of your work; he therefore has a good reason to make it better. If your editor isn’t qualified to do substantive editing (step 9), he can at least nit-pick with authority (step 10).

12. Accept criticism gratefully and graciously.

Bad writers don’t, which is why they remain bad writers. Yes, you should reject (or fight against) changes and suggestions if they are clearly wrong, and if you can show that they’re wrong. But if your critic tells you that your logic is muddled, your facts are inapt, and your writing stinks (in so many words), chances are that your critic is right. And you’ll know that your critic is dead right if your defense (perhaps unvoiced) is “That’s just my style of writing.”

13. What if you’re an independent writer and have no one to turn to?

Be your own worst critic. Let your first draft sit for a day or two before you return to it. Then look at it as if you’d never seen it before, as if someone else had written it. Ask yourself if it makes sense, if every key point is well-supported, and if key points are missing. Look for glaring errors in grammar, syntax, and usage. (I’ll list some useful reference works in Part Three.) If you can’t find any problems, you shouldn’t be a self-critic — and you’re probably a terrible writer.

14. How many times should you revise your work before it’s published?

That depends, of course, on the presence or absence of a deadline. The deadline may be a formal one, geared to a production schedule. Or it may be an informal but real one, driven by current events (e.g., the need to assess a new economics text while it’s in the news). But even without a deadline, two revisions of a rough draft should be enough. A piece that’s rewritten several times can lose its (possessive pronoun) edge. And unless you’re a one-work wonder, or an amateur with time to spare, every rewrite represents a forgone opportunity to begin a new work.

*     *     *

If you act on this advice you’ll become a better writer. But be patient with yourself. Improvement takes time, and perfection never arrives.

I welcome your comments, structural or nit-picking as they may be.

Signature

Election 2014: E-Day Minus 1 Week

UPDATED HERE

As of this moment, the “poll of polls” at RealClearPolitics.com has the GOP gaining 7 Senate seats, for a 52-48 majority, and winning at least 228 House seats (240 if the tossups divide evenly). The numbers will change between now and election day, so just click on the links for the latest estimates.

The projected outcome in the House is close to my own estimate, which doesn’t rely on polls. In any event, the GOP is certain to retain its majority, and almost certain to increase it — perhaps winning more seats than in any election since World War II.

The outcome in the Senate is less certain. But I remain optimistic, given the unpopularity of Obama and Obamacare relative to their standing four years ago, when the GOP gained 6 Senate seats:

Election indicators - 2014 vs 2010

The indicators are drawn from the Obama Approval Index History published at Rasmussen Reports, and Rasmussen’s sporadic polling of likely voters about Obamacare (latest report here).

The first indicator (blue lines) measures Obama’s overall rating with likely voters. This indicator is a measure of superficial support for Obama. On that score, he’s just as unpopular now as he was four years ago. A plus for the GOP.

The second indicator (black lines) measures Obama’s rating with likely voters who express strong approval or disapproval of him. Obama’s strong-approval rating remains well below the pace of four years ago. A big plus for the GOP.

The third indicator (red lines) represents Obama’s strong-approval quotient (fraction of likely voters who strongly approve/fraction of likely voters who approve) divided by his strong-disapproval quotient (fraction of likely voters who strongly disapprove/fraction of likely voters who disapprove). I call this the “enthusiasm” indicator. Higher values represent greater enthusiasm for Obama; lower values, less enthusiasm. This is perhaps the best measure of support for Obama — and it looks a lot worse (for Democrats) than it did in 2010. Another big plus for the GOP.
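In symbols (the shorthand E is mine, not part of the chart), the enthusiasm indicator is

\[
E = \frac{\text{strongly approve}/\text{approve}}{\text{strongly disapprove}/\text{disapprove}},
\]

where each term is the corresponding fraction of likely voters. To illustrate with purely hypothetical numbers: if 25 percent of likely voters strongly approve out of 45 percent who approve, and 40 percent strongly disapprove out of 52 percent who disapprove, then E = (0.25/0.45)/(0.40/0.52) ≈ 0.72.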

The green points (connected by lines) plot Obamacare’s standing, as measured by the ratio of strong approval to strong disapproval among likely voters. Obamacare is faring much worse in 2014 than it did in 2010. Yet another big plus for the GOP.

Stay tuned for my final report on the morning of election day.

Signature

My View of Libertarianism

A reader asked for my definition of “libertarian.” I’ve written about libertarianism many times since my early days as an unsophisticated adherent of J.S. Mill’s solipsistic “harm principle.”

My journey away from solipsistic libertarianism began with “A Paradox for Libertarians.” “Common Ground for Conservatives and Libertarians?” marks the next step in my journey. My declaration of independence from the harm principle is documented in “The Paradox of Libertarianism.” I then wrote “Liberty As a Social Construct,” “Social Norms and Liberty,” and “A Footnote about Liberty and Social Norms.” Those posts go beyond my rejection of the harm principle as the proper basis of libertarianism, and introduce the social aspect of liberty. I reiterated and elaborated my criticism of the harm principle in “The Harm Principle,” “Footnotes to ‘The Harm Principle’,” and “The Harm Principle, Again.”

All of those posts — and more in the same revisionist vein — appeared at my old blog, Liberty Corner. They set the stage for many more at Politics & Prosperity, including these:

On Liberty

Pseudo-Libertarian Sophistry vs. True Libertarianism

Libertarian Conservative or Conservative Libertarian?

More Pseudo-Libertarianism