On Writing: Part Two

In Part One of this series, I sampled the insights of W. Somerset Maugham (English, 1874-1965), a prolific and popular playwright, novelist, short-story writer, and author of non-fiction works. I chose to begin with Maugham — in particular, with excerpts of his memoir, The Summing Up — because of his unquestioned success as a writer and his candid assessment of writers, himself included.

Maugham’s advice to “write lucidly, simply, euphoniously and yet with liveliness” is well-supported by examples and analysis. But Maugham focuses on literary fiction and does not delve into the mechanics of non-fiction writing. Thus this post, which distills lessons learned in my 51 years as a writer, critic, and publisher of non-fiction material, much of it technical.

THE FIRST DRAFT

1. Decide — before you begin to write — on your main point and your purpose for making it.

Can you state your main point in a sentence? If you can’t, you’re not ready to write, unless writing is (for you) a form of therapy or catharsis. If it is, record your thoughts in a private journal and spare the serious readers of the world.

Your purpose may be descriptive, explanatory, or persuasive. An economist may, for example, begin an article by describing the state of the economy, as measured by Gross Domestic Product (GDP). He may then explain that the rate of growth in GDP has receded since the end of World War II, because of greater government spending and the cumulative effect of regulatory activity. He is then poised to make a case for less spending and for the cancellation of regulations that impede economic growth.

2. Avoid wandering from your main point and purpose; use an outline.

You can get by with a bare outline, unless you’re writing a book, a manual, or a long article. Fill the outline as you go. Change the outline if you see that you’ve omitted a step or put some steps in the wrong order. But always work to an outline, however sketchy and malleable it may be.

3. Start by writing an introductory paragraph that summarizes your “story line.”

The introductory paragraph in a news story is known as “the lead” or “the lede” (a spelling that’s meant to convey the correct pronunciation). A classic lead gives the reader the who, what, why, when, where, and how of the story. As noted in Wikipedia, leads aren’t just for journalists:

Leads in essays summarize the outline of the argument and conclusion that follows in the main body of the essay. Encyclopedia leads tend to define the subject matter as well as emphasize the interesting points of the article. Features and general articles in magazines tend to be somewhere between journalistic and encyclopedian in style and often lack a distinct lead paragraph entirely. Leads or introductions in books vary enormously in length, intent and content.

Think of the lead as a target toward which you aim your writing. You should begin your first draft with a lead, even if you later decide to eliminate or radically prune the lead.

4. Lay out a straight path for the reader.

You needn’t fill your outline sequentially, but the outline should trace a linear progression from statement of purpose to conclusion or call for action. Backtracking and detours can be effective literary devices in the hands of a skilled writer of fiction. But you’re not writing fiction, let alone mystery fiction. So just proceed in a straight line, from beginning to end.

Quips, asides, and anecdotes should be used sparingly, and only if they reinforce your message and don’t distract the reader’s attention from it.

5. Know your audience, and write for it.

I aim at readers who can grasp complex concepts and detailed arguments. But if you’re writing something like a policy manual for employees at all levels of your company, you’ll want to keep it simple and well-marked: short words, short sentences, short paragraphs, numbered sections and sub-sections, and so on.

6. Facts are your friends — unless you’re trying to sell a lie, of course.

Unsupported generalities will defeat your purpose, unless you’re writing for a gullible, uneducated audience. Give concrete examples and cite authoritative references. If your work is technical, show your data and calculations, even if you must put the details in footnotes or appendices to avoid interrupting the flow of your argument. Supplement your words with tables and graphs, if possible, but make them as simple as you can without distorting the underlying facts.

7. Momentum is your best friend.

Write a first draft quickly, even if you must leave holes to be filled later. I’ve always found it easier to polish a rough draft that spans the entire outline than to work from a well-honed but unaccompanied introductory section.

FROM FIRST DRAFT TO FINAL VERSION

8. Your first draft is only that — a draft.

Unless you’re a prodigy, you’ll have to do some polishing (probably a lot) before you have something that a reader can follow with ease.

9. Where to begin? Stand back and look at the big picture.

Is your “story line” clear? Are your points logically connected? Have you omitted key steps or important facts? If you find problems, fix them before you start nit-picking your grammar, syntax, and usage.

10. Nit-picking is important.

Errors of grammar, syntax, and usage can (and probably will) undermine your credibility. Thus, for example, subject and verb must agree (“he says” not “he say”); number must be handled correctly (“there are two” not “there is two”); tense must make sense (“the shirt shrank” not “the shirt shrunk”); usage must be correct (“its” is the possessive pronoun, “it’s” is the contraction for “it is”).

11. Critics are necessary, even if not mandatory.

Unless you’re a first-rate editor and objective self-critic, steps 9 and 10 should be handed off to another person or persons — even if you’re an independent writer without a boss or editor to look over your shoulder. If your work must be reviewed by a boss or editor, count yourself lucky. Your boss is responsible for the quality of your work; he therefore has a good reason to make it better. If your editor isn’t qualified to do substantive editing (step 9), he can at least nit-pick with authority (step 10).

12. Accept criticism gratefully and graciously.

Bad writers don’t, which is why they remain bad writers. Yes, you should reject (or fight against) changes and suggestions if they are clearly wrong, and if you can show that they’re wrong. But if your critic tells you that your logic is muddled, your facts are inapt, and your writing stinks (in so many words), chances are that your critic is right. And you’ll know that your critic is dead right if your defense (perhaps unvoiced) is “That’s just my style of writing.”

13. What if you’re an independent writer and have no one to turn to?

Be your own worst critic. Let your first draft sit for a day or two before you return to it. Then look at it as if you’d never seen it before, as if someone else had written it. Ask yourself if it makes sense, if every key point is well-supported, and if any key points are missing. Look for glaring errors in grammar, syntax, and usage. (I’ll list some useful reference works in Part Three.) If you can’t find any problems, you shouldn’t be a self-critic — and you’re probably a terrible writer.

14. How many times should you revise your work before it’s published?

That depends, of course, on the presence or absence of a deadline. The deadline may be a formal one, geared to a production schedule. Or it may be an informal but real one, driven by current events (e.g., the need to assess a new economics text while it’s in the news). But even without a deadline, two revisions of a rough draft should be enough. A piece that’s rewritten several times can lose its (possessive pronoun) edge. And unless you’re a one-work wonder, or an amateur with time to spare, every rewrite represents a forgone opportunity to begin a new work.

*     *     *

If you act on this advice you’ll become a better writer. But be patient with yourself. Improvement takes time, and perfection never arrives.

I welcome your comments, structural or nit-picking as they may be.


Election 2014: E-Day Minus 1 Week

UPDATED HERE

As of this moment, the “poll of polls” at RealClearPolitics.com has the GOP gaining 7 Senate seats, for a 52-48 majority, and winning at least 228 House seats (240 if the tossups divide evenly). The numbers will change between now and election day, so just click on the links for the latest estimates.

The projected outcome in the House is close to my own estimate, which doesn’t rely on polls. In any event, the GOP is certain to retain its majority, and almost certain to increase it — perhaps winning more seats than in any election since World War II.

The outcome in the Senate is less certain. But I remain optimistic, given the unpopularity of Obama and Obamacare relative to their standing four years ago, when the GOP gained 6 Senate seats:

Election indicators - 2014 vs 2010

The indicators are drawn from the Obama Approval Index History published at Rasmussen Reports, and Rasmussen’s sporadic polling of likely voters about Obamacare (latest report here).

The first indicator (blue lines) measures Obama’s overall rating with likely voters. This indicator is a measure of superficial support for Obama. On that score, he’s just as unpopular now as he was four years ago. A plus for the GOP.

The second indicator (black lines) measures Obama’s rating with likely voters who express strong approval or disapproval of him. Obama’s strong-approval rating remains well below the pace of four years ago. A big plus for the GOP.

The third indicator (red lines) represents Obama’s strong-approval quotient (fraction of likely voters who strongly approve/fraction of likely voters who approve) divided by his strong-disapproval quotient (fraction of likely voters who strongly disapprove/fraction of likely voters who disapprove). I call this the “enthusiasm” indicator. Higher values represent greater enthusiasm for Obama; lower values, less enthusiasm. This is perhaps the best measure of support for Obama — and it looks a lot worse (for Democrats) than it did in 2010. Another big plus for the GOP.
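
For the arithmetically inclined, here is a minimal sketch of the computation, in Python, using made-up percentages rather than actual Rasmussen figures:

    # The "enthusiasm" indicator described above: the strong-approval quotient
    # divided by the strong-disapproval quotient. All percentages here are hypothetical.
    def enthusiasm_indicator(strongly_approve, approve, strongly_disapprove, disapprove):
        strong_approval_quotient = strongly_approve / approve
        strong_disapproval_quotient = strongly_disapprove / disapprove
        return strong_approval_quotient / strong_disapproval_quotient

    # Example: 20 percent strongly approve out of 45 percent who approve;
    # 40 percent strongly disapprove out of 53 percent who disapprove.
    print(round(enthusiasm_indicator(0.20, 0.45, 0.40, 0.53), 2))  # 0.59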

The green points (connected by lines) are plots of Obamacare’s standing, as measured by the ratio of strong approval to strong disapproval among likely voters. Obamacare is faring much worse in 2014 than it did in 2010. Yet another big plus for the GOP.

Stay tuned for my final report on the morning of election day.


My View of Libertarianism

A reader asked for my definition of “libertarian.” I’ve written about libertarianism many times since my early days as an unsophisticated adherent of J.S. Mill’s solipsistic “harm principle.”

My journey away from solipsistic libertarianism began with “A Paradox for Libertarians.” “Common Ground for Conservatives and Libertarians?” marks the next step in my journey. My declaration of independence from the harm principle is documented in “The Paradox of Libertarianism.” I then wrote “Liberty As a Social Construct,” “Social Norms and Liberty,” and “A Footnote about Liberty and Social Norms.” Those posts go beyond my rejection of the harm principle as the proper basis of libertarianism, and introduce the social aspect of liberty. I reiterated and elaborated my criticism of the harm principle in “The Harm Principle,” “Footnotes to ‘The Harm Principle’,” and “The Harm Principle, Again.”

All of those posts — and more in the same revisionist vein — appeared at my old blog, Liberty Corner. Those many posts set the stage for many more at Politics & Prosperity, including these:

On Liberty

Pseudo-Libertarian Sophistry vs. True Libertarianism

Libertarian Conservative or Conservative Libertarian?

More Pseudo-Libertarianism

Not-So-Random Thoughts (XI)

Links to the other posts in this occasional series may be found at “Favorite Posts,” just below the list of topics.

Steve Stewart-Williams asks “Did Morality Evolve?” (The Nature-Nurture-Nietzsche Blog, May 2, 2010). His answer:

[T]here are at least two reasons to think morality bears the imprint of our evolutionary history. The first comes from observations of a class of individuals that psychologists all too often ignore: other animals. Nonhuman animals obviously don’t reason explicitly about right and wrong, but they do exhibit some aspects of human morality. Rather than being locked into an eternal war of all-against-all, many animals display tendencies that we count among our most noble: They cooperate; they help one another; they share resources; they love their offspring. For those who doubt that human morality has evolutionary underpinnings, the existence of these ‘noble’ traits in other animals poses a serious challenge….

…A second [reason] is that, not only do we know that these kinds of behaviour are part of the standard behavioural repertoire of humans in all culture and of other animals, we now have a pretty impressive arsenal of theories explaining how such behaviour evolved. Kin selection theory explains why many animals – humans included – are more altruistic toward kin than non-kin: Kin are more likely than chance to share any genes contributing to this nepotistic tendency. Reciprocal altruism theory explains how altruism can evolve even among non-relatives: Helping others can benefit the helper, as long as there’s a sufficient probability that the help will be reciprocated and as long as people avoid helping those who don’t return the favour. Another promising theory is that altruism is a costly display of fitness, which makes the altruist more attractive as a mate or ally. Overall, the evolutionary explanation of altruism represents one of the real success stories of the evolutionary approach to psychology.

Though I’m disinclined to use the term “altruism,” it’s useful shorthand for the kinds of behavior that seem selfless. In any event, I am sympathetic to Stewart-Williams’s view of morality as evolutionary. Morality is at least a learned and culturally transmitted phenomenon, which manifests itself globally in the Golden Rule.

*     *     *

Pierre Lemieux decries “The Vacuity of the Political ‘We’” (The Library of Economics and Liberty, October 6, 2014):

One can barely read a newspaper or listen to a politician’s speech without hearing the standard “we as a society” or its derivatives….

The truth is that this collective “we” has no scientific meaning.

In the history of economic thought, two main strands of analysis support this conclusion. One was meant to criticize economists’ use of “social indifference curves” (also called “community indifference curves”), which generally appeared in the welfare analysis of international trade between personalized trading countries. Countries were personalized in the sense that they were assumed to have preferences, just as an individual does. In a famous 1956 article, Paul Samuelson definitively demonstrated that such social indifference curves, analogous to individual indifference curves, do not exist….

A second strand of analysis leads to a similar but more general conclusion. The problem has come to be known as the “preference aggregation” issue: how can we aggregate—”add up” as it were—individual preferences? Can we fuse them into social preferences—a “social welfare function”—that would equally represent all individuals? This second tradition of analysis follows a long and broken line of theorists. The Marquis de Condorcet in the 18th century, Charles Dodgson (a.k.a. Lewis Carroll) in the 19th, and economist Duncan Black in the 20th all discovered independently that majority voting does not provide an acceptable aggregation mechanism.

I’ve discussed the vacuity of the political “we” and the “social welfare function” in many posts; most recently this one, where I make these two points:

1. It is a logical and factual error to apply the collective “we” to Americans, except when referring generally to the citizens of the United States. Other instances of “we” (e.g., “we” won World War II, “we” elected Barack Obama) are fatuous and presumptuous. In the first instance, only a small fraction of Americans still living had a hand in the winning of World War II. In the second instance, Barack Obama was elected by amassing the votes of fewer than 25 percent of the number of Americans living in 2008 and 2012. “We the People” — that stirring phrase from the Constitution’s preamble — was never more hollow than it is today.

2. Further, the logical and factual error supports the unwarranted view that the growth of government somehow reflects a “national will” or consensus of Americans. Thus, appearances to the contrary (e.g., the adoption and expansion of national “social insurance” schemes, the proliferation of cabinet departments, the growth of the administrative state), a sizable fraction of Americans (perhaps a majority) did not want government to grow to its present size and degree of intrusiveness. And a sizable fraction (perhaps a majority) would still prefer that it shrink in both dimensions. In fact, the growth of government is an artifact of formal and informal arrangements that, in effect, flout the wishes of many (most?) Americans. The growth of government was not and is not the will of “we Americans,” “Americans on the whole,” “Americans in the aggregate,” or any other mythical consensus.

*     *     *

I’m pleased to note that there are still some enlightened souls like David Muhlhausen, who writes about “How the Death Penalty Saves Lives” (September 30, 2014), at the website of The Heritage Foundation. Muhlhausen cites several studies in support of his position. I’ve treated crime and punishment many times; for example:

Does Capital Punishment Deter Homicide?
Libertarian Twaddle about the Death Penalty
Crime and Punishment
Saving the Innocent?
Saving the Innocent?: Part II
More Punishment Means Less Crime
More About Crime and Punishment
More Punishment Means Less Crime: A Footnote
Let the Punishment Fit the Crime
Another Argument for the Death Penalty
Less Punishment Means More Crime
Crime, Explained
Clear Thinking about the Death Penalty
What Is Justice?
Saving the Innocent
Why Stop at the Death Penalty?
Lock ‘Em Up
Free Will, Crime, and Punishment
Left-Libertarians, Obama, and the Zimmerman Case
Stop, Frisk, and Save Lives
Poverty, Crime, and Big Government

The numbers are there to support strict punishment, up to and including capital punishment. But even if the numbers weren’t conclusive, I’d be swayed by John McAdams, a professor of political science at Marquette University, who makes a succinct case for the death penalty, regardless of its deterrent effect:

I’m a bit surprised . . . [by the] claim that “the burden of empirical proof would seem to lie with the pro-death penalty scholar.” If we execute murderers and there is in fact no deterrent effect, we have killed a bunch of murderers. If we fail to execute murderers, and doing so would in fact have deterred other murders, we have allowed the killing of a bunch of innocent victims. I would much rather risk the former. This, to me, is not a tough call.

The same goes for fraudsters, thieves, rapists, and other transgressors against morality.

*     *     *

Walter E. Williams asks “Will the West Defend Itself?” (creators.com, October 1, 2014):

A debate about whether Islam is a religion of peace or not is entirely irrelevant to the threat to the West posed by ISIL, al-Qaida and other Middle Eastern terrorist groups. I would like to gather a news conference with our Army’s chief of staff, Gen. Raymond T. Odierno; Marines’ commandant, Gen. Joseph Dunford; chief of naval operations, Adm. Jonathan W. Greenert; and Gen. Mark A. Welsh, the U.S. Air Force’s chief of staff. This would be my question to them: The best intelligence puts ISIL’s size at 35,000 to 40,000 people. Do you officers think that the combined efforts of our military forces could defeat and lay waste to ISIL? Before they had a chance to answer, I’d add: Do you think the combined military forces of NATO and the U.S. could defeat and eliminate ISIL. Depending on the answers given, I’d then ask whether these forces could also eliminate Iran’s capability of making nuclear weapons.

My question to my fellow Americans is: What do you think their answers would be? No beating around the bush: Does the U.S. have the power to defeat the ISIL/al-Qaida threat and stop Iran’s nuclear ambitions — yes or no?

If our military tells us that we do have the capacity to defeat the terror threat, then the reason that we don’t reflects a lack of willingness. It’s that same lack of willingness that led to the deaths of 60 million people during World War II. In 1936, France alone could have stopped Adolf Hitler, but France and its allies knowingly allowed Hitler to rearm, in violation of treaties. When Europeans finally woke up to Hitler’s agenda, it was too late. Their nations were conquered. One of the most horrible acts of Nazi Germany was the Holocaust, which cost an estimated 11 million lives. Those innocents lost their lives because of the unwillingness of Europeans to protect themselves against tyranny.

Westerners getting the backbone to defend ourselves from terrorists may have to await a deadly attack on our homeland. You say, “What do you mean, Williams?” America’s liberals have given terrorists an open invitation to penetrate our country through our unprotected southern border. Terrorists can easily come in with dirty bombs to make one of our major cities uninhabitable through radiation. They could just as easily plant chemical or biological weapons in our cities. If they did any of these acts — leading to the deaths of millions of Americans — I wonder whether our liberal Democratic politicians would be able to respond or they would continue to mouth that “Islam teaches peace” and “Islam is a religion of peace.”

Unfortunately for our nation’s future and that of the world, we see giving handouts as the most important function of government rather than its most basic function: defending us from barbarians.

Exactly. The title of my post, “A Grand Strategy for the United States,” is a play on Strategy for the West (1954), by Marshall of the Royal Air Force Sir John Cotesworth Slessor. Slessor was, by some accounts, a principal architect of nuclear deterrence. Aside from his role in the development of a strategy for keeping the USSR at bay, Slessor is perhaps best known for this observation:

It is customary in democratic countries to deplore expenditure on armaments as conflicting with the requirements of the social services. There is a tendency to forget that the most important social service that a government can do for its people is to keep them alive and free. (Strategy for the West, p. 75)

Doctrinaire libertarians seem unable to grasp this. Like “liberals,” they tend to reject the notion of a strong defense because they are repelled by the “tribalism” represented by state sovereignty. One such doctrinaire libertarian is Don Boudreaux, who — like Walter E. Williams — teaches economics at George Mason University.

A post by Boudreaux at his blog Cafe Hayek caused me to write “Liberalism and Sovereignty,” where I say this:

Boudreaux … states a (truly) liberal value, namely, that respect for others should not depend on where they happen to live. Boudreaux embellishes that theme in the next several paragraphs of his post; for example:

[L]iberalism rejects the notion that there is anything much special or compelling about political relationships.  It is tribalistic, atavistic, to regard those who look more like you to be more worthy of your regard than are those who look less like you.  It is tribalistic, atavistic, to regard those who speak your native tongue to be more worthy of your affection and concern than are those whose native tongues differ from yours.

For the true liberal, the human race is the human race.  The struggle is to cast off as much as possible primitive sentiments about “us” being different from “them.”

The problem with such sentiments — correct as they may be — is the implication that we have nothing more to fear from people of foreign lands than we have to fear from our own friends and neighbors. Yet, as Boudreaux himself acknowledges,

[t]he liberal is fully aware that such sentiments [about “us” being different from “them”] are rooted in humans’ evolved psychology, and so are not easily cast off.  But the liberal does his or her best to rise above those atavistic sentiments,

Yes, the liberal does strive to rise above such sentiments, but not everyone else makes the same effort, as Boudreaux admits. Therein lies the problem.

Americans — as a mostly undifferentiated mass — are disdained and hated by many foreigners (and by many an American “liberal”). The disdain and hatred arise from a variety of imperatives, ranging from pseudo-intellectual snobbery to nationalistic rivalry to anti-Western fanaticism. When those imperatives lead to aggression (threatened or actual), that aggression is aimed at all of us: liberal, “liberal,” conservative, libertarian, bellicose, pacifistic, rational, and irrational.

Having grasped that reality, the Framers “did ordain and establish” the Constitution “in Order to . . . provide for the common defence” (among other things). That is to say, the Framers recognized the importance of establishing the United States as a sovereign state for limited and specified purposes, while preserving the sovereignty of its constituent States and their inhabitants for all other purposes.

If Americans do not mutually defend themselves through the sovereign state which was established for that purpose, who will? That is the question which liberals (both true and false) often fail to ask. Instead, they tend to propound internationalism for its own sake. It is a mindless internationalism, one that often disdains America’s sovereignty, and the defense thereof.

Mindless internationalism equates sovereignty with jingoism, protectionism, militarism, and other deplorable “isms.” It ignores or denies the hard reality that Americans and their legitimate overseas interests are threatened by nationalistic rivalries and anti-Western fanaticism.

In the real world of powerful rivals and determined, resourceful fanatics, the benefits afforded Americans by our (somewhat eroded) constitutional contract — most notably the enjoyment of civil liberties, the blessings of free markets and free trade, and the protections of a common defense — are inseparable from and dependent upon the sovereign power of the United States. To cede that sovereignty for the sake of mindless internationalism is to risk the complete loss of the benefits promised by the Constitution.


Election 2014: E-Day Minus 2 Weeks

UPDATE HERE

As of today, it looks like the GOP will repeat or improve on its showing in the 2010 mid-term election. Four years ago, the GOP won 242 House seats, to retake the majority in that body, and posted a significant 6-seat gain in the Senate.

It’s almost certain that the GOP will hold a larger majority in the House when all the votes have been counted in November. Further, the smart money is on a GOP gain of at least 6 seats in the Senate — enough to recapture the majority.

Obama’s current unpopularity, compared with his unpopularity four years ago, also bodes well for Republicans. I have concocted four indicators of Obama’s unpopularity in 2014 vs. 2010. They’re plotted in the graph at the end of this post.

The first indicator (blue lines) measures Obama’s overall rating with likely voters. This indicator is a measure of superficial support for Obama. On that score, he’s doing a bit better than he was four years ago at this time.

The second indicator (black lines) measures Obama’s rating with likely voters who express strong approval or disapproval of him. Obama’s strong-approval rating remains well below the pace of four years ago, which is a good sign for the GOP.

The third indicator (red lines) represents Obama’s strong-approval quotient (fraction of likely voters who strongly approve/fraction of likely voters who approve) divided by his strong-disapproval quotient (fraction of likely voters who strongly disapprove/fraction of likely voters who disapprove). I call this the “enthusiasm” indicator. Higher values represent greater enthusiasm for Obama; lower values, less enthusiasm. This is perhaps the best measure of support for Obama — and, despite a recent uptick, it looks a lot worse (for Democrats) than it did in 2010.

The green points (connected by lines) are plots of Obamacare’s standing, as measured by the ratio of strong approval to strong disapproval among likely voters. Obamacare is faring much worse in 2014 than it did in 2010 — another good sign for the GOP.

Election indicators - 2014 vs 2010
The indicators are drawn from the Obama Approval Index History published at Rasmussen Reports, and Rasmussen’s sporadic polling of likely voters about Obamacare (latest report here).


May the Best Team Lose

This is an update of a six-season-old post. It includes 2016 post-season play to date. I will update it again after the 2016 World Series.

The first 65 World Series (1903 and 1905-1968) were contests between the best teams in the National and American Leagues. The winner of a season-ending Series was therefore widely regarded as the best team in baseball for that season (except by the fans of the losing team and other soreheads). The advent of divisional play in 1969 meant that the Series could include a team that wasn’t the best in its league. From 1969 through 1993, when participation in the Series was decided by a single postseason playoff between division winners (1981 excepted), the leagues’ best teams met in only 10 of 24 Series. The advent of three-tiered postseason play in 1995 and four-tiered postseason play in 2012 has only made matters worse.*

By the numbers:

  • Postseason play originally consisted of a World Series (period) involving 1/8 of major-league teams — the best in each league. Postseason play now involves 1/3 of major-league teams and 7 postseason series (3 in each league plus the inter-league World Series).
  • Only 3 of the 22 Series from 1995 through 2016 have featured the best teams of both leagues, as measured by W-L record.
  • Of the 21 Series from 1995 through 2015, only 6 were won by the best team in a league.
  • Of the same 21 Series, 10 (48 percent) were won by the better of the two teams, as measured by W-L record. Of the 65 Series played before 1969, 35 were won by the team with the better W-L record and 2 involved teams with the same W-L record. So before 1969 the team with the better W-L record won 35/63 of the time for an overall average of 56 percent. That’s not significantly different from the result for the 21 Series played in 1995-2015 (see the quick check below), but the teams in the earlier era were each league’s best, which is no longer true.
  • From 1995 through 2016, a league’s best team (based on W-L record) appeared in a Series only 15 of 44 possible times — 6 times for the NL (pure luck), 9 times for the AL (little better than pure luck). (A random draw among teams qualifying for post-season play would have resulted in the selection of each league’s best team about 6 times out of 22.)
  • Division winners have opposed each other in only 11 of the 22 Series from 1995 through 2016.
  • Wild-card teams have appeared in 10 of those Series, with all-wild-card Series in 2002 and 2014.
  • Wild-card teams have occupied more than one-fourth of the slots in the 1995-2016 Series — 12 slots out of 44.
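
For the statistically curious, here is a minimal sketch, in Python, of a standard two-proportion check on those counts (35 of 63 Series before 1969 versus 10 of 21 Series in 1995-2015); any reasonable test tells the same story:

    # Two-proportion z-test: did the team with the better regular-season record
    # win the Series at a different rate before 1969 than in 1995-2015?
    from statsmodels.stats.proportion import proportions_ztest

    wins = [35, 10]    # Series won by the team with the better W-L record, by era
    series = [63, 21]  # Series in which one team had the better W-L record, by era

    z_stat, p_value = proportions_ztest(wins, series)
    print(f"z = {z_stat:.2f}, p = {p_value:.2f}")  # far from statistical significance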

The winner of the World Series used to be a league’s best team over the course of the entire season, and the winner had to beat the best team in the other league. Now, the winner of the World Series usually can claim nothing more than having won the most postseason games — 11 or 12 out of as many as 19 or 20. Why not eliminate the 162-game regular season, select the postseason contestants at random, and go straight to postseason play?

__________
* Here are the World Series pairings for 1995-2016 (National League teams listed first; + indicates winner of World Series):

1995 –
Atlanta Braves (division winner; .625 W-L, best record in NL)+
Cleveland Indians (division winner; .694 W-L, best record in AL)

1996 –
Atlanta Braves (division winner; .593, best in NL)
New York Yankees (division winner; .568, second-best in AL)+

1997 –
Florida Marlins (wild-card team; .568, second-best in NL)+
Cleveland Indians (division winner; .534, fourth-best in AL)

1998 –
San Diego Padres (division winner; .605, third-best in NL)
New York Yankees (division winner; .704, best in AL)+

1999 –
Atlanta Braves (division winner; .636, best in NL)
New York Yankees (division winner; .605, best in AL)+

2000 –
New York Mets (wild-card team; .580, fourth-best in NL)
New York Yankees (division winner; .540, fifth-best in AL)+

2001 –
Arizona Diamondbacks (division winner; .568, fourth-best in NL)+
New York Yankees (division winner; .594, third-best in AL)

2002 –
San Francisco Giants (wild-card team; .590, fourth-best in NL)
Anaheim Angels (wild-card team; .611, third-best in AL)+

2003 –
Florida Marlins (wild-card team; .562, third-best in NL)+
New York Yankees (division winner; .623, best in AL)

2004 –
St. Louis Cardinals (division winner; .648, best in NL)
Boston Red Sox (wild-card team; .605, second-best in AL)+

2005 –
Houston Astros (wild-card team; .549, third-best in NL)
Chicago White Sox (division winner; .611, best in AL)+

2006 –
St. Louis Cardinals (division winner; .516, fifth-best in NL)+
Detroit Tigers (wild-card team; .586, third-best in AL)

2007 –
Colorado Rockies (wild-card team; .552, second-best in NL)
Boston Red Sox (division winner; .593, tied for best in AL)+

2008 –
Philadelphia Phillies (division winner; .568, second-best in NL)+
Tampa Bay Rays (division winner; .599, second-best in AL)

2009 –
Philadelphia Phillies (division winner; .574, second-best in NL)
New York Yankees (division winner; .636, best in AL)+

2010 —
San Francisco Giants (division winner; .568, second-best in NL)+
Texas Rangers (division winner; .556, fourth-best in AL)

2011 —
St. Louis Cardinals (wild-card team; .556, fourth-best in NL)+
Texas Rangers (division winner; .593, second-best in AL)

2012 —
San Francisco Giants (division winner; .580, third-best in NL)+
Detroit Tigers (division winner; .543, seventh-best in AL)

2013 —
St. Louis Cardinals (division winner; .599, best in NL)
Boston Red Sox (division winner; .599, best in AL)+

2014 —
San Francisco Giants (wild-card team; .543, fourth-best in NL)+
Kansas City Royals (wild-card team; .549, fourth-best in AL)

2015 —
New York Mets (division winner; .556, fifth-best in NL)
Kansas City Royals (division winner; .586, best in AL)+

2016 —
Chicago Cubs (division winner; .640, best in NL)
Cleveland Indians (division winner; .584, second-best in AL)


The Harmful Myth of Inherent Equality

Malcolm Gladwell popularized the 10,000-hour rule in Outliers: The Story of Success. According to the Wikipedia article about the book,

…Gladwell repeatedly mentions the “10,000-Hour Rule”, claiming that the key to success in any field is, to a large extent, a matter of practicing a specific task for a total of around 10,000 hours….

…[T]he “10,000-Hour Rule” [is] based on a study by Anders Ericsson. Gladwell claims that greatness requires enormous time, using the source of The Beatles’ musical talents and Gates’ computer savvy as examples….

Reemphasizing his theme, Gladwell continuously reminds the reader that genius is not the only or even the most important thing when determining a person’s success….

For “genius” read “genes.” Gladwell’s borrowed theme reinforces the left’s never-ending effort to sell the idea that all men and women are born with the same potential. And, of course, it’s the task of the almighty state to ensure that outcomes (e.g., housing, jobs, college admissions, and income) conform to nature’s design.

I encountered the 10,000-hour rule several years ago, and referred to it in this post, where I observed that “outcomes are skewed … because talent is distributed unevenly.” By “talent” I mean inherent ability of a particular kind — high intelligence and athletic prowess, for example — the possession of which obviously varies from person to person and (on average) from gender to gender and race to race. Efforts to deny such variations are nothing less than anti-scientific. They exemplify the left’s penchant for magical thinking.

There’s plenty of evidence of the strong link between inherent ability and success in any endeavor. I’ve offered some evidence here, here, here, and here. Now comes “Practice Does Not Make Perfect” (Slate, September 28, 2014). The piece veers off into social policy (with a leftish tinge) and an anemic attempt to rebut the race-IQ correlation, but it’s good on the facts. First, the authors frame the issue:

…What makes someone rise to the top in music, games, sports, business, or science? This question is the subject of one of psychology’s oldest debates.

The “debate” began sensibly enough:

In the late 1800s, Francis Galton—founder of the scientific study of intelligence and a cousin of Charles Darwin—analyzed the genealogical records of hundreds of scholars, artists, musicians, and other professionals and found that greatness tends to run in families. For example, he counted more than 20 eminent musicians in the Bach family. (Johann Sebastian was just the most famous.) Galton concluded that experts are “born.”

Then came the experts-are-made view and the 10,000-hour rule:

Nearly half a century later, the behaviorist John Watson countered that experts are “made” when he famously guaranteed that he could take any infant at random and “train him to become any type of specialist [he] might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents.”

The experts-are-made view has dominated the discussion in recent decades. In a pivotal 1993 article published in Psychological Review—psychology’s most prestigious journal—the Swedish psychologist K. Anders Ericsson and his colleagues proposed that performance differences across people in domains such as music and chess largely reflect differences in the amount of time people have spent engaging in “deliberate practice,” or training exercises specifically designed to improve performance…. For example, the average for elite violinists was about 10,000 hours, compared with only about 5,000 hours for the least accomplished group. In a second study, the difference for pianists was even greater—an average of more than 10,000 hours for experts compared with only about 2,000 hours for amateurs. Based on these findings, Ericsson and colleagues argued that prolonged effort, not innate talent, explained differences between experts and novices.

But reality has a way of making itself known:

[R]ecent research has demonstrated that deliberate practice, while undeniably important, is only one piece of the expertise puzzle—and not necessarily the biggest piece. In the first study to convincingly make this point, the cognitive psychologists Fernand Gobet and Guillermo Campitelli found that chess players differed greatly in the amount of deliberate practice they needed to reach a given skill level in chess. For example, the number of hours of deliberate practice to first reach “master” status (a very high level of skill) ranged from 728 hours to 16,120 hours. This means that one player needed 22 times more deliberate practice than another player to become a master.

A recent meta-analysis by Case Western Reserve University psychologist Brooke Macnamara and her colleagues (including the first author of this article for Slate) came to the same conclusion. We searched through more than 9,000 potentially relevant publications and ultimately identified 88 studies that collected measures of activities interpretable as deliberate practice and reported their relationships to corresponding measures of skill…. [P]eople who reported practicing a lot tended to perform better than those who reported practicing less. But the correlations were far from perfect: Deliberate practice left more of the variation in skill unexplained than it explained. For example, deliberate practice explained 26 percent of the variation for games such as chess, 21 percent for music, and 18 percent for sports. So, deliberate practice did not explain all, nearly all, or even most of the performance variation in these fields. In concrete terms, what this evidence means is that racking up a lot of deliberate practice is no guarantee that you’ll become an expert. Other factors matter.

Genes are among the other factors:

There is now compelling evidence that genes matter for success, too. In a study led by the King’s College London psychologist Robert Plomin, more than 15,000 twins in the United Kingdom were identified through birth records and recruited to perform a battery of tests and questionnaires, including a test of drawing ability in which the children were asked to sketch a person. In a recently published analysis of the data, researchers found that there was a stronger correspondence in drawing ability for the identical twins than for the fraternal twins. In other words, if one identical twin was good at drawing, it was quite likely that his or her identical sibling was, too. Because identical twins share 100 percent of their genes, whereas fraternal twins share only 50 percent on average, this finding indicates that differences across people in basic artistic ability are in part due to genes. In a separate study based on this U.K. sample, well over half of the variation between expert and less skilled readers was found to be due to genes.

In another study, a team of researchers at the Karolinska Institute in Sweden led by psychologist Miriam Mosing had more than 10,000 twins estimate the amount of time they had devoted to music practice and complete tests of basic music abilities, such as determining whether two melodies carry the same rhythm. The surprising discovery of this study was that although the music abilities were influenced by genes—to the tune of about 38 percent, on average—there was no evidence they were influenced by practice. For a pair of identical twins, the twin who practiced music more did not do better on the tests than the twin who practiced less. This finding does not imply that there is no point in practicing if you want to become a musician. The sort of abilities captured by the tests used in this study aren’t the only things necessary for playing music at a high level; things such as being able to read music, finger a keyboard, and commit music to memory also matter, and they require practice. But it does imply that there are limits on the transformative power of practice. As Mosing and her colleagues concluded, practice does not make perfect.

This is bad news for the blank-slate crowd on the left:

Ever since John Locke laid the groundwork for the Enlightenment by proposing that we are born as tabula rasa—blank slates—the idea that we are created equal has been the central tenet of the “modern” worldview. Enshrined as it is in the Declaration of Independence as a “self-evident truth,” this idea has special significance for Americans. Indeed, it is the cornerstone of the American dream—the belief that anyone can become anything they want with enough determination….

Wouldn’t it be better to just act as if we are equal, evidence to the contrary notwithstanding? That way, no people will be discouraged from chasing their dreams—competing in the Olympics or performing at Carnegie Hall or winning a Nobel Prize. The answer is no, for two reasons. The first is that failure is costly, both to society and to individuals. Pretending that all people are equal in their abilities will not change the fact that a person with an average IQ is unlikely to become a theoretical physicist, or the fact that a person with a low level of music ability is unlikely to become a concert pianist. It makes more sense to pay attention to people’s abilities and their likelihood of achieving certain goals, so people can make good decisions about the goals they want to spend their time, money, and energy pursuing…. Pushing someone into a career for which he or she is genetically unsuited will likely not work.

With regard to the latter point, Richard Sander has shown that aspiring blacks are chief among the victims of the form of “pushing” known as affirmative action. A few years ago, Sander was a guest blogger at The Volokh Conspiracy, where he posted thrice on the subject. In his first post, Sander writes:

As some readers will recall, a little more than seven years ago I published an analysis of law school affirmative action in the Stanford Law Review. The article was the first to present detailed data on the operation and effects of racial preferences in law schools (focusing on blacks).

I also laid out evidence suggesting that large preferences seemed to be worsening black outcomes. I argued that this was plausibly due to a “mismatch effect”; students receiving large preferences (for whatever reason) were likely to find themselves in academic environments where they had to struggle just to keep up; professor instruction would typically be aimed at the “median” student, so students with weaker academic preparation would tend to fall behind, and, even if they did not become discouraged and give up, would tend to learn less than they would have learned in an environment where their level of academic preparation was closer to the class median.

I suggested that the “mismatch effect” could explain as much as half of the black-white gap in first-time bar passage rates (the full gap is thirty to forty percentage points). I also suggested that “mismatch” might so worsen black outcomes that, on net, contemporary affirmative action was not adding to the total number of black lawyers, and might even be lowering the total number of new, licensed black attorneys.

This is from Sander’s second post:

Some of the most significant recent work on affirmative action concerns a phenomenon called “science mismatch”. The idea behind science mismatch is very intuitive: if you are a high school senior interested in becoming, for example, a chemist, you may seriously harm your chances of success by attending a school where most of the other would-be chemists have stronger academic preparation than you do. Professors will tend to pitch their class at the median student, not you; and if you struggle or fall behind in the first semester of inorganic chemistry, you will be in even worse shape in the second semester, and in very serious trouble when you hit organic chemistry. You are likely to get bad grades and to either transfer out of chemistry or fail to graduate altogether….

Duke economists Peter Arcidiacono, Esteban Aucejo, and Ken Spenner last year completed a study that looked at a number of ways that differences in admissions standards at Duke affected academic outcomes. In one of many useful analyses they did, they found that 54% of black men at Duke who, as freshmen, had been interested in STEM fields or economics, had switched out of those fields before graduation; the comparative rate for white men was 8%. Importantly, they found that “these cross-race differences in switching patterns can be fully explained by differences in academic background.” In other words, preferences – not race – was the culprit.

In research conducted by FTC economist Marc Luppino and me, using data from the University of California, we have found important peer effects and mismatch effects that affect students of all races; our results show that one’s chances of completing a science degree fall sharply, at a given level of academic preparation, as one attends more and more elite schools within the UC system. At Berkeley, there is a seven-fold difference in STEM degree completion between students with high and low pre-college credentials.

As is always the case with affirmative action, ironies abound. Although young blacks are about one-seventh as likely as young whites to eventually earn a Ph.D. in STEM fields, academically strong blacks in high school are more likely than similar whites to aspire to science careers. And although a U.S. Civil Rights Commission report in 2010 documented the “science mismatch” phenomenon in some detail, President Obama’s new initiative to improve the nation’s production of scientists neither recognizes nor addresses mismatch….

Science mismatch is, of course, relevant to the general affirmative action debate in showing that preferences can boomerang on their intended beneficiaries. But it also has a special relevance to Fisher v. University of Texas. The university’s main announced purpose in reintroducing racial preferences in 2004 was to increase “classroom” diversity. The university contended that, even though over a fifth of its undergraduates were black or Hispanic, many classrooms had no underrepresented minorities. It sought to use direct (and very large) racial preferences to increase campus URM numbers and thus increase the number of URMs in classes that lacked them. But science mismatch shows that this strategy, too, can be self-defeating. The larger a university’s preferences, the more likely it is that preferenced students will have trouble competing in STEM fields and other majors that are demanding and grade sternly. These students will tend to drop out of the tough fields and congregate in comparatively less demanding ones. Large preferences, in other words, can increase racial segregation across majors and courses within a university, and thus hurt classroom diversity.

And this is from Sander’s third post:

[In the previous post] I discussed a body of research – all of it uncontroverted – that documents a serious flaw in affirmative action programs pursued by elite colleges. Students who receive large preferences and arrive on campus hoping to major in STEM fields (e.g., Science, Technology, Engineering and Math) tend to migrate out of those fields at very high rates, or, if they remain in those fields, often either fail to graduate or graduate with very low GPAs. There is thus a strong tension between receiving a large admissions preference to a more elite school, and one’s ability to pursue a STEM career.

Is it possible for contemporary American universities to engage constructively with this type of research? …

Colleges and universities are committed to the mythology that diversity happens merely because they want it and put resources into it, and that all admitted students arrive with all the prerequisites necessary to flourish in any way they choose. Administrators work hard to conceal the actual differences in academic preparation that almost invariably accompany the aggressive use of preferences. Any research that documents the operation and effects of affirmative action therefore violates this “color-blind” mythology and accompanying norms; minority students are upset, correctly realizing that either the research is wrong or that administrators have misled them. In this scenario, administrators invariably resort to the same strategy: dismiss the research without actually lying about it; reassure the students that the researchers are misguided, but that the university can’t actually punish the researchers because of “academic freedom”….

Leftists — academic and other — cannot abide the truth when it refutes their prejudices. Affirmative action, as it turns out, is harmful to aspiring blacks. Most leftists will deny it because their leftist faith — their magical thinking — is more important to them than the well-being of those whose cause they claim to champion.


Election 2014: E-Day Minus 3 Weeks

LATEST VERSION HERE

Can the GOP repeat or improve on its showing in the 2010 mid-term election? Four years ago, the GOP won 242 House seats, a gain of 64 and more than enough to retake the majority. Over in the Senate, the GOP gained 6 seats, a good rebound but not enough for a majority.

Despite the loss of 8 House seats in 2012, the GOP retained a comfortable majority. And it’s almost certain that the GOP will hold a larger majority when all the votes are counted in November.

The outlook for the Senate is less clear, though there’s good reason to expect a GOP gain of 6 seats (or more) — enough to restore GOP control of the Senate.

I base my optimism on some indicators that I’ll continue to update as election day approaches. They’re drawn from the Obama Approval Index History published at Rasmussen Reports, and Rasmussen’s sporadic polling of likely voters about Obamacare (latest report here).

Election indicators - 2014 vs 2010

The first indicator (blue lines) measures Obama’s overall rating with likely voters. This indicator is a measure of superficial support for Obama. On that score, he’s doing about as well as he was four years ago at this time.

The second indicator (black lines) measures Obama’s rating with likely voters who express strong approval or disapproval. Obama’s strong-approval rating is below the pace of four years ago, which is a good sign for the GOP.

The third indicator (red lines) represents Obama’s strong-approval quotient (fraction of likely voters who strongly approve/fraction of likely voters who approve) divided by his strong-disapproval quotient (fraction of likely voters who strongly disapprove/fraction of likely voters who disapprove). I call this the “enthusiasm” indicator. Higher values represent greater enthusiasm for Obama; lower values, less enthusiasm. This is perhaps the best measure of support for Obama — and it looks a lot worse (for Democrats) than it did in 2010.

The green points (connected by lines) are plots of Obamacare’s standing, as measured by the ratio of strong approval to strong disapproval among likely voters. Obamacare is faring much worse in 2014 than it did in 2010 — another good sign for the GOP.

Some words of caution: It ain’t over ’til it’s over.



Another Look at Election 2014

I’ve been running a series of poll-based posts about the November election. The most recent post is here; an update is due tomorrow. The numbers, to date, suggest a re-run of the mid-term election of 2010, when the GOP won 242 House seats and gained 6 Senate seats.

As a cross-check on the polls, I ran a statistical analysis of House results for 1946-2012, that is, for all 34 elections since World War II.* I won’t bore you with the details of the analysis, but I will share the results, in graphical form:

House seats won by GOP - actual and estimated

The light gray lines represent the 95-percent confidence interval around the estimates.

The estimate for 2014 is 257 GOP seats — a number that would be a post-war record if it comes to pass. The high end of the 95-percent confidence interval is 295 seats; the low end is 218 seats. If I were a bettor, I’d put my money on 257, plus or minus 5 percent, that is, 244-270 seats.
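
For readers who wonder how such an interval is produced, here is a minimal sketch, in Python, that fits a simple least-squares trend to made-up seat counts (not the data or model behind the estimates above) and reports a point prediction with its 95-percent prediction interval:

    # Illustration only: regress hypothetical GOP House seat totals on election
    # year and compute a 95-percent prediction interval for 2014.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    years = np.arange(1946, 2014, 2)  # the 34 post-war elections, 1946-2012
    seats = 200 + 0.5 * (years - 1946) + rng.normal(0, 15, years.size)  # made-up data

    X = sm.add_constant(years.astype(float))
    fit = sm.OLS(seats, X).fit()

    X_2014 = np.column_stack([np.ones(1), [2014.0]])
    frame = fit.get_prediction(X_2014).summary_frame(alpha=0.05)
    print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]])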


__________
* I revised this post 5 hours after its initial publication, to incorporate the result of a more robust statistical analysis. The projected number of House seats won by the GOP in 2014 has been revised downward by 6, from 263 to 257.

On Writing: Part One

Lynn Patra offers some writing tips in her post “On Becoming a Better Writer,” and invites readers to add their tips in the comments section. This post will appear while I’m taking a break from the keyboard, so I won’t be able to place it in the comments section of Lynn’s post. Consider this a virtual comment.

This is the first of Lynn’s tips:

Voraciously read the work of great writers and allow yourself to be guided by whatever subjects interest you. If you love to read, this [is] the most enjoyable and engaging way to learn how to write well. With continuous exposure to good writing, your mind will absorb the various lessons that school teachers tried to impart the boring way.

That’s excellent advice. A related bit of advice is to heed what great writers have to say about writing.

W. Somerset Maugham (English, 1874-1965) was a prolific and popular playwright, novelist, short-story writer, and author of non-fiction works. He reflected on his life and career as a writer in The Summing Up. It appeared in 1938, when Maugham was 64 years old and more than 40 years into his very long career. I first read The Summing Up about 40 years ago, and immediately became an admirer of Maugham’s candor and insight. This led me to become an avid reader of Maugham’s novels and short-story collections. And I have continued to consult The Summing Up for booster shots of Maugham’s wisdom.

I offer the following excerpts of the early pages of The Summing Up, where Maugham discusses the craft of writing:

I have never had much patience with the writers who claim from the reader an effort to understand their meaning…. There are two sorts of obscurity that you find in writers. One is due to negligence and the other to wilfulness. People often write obscurely because they have never taken the trouble to learn to write clearly. This sort of obscurity you find too often in modern philosophers, in men of science, and even in literary critics. Here it is indeed strange. You would have thought that men who passed their lives in the study of the great masters of literature would be sufficiently sensitive to the beauty of language to write if not beautifully at least with perspicuity. Yet you will find in their works sentence after sentence that you must read twice to discover the sense. Often you can only guess at it, for the writers have evidently not said what they intended.

Another cause of obscurity is that the writer is himself not quite sure of his meaning. He has a vague impression of what he wants to say, but has not, either from lack of mental power or from laziness, exactly formulated it in his mind and it is natural enough that he should not find a precise expression for a confused idea. This is due largely to the fact that many writers think, not before, but as they write. The pen originates the thought. …. From this there is only a little way to go to fall into the habit of setting down one’s impressions in all their original vagueness. Fools can always be found to discover a hidden sense in them….

Simplicity is not such an obvious merit as lucidity. I have aimed at it because I have no gift for richness. Within limits I admire richness in others, though I find it difficult to digest in quantity. I can read one page of Ruskin with delight, but twenty only with weariness. The rolling period, the stately epithet, the noun rich in poetic associations, the subordinate clauses that give the sentence weight and magnificence, the grandeur like that of wave following wave in the open sea; there is no doubt that in all this there is something inspiring. Words thus strung together fall on the ear like music. The appeal is sensuous rather than intellectual, and the beauty of the sound leads you easily to conclude that you need not bother about the meaning. But words are tyrannical things, they exist for their meanings, and if you will not pay attention to these, you cannot pay attention at all. Your mind wanders…..

But if richness needs gifts with which everyone is not endowed, simplicity by no means comes by nature. To achieve it needs rigid discipline…. To my mind King James’s Bible has been a very harmful influence on English prose. I am not so stupid as to deny its great beauty, and it is obvious that there are passages in it of a simplicity which is deeply moving. But the Bible is an oriental book. Its alien imagery has nothing to do with us. Those hyperboles, those luscious metaphors, are foreign to our genius…. The plain, honest English speech was overwhelmed with ornament. Blunt Englishmen twisted their tongues to speak like Hebrew prophets. There was evidently something in the English temper to which this was congenial, perhaps a native lack of precision in thought, perhaps a naive delight in fine words for their own sake, an innate eccentricity and love of embroidery, I do not know; but the fact remains that ever since, English prose has had to struggle against the tendency to luxuriance…. It is obvious that the grand style is more striking than the plain. Indeed many people think that a style that does not attract notice is not style…. But I suppose that if a man has a confused mind he will write in a confused way, if his temper is capricious his prose will be fantastical, and if he has a quick, darting intelligence that is reminded by the matter in hand of a hundred things he will, unless he has great self-control, load his pages with metaphor and simile….

Whether you ascribe importance to euphony … must depend on the sensitiveness of your ear. A great many readers, and many admirable writers, are devoid of this quality. Poets as we know have always made a great use of alliteration. They are persuaded that the repetition of a sound gives an effect of beauty. I do not think it does so in prose. It seems to me that in prose alliteration should be used only for a special reason; when used by accident it falls on the ear very disagreeably. But its accidental use is so common that one can only suppose that the sound of it is not universally offensive. Many writers without distress will put two rhyming words together, join a monstrous long adjective to a monstrous long noun, or between the end of one word and the beginning of another have a conjunction of consonants that almost breaks your jaw. These are trivial and obvious instances. I mention them only to prove that if careful writers can do such things it is only because they have no ear. Words have weight, sound and appearance; it is only by considering these that you can write a sentence that is good to look at and good to listen to.

I have read many books on English prose, but have found it hard to profit by them; for the most part they are vague, unduly theoretical, and often scolding. But you cannot say this of Fowler’s Dictionary of Modern English Usage. It is a valuable work. I do not think anyone writes so well that he cannot learn much from it. It is lively reading. Fowler liked simplicity, straightforwardness and common sense. He had no patience with pretentiousness. He had a sound feeling that idiom was the backbone of a language and he was all for the racy phrase. He was no slavish admirer of logic and was willing enough to give usage right of way through the exact demesnes of grammar. English grammar is very difficult and few writers have avoided making mistakes in it….

But Fowler had no ear. He did not see that simplicity may sometimes make concessions to euphony. I do not think a far-fetched, an archaic or even an affected word is out of place when it sounds better than the blunt, obvious one or when it gives a sentence a better balance. But, I hasten to add, though I think you may without misgiving make this concession to pleasant sound, I think you should make none to what may obscure your meaning. Anything is better than not to write clearly. There is nothing to be said against lucidity, and against simplicity only the possibility of dryness. This is a risk that is well worth taking when you reflect how much better it is to be bald than to wear a curly wig. But there is in euphony a danger that must be considered. It is very likely to be monotonous…. I do not know how one can guard against this. I suppose the best chance is to have a more lively faculty of boredom than one’s readers so that one is wearied before they are. One must always be on the watch for mannerisms and when certain cadences come too easily to the pen ask oneself whether they have not become mechanical. It is very hard to discover the exact point where the idiom one has formed to express oneself has lost its tang….

If you could write lucidly, simply, euphoniously and yet with liveliness you would write perfectly: you would write like Voltaire. And yet we know how fatal the pursuit of liveliness may be: it may result in the tiresome acrobatics of Meredith. Macaulay and Carlyle were in their different ways arresting; but at the heavy cost of naturalness. Their flashy effects distract the mind. They destroy their persuasiveness; you would not believe a man was very intent on ploughing a furrow if he carried a hoop with him and jumped through it at every other step. A good style should show no sign of effort. What is written should seem a happy accident….

Ruminations on the Left in America

I deplore the adage that “we get the kind of government that we deserve.” I’m not part of the “we,” nor are the millions of other Americans who despise the kind of government that’s been forced upon us. The adage should be “We — who despise big government — have gotten the kind of government that others want and therefore deserve.”

One of their government’s favorite themes is “racism.” It’s threadbare from use; it’s the “global warming” of the 2010s. Another favorite theme is “inequality,” which is as threadbare as “racism.” What’s more, the loudest voices against “inequality” are the very persons who could do something about it, personally, inasmuch as they’re left-wing 1-percenters with plenty of money to hand out. I wonder how many homeless persons they have invited into their homes. I wonder how many of those homeless persons are black.

Speaking of “global warming” — or whatever it’s called nowadays — did you follow the recent march against it? Neither did I, though I couldn’t avoid seeing a few mentions of it in “news” outlets. The marchers looked like a roundup of the usual suspects: overenthusiastic youth; naive believers in the power of government to do good (a lot of overlap with overenthusiastic youth); granola-munching, sandal-wearing Luddites who think they’d like to live in the Dark Ages, but with smart phones, of course; and all manner of lefties who think they’d like to live in a place where an all-wise dictator tells everyone what to do — though I’ve noticed that they’re not flocking to Cuba.

By now, you may have surmised that hypocrisy rankles me. And because it does, I despise most politicians, media types, and celebrities — and all affluent lefties. But I also despise less-than-affluent lefties who simply want to feed at a trough that’s filled by the efforts of others, and who like to chalk up their failures to “society” and discrimination of one kind and another. Those various kinds of discrimination must, of course, be alleviated by government-granted privileges of one kind and another. The idea of making the best of one’s lot, by dint of determination and effort, seems to have vanished from the mental makeup of most Americans (in emulation of Europeans). Claims of victimhood and demands for special treatment by government have become de rigueur.

This is merely a manifestation of the sea-change in the American ethos. Though the origins of the sea-change can be traced to the Progressive Era of the late 1800s and early 1900s, and to the New Deal of the 1930s, its inevitability was ensured in the 1960s. It was then that the Great Society enshrined “entitlements” and the media sanctified unwashed, loud-mouthed quasi-traitors for their trend-setting effort to ensure that government became incapable of doing the one thing that it should do: protect Americans from predators, foreign and domestic.

Closer to home, there’s the People’s Republic of insert-the-name-of-your-municipality (mine is Austin), which hosts dozens of blood-sucking tax-levying jurisdictions in constant search of ways to make life more miserable and expensive for residents (the indolent and dependent excepted, of course); for example:

  • frequent, traffic-jamming street closings for various politically correct observances
  • four weekends of very loud music festivals (the sound carries for miles)
  • tax breaks to attract “jobs,” which means more residents, but not lower taxes per capita
  • a poor road network that could barely handle Austin’s population as it was 10 years ago, but which could have been improved
  • restriction of the inadequate road network’s capacity by adding bike lanes that are used mainly by Yuppies, for exercise
  • a push to install a very expensive urban rail line that will be disruptive while it’s under construction, will displace traffic lanes and parking spaces, and won’t handle more than a tiny fraction of Austin’s transportation needs
  • lack of interest in a rapid bus system because it’s not “sexy” like urban rail
  • “affordable” (i.e., subsidized) housing, to foster “diversity” (i.e., the indolent and crime-prone get to live near and make life “better” for the aspiring and hard-working)
  • expensive “green” energy because it’s “religiously” correct to believe in AGW.

These and other abominations are supported by local lefties, whose core constituency comprises the students and faculty of the University of Texas and a dwindling hippie population that’s being priced out of Austin (an inadequate but welcome recompense). The core has been augmented by hordes of Californians who have flocked to Austin to escape their home State’s high taxes and onerous regulations. They, of course, favor the very programs that yield high taxes and onerous regulations, but are surprised when they figure out (as some of them do) that there’s a link between the programs, on the one hand, and the taxes and regulations, on the other. Unless you’re very lucky, you live in a People’s Republic much like Austin.

Which brings me (don’t ask how) to the final part of today’s sermon: euphemisms. These are much-favored by lefties, who seem unable to confront reality (as discussed in the preceding paragraph). Thus, for example:

  • crippled became handicapped, which became disabled and then differently abled or something-challenged
  • stupid became learning disabled, which became special needs (a euphemistic category that houses more than the stupid)
  • poor became underprivileged, which became economically disadvantaged, which became (though isn’t overtly called) entitled (as in entitled to other people’s money)
  • colored persons became Negroes, who became blacks, then African-Americans, and now (often) persons of color.

Why do lefties insist on varnishing the truth? They are — they insist — strong supporters of science, which is (ideally) the pursuit of truth. But they aren’t really supporters of science (witness their devotion to the “unsettled” science of AGW), nor do they want the truth. They simply want to see the world as they would like it to be; for example:

  • a vibrant economy, but without the “too rich” who inevitably accompany it; they should be punished for the sin of being “too rich” (athletes, media stars, and rich benefactors of left-wing causes excluded, of course)
  • redemption for the left’s pets du jour through government programs that never seem to overcome human failings and foibles but always result in well-fed bureaucrats
  • peace on Earth without swift and certain justice or a strong military, because “we” don’t want to offend a certain racial group or members of a certain religion (who have shown that they hate America, Americans, and some of the left’s pets du jour, namely, emancipated women and homosexuals) — but don’t take away my bodyguard or his .357 Magnum.

And so it goes in what little is left of the Founders’ America. For more about America’s left and the damage it has done to liberty and prosperity, see the related posts listed below.


*     *     *

Related posts:
Socialist Calculation and the Turing Test
How to Deal with Left-Wing Academic Blather
It’s Not Anti-Intellectualism, Stupid
The Case Against Campus Speech Codes
The Pathology of Academic Leftism
The People’s Romance
Lefty Profs
Apropos Academic Freedom and Western Values
Whiners — Left and Libertarian
Diagnosing the Left
Why So Few Free-Market Economists?
Academic Bias
Intellectuals and Capitalism
The Media, the Left, and War
Asymmetrical (Ideological) Warfare
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
Tolerance on the Left
David Brooks, Useful Idiot for the Left
Left-Libertarians, Obama, and the Zimmerman Case
The Culture War
Sorkin’s Left-Wing Propaganda Machine
The Pretence of Knowledge

The Obama Effect: Disguised Unemployment

Updated here.

Two takeaways:

  • The “official” unemployment rate of 5.9 percent is phony. The real rate remains at 12.4 percent, just 1.1 points below the 21st century high-water mark of 13.5 (reached in 2009, 2010, 2011, and 2013).
  • The real unemployment rate is disguised by the continued decline of the labor-force participation rate — a decline that has accelerated since the onset of Obamanomics. The decline is concentrated among younger workers and has probably been helped along by Obamacare. (See the final paragraph of the post; a rough sketch of this participation adjustment appears below.)
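
For concreteness, here is a back-of-the-envelope sketch of one common way to estimate a “real” unemployment rate: hold the labor-force participation rate at a benchmark (pre-decline) level and count the resulting “missing” workers as unemployed. The post does not spell out its exact method, and the figures below are hypothetical placeholders, so the result will not match the 12.4-percent figure above.

    def adjusted_unemployment_rate(employed_millions,
                                   working_age_pop_millions,
                                   benchmark_participation):
        """Unemployment rate if participation had stayed at the benchmark."""
        benchmark_labor_force = benchmark_participation * working_age_pop_millions
        unemployed = benchmark_labor_force - employed_millions
        return unemployed / benchmark_labor_force

    # Hypothetical inputs: 147 million employed, 248 million of working age,
    # and a benchmark participation rate of 66 percent.
    print(f"{adjusted_unemployment_rate(147.0, 248.0, 0.66):.1%}")  # 10.2%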


The Broader Meaning of a Personal Anniversary

This is the 17th anniversary of my retirement from full-time employment. I’ve discussed the broader implications of this anniversary in three earlier posts: “Patience As a Tool of Strategy” (10/03/11), “Happy Anniversary to Me” (10/03/12), and “More Thoughts about Patience and Its Significance” (10/03/13). The third post covers the ground well. I commend it to you.
