Democracy, Human Nature, and America’s Future

Like many (most?) persons of a libertarian stripe, I see democracy as an enemy of liberty. Democracy is popularly thought of as

a form of government in which the supreme power is vested in the people and exercised directly by them or by their elected agents under a free electoral system.

There are two things wrong with this view. First, the “supreme power” isn’t just exercised by elected agents but, with their blessing, it is exercised mainly by unelected agents: judges, law-enforcement personnel, regulators of myriad economic activities at all levels of government, and on and on. Many of these appointed functionaries write the very rules that they and others enforce — rules that often are barely recognizable as deriving from ordinances and statutes enacted by elected agents.

In sum, what is called democracy in America can reasonably be called fascism, in the proper meaning of the word. It isn’t called that mainly because neither “the people” nor the elite purveyors of fascism are willing to face facts. And then there are the many (far too many) Americans who don’t seem to object to an intrusive state.

Here’s the second problem with the popular view of democracy: It implies that a majority of voters — or a majority of their elected agents — should have unlimited power to meddle in everyone’s personal and business affairs. The implication has become fact, with the sweeping aside of constitutional checks on the powers of the legislative and executive branches, with the connivance of the judicial branch. The elected agents of “the people” — and those agents’ appointed functionaries — have acquired unlimited power by pandering to “the people,” by appealing to their envy, greed, and deluded faith in central planning.

What all of this illustrates is something that was obvious to the Framers of the Constitution: Even if there were (or could be) such a thing as political equality, democracy is dangerous because it can’t be constrained. Why would anyone expect “the people” or their elected representatives or their appointed functionaries to limit the power of the state to the defense of citizens? “The people” believe — wrongly, in most cases — that the state’s unlimited power makes them better off. In fact, the true beneficiaries of the state’s power are elected officials, appointed functionaries, and their pseudo-capitalist cronies.

True believers will retort that the problem isn’t with democracy, it’s with the way that democracy has been put into practice. They are indulging in the nirvana fallacy, the tendency to believe in “more perfect” systems that can somehow be attained despite human nature. In short, true believers substitute “ought to be” (in their view) for “what can be.”

They are no different from the true believers in socialism, who maintain — despite all evidence to the contrary — that “true socialism” is possible but hasn’t yet been put into practice. It would be possible only if socialism (like democracy) didn’t involve human beings. No system that involves human beings can rise above the tendencies of human nature, among which, as noted above, are envy and greed.

Then, there is power-lust. This may be less prevalent than envy and greed, but it is more dangerous because it exploits envy and greed, and amplifies their effects. Almost no politician, regardless of his rhetoric, is driven by a pure desire to “do good”; he is almost certainly driven by a desire to use his power to do what he thinks of — or rationalizes — as “good.”

And use his power he will, for he believes that it is his right and duty to make rules for others to obey. This is always done in the name of “good,” but is really done in the service of cronies and constituents who enable the politician to remain in power. In short, the last person to trust with high office is a person who seeks it. That is why elections usually come down to a choice of the lesser evil.

What is to be done about democracy in America? Nothing like the revocation of near-universal suffrage, of course. The natives (of all hues, creeds, genders, and origins) wouldn’t stand for it. The only viable reform is constitutional, that is, a constant chipping-away at the power of the state.

And how is that to be accomplished, inasmuch as the GOP has proved to be an unreliable ally in the fight against statism? Perhaps the GOP would be less faint-hearted if it were to control the White House and Congress. And perhaps the best thing to come of that control would be the replacement of a Ruth Bader Ginsburg by another Clarence Thomas. (I hold little hope for courageous action on entitlements and regulatory excesses.) But, given the electorate’s fickleness, it wouldn’t be many years before an Antonin Scalia is replaced by a reincarnated William O. Douglas. In sum, I hold little hope that the Supreme Court will rescue liberty from democracy.

It’s also possible that GOP control might result in an Article V convention:

…[O]n the Application of the Legislatures of two thirds of the several States, [Congress] shall call a Convention for proposing Amendments, which … shall be valid to all Intents and Purposes, as Part of this Constitution, when ratified by the Legislatures of three fourths of the several States, or by Conventions in three fourths thereof, as the one or the other Mode of Ratification may be proposed by the Congress….

But what would be the thrust of any proposed amendments that leap the high hurdle of ratification? “The Constitution says this, and we mean it”? The Constitution already says this, and it’s ignored.

What’s needed is real action, not the mere placement of words on paper. Thus the best (and perhaps only) hope for a permanent withdrawal from the precipice of totalitarianism is de facto secession:

This has begun in a small way, with State-level legalization of marijuana, which has happened in spite of the central government’s de jure power to criminalize it. It is therefore imaginable that GOP control of the White House and Congress would embolden some GOP-controlled States to openly flout federal laws, regulations, and judicial decrees about such matters as same-sex marriage, environmental emissions, and Obamacare — to name a few obvious targets. The result, if it came to pass, would be something like the kind of federalism envisioned by the Framers of the Constitution.

Beyond that, the only hope for liberty seems to lie in drastic (but unlikely) action.

*     *     *

Related reading:
Hans-Hermann Hoppe, “Natural Elites, Intellectuals, and the State,” Mises Institute, July 31, 2006
Hans-Hermann Hoppe, A Short History of Man: Progress and Decline, Mises Institute, March 5, 2015
Hans von Spakovsky, “Book Review: Mike Lee on the 6 ‘Lost’ Provisions of the Constitution,” The Daily Signal, April 8, 2015
Myron Magnet, “The Dead Constitution,” City Journal, April 10, 2015

Related posts:
The State of Nature
Democracy and Liberty
The Interest-Group Paradox
Fascism and the Future of America
The Near-Victory of Communism
Tocqueville’s Prescience
The Constitution: Original Meaning, Corruption, and Restoration
Our Perfect, Perfect Constitution
Restoring Constitutional Government: The Way Ahead
“We the People” and Big Government
How Libertarians Ought to Think about the Constitution
An Agenda for the GOP
The States and the Constitution
No Wonder Liberty Is Disappearing


Tolerance

Bryan Caplan struggles to define tolerance. This seems to be what he’s searching for:

a disposition to allow freedom of choice and behavior

TheFreeDictionary.com, Thesaurus, Noun, 2

With that definition in mind, I’ll address the reasons given by Caplan for practicing tolerance:

1. People’s moral objections to how other people use their own person and property are usually greatly overstated – or simply wrong.  Think about how often people sneer at the way others dress, talk, or even walk.  Think about how often people twist personality clashes into battles of good versus evil.  From a calm, detached point of view, most of these complaints are simply silly.

The link points to a post in which Caplan confesses his own immature silliness. What’s missing are the “complaints” that are not “simply silly.” Take abortion, for example. It’s a practice that’s often defended on pseudo-libertarian grounds: a patently contrived right to privacy, for example. Caplan is cagey about abortion. If he is opposed to it, his reasons seem utilitarian rather than moral. In any event, opposition to abortion is not mere silliness; it is based on a profound moral objection to murder.

Nor should so-called personality clashes be dismissed as silliness. For example, during my 30 years as an analyst and manager at a major defense think-tank, I was a party to five conflicts (lasting months and years) that ignorant bystanders might have called personality clashes (and some bystanders did just that). But all five conflicts involved substantive differences about management methods, business ethics, or contractual performance.

Contra Caplan, I believe that differences about principle or substance give rise to most so-called personality clashes. It’s easy to dislike a person — and hard to disguise dislike — when that person reveals himself as incompetent, venal, manipulative, or corrupt. It seems to me that Caplan’s unfounded characterization of “most” disputes as personality clashes, and his back-handed dismissal of them as “battles of good versus evil,” reflect his own deep-seated taste for conflict avoidance; he is, after all, an avowed and outspoken pacifist.

*     *     *

2. People’s moral objections to how other people use their own person and property often conflate innocent ignorance with willful vice.

I’ll have to compensate for Caplan’s vagueness by offering examples of what he might have in mind:

Al disapproves of Bob’s drunken driving, which caused a serious accident. Bob didn’t know he had been drinking vodka-spiked lemonade.

Bob was innocently ignorant of the vodka in the lemonade when he was drinking it. But Bob probably knew that he wasn’t fit to drive if he was impaired enough to have an alcohol-induced accident. It’s therefore reasonable to disapprove of Bob’s drunken driving, even though he didn’t intend to drink alcohol.

Jimmy and Johnny were playing with matches, and started a fire that caused their family’s house to burn to the ground. They escaped safely, but all of their family’s possessions — many of them irreplaceable — were lost. Nor did insurance cover the full cost of rebuilding their house.

Jimmy and Johnny may have been innocent, but it’s hard not to disapprove of their parents for lax child-rearing or imprudence (not keeping matches safely hidden from children).

Alison looked carefully before changing lanes, but a car on her right was in her blind spot. She almost hit the car as she began to change lanes, but pulled back into her own lane before hitting it. Jake, the driver of the other car, was enraged by the near collision and honked at Alison.

Jake was rightly enraged. He might have been killed. Alison may have looked carefully, but it’s evident that she didn’t look carefully enough.

LaShawn enjoys rap music, especially loud rap music. (Is there any other way to play it?) He has some neighbors who don’t enjoy rap music and don’t want to hear it. The only way to get LaShawn to turn down the volume is to complain to him about the music. It doesn’t occur to LaShawn that the volume is too high and that his neighbors might not care for rap music.

This used to be called “lack of consideration,” and it was rightly thought of as a willful vice.

DiDi is a cell-phone addict. She’s on the phone almost everywhere she goes, yakking it up with her friends. DiDi doesn’t seem to care that her overheard conversations — loud and one-sided — are annoying and distracting to many of the persons who are subjected to them.

Lack of consideration, again.

Jerry has a fondness for booze. But he stays sober until Friday night, when he goes to his local bar and gets plastered. The more he drinks the louder and more obnoxious he becomes.

When Jerry gets drunk, he isn’t in control of himself, in some psychological sense. Thus his behavior might be said, by some, to arise out of innocent ignorance. But Jerry is in control of himself before he gets drunk. He surely knows how he behaves when he’s drunk, and how his behavior affects others. Jerry’s drunken behavior arises from a willful vice.

Ted and Deirdre, a married couple, are highly paid yuppies. They worked hard to earn advanced degrees, and they work hard at their socially valued professions (physician and psychologist). They live in an upscale, gated community, drive $75,000 cars, dine at top-rated restaurants, etc. And yet, despite the obvious connection between their hard work and their incomes (and what those incomes afford them), they are ardent “liberals.” (See the sidebar for my views on modern “liberalism.”) They vote for left-wing candidates, and contribute as much as the law allows to the campaigns of left-wing candidates. They have many friends who are like them in background, accomplishments, and political views.

This may seem like a case of innocent ignorance, but it’s not. Ted and Deirdre (and their friends) are intelligent. They understand incentives. They understand (or they would, if they thought about it) that progressive taxation and regulations blunt incentives to work, save, and invest. They therefore understand (or could easily understand) that the plight of the poor and “downtrodden” who are supposed to be helped by progressive taxation and regulations is actually made worse by those things. They certainly understand such things viscerally because they make every effort to reduce their taxes (through legal means, of course); they do not contribute voluntarily to the U.S. Treasury (even though they know that they could); and they dislike regulations that affect them directly. Ted and Deirdre (and the legions like them) allow their guilt-driven desire for “equality” to obscure easily grasped facts of life. They ignore or suppress the facts of life in order to preen as “caring” persons. At bottom, their ignorance is willful, and inexcusable in persons of intelligence.

In sum, it’s far from evident to me that moral objections to how other people use their own persons and property “often conflate innocent ignorance with willful vice.” There’s much less innocent ignorance in the world than Caplan would like to believe.

*     *     *

3. People’s best-founded moral objections to how other people use their own person and property are usually morally superfluous.  Why?  Because the Real World already provides ample punishment.  Consider laziness.  Even from a calm, detached point of view, a life of sloth seems morally objectionable.  But there’s no need for you to berate the lazy – even inwardly.  Life itself punishes laziness with poverty and unemployment… So even if you accept (as I do) the Rossian principle that a just world links virtue with pleasure and vice with pain, there is no need to add your harsh condemnation to balance the cosmic scales.

On what planet does Caplan live? Governments in the United States — the central government foremost among them — reward and encourage sloth through extended unemployment benefits, bogus disability payments, food stamps, etc., etc., etc. There’s every reason to voice one’s displeasure with such goings-on, and to give force to that displeasure by working and voting against the policies and politicians who make it possible for the slothful to live on the earnings of others.

*     *     *

4. The “especially strangers” parenthetical preempts the strongest counter-examples to principled tolerance.  There are obvious cases where you should strongly oppose what your spouse, children, or friends do with themselves or their stuff.  But strangers?  Not really.

Yes, really. See all of my comments above.

*     *     *

5. Intolerance is bad for the intolerant.  As Buddha never said, “Holding onto anger is like drinking poison and expecting the other person to die.”  The upshot is that the Real World punishes intolerance along with laziness, drunkenness, and gluttony.  Perhaps this is the hidden wisdom of the truism that “Haters gonna hate.”

Here Caplan makes the mistake of identifying intolerance with anger. A person who is intolerant of carelessness, thoughtlessness, and willful vice isn’t angry all the time. He may be angered by carelessness, thoughtlessness, and willful vice when he sees them, but his anger is righteous, targeted, and controlled. Generally, he’s a happy person because he’s probably conservative.

It’s all well and good to tolerate freedom of choice and behavior, in the abstract. But civilization depends crucially on intolerance of particular choices and behaviors that result in real harm to others — psychic, material, and physical. Tolerance of such choices and behaviors is simply a kind of appeasement, which is what I would expect of Caplan — a man who can safely preach pacifism because he is well-guarded by the police and defense forces of his locality, State, and nation.

*     *     *

Related posts:
The Folly of Pacifism
The Folly of Pacifism, Again
More Pseudo-Libertarianism
Defending Liberty against (Pseudo) Libertarians


The Many-Sided Curse of Very Old Age

Here’s a tale that will be familiar to millions of the Greatest generation and their children of the Silent and Baby Boomer generations:

My grandparents and my wife’s grandparents were born in the years from 1875 to 1899. Their ages at death ranged from 25 to 96, for an average of 62.

My wife’s parents are still living, at the ages of 95 and 94. My father died at the age of 72, but my mother lives on at almost 99. That’s an average age of 90 — and rising.

My wife’s father was 6 when his father died and 56 when his mother died.

My wife’s mother was 14 when her father died and 52 when her mother died.

My mother was 25 when her father died and 61 when her mother died.

My father was 7 when his mother died and 35 when his father died.

My wife and I are 70-plus, and our three remaining parents are still going on … and on …

A long-lived parent is a mixed blessing. If you’re close to a parent, that parent’s growing dependence on you becomes a labor of love. If you’re not close to a parent, his or her long life simply imposes a labor of duty. In either case, the labor comes at an age when the child is on the downslope of energy.

What’s worse is that the rather competent, energetic, and often engaging parent of one’s earlier years is replaced by the addled, hobbled, and dull parent of one’s “golden years.” Financial prudence becomes miserliness; gregariousness turns into the retelling of stories and jokes for the umpteenth time; admirable determination gives way to pointless stubbornness.

Very old age is a test of character, and it’s a great disappointment when a parent fails the test. Too often, the facade of good cheer crumbles, to reveal extreme egoism, irresponsibility, and rancid bigotry.

I blame Medicare, in good part, for the miseries that very old age inflicts on the very old and on their children. I also blame Medicare for the miseries that it has inflicted and will continue to inflict on American taxpayers and workers.

The idea of ensuring access to health care is laudable, but Medicare — like so many government programs — has baleful side effects. To begin with, Medicare is yet another instance of the presumptuousness of the powerful. Big Brother presumes to know better than his subjects how they should spend their money and arrange their lives.

Medicare wasn’t sold as a subsidy, but it is one, just like Social Security and Medicaid. As with Social Security, the Medicare payroll tax doesn’t finance the full cost of the program. But because there are Medicare (and Social Security) taxes, most retirees believe (wrongly) that they have paid for their Medicare (and Social Security) benefits. They then consume their Medicare benefits with abandon because those benefits are almost free to them.

Instead of helping the truly penurious, Medicare (like Social Security) has become a middle-class welfare program. It relieves millions of workers from the responsibility of saving enough to pay for (or insure) their health care in old age. Thus the disincentivizing effects of Medicare (and Social Security) have caused and will continue to cause hundreds of millions of workers to produce far less than they would otherwise have produced. But drone-like politicians don’t understand such things.

The ever-growing cost of Medicare (with Social Security and Medicaid) threatens the well-being of future generations. Our progeny will be saddled with the exorbitant cost of caring for and subsidizing a burgeoning population of long-lived retirees — so many of whom know that they have lived too long and pray nightly “to shuffle[] off this mortal coil.”

We and our progeny will be hard-pressed to bear the exorbitant cost of Medicare and its socialistic ilk. The economy will sink deeper into the doldrums as resources are diverted from capital investments to subsidize oldsters who, for the most part, could have saved enough to pay for their own care and feeding had they not been discouraged from doing so.

The perfect storm of rising costs and slower economic growth means that future generations of Americans will be less well-fed, less well-clothed, less well-sheltered, and less healthy than the Greatest, the Silent, and the Boomer generations. There is great irony in the naming of Medicare, Medicaid, and Social Security.

*     *     *

Related reading:
Life Expectancy Graphs at Mapping History
Steve Calfo et al., “Last Year of Life Study,” Centers for Medicare & Medicaid Services, Office of the Actuary (undated)
Frank R. Lichtenberg, “The Effects of Medicare on Health Care Utilization and Outcomes,” Frontiers in Health Policy Research (Volume 5), MIT Press, January 2002
Kenneth Y. Chay et al., “Medicare, Hospital Utilization and Mortality: Evidence from the Program’s Origins,” NBER conference paper, February 2010
Theodore Dalrymple, “The Demand for Perfection,” Taki’s Magazine, December 14, 2014
Timothy Taylor, “How Medicare Spending Rises with Age,” The Conversable Economist, January 15, 2015

*     *     *

Related posts:
As Goes Greece
The Commandeered Economy
America’s Financial Crisis Is Now
The Rahn Curve Revisited


On Writing

I’ve combined and edited my four posts on writing. The result is here.

On Writing: Part Four

Part One gives excerpts of W. Somerset Maugham’s candid insights about the craft of writing. Part Two gives my advice to writers of non-fiction works. Part Three recommends some writings about writing, some writers to emulate, and a short list of reference works. This part delivers some sermons about practices to follow if you wish to communicate effectively, be taken seriously, and not be thought of as a semi-literate, self-indulgent, faddish dilettante. (In Part Three, I promised sermonettes, but they grew into sermons as I wrote.)

The first section, “Stasis, Progress, Regress, and Language,” comes around to a defense of prescriptivism in language. The second section, “Illegitimi Non Carborundum Lingo” (mock-Latin for “Don’t Let the Bastards Wear Down the Language”), counsels steadfastness in the face of political correctness and various sloppy usages.

STASIS, PROGRESS, REGRESS, AND LANGUAGE

To every thing there is a season, and a time to every purpose under the heaven….

Ecclesiastes 3:1 (King James Bible)

Nothing man-made is permanent; consider, for example, the list of empires here. In spite of the history of empires — and other institutions and artifacts of human endeavor — most people seem to believe that the future will be much like the present. And if the present embodies progress of some kind, most people seem to expect that progress to continue.

Things do not simply go on as they have been without the expenditure of requisite effort. Take the Constitution’s broken promises of liberty, about which I have written so much. Take the resurgence of Russia as a rival for international influence. This has been in the works for about 20 years, but didn’t register on most Americans until the recent Crimean crisis and related events in Ukraine. What did Americans expect? That the U.S. could remain the unchallenged superpower while reducing its armed forces to the point that they were strained by relatively small wars in Afghanistan and Iraq? That Vladimir Putin would be cowed by an American president who had so blatantly advertised his hopey-changey attitude toward Iran and Islam, while snubbing traditional allies like Poland and Israel?

Turning to naïveté about progress, I offer Steven Pinker’s fatuous The Better Angels of Our Nature: Why Violence Has Declined. Pinker tries to show that human beings are becoming kinder and gentler. I have much to say in another post about Pinker’s thesis. One of my sources is Robert Epstein’s review of Pinker’s book. This passage is especially apt:

The biggest problem with the book … is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Less important in the grand scheme, but no less wrong-headed, is the idea of limitless progress in the arts. To quote myself:

In the early decades of the twentieth century, the visual, auditory, and verbal arts became an “inside game.” Painters, sculptors, composers (of “serious” music), choreographers, and writers of fiction began to create works not for the enjoyment of audiences but for the sake of exploring “new” forms. Given that the various arts had been perfected by the early 1900s, the only way to explore “new” forms was to regress toward primitive ones — toward a lack of structure…. Aside from its baneful influence on many true artists, the regression toward the primitive has enabled persons of inferior talent (and none) to call themselves “artists.” Thus modernism is banal when it is not ugly.

Painters, sculptors, etc., have been encouraged in their efforts to explore “new” forms by critics, by advocates of change and rebellion for its own sake (e.g., “liberals” and “bohemians”), and by undiscriminating patrons, anxious to be au courant. Critics have a special stake in modernism because they are needed to “explain” its incomprehensibility and ugliness to the unwashed.

The unwashed have nevertheless rebelled against modernism, and so its practitioners and defenders have responded with condescension, one form of which is the challenge to be “open minded” (i.e., to tolerate the second-rate and nonsensical). A good example of condescension is heard on Composers Datebook, a syndicated feature that runs on some NPR stations. Every Composers Datebook program closes by “reminding you that all music was once new.” As if to lump Arnold Schoenberg and John Cage with Johann Sebastian Bach and Ludwig van Beethoven.

All music, painting, sculpture, dance, and literature was once new, but not all of it is good. Much (most?) of what has been produced since 1900 is inferior, self-indulgent crap.

And most of the ticket-buying public knows it. Take opera, for example. A recent article purports to show that “Opera is dead, in one chart” (Christopher Ingraham, The Washington Post, October 31, 2014). Here’s the writer’s interpretation of the chart:

The chart shows that opera ceased to exist as a contemporary art form roughly around 1970. It’s from a blog post by composer and programmer Suby Raman, who scraped the Met’s public database of performances going back to the 19th century. As Raman notes, 50 years is an insanely low bar for measuring the “contemporary” – in pop music terms, it would be like considering The Beatles’ I Wanna Hold Your Hand as cutting-edge.

Back at the beginning of the 20th century, anywhere from 60 to 80 percent of Met performances were of operas composed some time in the 50 years prior. But since 1980, the share of contemporary performances has surpassed 10 percent only once.

Opera, as a genre, is essentially frozen in amber – Raman found that the median year of composition of pieces performed at the Met has always been right around 1870. In other words, the Met is essentially performing the exact same pieces now that it was 100 years ago….

Contrary to Ingraham, opera isn’t dead; for example, there are more than 220 active opera companies in the U.S. It’s just that there’s little demand for operatic works written after the late 1800s. Why? Because most opera-lovers don’t want to hear the strident, discordant, unmelodic trash that came later. Giacomo Puccini, who wrote melodic crowd-pleasers until his death in 1924, is an exception that proves the rule.

It occurred to me recently that language is in the same parlous state as the arts. Written and spoken English improved steadily as Americans became more educated — and as long as that education included courses which prescribed rules of grammar and usage. By “improved” I mean that communication became easier and more effective; specifically:

  • A larger fraction of Americans followed the same rules in formal communications (e.g., speeches, business documents, newspapers, magazines, and books).
  • Movies and radio and TV shows also tended to follow those rules, thereby reaching vast numbers of Americans who did little or no serious reading.
  • There was a “trickle down” effect on Americans’ written and spoken discourse, especially where it involved mere acquaintances or strangers. Standard American English became a kind of lingua franca, which enabled the speaker or writer to be understood and taken seriously.

I call that progress.

There is, however, an (unfortunately) influential attitude toward language known as descriptivism. It is distinct from (and often opposed to) rule-setting (prescriptivism). Consider this passage from the first chapter of an online text:

Prescriptive grammar is based on the idea that there is a single right way to do things. When there is more than one way of saying something, prescriptive grammar is generally concerned with declaring one (and only one) of the variants to be correct. The favored variant is usually justified as being better (whether more logical, more euphonious, or more desirable on some other grounds) than the deprecated variant. In the same situation of linguistic variability, descriptive grammar is content simply to document the variants – without passing judgment on them.

This misrepresents the role of prescriptive grammar. It’s widely understood that there’s more than one way of saying something, and more than one way that’s understandable to others. The rules of prescriptive grammar, when followed, improve understanding in two ways. First, by avoiding utterances that would be incomprehensible or, at least, very hard to understand. Second, by ensuring that utterances aren’t simply ignored or rejected out of hand because their form indicates that the writer or speaker is either ill-educated or stupid.

What, then, is the role of descriptive grammar? The authors offer this:

[R]ules of descriptive grammar have the status of scientific observations, and they are intended as insightful generalizations about the way that speakers use language in fact, rather than about the way that they ought to use it. Descriptive rules are more general and more fundamental than prescriptive rules in the sense that all sentences of a language are formed in accordance with them, not just a more or less arbitrary subset of shibboleth sentences. A useful way to think about the descriptive rules of a language … is that they produce, or generate, all the sentences of a language. The prescriptive rules can then be thought of as filtering out some (relatively minute) portion of the entire output of the descriptive rules as socially unacceptable.

Let’s consider the assertion that descriptive rules produce all the sentences of a language. What does that mean? It seems to mean that the actual rules of a language can be inferred by examining sentences uttered or written by users of the language. But which users? Native users? Adults? Adults who have graduated from high school? Users with IQs of at least 85?

Pushing on, let’s take a closer look at descriptive rules and their utility. The authors say that

we adopt a resolutely descriptive perspective concerning language. In particular, when linguists say that a sentence is grammatical, we don’t mean that it is correct from a prescriptive point of view, but rather that it conforms to descriptive rules….

The descriptive rules amount to this: They conform to practices that speakers and writers actually use in an attempt to convey ideas, whether or not the practices state the ideas clearly and concisely. Thus the authors approve of these sentences because they’re of a type that might well occur in colloquial speech:

Over there is the guy who I went to the party with.

Over there is the guy with whom I went to the party.

(Both are clumsy ways of saying “I went to the party with that person.”)

Bill and me went to the store.

(“Bill and I went to the store.” or “Bill went to the store with me.” or “I went to the store with Bill.” Aha! Three ways to say it correctly, not just one way.)

But the authors label the following sentences as ungrammatical because they don’t comport with colloquial speech:

Over there is guy the who I went to party the with.

Over there is the who I went to the party with guy.

Bill and me the store to went.

In other words, the authors accept as grammatical anything that a speaker or writer is likely to say, according to the “rules” that can be inferred from colloquial speech and writing. It follows that whatever is is right, even “Bill and me to the store went” or “Went to the store Bill and me,” which aren’t far-fetched variations on “Bill and me went to the store.” (Yoda-isms they read like.) They’re understandable, but only with effort. And further evolution would obliterate their meaning.

The fact is that the authors of the online text — like descriptivists generally — don’t follow their own anarchistic prescription. Wilson Follett puts it this way in Modern American Usage: A Guide:

It is … one of the striking features of the libertarian position [with respect to language] that it preaches an unbuttoned grammar in a prose style that is fashioned with the utmost grammatical rigor. H.L. Mencken’s two thousand pages on the vagaries of the American language are written in the fastidious syntax of a precisian. If we go by what these men do instead of by what they say, we conclude that they all believe in conventional grammar, practice it against their own preaching, and continue to cultivate the elegance they despise in theory….

[T]he artist and the user of language for practical ends share an obligation to preserve against confusion and dissipation the powers that over the centuries the mother tongue has acquired. It is a duty to maintain the continuity of speech that makes the thought of our ancestors easily understood, to conquer Babel every day against the illiterate and the heedless, and to resist the pernicious and lulling dogma that in language … whatever is is right and doing nothing is for the best (pp. 30-1).

Follett also states the true purpose of prescriptivism, which isn’t to prescribe rules for their own sake:

[This book] accept[s] the long-established conventions of prescriptive grammar … on the theory that freedom from confusion is more desirable than freedom from rule…. (op. cit., p. 243).

E.B. White puts it more colorfully in his introduction to The Elements of Style. Writing about William Strunk Jr., author of the original version of the book, White says:

All through The Elements of Style one finds evidence of the author’s deep sympathy for the reader. Will felt that the reader was in serious trouble most of the time, a man floundering in a swamp, and that it was the duty of anyone attempting to write English to drain this swamp quickly and get his man up on dry ground, or at least throw him a rope. In revising the text, I have tried to hold steadily in mind this belief of his, this concern for the bewildered reader (p. xvi, Third Edition).

Descriptivists would let readers founder in the swamp of incomprehensibility. If descriptivists had their way — or what they claim to be their way — American English would, like the arts, recede into formless primitivism.

Eternal vigilance about language is the price of comprehensibility.

ILLEGITIMI NON CARBORUNDUM LINGO

The vigilant are sorely tried these days. What follows are several restrained rants about some practices that should be resisted and repudiated.

Eliminate Filler Words

When I was a child, most parents and all teachers promptly ordered children to desist from saying “uh” between words. “Uh” was then the filler word favored by children, adolescents, and even adults. The resort to “uh” meant that the speaker was stalling because he had opened his mouth without having given enough thought to what he meant to say.

Next came “you know.” It has been displaced, in the main, by “like,” where it hasn’t been joined to “like” in the formation “like, you know.”

The need for a filler word (or phrase) seems ineradicable. Too many people insist on opening their mouths before thinking about what they’re about to say. Given that, I urge Americans in need of a filler word to use “uh” and eschew “like” and “like, you know.” “Uh” is far less distracting and irritating than the rat-a-tat of “like-like-like-like.”

Of course, it may be impossible to return to “uh.” Its brevity may not give the users of “like” enough time to organize their TV-smart-phone-video-game-addled brains and deliver coherent speech.

In any event, speech influences writing. Sloppy speech begets sloppy writing, as I know too well. I have spent the past 50 years of my life trying to undo habits of speech acquired in my childhood and adolescence — habits that still creep into my writing if I drop my guard.

Don’t Abuse Words

How am I supposed to know what you mean if you abuse perfectly good words? Here I discuss four prominent examples of abuse.

Anniversary

Too many times in recent years I’ve heard or read something like this: “Sally and me are celebrating our one-year anniversary.” The “me” is bad enough; “one-year anniversary” (or any variation of it) is truly egregious.

The word “anniversary” means “the annually recurring date of a past event.” To write or say “x-year anniversary” is redundant as well as graceless. Just write or say “first anniversary,” “two-hundred fiftieth anniversary,” etc., as befits the occasion.

To write or say “x-month anniversary” is nonsensical. Something that happened less than a year ago can’t have an anniversary. What is meant is that such-and-such happened “x” months ago. Just say it.

Data

A person who writes or says “data is” is at best an ignoramus and at worst a Philistine.

Language, above all else, should be used to make one’s thoughts clear to others. The pairing of a plural noun and a singular verb form is distracting, if not confusing. Even though datum is seldom used by Americans, it remains the singular foundation of data, which is the plural form. Data, therefore, never “is”; data always “are.”

H.W. Fowler says:

Latin plurals sometimes become singular English words (e.g., agenda, stamina) and data is often so treated in U.S.; in Britain this is still considered a solecism… (A Dictionary of Modern English Usage, Second Edition, p. 119).

But Wilson Follett is better on the subject:

Those who treat data as a singular doubtless think of it as a generic noun, comparable to knowledge or information… [TEA: a generous interpretation]. The rationale of agenda as a singular is its use to mean a collective program of action, rather than separate items to be acted on. But there is as yet no obligation to change the number of data under the influence of error mixed with innovation (op. cit., pp. 130-1).

Hopefully and Its Brethren

Mark Liberman of Language Log discusses

the AP Style Guide’s decision to allow the use of hopefully as a sentence adverb, announced on Twitter at 6:22 a.m. on 17 April 2012:

Hopefully, you will appreciate this style update, announced at ‪#aces2012‬. We now support the modern usage of hopefully: it’s hoped, we hope.

Liberman, who is a descriptivist, defends AP’s egregious decision. His defense consists mainly of citing noted writers who have used “hopefully” where they meant “it is to be hoped.” I suppose that if those same noted writers had chosen to endanger others by driving on the wrong side of the road, Liberman would praise them for their “enlightened” approach to driving.

Geoff Nunberg also defends “hopefully” in “The Word ‘Hopefully’ Is Here to Stay, Hopefully,” which appears at npr.org. Nunberg (or the headline writer) may be right in saying that “hopefully” is here to stay. But that does not excuse the widespread use of the word in ways that are imprecise and meaningless.

The crux of Nunberg’s defense is that “hopefully” conveys a nuance that “language snobs” (like me) are unable to grasp:

Some critics object that [“hopefully” is] a free-floating modifier (a Flying Dutchman adverb, James Kilpatrick called it) that isn’t attached to the verb of the sentence but rather describes the speaker’s attitude. But floating modifiers are mother’s milk to English grammar — nobody objects to using “sadly,” “mercifully,” “thankfully” or “frankly” in exactly the same way.

Or people complain that “hopefully” doesn’t specifically indicate who’s doing the hoping. But neither does “It is to be hoped that,” which is the phrase that critics like Wilson Follett offer as a “natural” substitute. That’s what usage fetishism can drive you to — you cross out an adverb and replace it with a six-word impersonal passive construction, and you tell yourself you’ve improved your writing.

But the real problem with these objections is their tone-deafness. People get so worked up about the word that they can’t hear what it’s really saying. The fact is that “I hope that” doesn’t mean the same thing that “hopefully” does. The first just expresses a desire; the second makes a hopeful prediction. I’m comfortable saying, “I hope I survive to 105” — it isn’t likely, but hey, you never know. But it would be pushing my luck to say, “Hopefully, I’ll survive to 105,” since that suggests it might actually be in the cards.

Floating modifiers may be common in English, but that does not excuse them. Given Nunberg’s evident attachment to them, I am unsurprised by his assertion that “nobody objects to using ‘sadly,’ ‘mercifully,’ ‘thankfully’ or ‘frankly’ in exactly the same way.”

Nobody, Mr. Nunberg? Hardly. Anyone who cares about clarity and precision in the expression of ideas will object to such usages. A good editor would rewrite any sentence that begins with a free-floating modifier — no matter which one of them it is.

Nunberg’s defense against such rewriting is that Wilson Follett offers “It is to be hoped that” as a cumbersome, wordy substitute for “hopefully.” I assume that Nunberg refers to Follett’s discussion of “hopefully” in Modern American Usage. If so, Nunberg once again proves himself an adherent of imprecision, for this is what Follett actually says about “hopefully”:

The German language is blessed with an adverb, hoffentlich, that affirms the desirability of an occurrence that may or may not come to pass. It is generally to be translated by some such periphrasis as it is to be hoped that; but hack translators and persons more at home in German than in English persistently render it as hopefully. Now, hopefully and hopeful can indeed apply to either persons or affairs. A man in difficulty is hopeful of the outcome, or a situation looks hopeful; we face the future hopefully, or events develop hopefully. What hopefully refuses to convey in idiomatic English is the desirability of the hoped-for event. College, we read, is a place for the development of habits of inquiry, the acquisition of knowledge and, hopefully, the establishment of foundations of wisdom. Such a hopefully is un-English and eccentric; it is to be hoped is the natural way to express what is meant. The underlying mentality is the same—and, hopefully, the prescription for cure is the same (let us hope) / With its enlarged circulation–and hopefully also increased readership–[a periodical] will seek to … (we hope) / Party leaders had looked confidently to Senator L. to win . . . by a wide margin and thus, hopefully, to lead the way to victory for. . . the Presidential ticket (they hoped) / Unfortunately–or hopefully, as you prefer it–it is none too soon to formulate the problems as swiftly as we can foresee them. In the last example, hopefully needs replacing by one of the true antonyms of unfortunately–e.g. providentially.

The special badness of hopefully is not alone that it strains the sense of -ly to the breaking point, but that appeals to speakers and writers who do not think about what they are saying and pick up VOGUE WORDS [another entry in Modern American Usage] by reflex action. This peculiar charm of hopefully accounts for its tiresome frequency. How readily the rotten apple will corrupt the barrel is seen in the similar use of transferred meaning in other adverbs denoting an attitude of mind. For example: Sorrowfully (regrettably), the officials charged with wording such propositions for ballot presentation don’t say it that way / the “suicide needle” which–thankfully–he didn’t see fit to use (we are thankful to say). Adverbs so used lack point of view; they fail to tell us who does the hoping, the sorrowing, or the being thankful. Writers who feel the insistent need of an English equivalent for hoffentlich might try to popularize hopingly, but must attach it to a subject capable of hoping (op. cit., pp. 178-9).

Follett, contrary to Nunberg’s assertion, does not offer “It is to be hoped that” as a substitute for “hopefully,” which would “cross out an adverb and replace it with a six-word impersonal passive construction.” Follett gives “it is to be hoped that” as the sense of “hopefully.” But, as the preceding quotation attests, Follett is able to replace “hopefully” (where it is misused) with a few short words that take no longer to write or say than “hopefully,” and which convey the writer’s or speaker’s intended meaning more clearly. And if it does take a few extra words to say something clearly, why begrudge those words?

What about the other floating modifiers — such as “sadly,” “mercifully,” “thankfully,” and “frankly” — which Nunberg defends with much passion and no logic? Follett addresses those others in the second paragraph quoted above, but he does not dispose of them properly. For example, I would not simply substitute “regrettably” for “sorrowfully”; neither is adequate. What is wanted is something like this: “The officials who write propositions for ballots should not have said … , which is misleading (vague/ambiguous).” More words? Yes, but so what? (See above.)

In any event, a writer or speaker who is serious about expressing himself clearly to an audience will never say things like “Sadly (regrettably), the old man died,” when he means either “I am (we are/they are/everyone who knew him is) saddened by (regrets) the old man’s dying,” or (less probably) “The old man grew sad as he died” or “The old man regretted dying.” I leave “mercifully,” “thankfully,” “frankly,” and the rest of the overused “-ly” words as an exercise for the reader.

The aims of a writer or speaker ought to be clarity and precision, not a stubborn, pseudo-logical insistence on using a word or phrase merely because it is in vogue or (more likely) because it irritates so-called language snobs. I doubt that even the pseudo-logical “language slobs” of Nunberg’s ilk condone “like” and “you know” as interjections. But, by Nunberg’s “logic,” those interjections should be condoned — nay, encouraged — because “everyone” knows what someone who uses them is “really saying,” namely, “I am too stupid or lazy to express myself clearly and precisely.”

Literally

This is from Dana Coleman’s article “According to the Dictionary, ‘Literally’ Also Now Means ‘Figuratively’” (Salon, August 22, 2013):

Literally, of course, means something that is actually true: “Literally every pair of shoes I own was ruined when my apartment flooded.”

When we use words not in their normal literal meaning but in a way that makes a description more impressive or interesting, the correct word, of course, is “figuratively.”

But people increasingly use “literally” to give extreme emphasis to a statement that cannot be true, as in: “My head literally exploded when I read Merriam-Webster, among others, is now sanctioning the use of literally to mean just the opposite.”

Indeed, Ragan’s PR Daily reported last week that Webster, Macmillan Dictionary and Google have added this latter informal use of “literally” as part of the word’s official definition. The Cambridge Dictionary has also jumped on board….

Webster’s first definition of literally is, “in a literal sense or manner; actually.” Its second definition is, “in effect; virtually.” In addressing this seeming contradiction, its authors comment:

“Since some people take sense 2 to be the opposite of sense 1, it has been frequently criticized as a misuse. Instead, the use is pure hyperbole intended to gain emphasis, but it often appears in contexts where no additional emphasis is necessary.”…

The problem is that a lot of people use “literally” when they mean “figuratively” because they don’t know better. It’s literally* incomprehensible to me that the editors of dictionaries would suborn linguistic anarchy. Hopefully,** they’ll rethink their rashness.
_________
* “Literally” is used correctly, though it’s superfluous here.
** “Hopefully” is used incorrectly, but in the spirit of the times.

Punctuate Properly

I can’t compete with Lynne Truss’s Eats, Shoots & Leaves: The Zero-Tolerance Approach to Punctuation (discussed in Part Three), so I won’t try. Just read it and heed it.

But I must address the use of the hyphen in compound adjectives, and the serial comma.

Regarding the hyphen, David Bernstein of The Volokh Conspiracy writes:

I frequently have disputes with law review editors over the use of dashes. Unlike co-conspirator Eugene, I’m not a grammatical expert, or even someone who has much of an interest in the subject.

But I do feel strongly that I shouldn’t use a dash between words that constitute a phrase, as in “hired gun problem”, “forensic science system”, or “toxic tort litigation.” Law review editors generally seem to want to change these to “hired-gun problem”, “forensic-science system”, and “toxic-tort litigation.” My view is that “hired” doesn’t modify “gun”; rather “hired gun” is a self-contained phrase. The same with “forensic science” and “toxic tort.”

Most of the commenters are right to advise Bernstein that the “dashes” — he means hyphens — are necessary. Why? To avoid confusion as to what is modifying the noun “problem.”

In “hired gun,” for example, “hired” (adjective) modifies “gun” (noun, meaning “gunslinger” or the like). But in “hired-gun problem,” “hired-gun” is a compound adjective which requires both of its parts to modify “problem.” It’s not a “hired problem” or a “gun problem,” it’s a “hired-gun problem.” The function of the hyphen is to indicate that “hired” and “gun,” taken separately, are meaningless as modifiers of “problem,” that is, to ensure that the meaning of the adjective-noun phrase is not misread.

A hyphen isn’t always strictly necessary in such instances, but using one consistently avoids confusion and the possibility of misinterpretation.

The consistent use of the hyphen to form a compound adjective has a counterpart in the consistent use of the serial comma, which is the comma that precedes the last item in a list of three or more items (e.g., the red, white, and blue). Newspapers (among other sinners) eschew the serial comma for reasons too arcane to pursue here. Thoughtful counselors advise its use. (See, for example, Follett at pp. 422-3.) Why? Because the serial comma, like the hyphen in a compound adjective, averts ambiguity. It isn’t always necessary, but if it is used consistently, ambiguity can be avoided. (Here’s a great example, from the Wikipedia article linked to in the first sentence of this paragraph: “To my parents, Ayn Rand and God.” The writer means, of course, “To my parents, Ayn Rand, and God.”)

A little punctuation goes a long way.

Stand Fast against Political Correctness

As a result of political correctness, some words and phrases have gone out of favor, needlessly. Others are cluttering the language, needlessly. Political correctness manifests itself in euphemisms, verboten words, and what I call gender preciousness.

Euphemisms

These are much favored by persons of the left, who seem to have an aversion to reality. Thus, for example:

  • “Crippled” became “handicapped,” which became “disabled” and then “differently abled” or “something-challenged.”
  • “Stupid” became “learning disabled,” which became “special needs” (a euphemistic category that houses more than the stupid).
  • “Poor” became “underprivileged,” which became “economically disadvantaged,” which became “entitled” (to other people’s money), in fact if not in word.
  • Colored persons became Negroes, who became blacks, then African-Americans, and now (often) persons of color.

How these linguistic contortions have helped the crippled, stupid, poor, and colored is a mystery to me. Tact is admirable, but euphemisms aren’t tactful. They’re insulting because they’re condescending.

Verboten Words

The list is long; see this and this, for example. Words become verboten for the same reason that euphemisms arise: to avoid giving offense, even where offense wouldn’t or shouldn’t be taken.

David Bernstein, writing at TCS Daily several years ago, recounted some tales about political correctness. This one struck close to home:

One especially merit-less [hostile work environment] claim that led to a six-figure verdict involved Allen Fruge, a white Department of Energy employee based in Texas. Fruge unwittingly spawned a harassment suit when he followed up a southeast Texas training session with a bit of self-deprecating humor. He sent several of his colleagues who had attended the session with him gag certificates anointing each of them as an honorary Coon Ass — usually spelled coonass — a mildly derogatory slang term for a Cajun. The certificate stated that [y]ou are to sing, dance, and tell jokes and eat boudin, cracklins, gumbo, crawfish etouffe and just about anything else. The joke stemmed from the fact that southeast Texas, the training session location, has a large Cajun population, including Fruge himself.

An African American recipient of the certificate, Sherry Reid, chief of the Nuclear and Fossil Branch of the DOE in Washington, D.C., apparently missed the joke and complained to her supervisors that Fruge had called her a coon. Fruge sent Reid a formal (and humble) letter of apology for the inadvertent offense, and explained what Coon Ass actually meant. Reid nevertheless remained convinced that Coon Ass was a racial pejorative, and demanded that Fruge be fired. DOE supervisors declined to fire Fruge, but they did send him to diversity training. They also reminded Reid that the certificate had been meant as a joke, that Fruge had meant no offense, that Coon Ass was slang for Cajun, and that Fruge sent the certificates to people of various races and ethnicities, so he clearly was not targeting African Americans. Reid nevertheless sued the DOE, claiming that she had been subjected to a racial epithet that had created a hostile environment, a situation made worse by the DOE’s failure to fire Fruge.

Reid’s case was seemingly frivolous. The linguistics expert her attorney hired was unable to present evidence that Coon Ass meant anything but Cajun, or that the phrase had racist origins, and Reid presented no evidence that Fruge had any discriminatory intent when he sent the certificate to her. Moreover, even if Coon Ass had been a racial epithet, a single instance of being given a joke certificate, even one containing a racial epithet, by a non-supervisory colleague who works 1,200 miles away does not seem to remotely satisfy the legal requirement that harassment must be severe and pervasive for it to create hostile environment liability. Nevertheless, a federal district court allowed the case to go to trial, and the jury awarded Reid $120,000, plus another $100,000 in attorneys’ fees. The DOE settled the case before its appeal could be heard for a sum very close to the jury award.

I had a similar though less costly experience some years ago, when I was chief financial and administrative officer of a defense think-tank. In the course of discussing the company’s budget during a meeting with employees from across the company, I uttered “niggardly” (meaning stingy or penny-pinching). The next day a fellow vice president informed me that some of the black employees from her division had been offended by “niggardly.” I suggested that she give her employees remedial training in English vocabulary. That should have been the verdict in the Reid case.

Gender Preciousness

It has become fashionable for academicians and pseudo-serious writers to use “she” where “he” long served as the generic (and sexless) reference to a singular third person. Here is an especially grating passage from an article by Oliver Cussen:

What is a historian of ideas to do? A pessimist would say she is faced with two options. She could continue to research the Enlightenment on its own terms, and wait for those who fight over its legacy—who are somehow confident in their definitions of what “it” was—to take notice. Or, as [Jonathan] Israel has done, she could pick a side, and mobilise an immense archive for the cause of liberal modernity or for the cause of its enemies. In other words, she could join Moses Herzog, with his letters that never get read and his questions that never get answered, or she could join Sandor Himmelstein and the loud, ignorant bastards (“The Trouble with the Enlightenment,” Prospect, May 5, 2013).

I don’t know about you, but I’m distracted by the use of the generic “she,” especially by a male. First, it’s not the norm (or wasn’t the norm until the thought police made it so). Thus my first reaction to reading it in place of “he” is to wonder who this “she” is; whereas the function of “he” as a stand-in for anyone (regardless of gender) was always well understood. Second, the usage is so obviously meant to mark the writer as “sensitive” and “right thinking” that it calls into question his sincerity and objectivity.

I could go on about the use of “he or she” in place of “he” or “she.” But it should be enough to call it what it is: verbal clutter.

Then there is “man,” which for ages was well understood (in the proper context) as referring to persons in general, not to male persons in particular. (“Mankind” merely adds a superfluous syllable.)

The short, serviceable “man” has been replaced, for the most part, by “humankind.” I am baffled by the need to replace one syllable with three. I am baffled further by the persistence of “man” — a sexist term — in the three-syllable substitute. But it gets worse when writers strain to avoid the solo use of “man” by resorting to “human beings” and the “human species.” These are longer than “humankind,” and both retain the accursed “man.”

Don’t Split Infinitives

Just don’t do it, regardless of the pleadings of descriptivists. Even Follett counsels the splitting of infinitives, when the occasion demands it. I part ways with Follett in this matter, and stand ready to be rebuked for it.

Consider the case of Eugene Volokh, a known grammatical relativist, who scoffs at “to increase dramatically” — as if “to dramatically increase” would be better. The meaning of “to increase dramatically” is clear. The only reason to write “to dramatically increase” would be to avoid the appearance of stuffiness; that is, to pander to the least cultivated of one’s readers.

Seeming unstuffy (i.e., without standards) is neither a necessary nor a sufficient reason to split an infinitive. The rule about not splitting infinitives, like most other grammatical rules, serves the valid and useful purpose of keeping English from sliding even further down the slippery slope of incomprehensibility than it already has.

If an unsplit infinitive makes a clause or sentence seem awkward, the clause or sentence should be recast to avoid the awkwardness. Better that than make an exception that leads to further exceptions — and thence to Babel.

A Dictionary of Modern English Usage (a.k.a. Fowler’s Modern English Usage) counsels splitting an infinitive where recasting doesn’t seem to work:

We admit that separation of to from its infinitive is not in itself desirable, and we shall not gratuitously say either ‘to mortally wound’ or ‘to mortally be wounded’…. We maintain, however, that a real [split infinitive], though not desirable in itself, is preferable to either of two things, to real ambiguity, and to patent artificiality…. We will split infinitives sooner than be ambiguous or artificial; more than that, we will freely admit that sufficient recasting will get rid of any [split infinitive] without involving either of those faults, and yet reserve to ourselves the right of deciding in each case whether recasting is worth while. Let us take an example: ‘In these circumstances, the Commission … has been feeling its way to modifications intended to better equip successful candidates for careers in India and at the same time to meet reasonable Indian demands.’… What then of recasting? ‘intended to make successful candidates fitter for’ is the best we can do if the exact sense is to be kept… (p. 581, Second Edition).

Good try, but not good enough. This would do: “In these circumstances, the Commission … has been considering modifications that would better equip successful candidates for careers in India and at the same time meet reasonable Indian demands.”

Enough said? I think so.

*     *     *

Some readers may conclude that I prefer stodginess to liveliness. That’s not true, as any discerning reader of this blog will know. I love new words and new ways of using words, and I try to engage readers while informing and persuading them. But I do those things within the expansive boundaries of prescriptive grammar and usage. Those boundaries will change with time, as they have in the past. But they should change only when change serves understanding, not when it serves the whims of illiterates and language anarchists.

Signature

On Writing: Part Three

Part One gives excerpts of W. Somerset Maugham’s candid insights about the craft of writing. Part Two gives my advice to writers of non-fiction works. This part recommends some writings about writing, some writers to emulate, and a short list of reference works. Part Four will deliver some sermonettes about practices to follow if you wish to be taken seriously and not thought of as a semi-literate, self-indulgent, faddish dilettante.

WRITINGS ABOUT WRITING

See Part One for excerpts of Maugham’s memoir, The Summing Up. Follow the link to order a copy of the book. It’s personal, candid, and insightful. And it bears re-reading at intervals because it’s so densely packed with wisdom.

Read Steven Pinker’s essay, “Why Academic Writing Stinks” (The Chronicle Review, September 26, 2014). You may not be an academic, but I’ll bet that you sometimes lapse into academese. (I know that I sometimes do.) Pinker’s essay will help you to recognize academese, and to understand why it’s to be avoided.

Pinker’s essay also appears in a booklet, “Why Academics Stink at Writing–and How to Fix It,” which is available here in exchange for your name, your job title, the name of your organization, and your e-mail address. (Whether you wish to give true information is up to you.) Of the four essays that follow Pinker’s, I prefer the one by Michael Munger.

Beyond that, pick and choose by searching on “writers on writing.” Google gave me 193,000 hits. Hidden among the dross, I found this, which led me to this gem: “George Orwell on Writing, How to Counter the Mindless Momentum of Language, and the Four Questions a Great Writer Must Ask Herself.” (“Herself”? I’ll deliver a sermonette about gender in Part Four.)

Those of you who know (or know of) The Elements of Style may wonder why I haven’t mentioned E.B. White. I’m saving him for the next two sections.

WRITERS TO EMULATE

Study Maugham’s The Summing Up for its straightforward style. Consider these opening sentences of a paragraph, for example:

Another cause of obscurity is that the writer is himself not quite sure of his meaning. He has a vague impression of what he wants to say, but has not, either from lack of mental power or from laziness, exactly formulated it in his mind and it is natural enough that he should not find a precise expression for a confused idea. This is due largely to the fact that many writers think, not before, but as they write. The pen originates the thought.

This is a classic example of good writing. The first sentence states the topic of the paragraph. The following sentences elaborate it. Each sentence is just long enough to convey a single, complete thought. Because of that, even the rather long second sentence should be readily understood by a high-school graduate (a graduate of a small-city high school in the 1950s, at least).

I offer the great mathematician G.H. Hardy as a second exemplar. In particular, I recommend Hardy’s A Mathematician’s Apology. (It’s an apology in the sense of “a formal written defense of something you believe in strongly,” where the something is the pursuit of pure mathematics.) The introduction by C.P. Snow is better than Hardy’s long essay; but then, Snow was a published novelist as well as a trained scientist, and Hardy’s publications, other than the essay, are mathematical. The essay is notable for its accessibility, even to non-mathematicians. Of its 90 pages, only 23 (clustered near the middle) require a reader to cope with mathematics, but it’s mathematics that shouldn’t daunt a person who has taken and passed high-school algebra.

Hardy’s prose is flawed, to be sure. He overuses shudder quotes, and occasionally gets tangled in a too-long sentence. But I’m taken by his exposition of the art of doing higher mathematics, and the beauty of doing it well. Hardy, in other words, sets an example to be followed by writers who wish to capture the essence of a technical subject and convey that essence to intelligent laymen.

Here are some samples:

There are many highly respectable motives which may lead men to prosecute research, but three which are much more important than the rest. The first (without which the rest must come to nothing) is intellectual curiosity, desire to know the truth. Then, professional pride, anxiety to be satisfied with one’s performance, the shame that overcomes any self-respecting craftsman when his work is unworthy of his talent. Finally, ambition, desire for reputation, and the position, even the power or the money, which it brings. It may be fine to feel, when you have done your work, that you have added to the happiness or alleviated the sufferings of others, but that will not be why you did it. So if a mathematician, or a chemist, or even a physiologist, were to tell me that the driving force in his work had been the desire to benefit humanity, then I should not believe him (nor should I think any better of him if I did). His dominant motives have been those which I have stated and in which, surely, there is nothing of which any decent man need be ashamed.

*     *     *

A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas. A painter makes patterns with shapes and colors, a poet with words. A painting may embody an ‘idea’, but the idea is usually commonplace and unimportant. In poetry, ideas count for a good deal more; but, as Housman insisted, the importance of ideas in poetry is habitually exaggerated…

…A mathematician, on the other hand, has no material to work with but ideas, and his patterns are likely to last longer, since ideas wear less with time than words.

A third exemplar is E.B. White, a successful writer of fiction who is probably best known for The Elements of Style. (It’s usually called “Strunk & White” or “the little book.”) It’s an outgrowth of a slimmer volume of the same name by William Strunk Jr. (Strunk had been dead for 13 years when White produced the first edition of Strunk & White.)

I’ll address the little book’s authoritativeness in the next section. Here, I’ll highlight White’s style of writing. This is from the introduction to the third edition (the last one edited by White):

The Elements of Style, when I re-examined it in 1957, seemed to me to contain rich deposits of gold. It was Will Strunk’s parvum opus, his attempt to cut the vast tangle of English rhetoric down to size and write its rules and principles on the head of a pin. Will himself had hung the tag “little” on the book; he referred to it sardonically and with secret pride as “the little book,” always giving the word “little” a special twist, as though he were putting a spin on a ball. In its original form, it was a forty-three-page summation of the case for cleanliness, accuracy, and brevity in the use of English.

Vivid, direct, and engaging. And the whole book reads like that.

REFERENCE WORKS

If you could have only one book to help you write better, it would be The Elements of Style. (There’s now a fourth edition, for which I can’t vouch, but which seems to cover the same ground as my trusty third edition.) Admittedly, Strunk & White has a vociferous critic, one Geoffrey K. Pullum. But Pullum documents only one substantive flaw: an apparent mischaracterization of what constitutes the passive voice. What Pullum doesn’t say is that the book correctly flays the kind of writing that it calls passive (correctly or not). Further, Pullum derides the book’s many banal headings, while ignoring what follows them: sound advice, backed by concrete examples. (There’s a nice rebuttal of Pullum here.) It’s evident that the book’s real sin — in Pullum’s view — is “bossiness” (prescriptivism), which is no sin at all, as I’ll explain in Part Four.

There are so many good writing tips in Strunk & White that it was hard for me to choose a sample. I randomly chose “Omit Needless Words” (one of the headings derided by Pullum), which opens with a statement of principles:

Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all of his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell.

That would be empty rhetoric, were it not followed by further discussion and 17 specific examples. Here are a few:

the question as to whether should be replaced by whether or the question whether

the reason why is that should be replaced by because

I was unaware of the fact that should be replaced by I was unaware that or I did not know that

His brother, who is a member of the same firm should be replaced by His brother, a member of the same firm

There’s much more than that to Strunk & White, of course. (Go here to see the table of contents.) You’ll become a better writer — perhaps an excellent one — if you carefully read Strunk & White, re-read it occasionally, and apply the principles that it espouses and illustrates.

After Strunk & White, my favorite instructional work is Lynne Truss’s Eats, Shoots & Leaves: The Zero-Tolerance Approach to Punctuation. I vouch for the accuracy of this description of the book (Publishers Weekly via Amazon.com):

Who would have thought a book about punctuation could cause such a sensation? Certainly not its modest if indignant author, who began her surprise hit motivated by “horror” and “despair” at the current state of British usage: ungrammatical signs (“BOB,S PETS”), headlines (“DEAD SONS PHOTOS MAY BE RELEASED”) and band names (“Hear’Say”) drove journalist and novelist Truss absolutely batty. But this spirited and wittily instructional little volume, which was a U.K. #1 bestseller, is not a grammar book, Truss insists; like a self-help volume, it “gives you permission to love punctuation.” Her approach falls between the descriptive and prescriptive schools of grammar study, but is closer, perhaps, to the latter. (A self-professed “stickler,” Truss recommends that anyone putting an apostrophe in a possessive “its,” as in “the dog chewed it’s bone,” should be struck by lightning and chopped to bits.) Employing a chatty tone that ranges from pleasant rant to gentle lecture to bemused dismay, Truss dissects common errors that grammar mavens have long deplored (often, as she readily points out, in isolation) and makes elegant arguments for increased attention to punctuation correctness: “without it there is no reliable way of communicating meaning.” Interspersing her lessons with bits of history (the apostrophe dates from the 16th century; the first semicolon appeared in 1494) and plenty of wit, Truss serves up a delightful, unabashedly strict and sometimes snobby little book, with cheery Britishisms (“Lawks-a-mussy!”) dotting pages that express a more international righteous indignation.

Next up is Wilson Follett’s Modern American Usage. The link points to a newer edition than the one that I’ve relied on for more than 40 years. Reviews of the newer edition, edited by one Erik Wensberg, are mixed but generally favorable. However, the newer edition seems to lack Follett’s “Introductory,” which is divided into “Usage, Purism, and Pedantry” and “The Need of an Orderly Mind.” If that is so, the newer edition is likely to be less uncompromising toward language relativists like Geoffrey Pullum. The following quotations from Follett’s “Introductory” (one from each section) will give you an idea of Follett’s stand on relativism:

[F]atalism about language cannot be the philosophy of those who care about language; it is the illogical philosophy of their opponents. Surely the notion that, because usage is ultimately what everybody does to words, nobody can or should do anything about them is self-contradictory. Somebody, by definition, does something, and this something is best done by those with convictions and a stake in the outcome, whether the stake of private pleasure or of professional duty or both does not matter. Resistance always begins with individuals.

*     *     *

A great deal of our language is so automatic that even the thoughtful never think about it, and this mere not-thinking is the gate through which solecisms or inferior locutions slip in. Some part, greater or smaller, of every thousand words is inevitably parroted, even by the least parrotlike.

(A reprint of the original edition is available here.)

I have one more book to recommend: The Chicago Manual of Style. Though the book is a must-have for editors, serious writers should also own a copy and consult it often. If you’re unfamiliar with the book, you can get an idea of its vast range and depth of coverage by following the preceding link, clicking on “Look inside,” and perusing the table of contents, first pages, and index.

Every writer should have a good dictionary and thesaurus at hand. I use The Free Dictionary, and am seldom disappointed by it. These also look promising: Dictionary.com and Merriam-Webster. I suggest, you decide (or offer alternatives).

Signature

On Writing: Part Two

In Part One of this series, I sampled the insights of W. Somerset Maugham (English, 1874-1965), a prolific and popular playwright, novelist, short-story writer, and author of non-fiction works. I chose to begin with Maugham — in particular, with excerpts of his memoir, The Summing Up — because of his unquestioned success as a writer and his candid assessment of writers, himself included.

Maugham’s advice to “write lucidly, simply, euphoniously and yet with liveliness” is well-supported by examples and analysis. But Maugham focuses on literary fiction and does not delve into the mechanics of non-fiction writing. Thus this post, which distills lessons learned in my 51 years as a writer, critic, and publisher of non-fiction material, much of it technical.

THE FIRST DRAFT

1. Decide — before you begin to write — on your main point and your purpose for making it.

Can you state your main point in a sentence? If you can’t, you’re not ready to write, unless writing is (for you) a form of therapy or catharsis. If it is, record your thoughts in a private journal and spare the serious readers of the world.

Your purpose may be descriptive, explanatory, or persuasive. An economist may, for example, begin an article by describing the state of the economy, as measured by Gross Domestic Product (GDP). He may then explain that the rate of growth in GDP has receded since the end of World War II, because of greater government spending and the cumulative effect of regulatory activity. He is then poised to make a case for less spending and for the cancellation of regulations that impede economic growth.

2. Avoid wandering from your main point and purpose; use an outline.

You can get by with a bare outline, unless you’re writing a book, a manual, or a long article. Fill the outline as you go. Change the outline if you see that you’ve omitted a step or put some steps in the wrong order. But always work to an outline, however sketchy and malleable it may be.

3.  Start by writing an introductory paragraph that summarizes your “story line.”

The introductory paragraph in a news story is known as  “the lead” or “the lede” (a spelling that’s meant to convey the correct pronunciation). A classic lead gives the reader the who, what, why, when, where, and how of the story. As noted in Wikipedia, leads aren’t just for journalists:

Leads in essays summarize the outline of the argument and conclusion that follows in the main body of the essay. Encyclopedia leads tend to define the subject matter as well as emphasize the interesting points of the article. Features and general articles in magazines tend to be somewhere between journalistic and encyclopedian in style and often lack a distinct lead paragraph entirely. Leads or introductions in books vary enormously in length, intent and content.

Think of the lead as a target toward which you aim your writing. You should begin your first draft with a lead, even if you later decide to eliminate or radically prune the lead.

4. Lay out a straight path for the reader.

You needn’t fill your outline sequentially, but the outline should trace a linear progression from statement of purpose to conclusion or call for action. Flashbacks and detours can be effective literary devices in the hands of a skilled writer of fiction. But you’re not writing fiction, let alone mystery fiction. So just proceed in a straight line, from beginning to end.

Quips, asides, and anecdotes should be used sparingly, and only if they reinforce your message and don’t distract the reader’s attention from it.

5. Know your audience, and write for it.

I aim at readers who can grasp complex concepts and detailed arguments. But if you’re writing something like a policy manual for employees at all levels of your company, you’ll want to keep it simple and well-marked: short words, short sentences, short paragraphs, numbered sections and sub-sections, and so on.

6. Facts are your friends — unless you’re trying to sell a lie, of course.

Unsupported generalities will defeat your purpose, unless you’re writing for a gullible, uneducated audience. Give concrete examples and cite authoritative references. If your work is technical, show your data and calculations, even if you must put the details in footnotes or appendices to avoid interrupting the flow of your argument. Supplement your words with tables and graphs, if possible, but make them as simple as you can without distorting the underlying facts.

7. Momentum is your best friend.

Write a first draft quickly, even if you must leave holes to be filled later. I’ve always found it easier to polish a rough draft that spans the entire outline than to work from a well-honed but unaccompanied introductory section.

FROM FIRST DRAFT TO FINAL VERSION

8. Your first draft is only that — a draft.

Unless you’re a prodigy, you’ll have to do some polishing (probably a lot) before you have something that a reader can follow with ease.

9. Where to begin? Stand back and look at the big picture.

Is your “story line” clear? Are your points logically connected? Have you omitted key steps or important facts? If you find problems, fix them before you start nit-picking your grammar, syntax, and usage.

10. Nit-picking is important.

Errors of grammar, syntax, and usage can (and probably will) undermine your credibility. Thus, for example, subject and verb must agree (“he says” not “he say”); number must be handled correctly (“there are two” not “there is two”); tense must make sense (“the shirt shrank” not “the shirt shrunk”); usage must be correct (“its” is the possessive pronoun, “it’s” is the contraction for “it is”).

11. Critics are necessary, even if not mandatory.

Unless you’re a first-rate editor and objective self-critic, steps 9 and 10 should be handed off to another person or persons — even if you’re an independent writer without a boss or editor to look over your shoulder. If your work must be reviewed by a boss or editor, count yourself lucky. Your boss is responsible for the quality of your work; he therefore has a good reason to make it better. If your editor isn’t qualified to do substantive editing (step 9), he can at least nit-pick with authority (step 10).

12. Accept criticism gratefully and graciously.

Bad writers don’t, which is why they remain bad writers. Yes, you should reject (or fight against) changes and suggestions if they are clearly wrong, and if you can show that they’re wrong. But if your critic tells you that your logic is muddled, your facts are inapt, and your writing stinks (in so many words), chances are that your critic is right. And you’ll know that your critic is dead right if your defense (perhaps unvoiced) is “That’s just my style of writing.”

13. What if you’re an independent writer and have no one to turn to?

Be your own worst critic. Let your first draft sit for a day or two before you return to it. Then look at it as if you’d never seen it before, as if someone else had written it. Ask yourself if it makes sense, if every key point is well-supported, and if key points are missing. Look for glaring errors in grammar, syntax, and usage. (I’ll list some useful reference works in Part Three.) If you can’t find any problems, you shouldn’t be a self-critic — and you’re probably a terrible writer.

14. How many times should you revise your work before it’s published?

That depends, of course, on the presence or absence of a deadline. The deadline may be a formal one, geared to a production schedule. Or it may be an informal but real one, driven by current events (e.g., the need to assess a new economics text while it’s in the news). But even without a deadline, two revisions of a rough draft should be enough. A piece that’s rewritten several times can lose its (possessive pronoun) edge. And unless you’re a one-work wonder, or an amateur with time to spare, every rewrite represents a forgone opportunity to begin a new work.

*     *     *

If you act on this advice you’ll become a better writer. But be patient with yourself. Improvement takes time, and perfection never arrives.

I welcome your comments, structural or nit-picking as they may be.

Signature

On Writing: Part One

Lynn Patra offers some writing tips in her post “On Becoming a Better Writer,” and invites readers to add their tips in the comments section. This post will appear while I’m taking a break from the keyboard, so I won’t be able to place it in the comments section of Lynn’s post. Consider this a virtual comment.

This is the first of Lynn’s tips:

Voraciously read the work of great writers and allow yourself to be guided by whatever subjects interest you. If you love to read, this is the most enjoyable and engaging way to learn how to write well. With continuous exposure to good writing, your mind will absorb the various lessons that school teachers tried to impart the boring way.

That’s excellent advice. A related bit of advice is to heed what great writers have to say about writing.

W. Somerset Maugham (English, 1874-1965) was a prolific and popular playwright, novelist, short-story writer, and author of non-fiction works. He reflected on his life and career as a writer in The Summing Up. It appeared in 1938, when Maugham was 64 years old and more than 40 years into his very long career. I first read The Summing Up about 40 years ago, and immediately became an admirer of Maugham’s candor and insight. This led me to become an avid reader of Maugham’s novels and short-story collections. And I have continued to consult The Summing Up for booster shots of Maugham’s wisdom.

I offer the following excerpts of the early pages of The Summing Up, where Maugham discusses the craft of writing:

I have never had much patience with the writers who claim from the reader an effort to understand their meaning…. There are two sorts of obscurity that you find in writers. One is due to negligence and the other to wilfulness. People often write obscurely because they have never taken the trouble to learn to write clearly. This sort of obscurity you find too often in modern philosophers, in men of science, and even in literary critics. Here it is indeed strange. You would have thought that men who passed their lives in the study of the great masters of literature would be sufficiently sensitive to the beauty of language to write if not beautifully at least with perspicuity. Yet you will find in their works sentence after sentence that you must read twice to discover the sense. Often you can only guess at it, for the writers have evidently not said what they intended.

Another cause of obscurity is that the writer is himself not quite sure of his meaning. He has a vague impression of what he wants to say, but has not, either from lack of mental power or from laziness, exactly formulated it in his mind and it is natural enough that he should not find a precise expression for a confused idea. This is due largely to the fact that many writers think, not before, but as they write. The pen originates the thought. …. From this there is only a little way to go to fall into the habit of setting down one’s impressions in all their original vagueness. Fools can always be found to discover a hidden sense in them….

Simplicity is not such an obvious merit as lucidity. I have aimed at it because I have no gift for richness. Within limits I admire richness in others, though I find it difficult to digest in quantity. I can read one page of Ruskin with delight, but twenty only with weariness. The rolling period, the stately epithet, the noun rich in poetic associations, the subordinate clauses that give the sentence weight and magnificence, the grandeur like that of wave following wave in the open sea; there is no doubt that in all this there is something inspiring. Words thus strung together fall on the ear like music. The appeal is sensuous rather than intellectual, and the beauty of the sound leads you easily to conclude that you need not bother about the meaning. But words are tyrannical things, they exist for their meanings, and if you will not pay attention to these, you cannot pay attention at all. Your mind wanders…..

But if richness needs gifts with which everyone is not endowed, simplicity by no means comes by nature. To achieve it needs rigid discipline…. To my mind King James’s Bible has been a very harmful influence on English prose. I am not so stupid as to deny its great beauty, and it is obvious that there are passages in it of a simplicity which is deeply moving. But the Bible is an oriental book. Its alien imagery has nothing to do with us. Those hyperboles, those luscious metaphors, are foreign to our genius…. The plain, honest English speech was overwhelmed with ornament. Blunt Englishmen twisted their tongues to speak like Hebrew prophets. There was evidently something in the English temper to which this was congenial, perhaps a native lack of precision in thought, perhaps a naive delight in fine words for their own sake, an innate eccentricity and love of embroidery, I do not know; but the fact remains that ever since, English prose has had to struggle against the tendency to luxuriance…. It is obvious that the grand style is more striking than the plain. Indeed many people think that a style that does not attract notice is not style…. But I suppose that if a man has a confused mind he will write in a confused way, if his temper is capricious his prose will be fantastical, and if he has a quick, darting intelligence that is reminded by the matter in hand of a hundred things he will, unless he has great self-control, load his pages with metaphor and simile….

Whether you ascribe importance to euphony … must depend on the sensitiveness of your ear. A great many readers, and many admirable writers, are devoid of this quality. Poets as we know have always made a great use of alliteration. They are persuaded that the repetition of a sound gives an effect of beauty. I do not think it does so in prose. It seems to me that in prose alliteration should be used only for a special reason; when used by accident it falls on the ear very disagreeably. But its accidental use is so common that one can only suppose that the sound of it is not universally offensive. Many writers without distress will put two rhyming words together, join a monstrous long adjective to a monstrous long noun, or between the end of one word and the beginning of another have a conjunction of consonants that almost breaks your jaw. These are trivial and obvious instances. I mention them only to prove that if careful writers can do such things it is only because they have no ear. Words have weight, sound and appearance; it is only by considering these that you can write a sentence that is good to look at and good to listen to.

I have read many books on English prose, but have found it hard to profit by them; for the most part they are vague, unduly theoretical, and often scolding. But you cannot say this of Fowler’s Dictionary of Modern English Usage. It is a valuable work. I do not think anyone writes so well that he cannot learn much from it. It is lively reading. Fowler liked simplicity, straightforwardness and common sense. He had no patience with pretentiousness. He had a sound feeling that idiom was the backbone of a language and he was all for the racy phrase. He was no slavish admirer of logic and was willing enough to give usage right of way through the exact demesnes of grammar. English grammar is very difficult and few writers have avoided making mistakes in it….

But Fowler had no ear. He did not see that simplicity may sometimes make concessions to euphony. I do not think a far-fetched, an archaic or even an affected word is out of place when it sounds better than the blunt, obvious one or when it gives a sentence a better balance. But, I hasten to add, though I think you may without misgiving make this concession to pleasant sound, I think you should make none to what may obscure your meaning. Anything is better than not to write clearly. There is nothing to be said against lucidity, and against simplicity only the possibility of dryness. This is a risk that is well worth taking when you reflect how much better it is to be bald than to wear a curly wig. But there is in euphony a danger that must be considered. It is very likely to be monotonous…. I do not know how one can guard against this. I suppose the best chance is to have a more lively faculty of boredom than one’s readers so that one is wearied before they are. One must always be on the watch for mannerisms and when certain cadences come too easily to the pen ask oneself whether they have not become mechanical. It is very hard to discover the exact point where the idiom one has formed to express oneself has lost its tang….

If you could write lucidly, simply, euphoniously and yet with liveliness you would write perfectly: you would write like Voltaire. And yet we know how fatal the pursuit of liveliness may be: it may result in the tiresome acrobatics of Meredith. Macaulay and Carlyle were in their different ways arresting; but at the heavy cost of naturalness. Their flashy effects distract the mind. They destroy their persuasiveness; you would not believe a man was very intent on ploughing a furrow if he carried a hoop with him and jumped through it at every other step. A good style should show no sign of effort. What is written should seem a happy accident….

To Old Age

A mini-tempest in the blogosphere followed upon the recent appearance of Ezekiel Emanuel’s “Why I Hope to Die at 75” (The Atlantic, September 17, 2014). Emanuel, who is a “progressive” brother of the “progressive” Rahm Emanuel, was active in the shaping of Obamacare. For that reason, his pronouncement has been interpreted by some as a sinister policy suggestion, and derided by others as a foolish position. (See, for example, this, this, this, this, and this.)

Emanuel takes pains in his article to deny sinister intent. But some commentators aren’t buying Emanuel’s denial. Here’s Greg Scandlen, for example:

In [Emanuel’s] opinion, people older than 75 are annoying. They aren’t as productive as they used to be, don’t “contribute to work, society, the world.” They are more likely to be disabled, “a state that may not be worse than death but is nonetheless deprived.” Plus, they are a pain in the ass: “they set expectations, render judgments, impose their opinions, interfere, and are generally a looming presence for even adult children.”

Zeke admits there are exceptions to all of this. Why, he even once worked with an 80-year-old economist who was quite useful….

And that may be Zeke’s biggest flaw—he is arbitrary. There is nothing magical about the age of 75. Some people have rich lives long beyond that. Other people become senile well before. But like most Progressives, Zeke sees only cookie-cutter people. In his mind, we are all the same, just a pile of numbers and statistics….

… He says, “We [Americans] are growing old, and our older years are not of high quality.” What an idiotic statement. What is “high quality?” Is it okay with him if my life is not “high” quality but still “pretty good” quality? Is his standard of high quality the same as mine? Are there no younger people with “low-quality” lives?

But he is also naïve. He cites a study of aging, and says “[t]he results show that as people age, there is a progressive erosion of physical functioning.” Good grief. He needed a study to know that? Everyone has known that since the dawn of man.

He writes of his 87-year-old father, who had a heart attack about ten years ago. “Since then he has not been the same. Once the prototype of a hyperactive Emanuel, suddenly his walking, his talking, his humor got slower. Today he can swim, read the newspaper, needle his kids on the phone, and still live with my mother in their own house. But everything seems sluggish.” He also quotes his father as saying, “I have slowed down tremendously. That is a fact. I no longer make rounds at the hospital or teach.” Zeke adds, “Despite this, he also said he was happy.”

The man is 87 and has slowed down, but he is happy. Only Zeke Emanuel would see this as a problem. Apparently, in Zeke’s mind the failure to be hyperactive is worthy of death (“Zeke Emanuel Wants You to Die at 75,” The Federalist, September 23, 2014).

David Henderson adds this:

Scandlen puts his finger accurately, based on what I’ve seen of Emanuel in the past, on Emanuel’s attitude. The way I would sum it up is “Sometimes wrong; never in doubt.” The man (Emanuel) really does seem to think he knows how everyone should live.

It seems clear, for example, that Emanuel would like his father to die….

What I learned from Emanuel’s article is what a narrow view of the good life he has.

Now, this wouldn’t matter much if Emanuel were a random guy saying that he doesn’t want certain kinds of medical tests after age 75. But he’s not. Remember that he was one of the architects of ObamaCare. With his attitude about the importance of people over age 75, can we seriously think that he wouldn’t want to cut off certain health care services for people over age 75?

Emanuel says he’s not advocating any particular health policy based on his views….

“Let me be clear.” Hmmm. Where have we heard that before? Basically, I just don’t believe him….

… Emanuel has never come across as someone who simply wants to persuade people; he has always come across as a life arranger…. (“Zeke Emanuel on Optimal Life Expectancy,” EconLog, September 23, 2014).

A “life arranger.” Perfect description. (Here’s a takedown of Cass Sunstein, another life-arranger associated with Obama, about whose “libertarian” paternalism and other statist urgings I’ve written here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, and here.)

As an antidote to Emanuel, I offer W. Somerset Maugham (who lived almost 92 years), writing at the age of 64:

I look forward to old age without dismay. When Lawrence of Arabia was killed I read in an article contributed by a friend that it was his habit to ride his motor-bicycle at an excessive speed with the notion that an accident would end his life while he was still in full possession of his powers and so spare him the indignity of old age. If this is true it was a great weakness in that strange and somewhat theatrical character. It showed want of sense. For the complete life, the perfect pattern, includes old age as well as youth and maturity. The beauty of the morning and the radiance of noon are good, but it would be a very silly person who drew the curtains and turned on the light in order to shut out the tranquility of the evening. Old age has its pleasures, which, though different, are not less than the pleasures of youth. The philosophers have always told us that we are the slaves of our passions, and is it so small a thing to be liberated from their sway? The fool’s old age will be foolish, but so was his youth. The young man turns away from it with horror because he thinks that when he reaches it, he will still yearn for the things that give variety and gusto to his youth. He is mistaken. It is true that the old man will no longer be able to climb an Alp or tumble a pretty girl on a bed; it is true that he can no longer arouse the concupiscence of others. It is something to be free from the pangs of unrequited love and the torment of jealousy. It is something that envy, which so often poisons youth, should be assuaged by the extinction of desire. But these are negative compensations; old age has positive compensations also. Paradoxical as it may sound it has more time. When I was young I was amazed at Plutarch’s statement that the elder Cato began at the age of eighty to learn Greek. I am amazed no longer. Old age is ready to undertake tasks that youth shirked because they would take too long. In old age the taste improves and it is possible to enjoy art and literature without the personal bias that in youth warps the judgment. It has the satisfaction of its own fulfillment. It is liberated from the trammels of human egoism; free at last, the soul delights in the passing moment, but does not bid it stay. It has completed the pattern (The Summing Up, Pocket Books Edition, June 1967, pp. 215-16).

I first read The Summing Up about 40 years ago, though I didn’t at the time focus on the quoted passage. (I was interested then, as I still am, in Maugham’s reflections on the craft of writing.) But here I am now, almost a decade older than Maugham was when he wrote The Summing Up, and 16 years older than Emanuel. I can tell you that Maugham is right and Emanuel is wrong, if not about himself then about the legions who are near or beyond the age at which Emanuel wants himself (all of us?) to die.

Signature

The Hopey Changey Mood in 1934

I was reminded by Mark Steyn of Leni Riefenstahl’s propagandistic masterpiece of 1934, Triumph of the Will. Fräulein Riefenstahl made a media star of Adolf Hitler, and in so doing showed the way for the image-makers who have since given us the likes of John F. Kennedy, Bill Clinton, and Barack Obama. Shallow men all, with images that glowed until the mask was ripped off.

Triumph of the Will reminds me especially of Obama. See it for yourself, here. A segment that begins around 32:00 praises Hitler’s “shovel ready” projects. A segment that begins around 42:00 looks like a preview of the mob hysteria that surrounded Obama’s first inaugural in 2009.

Can a Third Reich happen here? In many respects, it has happened here, though in slow motion, over a span of more than 100 years. Teddy Roosevelt was the first imperial president, but not the most imperious. FDR trumped TR, but FDR had nothing on BHO.

Signature

*     *     *

Related posts:
FDR and Fascism
An FDR Reader
Fascism with a “Friendly” Face
Penalizing “Thought Crimes”
Tocqueville’s Prescience
Invoking Hitler
Don’t Use the “S” Word When the “F” Word Will Do
The Barbarians Within and the State of the Union
Presidential Treason
“A Date Which Will Live in Infamy”
The Criminality and Psychopathy of Statism
Walking the Tightrope Reluctantly

Decline

Although I’ve declared baseball the “king of team sports,” I would agree with anyone who says that baseball is past its prime. When was that prime? Arguably, it was the original lively ball era, which by my reckoning extended from 1920 to 1941. The home run had become much more prevalent than in the earlier dead-ball era, but not so prevalent that it dominated offensive strategy. Thus batting averages were high and scoring proceeded at a higher pace than in any of the other eras that I’ve identified.

In 1930, for example, the entire National League batted .303. The Chicago Cubs of that season finished in second place and batted .309 (not the highest team average in the league). The average number of runs scored in a Cubs’ game was 12.0 — a number surpassed only by the lowly Philadelphia Phillies, whose games yielded an average of 13.8 runs, most of them scored by the Phillies’ opponents. Despite the high scoring, the average Cubs game of the 1930 season lasted only 2 hours and 5 minutes. (An estimate that I derived from the sample of 67 Cubs’ games for which times are available, here.)
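That estimate is simple arithmetic: the mean of the sampled game times, converted back to hours and minutes. Here’s a minimal sketch in Python, with invented durations standing in for the actual 67 sampled games:

```python
# Estimate average game length from a sample of recorded game times.
# The durations below are hypothetical stand-ins for the 67 sampled
# 1930 Cubs games; substitute the real times if you have them.
sample_minutes = [118, 125, 131, 122, 129]

mean = sum(sample_minutes) / len(sample_minutes)
hours, minutes = divmod(round(mean), 60)
print(f"Average game length: {hours}:{minutes:02d}")  # prints 2:05 for this sample
```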

In sum, baseball’s first lively ball era produced what fans love to see: scoring. A great pitching duel is fine, but a great pitching duel is a rare thing. Too many low-scoring games are the result of failed offensive opportunities, which are marked by a high count of runners left on base. Once runners get on base, what fans want (or at least one team’s fans want) is to see them score.

The game in the first lively ball era was, as I say, dynamic because scoring depended less on the home run than it did in later eras. And the game unfolded at a smart pace. That pace, by the way, was about the same as it had been in the middle of the dead-ball era. (For example, the times recorded for the Cubs’ two games against the Cincinnati Reds on July 4, 1911, are 2:05 and 2:00.)

Baseball has declined since the first lively ball era, not just because the game has become more static but also because it now unfolds at a much slower pace. The average length of a game in 2014 is 3:08 (for games through 07/17/14) — more than an hour longer than the games played by the Cubs in 1930.

Baseball is far from the only cultural phenomenon that has declined from its peak. I have written several times about the decline of art and music, movies, language, and morals and mores: here, here, here, and here. (Each of the foregoing links leads to a post that includes links to related items.)

Baseball is sometimes called a metaphor for life. (It’s a better metaphor than soccer, to be sure.) I now venture to say that the decline of baseball is a metaphor for the decline of art, music, movies, language, and morals and mores.

Indeed, the decline of baseball is a metaphor for the decline of liberty in America, which began in earnest — and perhaps inexorably — during the New Deal, even as the first lively ball era was on the wane.

*     *     *

See also “The Fall and Rise of American Empire.”

Let’s Make a Deal

Let's make a deal

The last deal negates all of the concessions made in the other deals — for those of us who will choose to live in Free States.

The Passing of Red-Brick Schoolhouses and a Way of Life

My home town once boasted fifteen schoolhouses that were built between the end of the Civil War and 1899. All but the high school were named for presidents of the United States: Adams, Buchanan, Fillmore, Harrison, Jackson, Jefferson, Madison, Monroe, Pierce, Polk, Taylor, Tyler, Van Buren, and Washington. Another of their ilk came along sometime between 1904 and 1915; Lincoln was its name.

With the Adams School counting for two presidents — the second and sixth — there was a school for every president through Lincoln. Why Lincoln came late is a mystery to me. Lincoln was revered by us Northerners, and his picture was displayed proudly next to Washington’s in schools and municipal offices. We even celebrated Lincoln’s Birthday as a holiday distinct from Washington’s Birthday (a.k.a. President’s Day).

More schools — some named for presidents — followed well into the 20th century, but only the fifteen that I’ve named were built in the style of the classic red-brick schoolhouse: two stories, a center hall with imposing staircase, tall windows, steep roof, and often a tower for the bell that the janitor rang to summon neighborhood children to school. (The Lincoln, as a latecomer, was L-shaped rather than boxy, but it was otherwise a classic red-brick schoolhouse, replete with a prominent bell tower.)

I attended three of the fifteen red-brick schoolhouses. My first was Polk School, where I began kindergarten two days after the formal surrender of Japan on September 2, 1945. (For the benefit of youngsters, that ceremony marked the official end of World War II.)

Here’s the Polk in its heyday:

PolkSch

Kindergarten convened in a ground-floor room at the back of the school, facing what seemed then like a large playground, with room for a softball field. The houses at the far end of the field would have been easy targets for adult players, but it would have been a rare feat for a student to hit one over the fence that separated the playground from the houses.

In those innocent days, students got to school and back home by walking. Here’s the route that I followed as a kindergartener:

Route to Polk School

A kindergartener walking several blocks between home and school, usually alone most of the way? Unheard of today, it seems. But in those days predation was unheard of. And, as a practical matter, most families had only one car, which the working (outside-the-home) parent (then known as the father and head-of-household) used on weekdays for travel to and from his job. Moreover, the exercise of walking as much as a mile each way was considered good for growing children — and it was.

The route between my home and Polk School was 0.6 mile in length, and it crossed one busy street. Along that street were designated crossing points, at which stood Safety Patrol Boys, usually 6th-graders, who ensured that students crossed only when it was safe to do so. They didn’t stand in the street and stop oncoming traffic; they simply judged when students could safely cross, and gave them the “green light” by blowing on a whistle. In the several years of my elementary-school career, I never saw or heard of a close call, let alone an injury or a fatality.

I began at Polk School because the school closest to my home, Madison School, didn’t have kindergarten. I went to Madison for 1st grade. It was a gloomy pile:

MadisonSch

Madison was shuttered after my year there, so I returned to Polk for 2nd and 3rd grades. Madison stood empty for a few years, and was razed in the late 1940s or early 1950s. Polk was shuttered sometime in the 1950s, and eventually was razed after being used for many years as a school-district warehouse.

The former site of Madison School now hosts “affordable housing”:

Madison School site

There’s a public playground where Polk School stood:

Polk School site

I spent two more years — 4th and 5th grades — in another red-brick schoolhouse: Tyler School. It’s still there, though it hasn’t been used as a school for many decades. It looked like this in 2006, when it served as a halfway house:

Tyler School_2

It now stands empty and uncared for. It looked like this in 2013:

Tyler School 2013

The only other survivor among the fifteen red-brick schoolhouses is Monroe School, the present use of which I can’t ascertain. It seems to have been cared for, however. This image is from 2013:

Monroe School 2013

Tyler and Monroe Schools are ghosts from America’s past — a past that’s now seemingly irretrievable. It was a time of innocence, when America’s wars were fought to victory; when children could safely roam (large cities excepted, as always, from prevailing mores); when marriage was between man and woman, and usually for life; when deviant behavior was discouraged, not “celebrated”; when a high-school diploma and four-year degree meant something, and were worth something; when the state wasn’t the enemy of the church; when politics didn’t intrude into science; when people resorted to government in desperation, not out of habit; and when people had real friends, not Facebook “friends.”

Flummoxed by Firefox 29?

SEE UPDATES AT BOTTOM OF POST

I recently — and unhappily — updated to Firefox 29, which is yet another in a long string of software-engineer-friendly “upgrades” by the boys and girls at Mozilla. Now, I have to admit that Firefox, on the whole, is a more user-friendly browser than the several others that I’ve tried: Comodo Dragon, Google Chrome, Internet Explorer, Opera, and Safari — each of which has a serious-to-fatal flaw (e.g., vulnerable to malware, hard to customize, can’t open groups of tabs, can’t import bookmarks).

But being user-friendly is a relative thing, and Firefox seems bent on joining the ranks of its less-friendly peers. Firefox 29, for example, incorporates the page-reload button in the navigation bar (the place where a site’s URL appears). That’s neither a convenient nor intuitive place for the page-reload button. It’s true that one can reload a tab by right-clicking the tab and selecting “Reload Tab” from the pop-up menu. But it’s actually easier to point one’s mouse at a reload button that’s located in a fixed position that’s close to the tab strip — usually on the left.

Speaking of the tab strip, why have tabs if you can’t see them? I exaggerate, but just a bit. In the default mode of Firefox 29, tabs (other than the one that’s currently open) are almost invisible. Navigating from tab to tab involves a lot of squinting. The tabless look may be aesthetic, but it’s worse than useless.

I will say that other than the fixed position of the reload button — which is immovable, even after installing the Classic Theme Restorer add-on — Firefox 29 is more readily customizable than its predecessors. (One exception: It takes some Googling to learn how to put the tab strip back where it belongs, which is just above the page, not at the top of the screen.) But why “upgrade” Firefox to a “look” that many users will immediately try to customize to something more useful? Many (most?) Firefox users cut their teeth on earlier versions of Firefox, and they grew used to the “look” and “feel” of those earlier versions.

What’s wrong with that? Everything, apparently, if you’re a software engineer with fascistic tendencies. Consider this thread from the Firefox non-support forum:

Can I go back to Firefox 28. I have Firefox 29 now. I don’t like it.

Posted
4/29/14 6:36 AM

I want Firefox 28 back. How do I do that?


Moses

  • Top 10 Contributor
  • Moderator


Hi,

Is there a particular reason you want to go back to 28? If this is about the new user interface looks then you can restore the way Firefox acted and looked with this add-on

OldRogue

Why can’t someone answer the question asked? How do you download version 28 and revert to before 29. Classic Theme Restorer goes about 10% of the way to making FF useful again. Specifically, what needs to be remove from my Profile to get it back.

Moses

Hi OldRogue,

1) We don’t link to old Firefox versions simply because of the latest version’s bug fixes/security patches, etc.
2) See #1 and We don’t HAVE to link to version 28. I’m pretty sure you’re capable of finding a little download link yourself.
3) You should create your own thread as you’re technically hijacking another person’s thread. Please create a new one at /questions/new

OldRogue 0 solutions 4 answers

Chosen Solution

Since the original question was Can I go back to Firefox 28, I don’t see how my response constitutes a hijacking. Everything else in your post is lawyer-speak.

Modified April 30, 2014 11:00:48 AM PDT by OldRogue

Moses

  • Top 10 Contributor
  • Moderator

198 solutions 1862 answers

I’m not going to argue with you and waste my time. I’m just going to say this:

  • From the Forum rules and guidelines: “For support requests, do not re-use existing threads started by others, even if they are seemingly on the same subject.”

I’m not a Mozilla developer or employee so my “lawyer-speak” is all my words. I don’t work for Mozilla in case you haven’t noticed.

Also, to the OP, I’ve already answered their question. They can find the download link on their own. Takes maybe 2 minutes to find it…literally. But just in case someone doesn’t want to take their time and look for it, here it is:

Thread closed as I’ve given the download link!

Modified April 30, 2014 11:22:34 AM PDT by Moses

He may be Moses the lawgiver — with a vengeance — but he’s not the Moses that you want in charge when you’re looking for the promised land of browserdom. What a jerk!

Moses’s protestations to the contrary notwithstanding, he was being legalistic and OldRogue wasn’t hijacking the thread. How “big” of Moses to finally answer the original question. If he’d done that in the first place, he wouldn’t have revealed himself as a first-class a**hole.

Anyway, there’s your answer. If Mozilla slips in a new version of Firefox while you’re not looking, install an earlier version. In fact, take your pick from all of the earlier versions at the Index of pub/mozilla.org/firefox/releases/. If you happen upon a page that leads you to the Index, you’ll probably see something like this: “Warning: Using old versions of Firefox poses a significant security risk.”

Yeah, well, thanks for the warning. But I keep my firewall turned on, and I have a good anti-malware program (Malwarebytes Anti-Malware), and you should, too. When a version of Firefox gets too old, it stops working properly, which is a good sign that you should upgrade to a newer version, though not the newest one.

One last, important thing. Don’t let Mozilla slip in a new version of Firefox while you’re not looking. Go to “Tools” in the menu bar of Firefox (which I display for ease of use, despite Mozilla’s attempt to hide it), select “Options,” select the “Update” tab, and then choose either “Check for updates, but let me choose when to install them” or “Never check for updates.” If you choose the former, read about an update before you install it — look especially for information about the ability to customize the new version. Don’t rely on Mozilla’s pitch; look for reviews on sites that specialize in computing and internet matters (e.g., PCMag.com and C|Net). And seek out independent reviews of the kind you can find with a search engine; the lone-wolf reviewer is more likely to be critical than the establishment press.
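
If you’d rather nail that setting down where a future “upgrade” can’t quietly undo it, you can also write it into a user.js file in your Firefox profile folder, which Firefox reads at every startup. Here’s a minimal sketch — the pref names (app.update.enabled and app.update.auto) are, to the best of my knowledge, the ones Firefox used in the version-28/29 era, so check them against your own about:config before relying on them:

// user.js -- goes in your Firefox profile folder; Firefox re-applies it at startup.
// "Check for updates, but let me choose when to install them":
user_pref("app.update.enabled", true);
user_pref("app.update.auto", false);
// For "Never check for updates," set app.update.enabled to false instead.

The user.js route has one advantage over the Options dialog: because the file is re-read at every startup, a meddlesome update can’t silently flip the setting back.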

By the way, I tried Firefox 29 on a gamble, and lost. I then rolled back to Firefox 28, which I had already tweaked to my taste.

Happy browsing.

UPDATE (05/07/14)

A reader kindly pointed me to Pale Moon, a Mozilla-based browser that works like Firefox used to. I’m now using Pale Moon, and loving it. (If I encounter glitches, I’ll add updates to this post.)

Why would I (or anyone) want a browser that works just like Firefox, but isn’t Firefox? Well, here’s one reason: the ousting of Mozilla CEO Brendan Eich for having made a donation to the Proposition 8 campaign in California. (See my post, “Surrender? Hell No!” and the articles I link to at the end of the post.)

And what does that have to do with Pale Moon? This from Pale Moon’s FAQ (as of today):

Will Firefox and Pale Moon work together in the future?

Since Mozilla has obviously chosen to follow a different path at the management level, it doesn’t seem likely that Pale Moon and Firefox will ever see a unification or joining of forces….

Read between the lines.

Nor is Pale Moon a mere copy of Firefox. This is from the same FAQ entry:

[T]here have been and are growing conflicts of interests between Pale Moon and Firefox as far as the so-called UX (User eXperience) developments are concerned. This results in a different user interface approach in Pale Moon. For example, less stress is put on minimizing the size of UI elements or saving every pixel possible to benefit the content area – in this day and age of full HD monitors and laptops that seems to be very counter-intuitive. Australis is considered unacceptable, and will not be aimed for – quite the opposite.

In other words, Australis-based Firefox 29 is a step in the wrong direction if you care about users. (Right on!) And Pale Moon isn’t going in that direction. Indeed, when I say that Pale Moon works like Firefox used to, I mean that it works like Firefox 28, to which I had returned after uninstalling Firefox 29.

If you want to try Pale Moon, you can download it here. If you’re currently a Firefox user and want to import your Firefox profile to Pale Moon, select the “don’t import anything” option at the end of the installation. There’s a separate tool for importing Firefox profiles, which you can download here. I used the tool, and it worked perfectly.

Happier browsing.

UPDATE (05/08/14)

My transition from Firefox 28 to Pale Moon has been seamless, as they say. So seamless, in fact, that I’ve made Pale Moon my default browser and unpinned Firefox from my Windows task bar. At this point I can’t see a reason to return to Firefox.

My next step will be to switch from Mozilla Thunderbird to Pale Moon’s FossaMail.

UPDATE (05/12/14)

I am now using FossaMail. After some unsuccessful attempts to copy my Thunderbird profile into FossaMail, I found a migration tool that works perfectly. During the migration, you might get a message saying that a script is taking longer than expected to run. If you do, select “continue” and let it run; it won’t take much longer for the tool to finish the job.

When you’re alerted that migration is complete, FossaMail may not respond immediately. The migration seems to continue in the background. Wait a few minutes, then try to open FossaMail. If your experience is like mine, when FossaMail opens it will contain an exact duplicate of your Thunderbird folders and messages.

Bye-bye, Mozilla.

A Guide to the Pronunciation of General American English

This post, originally published as “Phonetic Spelling: A Modest Proposal,” is drastically different from the original. I am indebted to commenter Jim Hlavac for his criticisms, which are reflected in this version of the post. In the course of revising the post, I made extensive changes to the pronunciation key. As before, comments about this work in progress are welcome.

When you’re in doubt about how to pronounce a word in American English, you may consult a source that relies on the International Phonetic Alphabet (IPA). The IPA is cumbersome, to say the least. It requires one to distinguish among dozens of tiny symbols, and then decode them by going to a rather busy page full of symbols and their translations.

Standard phonetic symbols for American English — the symbols we were supposed to learn in high school — aren’t much better. For example, go to The Free Dictionary and look up phonetic → fə-nĕt′ĭk. Not only is the schwa (ə, an “uh” sound) incorrect (in my view), but to grasp the pronunciation of the word, you must still turn to a separate pronunciation key.

A proper guide to the pronunciation of American English should enable anyone who speaks or understands General American to grasp the proper (or generally accepted) pronunciation of a word simply by looking at a phonetic spelling that consists entirely of letters (e.g., word → werd, refuel → re-few-uhl). And the relationship between the phonetic spellings and the sounds that they represent should be intuitively obvious — again, if you speak or understand General American.
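
For the programming-minded, here’s another way to see the point: a letters-only key is nothing more than a lookup table from standard spellings to phonetic respellings, with no separate symbol chart to decode. A minimal sketch (my own toy illustration, built on the two examples above; CAPS mark the emphasized syllable, a convention I use later in this post):

// A letters-only respelling table -- no separate symbol key required.
const respell: Record<string, string> = {
  word: "werd",
  refuel: "re-FEW-uhl", // CAPS mark the emphasized syllable
};
console.log(respell["refuel"]); // prints "re-FEW-uhl"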

I emphasize General American (GA) for two reasons. First, GA — in the guise of “television English” — is heard across the country, not just in the Midwest and areas where similar accents dominate (e.g., the Great Plains and West Coast). Second, because most Americans understand “television English” (even if many of them don’t speak it), a guide that is keyed to GA should be useful to (almost) everyone.

In the rest of this post, I propose and demonstrate the application of a pronunciation guide that is based on GA, as I understand it. The guide comprises 50 sounds, which is five more sounds than are given in a standard guide (e.g., here). I’ve added several sounds that the standard guide merges with dissimilar sounds. And I’ve merged (or dropped) a few sounds that the standard guide mistakenly or unnecessarily lists as separate sounds.

In any event, the guide that I propose consists entirely of letters of the alphabet. It is therefore more accessible than guides that rely heavily on symbols.

Here is the key, which for ease of use omits syllabic emphasis:

[Image: Phonetic pronunciation key]

As noted, the key omits syllabic emphasis. In the following spellings of the 100 most commonly spoken words in English, I indicate emphasis with CAPS:

[Image: Phonetic spellings of the 100 most common words]

And here are ten words chosen from a list of 100 elegant words:

[Image: Phonetic spellings of 10 elegant words]

Checking Out

UPDATED 06/25/14

The demise of Mickey Rooney at the age of 93 reminded me that several years ago I began to track some celebrities who had attained the age of 90. The rather quirky list of notables, which doesn’t include Rooney, now looks like this:

Luise Rainer 104, George Beverly Shea 104, Charles Lane 102, George Kennan 101, Gloria Stuart 100, Eddie Albert 99, Irwin Corey 99, Michael DeBakey 99, Mitch Miller 99, Max Schmeling 99, Risë Stevens 99, John Wooden 99, Tony Martin 98, Dale Messick 98, Eli Wallach 98, Herman Wouk 98, Olivia de Havilland 97, Zsa Zsa Gabor 97, John Kenneth Galbraith 97, Ernest Gallo 97, Estée Lauder 97, Art Linkletter 97, Al Lopez 97, Vera Lynn 97, Karl Malden 97, John Mills 97, Kitty Carlisle 96, Jack LaLanne 96, Kevin McCarthy 96, Harry Morgan 96, Fay Wray 96, Jane Wyatt 96, Joseph Barbera 95, Ernest Borgnine 95, Henri Cartier-Bresson 95, Monte Irvin 95, Herbert Lom 95, Peter Rodino Jr. 95, Sargent Shriver 95, Patty Andrews 94, Sammy Baugh 94, Constance Cummings 94, Lady Bird Johnson 94, Robert Mondavi 94, Byron Nelson 94, Les Paul 94, Billy Graham 93, Ruth Hussey 93, Frankie Laine 93, Robert McNamara 93, Artie Shaw 93, Richard Widmark 93, Oleg Cassini 92, Ralph Edwards 92, Bob Feller 92, Ernie Harwell 92, Lena Horne 92, Julia Child 91, Archibald Cox 91, Geraldine Fitzgerald 91, Frances Langford 91, John Profumo 91, William Westmoreland 91, Jane Wyman 90.

By my reckoning, of the dozens (or hundreds) of actors who starred in Hollywood films before World War II, only two survive: Luise Rainer and Olivia de Havilland.

I should note that de Havilland’s younger sister and life-long rival, Joan Fontaine, died on December 15, 2013, at the age of 96. The de Havilland sisters came by their longevity the easy way; they inherited it. Their father lived to the age of 95; their mother, to the age of 88.

2013: A Bad Year at the Movies

Thanks to Netflix, I used to watch two or three feature films a week. I was able to sustain that pace for years because of a backlog of highly rated but yet-unwatched films, and the frequent release of new films of merit. The backlog has almost vanished, as has the offering of meritorious new films.

Take 2013, please! I have thus far seen only four of the films emitted in that year: American Hustle, Blue Jasmine, Captain Phillips, and Now You See Me. Viewers who rate films at IMDb (Internet Movie Database) have given the films average ratings of 7.5, 7.4, 8.0, and 7.3 out of 10, respectively, as against my own ratings of 4, 1, 7, and 7.*

Admittedly, a sample of four may seem inadequate to the task of judging a year’s worth of filmic output, but my assessment of that output would be even less glowing had I not rejected most of it sight unseen. Take American Hustle (please!), which I watched last night. It was nominated for 10 Academy Awards, despite the fact that it’s too long, too loud, too crude, and rarely funny where it’s meant to be funny. Thus my rating of 4. Blue Jasmine, to which I gave a 1, turned out to be another of Woody Allen’s series of kvetches — boring as hell unless you are fascinated by neurotic, yuppie Manhattanites. Captain Phillips and Now You See Me are good but not great films.

I’m content to call 2013 a bad year at the movies — perhaps the worst year — because of two trends. The first is an accelerating downward trend (with respect to year of release) in the percentage of movies that I have called a “favorite,” that is, a movie that I’ve rated 8, 9, or 10:

[Chart: Favorites as a percentage of films seen and rated, by year of release]

What about overall ratings? Here are my ratings of movies, relative to the ratings given the same movies by IMDb users; note the steep decline after 1995:

[Chart: My ratings as a percentage of IMDb users’ ratings, by year of release]
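
For the curious, the arithmetic behind that comparison is simple: my rating of a film, divided by the average rating given by IMDb users for the same film. A minimal sketch, using the four 2013 films rated above (I assume the chart aggregates the same ratio by a film’s year of release):

// My rating as a percentage of the IMDb users' average, per film.
const films: Array<[string, number, number]> = [
  ["American Hustle", 4, 7.5],
  ["Blue Jasmine", 1, 7.4],
  ["Captain Phillips", 7, 8.0],
  ["Now You See Me", 7, 7.3],
];
for (const [title, mine, imdb] of films) {
  console.log(`${title}: ${(100 * mine / imdb).toFixed(0)}% of the IMDb average`);
}

By that measure the four films come out at 53, 14, 88, and 96 percent — an average of about 63 percent of what IMDb’s users awarded.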

Is it just me? Perhaps. But it’s more likely that movie-goers’ tastes have coarsened in the past two decades. Witness the popularity of American Hustle; witness the unremitting stream of sex, violence, and general depravity that emanates from movies and over the electromagnetic spectrum.

I conclude that movies are getting worse than ever, in keeping with popular culture.

*     *     *

Related posts:
The Movies: (Not) Better Than Ever
At the Movies: The Best and Worst Years
My Year at the Movies (2007)
The Movies: Not Better than Ever (II)
__________
Here’s a guide to my ratings:
1 – unwatchable
2 – watched all the way through, to my regret
3, 4, 5 – varying degrees of entertainment value, but altogether a waste of time
6 – generally engaging, but noticeably flawed in some way (e.g., a weak performance in a major role, a trite story, a contrived ending, insufficient resolution of plot or sub-plot)
7 – well done in all respects, with only a few weak spots; enjoyable but not scintillating
8 – a thoroughly engaging movie; its weak spots (e.g., a corny plot), if any, are overwhelmed by scintillating performances (e.g., the spectacular dancing of Astaire and Rogers), sustained hilarity, a compelling plot, a witty script, etc. (a rating that I’ve given to 30 percent of the more than 2,000 feature films that I’ve seen)
9 – an “8” that is so good it bears re-watching (a rating that I’ve given to only 3 percent of the films I’ve seen)
10 – a movie that I didn’t want to end; a masterpiece of film-making (a rating that I’ve given to only 5 films — 0.2 percent)

O Tempora O Mores!

I was exceedingly irritated by a rah-rah piece about “affordable housing” in today’s edition of the local rag. The piece was clearly intended to promote subsidies that would enable low-income persons to live in mixed-income areas, that is, in the vicinity of persons with higher incomes. The writer of the piece advanced some (admittedly) not-very-convincing sociological arguments for mixed-income neighborhoods, including cost-benefit studies that purport to show that the benefits of subsidized housing outweigh the costs. He failed to mention, of course, that the persons who subsidize “affordable housing” for low-income persons are not the persons who benefit from it. Nor did he make much of the obvious fact that as people earn more, they generally prefer to live among persons with similar earnings, and not among people who earn a lot less.

But what people actually want doesn’t matter in the end, because what counts is what do-gooders want and what government can compel in the name of doing good, don’t you see? That’s why the gauleiters of our fair city persist in the subsidization of low-income housing in mixed-income areas.

In any event, the article led me to think about the many ways in which social norms have changed for the worse since the days of my Midwestern upbringing in the 1940s and 1950s. For one thing, the idea that people should work, save, and pay for their own housing — as I did and my parents did — seems to have gone to the great graveyard of quaint ideas. That graveyard is populated by such formerly vital notions as these:

Behavior is shaped by social norms, like those listed here. The norms are rooted in the Ten Commandments and time-tested codes of behavior. The norms aren’t altered willy-nilly in accordance with the wishes of “activists,” as amplified through the megaphone of the mass media.

Rules of grammar serve the useful purpose of enabling people to understand each other easily. The flouting of grammatical rules in everyday conversation is a sign of ignorance and ill-breeding, not originality.

Dead, white, European males produced some of the greatest works of art, music, literature, philosophy, science, and political theory. Those dead, white, European males are to be celebrated for their accomplishments, not derided just because they are dead or were not black/brown/tan, female, of confused gender, or inhabitants of non-European places.

Marriage is a union of man and woman.

Marriage comes before children. This is not because people are pure at heart, but because it is the responsible way to start life together and to ensure that one’s children enjoy a stable, nurturing home life.

Marriage is until “death do us part.” Divorce is a recourse of last resort, not an easy way out of marital and familial responsibilities or the first recourse when one spouse disappoints or angers the other.

Children are disciplined — sometimes spanked — when they do wrong. They aren’t given long, boring, incomprehensible lectures about why they’re doing wrong. Why not? Because they usually know they’re doing wrong and are just trying to see what they can get away with.

Gentlemen don’t swear in front of ladies, and ladies don’t swear in front of gentlemen; discourse is therefore more likely to be rational, and certainly more bearable to those within earshot.

A person’s “space” is respected, as long as that person is respectful of others. A person’s space is not invaded by a loud conversation of no interest to anyone but the conversants.

A person grows old gracefully and doesn’t subject others to the sight of flabby, wrinkled tattoos (an exception being the old sailor with a single tattoo on one arm). (This may seem like a nit-pick, but the epidemic of tattooing is symptomatic of the loud, brash, self-centered, faddish culture that now commands center stage in much of America.)

Drugs are taken for the treatment of actual illnesses, not for recreational purposes.

Income is earned, not “distributed.” Persons who earn a lot of money are to be respected. If you envy them to the point of wanting to take their money, you’re a pinko-commie-socialist (no joke).

Welfare is a gift that one accepts as a last resort; it is not a right or an entitlement, and it is not bestowed on persons with convenient disabilities.

A man holds a door open for a woman out of courtesy, and he does the same for anyone who is obviously weaker than he is, or laden with packages.

Sexism (though it isn’t called that) is nothing more than the understanding — shared by men and women — that women are members of a different sex (the only different one); are usually weaker than men; are endowed with different brain chemistry and physical skills than men (still a fact); and enjoy discreet admiration (flirting) if they’re passably good-looking, or better. Women who reject those propositions — and who try to enforce modes of behavior that assume differently — are embittered and twisted.

A mother who devotes time and effort to the making of a good home and the proper rearing of her children is a pillar of civilized society. Her life is to be celebrated, not condemned as “a waste.”

Homosexuality is a rare, aberrant kind of behavior. (And that was before AIDS proved it to be aberrant.) It’s certainly not a “lifestyle” to be celebrated and shoved down the throats of all who object to it.

Privacy is a constrained right. It doesn’t trump moral obligations, among which are the obligations to refrain from spreading a deadly disease and to preserve innocent life.

Addiction isn’t a disease; it’s a surmountable failing.

Envy is an unsavory and unseemly state of mind; a person should better himself instead of tearing others down.

Justice is for victims. Victims are persons to whom actual harm has been done by way of fraud, theft, bodily harm, murder, and suchlike. A person with a serious disease or handicap isn’t a victim, nor is a person with a drinking or drug problem.

Justice is a dish best served hot, so that would-be criminals can connect the dots between crime and punishment. Swift and sure punishment is the best deterrent of crime. Capital punishment is the ultimate deterrent because an executed killer can’t kill again.

Peace is the result of preparedness for war; lack of preparedness invites war.

The list isn’t exhaustive, but it’s certainly representative. The themes are few and simple: self-reliance, respect for others, respect for tradition, and the defense of society from predators foreign and domestic. The result is liberty: A regime of mutually beneficial coexistence based on trust.

Whence the now-dominant leftist schemes and themes, like “affordable housing” and “the undeserving rich” (a.k.a. “the 1%” and “the 0.1%”), which have replaced the dominant mores of old? Leftist ideas, like the poor, have always been with us, but their political ascendancy arises from the indoctrination mills known as the mainstream media and educational institutions. This is from an article by Graham Cunningham:

“[R]eality” as reflected in the big “old” media is—notwithstanding the relatively recent uncorking of (mainly U.S. based) conservative voices in the “new” media—still overwhelmingly liberal…. And the old media—a virtual Fifth Estate—is still a very big wild wood of seductive liberal myth and folklore. The “staccato signals of constant information” appear, in large part, to be apolitical, making them all the more persuasive. But such is the relentless focus of conservative intellectual discourse on a current affairs agenda that conservatives—never mind liberals—often cannot see the wood for the trees. It is a wood with tangled roots deep in early 20th century socialist intellectual soil. Its filigree branches have since grown and spread into every corner of 21st century public consciousness.

… As someone whose own working life has, at various times, brought me into close contact, not only with schools, colleges, and universities but also local government, the architectural profession, and the British NHS, I can attest that soft-left prejudices prevail in all of these. So the educational incubation of the professional, business, and mandarin classes is another part of the story of the rise of politically correct, middle-class, liberal orthodoxy.

It has also long been true that a great majority of school teachers will be Democrats/Labour Party voters. In varying degree they are likely to emerge from their teacher training with a soft-left baggage ranging from old-fashioned vaguely collectivist economic assumptions and Dickensian sentimental notions (like something called “The Working Class” being perennially victim of something called “The Rich”) to various newer relativist “liberation” and victimhood theologies. Plus a sympathetic take on various kinds of “anti-something-or-other” and “eco” militancy….

[T]he much more potent influence is that everyone born since the Second World War—university educated or not—will have spent a large part of their leisure time in Media Land—a virtual parallel universe, rich in sublimated myth and fairytale. Now Media Land is not some Orwellian Big-Brother conspiracy. It is in itself, too diffuse and anarchic to be a place of didactic political bias per se. Its quintessential characteristic is, rather, that it allows you—without any great effort on your part—to sustain the illusion that you know, and are entitled to have an opinion about, all manner of things beyond your direct experience. It is from these intangible, ego-flattering, seductive characteristics that its mind-bending power flows.

It is the great oracle from which we absorb not just “The News” (intrinsically an editorial semi-fiction anyway) but also the good-guy/bad-guy narratives of film and television drama, the satirical talking-heads panel show, the “shocking” lid-lifting documentary etc. So it is that—drip by drip—the public’s imagination becomes accustomed to the notion that the apparently law-abiding, white, middle-class dwellers in suburbia—though they may not in reality always be the ones who have actually “done” the murder—nevertheless do have a dark side to their supposedly smug existence and their desk job in the City—which must, by the way, axiomatically be ignoble, venal and soul destroying. Whereas the violent teenage gangster turns out to have the soul of a poet buried under all those years of oppression. And the lardy, welfare-cheating couch-potato turns out to be quite a sound bloke underneath it all and good fun too. And anyone who takes to the streets in a “protest”—never mind how ignorant and bloody-minded—instantly becomes a hero whilst the target of the “protest” is instantly a villain. And so it is too that the alleged misdeeds of supposedly smug political and business elites are ruthlessly exposed and then wittily sent-up by even more smug, smarty-pants TV “personalities” whose own elite lifestyles remain relatively out of the media spotlight….

And then there is “The News”. Whilst the current affairs output of the mainstream media is not uniformly politically biased per se, it does often have the same entrenched undercurrents as the rest. Underpinning all the day-to-day news ephemera are some enduring fairytales that are both highly seductive and at the same time so diffuse as to be almost subconscious. A major example is the one in which some big bad wolf (maybe “The Government” or “Big Business”)—and definitely not you personally—is either to blame for all your problems in life or has failed to solve them for you. You—a member of “the great mass of ordinary decent people”—are a victim of some or other system or institution. Another (almost certainly subconscious) fairytale is the one in which—by the simple device of espousing “progressive” liberal attitudes—you can carry on with your (and your family’s) own personal pursuit of happiness, just like before but now with the added bonus of feeling that you—unlike those nasty “Right-wingers”—are on the side of the angels. Now that is a really seductive one! …

It is also worth noting that, quite apart from any questions of political bias as such, “The News”, with its inevitable editorial selectivity can—at least in the minds of the uncurious and suggestible—actually help to spread ignorance dressed up as illusory knowledge….

… Having so many alternative gadgets to play with, [members of the post-internet generation] are less and less likely to watch [TV] and especially “The News” and “Current Affairs”. But overall, the power of the Media-Academia Complex is likely to remain undiminished for a very long time to come. Its power comes ultimately from the illusion it creates that you can sit back and soak up all you need to “know” about the big wide world without actually having to be all that curious about it. (“How the Left Was Won,” The Imaginative Conservative, February 2014)

The 1940s and 1950s weren’t idyllic, by any means — but no era ever is, except in gauzy hindsight. There was more poverty and racism then than now. But the economy would be even more robust today, absent the incursions of the regulatory-welfare state. And racism would have declined in time, with less of the lingering resentment that was a foreseeable result of government’s heavy-handed “equality” policies. Simply enforcing existing laws so that blacks enjoyed equal treatment would have been enough.

The undoing of traditional mores began in earnest in the 1960s, with a frontal assault on traditional morality and the misguided expansion of the regulatory-welfare state. The unraveling continues to this day. Traditional morality is notable mainly for its neglect; social cohesion is almost non-existent, except where the bonds of religion and ethnicity remain strong. The social fabric that once bound vast swaths of America has rotted — and is almost certainly beyond repair.

If Hillary Clinton possessed an ounce of intellectual honesty, she would justifiably call the great unraveling a vast, left-wing conspiracy. As Cunningham suggests, it is to some extent an unwitting conspiracy of smug, like-minded persons. But it is nevertheless a broad-based, often concerted, and nihilistic effort to undermine the foundations of morality — and economic progress.

*     *     *

Related reading:
Dwight Longenecker, “Modern Marriage – Revolution or Regression?,” The Imaginative Conservative, February 14, 2014

Related posts:
PC Madness
Why Not Marry Your Pet?
Stuff White (Liberal Yuppie) People Like
“Men’s Health”
I’ve Got a Little List
See also the preceding post, and the many posts listed at the bottom.

The Fall and Rise of American Empire

Most Americans don’t like the idea of empire. It smacks of power, which is comforting and enriching when you have it, though few like to admit it. In short, empire can be a good thing. Lawrence W. Reed opens “The Fall of the Republic” with this:

For nearly five centuries, Res Publica Romana—the Roman Republic—bestowed upon the world a previously unseen degree of respect for individual rights and the rule of law. When the republic expired, the world would not see those wondrous achievements again on a comparable scale for a thousand years.

Reed summarizes the decline and fall of Rome:

The Roman Republic died a death of a thousand cuts. Or, to borrow from another, well-known parable: The heat below the pot in which the proverbial frog was boiled started out as a mere flicker of a flame, then rose gradually until it was too late for the frog to escape. Indeed, for a brief time, he enjoyed a nice warm bath….

Writers from the first centuries B.C. and A.D. offered useful insights to the decline. Polybius predicted that politicians would pander to the masses, leading to the mob rule of an unrestrained democracy. The constitution, he surmised, could not survive when that happened. Sallust bemoaned the erosion of morals and character and the rise of personal power lust. Livy, Plutarch, and Cato expressed similar sentiments. To the moment of his assassination, Cicero defended the Republic against the assaults of the early dictators because he knew they would transform Rome into a tyrannical despotism.

Ultimately, the collapse of the political order of republican Rome has its origins in three developments that took root in the second century B.C., then blossomed by the end of the first. One was foreign adventure. The second was the welfare state. The third was a sacrifice of constitutional norms and the rule of law to the demands of the other two.

The American equivalent of the Roman Republic didn’t last nearly as long — only about a century, from the Spanish-American War of 1898 through 1991, which marked the end of the Cold War and victory in the Gulf War. The relative peace and prosperity of the next several years masked America’s underlying decline, which has since become evident in the military, political, and economic events of the 21st century.

The causes and symptoms of America’s decline bear a strong resemblance to those of Rome’s decline. Let’s start with foreign adventure. By the end of 1991, America’s influence in the world seemed assured, given the collapse of the USSR and the easy victory over Iraq in response to Saddam Hussein’s grab of Kuwait. But those two events proved to be the American Empire’s last gasp.

The dust had barely settled on the Gulf War when Somalia joined the list of post-World War II military misadventures, namely, the Korean War, the Vietnam War, the lame response to the bombing of Marine barracks in Lebanon, and the jurisprudential reaction to the 1993 bombing of the World Trade Center. (Some would argue that America’s entry into World War I was also a misadventure because of the imperial origins and tragic aftermath of the peace, namely, the rise of totalitarianism. But, at least, World War I ended decisively and in a clear-cut victory for America’s side — a victory that wouldn’t have been possible without the intervention of American forces.) The seeming disinclination of American leaders to stay the course and to wreak vengeance was duly noted in Osama bin Laden’s 1996 fatwa against the United States. As if to endorse that view, the 1998 bombings of U.S. embassies in Africa were met with ineffectual missile strikes.

And then came 9/11, and in its wake the wars in Afghanistan and Iraq. Both were cast in the mold of Korea and Vietnam: not enough firepower, not enough willpower. Barack Obama’s subsequent foreign policy misadventures and general retreat from effective leadership have only cemented America’s place as a declining, feckless, no-longer-fearsome power. Whence Obama’s fecklessness? Some argue that it is evidence of a deliberate effort to debase the United States.

So much for military misadventures. Let us turn to the growth of the welfare state and the sacrifice of constitutional norms. These go hand-in-hand, and both began before America’s military misadventures after World War II.

Consider the judicial betrayal of the constitutional scheme of limited government, and of order and traditional morality. There is no way, in the course of a blog post, to assess the full scope of the betrayal, in which the U.S. Supreme Court was a willing co-conspirator. Some examples will have to do:

Home Building & Loan Association v. Blaisdell (1934) allowed governmental suspension of creditors’ remedies (i.e., foreclosure), thus undermining contractual relationships.

National Labor Relations Board v. Jones & Laughlin Steel Corporation (1937) validated the Wagner Act, which vastly expanded the ability of labor unions to extort employers, to restrict commerce, and to fatten the paychecks of union members at the expense of everyone else.

Helvering v. Davis (1937) found Social Security to be constitutional, despite the plain words of Article I, Section 8 (the enumerated powers of Congress).

Wickard v. Filburn (1942) gave Congress unlimited power to regulate anything remotely connected with interstate commerce.

Miranda v. Arizona (1966) stigmatized and hindered the efforts of police to protect the public. On the basis of “intuitive empiricism” (i.e., judicial guesswork), Miranda imposed an overly broad interpretation of the Fifth Amendment. (A subsequent empirical analysis suggests that Miranda was unwisely decided.)

Griggs v. Duke Power Company (1971) enshrined disparate impact as evidence of racial discrimination, and put the burden of proof on the accused employer.

Lemon v. Kurtzman (1971) gave judges an easy way (the “Lemon test”) to rule against any government action that might incidentally benefit religion.

Roe v. Wade (1973) authorized murder in the name of privacy.

Goss v. Lopez (1975) made it more difficult for school authorities to discipline disruptive and destructive behavior, and (in my view) established — beyond hope of reversal — the interference of the central government in matters that ought to be handled and disposed of locally.

Coker v. Georgia (1977) outlawed the death penalty in cases of rape, thus contributing to the erosion of the death penalty as a serious deterrent to the commission of heinous crimes and a just penalty for same.

Tennessee Valley Authority v. Hill (1978) gave the snail darter — and as a result, all kinds of critters — precedence over human beings, under the Endangered Species Act.

Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc. (1984) vastly increased the power of regulatory agencies by decreeing “deference” toward rules made in the absence of specific congressional authorization, as long as the rules are “reasonable.”

Garcia v. San Antonio Metropolitan Transit Authority (1985) confirmed the hollowness of the Tenth Amendment and the States’ inability to exercise their reserved powers without the permission of the central government.

Kelo v. City of New London (2005) affirmed the right of any government in the United States to seize anyone’s property, at any time, for any use — even non-governmental.

National Federation of Independent Business v. Sebelius (2012) granted the federal government power to tax anyone for any purpose, even for not doing something.

Hollingsworth v. Perry (2013) left standing a federal district court judge’s self-serving declaration that California’s duly adopted ban on same-sex “marriage” was unconstitutional, thus opening the door to similar holdings by other federal judges about other States’ duly adopted bans on same-sex “marriage.”

The judiciary didn’t instigate the vast expansion of the regulatory-welfare state and the overthrow of social norms, but the judiciary abetted them.

What does the regulatory-welfare state amount to? Huge federal welfare schemes, including but not limited to Social Security, Medicare, and Medicaid; the addition of nine cabinet-level departments to the executive branch in the preceding 100 years; the creation of the cabinet-level Environmental Protection Agency (EPA); the delegation of legislative power to the EPA and other federal agencies, and ensuing accretion of rules made and enforced by those agencies; and the pervasive centralization of power in Washington, “thanks” to judicial misfeasance of the kinds listed above, and to political sleight-of-hand (e.g., “cooperative” federal-State programs like Medicare, and grants of “federal” money — i.e., taxpayers’ money — to State and local governments).

As for constitutional norms, the courts of the United States have become perversely “libertarian.” They seem driven to overturn long-standing, time-tested behavioral norms that guide individuals toward peaceful, constructive coexistence with their compatriots. Thus the “right” to an abortion in the first trimester, based on a non-existent general right of privacy, has become the right to kill a nearly born and newly born child. The “right” to practice sodomy has become an obligation to purvey goods and services to those who practice sodomy, regardless of one’s personal views about the practice. The “right” of a male student of confused gender to use the girls’ bathroom in a Maine school threatens to evolve into the “right” to walk into any damn bathroom at any time, regardless of one’s actual gender. And on and on, down the slippery slope and into unreason, barbarity, and oppression.

Where stands the Empire today? Clearly, America has less influence in the world than it had just after World War II and even after the Gulf War. What a joke it is when the American president must be rescued from the consequences of his own (possibly deliberate) haplessness by Russia’s leader, when Iran plays rope-a-dope with Obama in the matter of nuclear weapons, and when China flexes its new-found and growing military muscle without drawing a serious response from the U.S.

American power abroad could be restored in fairly short order, given the will to do so. But the hollowing out of America’s liberty and prosperity — which began in earnest with the New Deal — threatens to be permanent, given the decades-long transformation of the nation’s legal and bureaucratic infrastructure. Government — mainly the central government — now exerts financial control over 40 percent of the economy (here, see first graph), and arguably exerts regulatory control over almost all of it.

That control has long since passed from the elected “representatives” of the people to technocrats who are bent on dictating how Americans conduct their lives and earn their livelihoods. Thus:

In an FDA office building in suburban Maryland, the bureaucrats gather over coffee to draft rules meant to squeeze the trans fat out of snack foods.

Four blocks from the White House, in an EPA conference room: more bureaucrats, more meetings, more drafting of rules, these aimed at forcing industrialists to spend billions cutting carbon to fend off global warming.

Congress? Who needs Congress?

Americans heard President Barack Obama declare this week that he intends to bypass the gridlocked Hill to get things done on his own. What they didn’t hear: just how far he’s actually pushing his executive authority.

An in-depth examination of the administration’s actions and plans, agency by agency, regulation by regulation, reveals an executive power play that’s broad and bold — and intensely ambitious. Far more than he let on in the State of the Union, the president has marshaled the tools of his office to advance policies, many unabashedly liberal, that push deep into everyday life for tens of millions of Americans.

He wants to change how power plants operate. And what we buy for lunch. How we travel to work. And how our kids learn math. How our gasoline is formulated. How we light our aquariums.

Already, the president’s team has enacted 300 economically significant regulations, far more than Bill Clinton, George W. Bush or Ronald Reagan did in comparable periods. Some of those rules are driven by the Affordable Care Act and Dodd-Frank banking reform, the two big laws Obama pushed through Congress early in his first term, when he had Democratic majorities in both houses. But there is far more.

Follow the link and read the rest, if you have the stomach for it.

The Empire lives, but it’s a different Empire than the one that enjoyed its last hurrah in the early 1990s. The Empire now exists not to make Americans safe and prosperous, but to dominate Americans in the name of overblown and non-existent threats (e.g., sexism, racism, endangered species, global warming), out of ersatz compassion, and with the aim of attaining the impossible: equality for all. Well, equality for all but that minority of minorities — the hard-working, tax-paying, straight, white person of European or Asian descent who minds his own business and not everyone else’s. If you are one of those, and religious as well, you are a particular object of persecution and prosecution.

In sum, a new Empire has arisen on America’s shores. If it had a motto, it would be* “trillions for the regulatory-welfare state and its clients, but not enough for defense.”

*     *     *

Related reading:
Bill Gertz, “Putin’s July 4th Message,” The Washington Free Beacon, July 6, 2012
Dean Cheng, “South China Sea: China Drops a Bombshell,” The Foundry, July 7, 2012
Walter Russell Mead and staff, “Putin Tells His Ambassadors: The West Is All Washed Up,” The American Interest, July 9, 2012
Erica Ritz, “Troubling? Putin Oversees Largest Nuclear Tests since the Cold War,” The Blaze, October 20, 2012
Norman Podhoretz, “Obama’s Successful Foreign Failure,” WSJ.com, September 8, 2013
Melanie Phillips, “Putin Checkmates America,” Melanie’s Blog, September 15, 2013
Walter Russell Mead (and staff), “Mixed Messages from Washington Confuse Allies,” The American Interest, December 3, 2013
Lawrence W. Reed, “The Fall of the Republic,” The Freeman, January 8, 2014
doriangrey1, “The Iranian Rope-a-Dope,” The Wilderness of Mirrors, January 20, 2014
Bill Vallicella, “The Decline of the West: How Long Can We Last?,” Maverick Philosopher, January 21, 2014
Adam Garfinkle, “Obama’s Middle East Recessional” in four parts (here, here, here, here), The American Interest, January 21, 2014
Victor Davis Hanson, “Obama’s Recessional,” RealClearPolitics, January 22, 2014
Elise Cooper, “Barack Obama’s Foreign Policy: An Utter Failure,” American Thinker, January 26, 2014
Dan Roberts, “White House Warns Obama Ready to ‘Bypass’ Congress on 2014 Agenda,” The Guardian, January 26, 2014
Alexander Bolton, “Cruz: Putin Plays Chess, Obama Plays Checkers on Foreign Policy,” The Hill, January 28, 2014
Stephanie Simon, “Obama’s Power Play,” Politico, January 31, 2014
Tom Blumer, “Is It Over and We Just Don’t Know It? Have We Lost Our Founders’ Government?,” PJ Media, February 10, 2014
Victor Davis Hanson, “An Orwellian Nation of Obamathink,” Jewish World Review, February 13, 2014
Angelo M. Codevilla, “Do We Deserve the Constitution of 2014?,” Library of Law and Liberty, February 16, 2014
Richard Winchester, “Left-Wing Totalitarianism in America,” American Thinker, February 17, 2014

Related posts:
The Near-Victory of Communism
Tocqueville’s Prescience
The Left
Our Enemy, the State
“Intellectuals and Society”: A Review
The Left’s Agenda
Rating America’s Wars
Transnationalism and National Defense
The Left and Its Delusions
The Destruction of Society in the Name of “Society”
September 20, 2001: Hillary Clinton Signals the End of “Unity”
The War on Terror, As It Should Have Been Fought
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Well-Founded Pessimism
Defense as an Investment in Liberty and Prosperity
Liberty and Society
Tolerance on the Left
America: Past, Present, and Future
The Barbarians within and the State of the Union
Estimating the Rahn Curve: Or, How Government Spending Inhibits Economic Growth
America’s Financial Crisis Is Now
The World Turned Upside Down
“We the People” and Big Government
The Culture War
Defense Spending: One More Time
Parsing Political Philosophy (II)
__________
* A mockery of the words of Robert Goodloe Harper, who as a member of the U.S. House of Representatives in 1797, said “Millions for defense, but not one cent for tribute.” The remark was occasioned by a demand from France for tribute (a bribe) in exchange for the release of American ships that had been seized by the French.

Speaking in Foreign Tongues

Have you wondered why it’s so hard to learn to speak a foreign language, especially as one gets older? (“Older” includes persons of high-school and college age, as opposed to toddlers.) I have some thoughts on the matter, which I’ll get to after a relevant detour into a pair of well-known English and American accents.

You’re probably familiar with the “posh” English accent, also known as Received Pronunciation (RP):

RP is defined in the Concise Oxford English Dictionary as “the standard accent of English as spoken in the south of England,” although it can be heard from native speakers throughout England and Wales…. Although there is nothing intrinsic about RP that marks it as superior to any other variety, sociolinguistic factors have given Received Pronunciation particular prestige in parts of Britain. It has thus been the accent of those with power, money and influence since the early to mid 20th century….

The modern style of RP is an accent often taught to non-native speakers learning British English…. RP is used as the standard for English in most books on general phonology and phonetics, and is represented in the pronunciation schemes of most dictionaries published in the United Kingdom.

[A] notable British phonetician has identified the following people as RP speakers:

(More here about RP.)

If you’re unfamiliar with the speech of the Royal Family, etc., think of the Crawleys of Downton Abbey —  Elizabeth McGovern’s character excepted, of course.

RP has an American equivalent, General American (GA):

The General American accent is most closely related to a generalized Midwestern accent and is spoken particularly by many newscasters. It is thought to have evolved from the English spoken by colonials in the Mid-Atlantic states, evolved and moved west. Walter Cronkite is a good example of a broadcaster using this accent. This has led the accent to sometimes be referred to as a “newscaster accent” or “television English”. General American is sometimes promoted as preferable to other regional accents. In the United States, classes promising “accent reduction”, “accent modification” and “accent neutralization” generally attempt to teach speech patterns similar to this accent…. General American is also the accent typically taught to people learning English as a second language in the United States, as well as outside the country to anyone who wishes to learn “American English.”

Where does GA come from?

The Telsur Project … examines a number of phonetic properties by which regional accents of the U.S. may be identified. The area with Midwestern regional properties is indicated on the map: eastern Nebraska (including Omaha and Lincoln); northwestern, southern, and central Iowa (including Des Moines, Sioux City and the Iowa-side Quad Cities), with an adjacent narrow strip of northern Missouri; and western Illinois (including Peoria and the Illinois-side Quad Cities. Notably, this section of Illinois does not include the Chicago area).

Note that GA doesn’t encompass the entire Midwest, which contains a variety of distinct accents, even though most Midwesterners seem to believe (wrongly) that they’re accent-free. For example, the Chicago accent, as I’ve heard it, has a “big city” shading — a more “aggressive” sound than the softer tones one associates with the rural and semi-urban areas of the Midwest. Many Chicagoans (e.g., the late Mayor Richard Daley) have been known to substitute “dese” and “dem” for “these” and “them.” The accents of the Upper Midwest — Michigan, Wisconsin, and Minnesota — are also distinctive and easily mocked. (Think of the Minnesotan played by Frances McDormand in Fargo.)

GA, of course, sounds nothing like RP, nor is it considered “posh.” But, like RP, it is easily understood by other speakers of English.

Why? I think it’s because GA is a straightforward way of speaking, absent the vocal gymnastics that accompany other regional accents; for example:

  • The “a” sound is “ay”; it’s not drawn out into a nasal whine (“a-a-uh”), as it is in Upstate New York and parts of the Upper Midwest.
  • The letter “r” is pronounced “are,” not only at the beginning of a word but also in the middle and at the end. Thus words that end in “r” (or with the “r” sound) are instantly understandable to any American, even those who say “ah” for “are,” “heah” for “hear” and “here,” “waw” and “wo-ah” for “war,” or “Cuber” for “Cuba” — to give but a few of many possible examples. (It doesn’t work the other way around. Years ago, before I became “worldly,” I had to ask a native of New York City to repeat “Clahk Milluh” three times before I — a native Midwesterner — understood that he was referring to a person named Clark Miller.)
  • The “t” is pronounced, except in words where it has long been suppressed (e.g., “often” = “offen”). Thus GA speakers say “plentiful,” not the “pleniful” of some Eastern accents.
  • The long “i” is “eye,” not the “ah” of “rahfle” (rifle) and “tahr” (tire) that’s heard in some parts of the South.
  • Also in contrast to many Southern accents, words are pronounced crisply, not stretched; for example: “building” is “bil-ding,” not “bee-i-l-ding”; “fish” is just that, not “fee-ush.”

I could go on and on. But what I’m leading up to is this: It’s true that GA is an accent, but the simplicity of GA makes it easy to mimic. (That’s why it’s taught to non-English speakers.) British actors who try to “do” an American accent usually succeed only when they “do” GA. (British actors’ imitations of Southern accents usually seem hilarious to Southerners, as do most imitations essayed by non-Southern American actors.)

What does all of this have to do with the difficulty of mastering the pronunciation of a foreign language? My amateur guess is that two things keep most people (older than toddlers) from learning how to speak a foreign language as its native speakers do: (1) embarrassment and (2) embedded habits of pronunciation. The two impediments are related. If you’ve grown up pronouncing vowels, consonants, diphthongs and other particles of speech in certain ways, you’re likely to feel self-conscious about pronouncing them in new ways, especially in the presence of your peers. And you’ll find it hard to pronounce particles of speech in new ways if doing so requires you to make sounds that you’re unaccustomed to making.

If, for example, you want to pronounce the “r” in garçon (boy or young man) as a French person does, you have to throw in a silent gargle, so that the “r” is almost suppressed. Then you get to “çon,” which starts out somewhere between “sone” and “sawn,” but ends with an open, nasal sound — the “n” is hinted at but not enunciated. Further, you have to resist the temptation to emphasize the first syllable, and put equal emphasis on both syllables.

See how hard it is? And that’s just a small sample of the vocal gymnastics required to speak one foreign tongue passably well. (Also required: some mastery of vocabulary and grammar, the latter of which is often more complex than in English.)

So, even if an American gets over the embarrassment of making “weird” sounds, he or she still faces the obstacle of making those sounds correctly. The same goes for non-English speakers who want to master British or American English. But if they aim to master RP or GA, their task is made easier by the relative simplicity of those two accents.