Not-So-Random Thoughts (XVII)

Links to the other posts in this occasional series may be found at “Favorite Posts,” just below the list of topics.

*     *     *

Victor Davis Hanson offers “The More Things Change, the More They Actually Don’t.” It echoes what I say in “The Fallacy of Human Progress.” Hanson opens with this:

In today’s technically sophisticated and globally connected world, we assume life has been completely reinvented. In truth, it has not changed all that much.
And he proceeds to illustrate his point (and mine).

*     *     *

Dr. James Thompson, an English psychologist, often blogs about intelligence. Here are some links from last year that I’ve been hoarding:

“Intelligence: All That Matters” (a review of a book by Stuart Ritchie)

“GCSE Genes” (commentary about research showing the strong relationship between genes and academic achievement)

“GWAS Hits and Country IQ” (commentary about preliminary research into the alleles related to intelligence)

Also, from the International Journal of Epidemiology, comes “The Association between Intelligence and Lifespan Is Mostly Genetic.”

All of this is by way of reminding you of my many posts about intelligence, which are sprinkled throughout this list and this one.

*     *     *

How bad is it? This bad:

Thomas Lifson, “Mark Levin’s Plunder and Deceit”

Arthur Milikh, “Alexis de Tocqueville Predicted the Tyranny of the Majority in Our Modern World”

Steve McCann, “Obama and Neo-fascist America”

Related reading: “Fascism, Pots, and Kettles,” by me, of course.

There is also Adam Freedman’s book, A Less than Perfect Union: The Case for States’ Rights. States’ rights can be perfected by secession, and I make the legal case for it in “A Resolution of Secession.”

*     *     *

In a different vein, there’s Francis Menton’s series about anthropogenic global warming. The latest installment is “The Greatest Scientific Fraud of All Time — Part VIII.” For my take on the subject, start with “AGW in Austin?” and check out the readings and posts listed at the bottom.

On Writing: Part Four

Part One gives excerpts of W. Somerset Maugham’s candid insights about the craft of writing. Part Two gives my advice to writers of non-fiction works. Part Three recommends some writings about writing, some writers to emulate, and a short list of reference works. This part delivers some sermons about practices to follow if you wish to communicate effectively, be taken seriously, and not be thought of as a semi-literate, self-indulgent, faddish dilettante. (In Part Three, I promised sermonettes, but they grew into sermons as I wrote.)

The first section, “Stasis, Progress, Regress, and Language,” comes around to a defense of prescriptivism in language. The second section, “Illegitimi Non Carborundum Lingo” (mock-Latin for “Don’t Let the Bastards Wear Down the Language”), counsels steadfastness in the face of political correctness and various sloppy usages.

STASIS, PROGRESS, REGRESS, AND LANGUAGE

To every thing there is a season, and a time to every purpose under the heaven….

Ecclesiastes 3:1 (King James Bible)

Nothing man-made is permanent; consider, for example, the list of empires here. In spite of the history of empires — and other institutions and artifacts of human endeavor — most people seem to believe that the future will be much like the present. And if the present embodies progress of some kind, most people seem to expect that progress to continue.

Things do not simply go on as they have been without the expenditure of requisite effort. Take the Constitution’s broken promises of liberty, about which I have written so much. Take the resurgence of Russia as a rival for international influence. This has been in the works for about 20 years, but didn’t register on most Americans until the recent Crimean crisis and related events in Ukraine. What did Americans expect? That the U.S. could remain the unchallenged superpower while reducing its armed forces to the point that they were strained by relatively small wars in Afghanistan and Iraq? That Vladimir Putin would be cowed by an American president who had so blatantly advertised his hopey-changey attitude toward Iran and Islam, while snubbing traditional allies like Poland and Israel?

Turning to naïveté about progress, I offer Steven Pinker’s fatuous The Better Angels of Our Nature: Why Violence Has Declined. Pinker tries to show that human beings are becoming kinder and gentler. I have much to say in another post about Pinker’s thesis. One of my sources is Robert Epstein’s review of Pinker’s book. This passage is especially apt:

The biggest problem with the book … is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Less important in the grand scheme, but no less wrong-headed, is the idea of limitless progress in the arts. To quote myself:

In the early decades of the twentieth century, the visual, auditory, and verbal arts became an “inside game.” Painters, sculptors, composers (of “serious” music), choreographers, and writers of fiction began to create works not for the enjoyment of audiences but for the sake of exploring “new” forms. Given that the various arts had been perfected by the early 1900s, the only way to explore “new” forms was to regress toward primitive ones — toward a lack of structure…. Aside from its baneful influence on many true artists, the regression toward the primitive has enabled persons of inferior talent (and none) to call themselves “artists.” Thus modernism is banal when it is not ugly.

Painters, sculptors, etc., have been encouraged in their efforts to explore “new” forms by critics, by advocates of change and rebellion for its own sake (e.g., “liberals” and “bohemians”), and by undiscriminating patrons, anxious to be au courant. Critics have a special stake in modernism because they are needed to “explain” its incomprehensibility and ugliness to the unwashed.

The unwashed have nevertheless rebelled against modernism, and so its practitioners and defenders have responded with condescension, one form of which is the challenge to be “open minded” (i.e., to tolerate the second-rate and nonsensical). A good example of condescension is heard on Composers Datebook, a syndicated feature that runs on some NPR stations. Every Composers Datebook program closes by “reminding you that all music was once new.” As if to lump Arnold Schoenberg and John Cage with Johann Sebastian Bach and Ludwig van Beethoven.

All music, painting, sculpture, dance, and literature was once new, but not all of it is good. Much (most?) of what has been produced since 1900 is inferior, self-indulgent crap.

And most of the ticket-buying public knows it. Take opera, for example. A recent article purports to show that “Opera is dead, in one chart” (Christopher Ingraham, The Washington Post, October 31, 2014). Here’s the writer’s interpretation of the chart:

The chart shows that opera ceased to exist as a contemporary art form roughly around 1970. It’s from a blog post by composer and programmer Suby Raman, who scraped the Met’s public database of performances going back to the 19th century. As Raman notes, 50 years is an insanely low bar for measuring the “contemporary” – in pop music terms, it would be like considering The Beatles’ I Wanna Hold Your Hand as cutting-edge.

Back at the beginning of the 20th century, anywhere from 60 to 80 percent of Met performances were of operas composed some time in the 50 years prior. But since 1980, the share of contemporary performances has surpassed 10 percent only once.

Opera, as a genre, is essentially frozen in amber – Raman found that the median year of composition of pieces performed at the Met has always been right around 1870. In other words, the Met is essentially performing the exact same pieces now that it was 100 years ago….

Contrary to Ingraham, opera isn’t dead; for example, there are more than 220 active opera companies in the U.S. It’s just that there’s little demand for operatic works written after the late 1800s. Why? Because most opera-lovers don’t want to hear the strident, discordant, unmelodic trash that came later. Giacomo Puccini, who wrote melodic crowd-pleasers until his death in 1924, is an exception that proves the rule.
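
For readers curious about the mechanics behind Raman’s chart, the two statistics Ingraham cites (the share of performances whose operas were composed within the previous 50 years, and the median year of composition) are easy to compute once you have performance records. What follows is a minimal sketch in Python, using made-up sample records rather than the Met’s actual database; the record layout and the numbers are illustrative assumptions, not Raman’s data.

    # A minimal sketch of the two statistics discussed above, computed from
    # hypothetical performance records. Each record pairs the year an opera was
    # performed with the year it was composed. All values are illustrative.
    from statistics import median

    performances = [
        # (performance_year, composition_year): placeholder examples
        (1910, 1896), (1910, 1871), (1955, 1875), (1955, 1904),
        (2014, 1870), (2014, 1853), (2014, 1881), (2014, 2011),
    ]

    def contemporary_share(records, window=50):
        """Fraction of performances whose opera was composed within `window` years."""
        recent = [r for r in records if r[0] - r[1] <= window]
        return len(recent) / len(records)

    def median_composition_year(records):
        """Median composition year across all performances."""
        return median(comp for _, comp in records)

    print(f"share 'contemporary' (50-year window): {contemporary_share(performances):.0%}")
    print(f"median year of composition: {median_composition_year(performances)}")

Run against the real archive, the first number would correspond to what the chart tracks over time, and the second to Raman’s roughly-1870 median.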

It occurred to me recently that language is in the same parlous state as the arts. Written and spoken English improved steadily as Americans became more educated — and as long as that education included courses which prescribed rules of grammar and usage. By “improved” I mean that communication became easier and more effective; specifically:

  • A larger fraction of Americans followed the same rules in formal communications (e.g., speeches, business documents, newspapers, magazines, and books).
  • Movies and radio and TV shows also tended to follow those rules, thereby reaching vast numbers of Americans who did little or no serious reading.
  • There was a “trickle down” effect on Americans’ written and spoken discourse, especially where it involved mere acquaintances or strangers. Standard American English became a kind of lingua franca, which enabled the speaker or writer to be understood and taken seriously.

I call that progress.

There is, however, an (unfortunately) influential attitude toward language known as descriptivism. It is distinct from (and often opposed to) rule-setting (prescriptivism). Consider this passage from the first chapter of an online text:

Prescriptive grammar is based on the idea that there is a single right way to do things. When there is more than one way of saying something, prescriptive grammar is generally concerned with declaring one (and only one) of the variants to be correct. The favored variant is usually justified as being better (whether more logical, more euphonious, or more desirable on some other grounds) than the deprecated variant. In the same situation of linguistic variability, descriptive grammar is content simply to document the variants – without passing judgment on them.

This misrepresents the role of prescriptive grammar. It’s widely understood that there’s more than one way of saying something, and more than one way that’s understandable to others. The rules of prescriptive grammar, when followed, improve understanding in two ways. First, by avoiding utterances that would be incomprehensible or, at least, very hard to understand. Second, by ensuring that utterances aren’t simply ignored or rejected out of hand because their form indicates that the writer or speaker is either ill-educated or stupid.

What, then, is the role of descriptive grammar? The authors offer this:

[R]ules of descriptive grammar have the status of scientific observations, and they are intended as insightful generalizations about the way that speakers use language in fact, rather than about the way that they ought to use it. Descriptive rules are more general and more fundamental than prescriptive rules in the sense that all sentences of a language are formed in accordance with them, not just a more or less arbitrary subset of shibboleth sentences. A useful way to think about the descriptive rules of a language … is that they produce, or generate, all the sentences of a language. The prescriptive rules can then be thought of as filtering out some (relatively minute) portion of the entire output of the descriptive rules as socially unacceptable.

Let’s consider the assertion that descriptive rules produce all the sentences of a language. What does that mean? It seems to mean that the actual rules of a language can be inferred by examining sentences uttered or written by users of the language. But which users? Native users? Adults? Adults who have graduated from high school? Users with IQs of at least 85?

Pushing on, let’s take a closer look at descriptive rules and their utility. The authors say that

we adopt a resolutely descriptive perspective concerning language. In particular, when linguists say that a sentence is grammatical, we don’t mean that it is correct from a prescriptive point of view, but rather that it conforms to descriptive rules….

The descriptive rules amount to this: They conform to practices that speakers and writers actually use in an attempt to convey ideas, whether or not the practices state the ideas clearly and concisely. Thus the authors approve of these sentences because they’re of a type that might well occur in colloquial speech:

Over there is the guy who I went to the party with.

Over there is the guy with whom I went to the party.

(Both are clumsy ways of saying “I went to the party with that person.”)

Bill and me went to the store.

(“Bill and I went to the store.” or “Bill went to the store with me.” or “I went to the store with Bill.” Aha! Three ways to say it correctly, not just one way.)

But the authors label the following sentences as ungrammatical because they don’t comport with colloquial speech:

Over there is guy the who I went to party the with.

Over there is the who I went to the party with guy.

Bill and me the store to went.

In other words, the authors accept as grammatical anything that a speaker or writer is likely to say, according to the “rules” that can be inferred from colloquial speech and writing. It follows that whatever is is right, even “Bill and me to the store went” or “Went to the store Bill and me,” which aren’t far-fetched variations on “Bill and me went to the store.” (Yoda-isms they read like.) They’re understandable, but only with effort. And further evolution would obliterate their meaning.

The fact is that the authors of the online text — like descriptivists generally — don’t follow their own anarchistic prescription. Wilson Follett puts it this way in Modern American Usage: A Guide:

It is … one of the striking features of the libertarian position [with respect to language] that it preaches an unbuttoned grammar in a prose style that is fashioned with the utmost grammatical rigor. H.L. Mencken’s two thousand pages on the vagaries of the American language are written in the fastidious syntax of a precisian. If we go by what these men do instead of by what they say, we conclude that they all believe in conventional grammar, practice it against their own preaching, and continue to cultivate the elegance they despise in theory….

[T]he artist and the user of language for practical ends share an obligation to preserve against confusion and dissipation the powers that over the centuries the mother tongue has acquired. It is a duty to maintain the continuity of speech that makes the thought of our ancestors easily understood, to conquer Babel every day against the illiterate and the heedless, and to resist the pernicious and lulling dogma that in language … whatever is is right and doing nothing is for the best (pp. 30-1).

Follett also states the true purpose of prescriptivism, which isn’t to prescribe rules for their own sake:

[This book] accept[s] the long-established conventions of prescriptive grammar … on the theory that freedom from confusion is more desirable than freedom from rule…. (op. cit., p. 243).

E.B. White puts it more colorfully in his introduction to The Elements of Style. Writing about William Strunk Jr., author of the original version of the book, White says:

All through The Elements of Style one finds evidence of the author’s deep sympathy for the reader. Will felt that the reader was in serious trouble most of the time, a man floundering in a swamp, and that it was the duty of anyone attempting to write English to drain this swamp quickly and get his man up on dry ground, or at least throw him a rope. In revising the text, I have tried to hold steadily in mind this belief of his, this concern for the bewildered reader (p. xvi, Third Edition).

Descriptivists would let readers founder in the swamp of incomprehensibility. If descriptivists had their way — or what they claim to be their way — American English would, like the arts, recede into formless primitivism.

Eternal vigilance about language is the price of comprehensibility.

ILLEGITIMI NON CARBORUNDUM LINGO

The vigilant are sorely tried these days. What follows are several restrained rants about some practices that should be resisted and repudiated.

Eliminate Filler Words

When I was a child, most parents and all teachers promptly ordered children to desist from saying “uh” between words. “Uh” was then the filler word favored by children, adolescents, and even adults. The resort to “uh” meant that the speaker was stalling because he had opened his mouth without having given enough thought to what he meant to say.

Next came “you know.” It has been displaced, in the main, by “like,” where it hasn’t been joined to “like” in the formation “like, you know.”

The need of a filler word (or phrase) seems ineradicable. Too many people insist on opening their mouths before thinking about what they’re about to say. Given that, I urge Americans in need of a filler word to use “uh” and eschew “like” and “like, you know.” “Uh” is far less distracting and irritating than the rat-a-tat of “like-like-like-like.”

Of course, it may be impossible to return to “uh.” Its brevity may not give the users of “like” enough time to organize their TV-smart-phone-video-game-addled brains and deliver coherent speech.

In any event, speech influences writing. Sloppy speech begets sloppy writing, as I know too well. I have spent the past 50 years of my life trying to undo habits of speech acquired in my childhood and adolescence — habits that still creep into my writing if I drop my guard.

Don’t Abuse Words

How am I supposed to know what you mean if you abuse perfectly good words? Here I discuss four prominent examples of abuse.

Anniversary

Too many times in recent years I’ve heard or read something like this: “Sally and me are celebrating our one-year anniversary.” The “me” is bad enough; “one-year anniversary” (or any variation of it) is truly egregious.

The word “anniversary” means “the annually recurring date of a past event.” To write or say “x-year anniversary” is redundant as well as graceless. Just write or say “first anniversary,” “two-hundred fiftieth anniversary,” etc., as befits the occasion.

To write or say “x-month anniversary” is nonsensical. Something that happened less than a year ago can’t have an anniversary. What is meant is that such-and-such happened “x” months ago. Just say it.

Data

A person who writes or says “data is” is at best an ignoramus and at worst a Philistine.

Language, above all else, should be used to make one’s thoughts clear to others. The pairing of a plural noun and a singular verb form is distracting, if not confusing. Even though datum is seldom used by Americans, it remains the singular foundation of data, which is the plural form. Data, therefore, never “is”; data always “are.”

H.W. Fowler says:

Latin plurals sometimes become singular English words (e.g., agenda, stamina) and data is often so treated in U.S.; in Britain this is still considered a solecism… (A Dictionary of Modern English Usage, Second Edition, p. 119).

But Wilson Follett is better on the subject:

Those who treat data as a singular doubtless think of it as a generic noun, comparable to knowledge or information… [TEA: a generous interpretation]. The rationale of agenda as a singular is its use to mean a collective program of action, rather than separate items to be acted on. But there is as yet no obligation to change the number of data under the influence of error mixed with innovation (op. cit., pp. 130-1).

Hopefully and Its Brethren

Mark Liberman of Language Log discusses

the AP Style Guide’s decision to allow the use of hopefully as a sentence adverb, announced on Twitter at 6:22 a.m. on 17 April 2012:

Hopefully, you will appreciate this style update, announced at ‪#aces2012‬. We now support the modern usage of hopefully: it’s hoped, we hope.

Liberman, who is a descriptivist, defends AP’s egregious decision. His defense consists mainly of citing noted writers who have used “hopefully” where they meant “it is to be hoped.” I suppose that if those same noted writers had chosen to endanger others by driving on the wrong side of the road, Liberman would praise them for their “enlightened” approach to driving.

Geoff Nunberg also defends “hopefully” in “The Word ‘Hopefully’ Is Here to Stay, Hopefully,” which appears at npr.org. Nunberg (or the headline writer) may be right in saying that “hopefully” is here to stay. But that does not excuse the widespread use of the word in ways that are imprecise and meaningless.

The crux of Nunberg’s defense is that “hopefully” conveys a nuance that “language snobs” (like me) are unable to grasp:

Some critics object that [“hopefully” is] a free-floating modifier (a Flying Dutchman adverb, James Kirkpatrick called it) that isn’t attached to the verb of the sentence but rather describes the speaker’s attitude. But floating modifiers are mother’s milk to English grammar — nobody objects to using “sadly,” “mercifully,” “thankfully” or “frankly” in exactly the same way.

Or people complain that “hopefully” doesn’t specifically indicate who’s doing the hoping. But neither does “It is to be hoped that,” which is the phrase that critics like Wilson Follett offer as a “natural” substitute. That’s what usage fetishism can drive you to — you cross out an adverb and replace it with a six-word impersonal passive construction, and you tell yourself you’ve improved your writing.

But the real problem with these objections is their tone-deafness. People get so worked up about the word that they can’t hear what it’s really saying. The fact is that “I hope that” doesn’t mean the same thing that “hopefully” does. The first just expresses a desire; the second makes a hopeful prediction. I’m comfortable saying, “I hope I survive to 105” — it isn’t likely, but hey, you never know. But it would be pushing my luck to say, “Hopefully, I’ll survive to 105,” since that suggests it might actually be in the cards.

Floating modifiers may be common in English, but that does not excuse them. Given Nunberg’s evident attachment to them, I am unsurprised by his assertion that “nobody objects to using ‘sadly,’ ‘mercifully,’ ‘thankfully’ or ‘frankly’ in exactly the same way.”

Nobody, Mr. Nunberg? Hardly. Anyone who cares about clarity and precision in the expression of ideas will object to such usages. A good editor would rewrite any sentence that begins with a free-floating modifier — no matter which one of them it is.

Nunberg’s defense against such rewriting is that Wilson Follett offers “It is to be hoped that” as a cumbersome, wordy substitute for “hopefully.” I assume that Nunberg refers to Follett’s discussion of “hopefully” in Modern American Usage. If so, Nunberg once again proves himself an adherent of imprecision, for this is what Follett actually says about “hopefully”:

The German language is blessed with an adverb, hoffentlich, that affirms the desirability of an occurrence that may or may not come to pass. It is generally to be translated by some such periphrasis as it is to be hoped that; but hack translators and persons more at home in German than in English persistently render it as hopefully. Now, hopefully and hopeful can indeed apply to either persons or affairs. A man in difficulty is hopeful of the outcome, or a situation looks hopeful; we face the future hopefully, or events develop hopefully. What hopefully refuses to convey in idiomatic English is the desirability of the hoped-for event. College, we read, is a place for the development of habits of inquiry, the acquisition of knowledge and, hopefully, the establishment of foundations of wisdom. Such a hopefully is un-English and eccentric; it is to be hoped is the natural way to express what is meant. The underlying mentality is the same—and, hopefully, the prescription for cure is the same (let us hope) / With its enlarged circulation–and hopefully also increased readership–[a periodical] will seek to … (we hope) / Party leaders had looked confidently to Senator L. to win . . . by a wide margin and thus, hopefully, to lead the way to victory for. . . the Presidential ticket (they hoped) / Unfortunately–or hopefully, as you prefer it–it is none too soon to formulate the problems as swiftly as we can foresee them. In the last example, hopefully needs replacing by one of the true antonyms of unfortunately–e.g. providentially.

The special badness of hopefully is not alone that it strains the sense of -ly to the breaking point, but that it appeals to speakers and writers who do not think about what they are saying and pick up VOGUE WORDS [another entry in Modern American Usage] by reflex action. This peculiar charm of hopefully accounts for its tiresome frequency. How readily the rotten apple will corrupt the barrel is seen in the similar use of transferred meaning in other adverbs denoting an attitude of mind. For example: Sorrowfully (regrettably), the officials charged with wording such propositions for ballot presentation don’t say it that way / the “suicide needle” which–thankfully–he didn’t see fit to use (we are thankful to say). Adverbs so used lack point of view; they fail to tell us who does the hoping, the sorrowing, or the being thankful. Writers who feel the insistent need of an English equivalent for hoffentlich might try to popularize hopingly, but must attach it to a subject capable of hoping (op. cit., pp. 178-9).

Follett, contrary to Nunberg’s assertion, does not offer “It is to be hoped that” as a substitute for “hopefully,” which would “cross out an adverb and replace it with a six-word impersonal passive construction.” Follett gives “it is to be hoped that” as the sense of “hopefully.” But, as the preceding quotation attests, Follett is able to replace “hopefully” (where it is misused) with a few short words that take no longer to write or say than “hopefully,” and which convey the writer’s or speaker’s intended meaning more clearly. And if it does take a few extra words to say something clearly, why begrudge those words?

What about the other floating modifiers — such as “sadly,” “mercifully,” “thankfully” and “frankly” — which Nunberg defends with much passion and no logic? Follett addresses those others in the last paragraph quoted above, but he does not dispose of them properly. For example, I would not simply substitute “regrettably” for “sorrowfully”; neither is adequate. What is wanted is something like this: “The officials who write propositions for ballots should not have said … , which is misleading (vague/ambiguous).” More words? Yes, but so what? (See above.)

In any event, a writer or speaker who is serious about expressing himself clearly to an audience will never say things like “Sadly (regrettably), the old man died,” when he means either “I am (we are/they are/everyone who knew him is) saddened by the old man’s dying,” or (less probably) “The old man grew sad as he died” or “The old man regretted dying.” I leave “mercifully,” “thankfully,” “frankly” and the rest of the overused “-ly” words as an exercise for the reader.

The aims of a writer or speaker ought to be clarity and precision, not a stubborn, pseudo-logical insistence on using a word or phrase merely because it is in vogue or (more likely) because it irritates so-called language snobs. I doubt that even the pseudo-logical “language slobs” of Nunberg’s ilk condone “like” and “you know” as interjections. But, by Nunberg’s “logic,” those interjections should be condoned — nay, encouraged — because “everyone” knows what someone who uses them is “really saying,” namely, “I am too stupid or lazy to express myself clearly and precisely.”

Literally

This is from Dana Coleman’s article “According to the Dictionary, ‘Literally’ Also Now Means ‘Figuratively’” (Salon, August 22, 2013):

Literally, of course, means something that is actually true: “Literally every pair of shoes I own was ruined when my apartment flooded.”

When we use words not in their normal literal meaning but in a way that makes a description more impressive or interesting, the correct word, of course, is “figuratively.”

But people increasingly use “literally” to give extreme emphasis to a statement that cannot be true, as in: “My head literally exploded when I read Merriam-Webster, among others, is now sanctioning the use of literally to mean just the opposite.”

Indeed, Ragan’s PR Daily reported last week that Webster, Macmillan Dictionary and Google have added this latter informal use of “literally” as part of the word’s official definition. The Cambridge Dictionary has also jumped on board….

Webster’s first definition of literally is, “in a literal sense or manner; actually.” Its second definition is, “in effect; virtually.” In addressing this seeming contradiction, its authors comment:

“Since some people take sense 2 to be the opposite of sense 1, it has been frequently criticized as a misuse. Instead, the use is pure hyperbole intended to gain emphasis, but it often appears in contexts where no additional emphasis is necessary.”…

The problem is that a lot of people use “literally” when they mean “figuratively” because they don’t know better. It’s literally* incomprehensible to me that the editors of dictionaries would suborn linguistic anarchy. Hopefully,** they’ll rethink their rashness.
_________
* “Literally” is used correctly, though it’s superfluous here.
** “Hopefully” is used incorrectly, but in the spirit of the times.

Punctuate Properly

I can’t compete with Lynne Truss’s Eats, Shoots & Leaves: The Zero-Tolerance Approach to Punctuation (discussed in Part Three), so I won’t try. Just read it and heed it.

But I must address the use of the hyphen in compound adjectives, and the serial comma.

Regarding the hyphen, David Bernstein of The Volokh Conspiracy writes:

I frequently have disputes with law review editors over the use of dashes. Unlike co-conspirator Eugene, I’m not a grammatical expert, or even someone who has much of an interest in the subject.

But I do feel strongly that I shouldn’t use a dash between words that constitute a phrase, as in “hired gun problem”, “forensic science system”, or “toxic tort litigation.” Law review editors seem to generally want to change these to “hired-gun problem”, “forensic-science system”, and “toxic-tort litigation.” My view is that “hired” doesn’t modify “gun”; rather “hired gun” is a self-contained phrase. The same with “forensic science” and “toxic tort.”

Most of the commenters are right to advise Bernstein that the “dashes” — he means hyphens — are necessary. Why? To avoid confusion as to what is modifying the noun “problem.”

In “hired gun,” for example, “hired” (adjective) modifies “gun” (noun, meaning “gunslinger” or the like). But in “hired-gun problem,” “hired-gun” is a compound adjective which requires both of its parts to modify “problem.” It’s not a “hired problem” or a “gun problem,” it’s a “hired-gun problem.” The function of the hyphen is to indicate that “hired” and “gun,” taken separately, are meaningless as modifiers of “problem,” that is, to ensure that the meaning of the adjective-noun phrase is not misread.

A hyphen isn’t always strictly necessary in such instances, but using it consistently avoids confusion and the possibility of misinterpretation.

The consistent use of the hyphen to form a compound adjective has a counterpart in the consistent use of the serial comma, which is the comma that precedes the last item in a list of three or more items (e.g., the red, white, and blue). Newspapers (among other sinners) eschew the serial comma for reasons too arcane to pursue here. Thoughtful counselors advise its use. (See, for example, Follett at pp. 422-3.) Why? Because the serial comma, like the hyphen in a compound adjective, averts ambiguity. It isn’t always necessary, but if it is used consistently, ambiguity can be avoided. (Here’s a great example, from the Wikipedia article linked to in the first sentence of this paragraph: “To my parents, Ayn Rand and God.” The writer means, of course, “To my parents, Ayn Rand, and God.”)

A little punctuation goes a long way.

Stand Fast against Political Correctness

As a result of political correctness, some words and phrases have gone out of favor, needlessly. Others are cluttering the language, needlessly. Political correctness manifests itself in euphemisms, verboten words, and what I call gender preciousness.

Euphemisms

These are much-favored by persons of the left, who seem to have an aversion to reality. Thus, for example:

  • “Crippled” became “handicapped,” which became “disabled” and then “differently abled” or “something-challenged.”
  • “Stupid” became “learning disabled,” which became “special needs” (a euphemistic category that houses more than the stupid).
  • “Poor” became “underprivileged,” which became “economically disadvantaged,” which became “entitled” (to other people’s money), in fact if not in word.
  • Colored persons became Negroes, who became blacks, then African-Americans, and now (often) persons of color.

How these linguistic contortions have helped the crippled, stupid, poor, and colored is a mystery to me. Tact is admirable, but euphemisms aren’t tactful. They’re insulting because they’re condescending.

Verboten Words

The list is long; see this and this, for example. Words become verboten for the same reason that euphemisms arise: to avoid giving offense, even where offense wouldn’t or shouldn’t be taken.

David Bernstein, writing at TCS Daily several years ago, recounted some tales about political correctness. This one struck close to home:

One especially merit-less [hostile work environment] claim that led to a six-figure verdict involved Allen Fruge, a white Department of Energy employee based in Texas. Fruge unwittingly spawned a harassment suit when he followed up a southeast Texas training session with a bit of self-deprecating humor. He sent several of his colleagues who had attended the session with him gag certificates anointing each of them as an honorary Coon Ass — usually spelled coonass — a mildly derogatory slang term for a Cajun. The certificate stated that “[y]ou are to sing, dance, and tell jokes and eat boudin, cracklins, gumbo, crawfish etouffe and just about anything else.” The joke stemmed from the fact that southeast Texas, the training session location, has a large Cajun population, including Fruge himself.

An African American recipient of the certificate, Sherry Reid, chief of the Nuclear and Fossil Branch of the DOE in Washington, D.C., apparently missed the joke and complained to her supervisors that Fruge had called her a coon. Fruge sent Reid a formal (and humble) letter of apology for the inadvertent offense, and explained what Coon Ass actually meant. Reid nevertheless remained convinced that Coon Ass was a racial pejorative, and demanded that Fruge be fired. DOE supervisors declined to fire Fruge, but they did send him to diversity training. They also reminded Reid that the certificate had been meant as a joke, that Fruge had meant no offense, that Coon Ass was slang for Cajun, and that Fruge sent the certificates to people of various races and ethnicities, so he clearly was not targeting African Americans. Reid nevertheless sued the DOE, claiming that she had been subjected to a racial epithet that had created a hostile environment, a situation made worse by the DOE’s failure to fire Fruge.

Reid’s case was seemingly frivolous. The linguistics expert her attorney hired was unable to present evidence that Coon Ass meant anything but Cajun, or that the phrase had racist origins, and Reid presented no evidence that Fruge had any discriminatory intent when he sent the certificate to her. Moreover, even if Coon Ass had been a racial epithet, a single instance of being given a joke certificate, even one containing a racial epithet, by a non-supervisory colleague who works 1,200 miles away does not seem to remotely satisfy the legal requirement that harassment must be severe and pervasive for it to create hostile environment liability. Nevertheless, a federal district court allowed the case to go to trial, and the jury awarded Reid $120,000, plus another $100,000 in attorneys’ fees. The DOE settled the case before its appeal could be heard for a sum very close to the jury award.

I had a similar though less costly experience some years ago, when I was chief financial and administrative officer of a defense think-tank. In the course of discussing the company’s budget during a meeting with employees from across the company, I uttered “niggardly” (meaning stingy or penny-pinching). The next day a fellow vice president informed me that some of the black employees from her division had been offended by “niggardly.” I suggested that she give her employees remedial training in English vocabulary. That should have been the verdict in the Reid case.

Gender Preciousness

It has become fashionable for academicians and pseudo-serious writers to use “she” where “he” long served as the generic (and sexless) reference to a singular third person. Here is an especially grating passage from an article by Oliver Cussen:

What is a historian of ideas to do? A pessimist would say she is faced with two options. She could continue to research the Enlightenment on its own terms, and wait for those who fight over its legacy—who are somehow confident in their definitions of what “it” was—to take notice. Or, as [Jonathan] Israel has done, she could pick a side, and mobilise an immense archive for the cause of liberal modernity or for the cause of its enemies. In other words, she could join Moses Herzog, with his letters that never get read and his questions that never get answered, or she could join Sandor Himmelstein and the loud, ignorant bastards (“The Trouble with the Enlightenment,” Prospect, May 5, 2013).

I don’t know about you, but I’m distracted by the use of the generic “she,” especially by a male. First, it’s not the norm (or wasn’t the norm until the thought police made it so). Thus my first reaction to reading it in place of “he” is to wonder who this “she” is, whereas the function of “he” as a stand-in for anyone (regardless of gender) was always well understood. Second, the usage is so obviously meant to mark the writer as “sensitive” and “right thinking” that it calls into question his sincerity and objectivity.

I could go on about the use of “he or she” in place of “he” or “she.” But it should be enough to call it what it is: verbal clutter.

Then there is “man,” which for ages was well understood (in the proper context) as referring to persons in general, not to male persons in particular. (“Mankind” merely adds a superfluous syllable.)

The short, serviceable “man” has been replaced, for the most part, by “humankind.” I am baffled by the need to replace one syllable with three. I am baffled further by the persistence of “man” — a sexist term — in the three-syllable substitute. But it gets worse when writers strain to avoid the solo use of “man” by resorting to “human beings” and the “human species.” These are longer than “humankind,” and both retain the accursed “man.”

Don’t Split Infinitives

Just don’t do it, regardless of the pleadings of descriptivists. Even Follett counsels the splitting of infinitives, when the occasion demands it. I part ways with Follett in this matter, and stand ready to be rebuked for it.

Consider the case of Eugene Volokh, a known grammatical relativist, who scoffs at “to increase dramatically” — as if “to dramatically increase” would be better. The meaning of “to increase dramatically” is clear. The only reason to write “to dramatically increase” would be to avoid the appearance of stuffiness; that is, to pander to the least cultivated of one’s readers.

Seeming unstuffy (i.e., without standards) is neither a necessary nor a sufficient reason to split an infinitive. The rule against splitting infinitives, like most other grammatical rules, serves the valid and useful purpose of preventing English from sliding yet further down the slippery slope of incomprehensibility.

If an unsplit infinitive makes a clause or sentence seem awkward, the clause or sentence should be recast to avoid the awkwardness. Better that than make an exception that leads to further exceptions — and thence to Babel.

A Dictionary of Modern English Usage (a.k.a. Fowler’s Modern English Usage) counsels splitting an infinitive where recasting doesn’t seem to work:

We admit that separation of to from its infinitive is not in itself desirable, and we shall not gratuitously say either ‘to mortally wound’ or ‘to mortally be wounded’…. We maintain, however, that a real [split infinitive], though not desirable in itself, is preferable to either of two things, to real ambiguity, and to patent artificiality…. We will split infinitives sooner than be ambiguous or artificial; more than that, we will freely admit that sufficient recasting will get rid of any [split infinitive] without involving either of those faults, and yet reserve to ourselves the right of deciding in each case whether recasting is worth while. Let us take an example: ‘In these circumstances, the Commission … has been feeling its way to modifications intended to better equip successful candidates for careers in India and at the same time to meet reasonable Indian demands.’… What then of recasting? ‘intended to make successful candidates fitter for’ is the best we can do if the exact sense is to be kept… (p. 581, Second Edition).

Good try, but not good enough. This would do: “In these circumstances, the Commission … has been considering modifications that would better equip successful candidates for careers in India and at the same time meet reasonable Indian demands.”

Enough said? I think so.

*     *     *

Some readers may conclude that I prefer stodginess to liveliness. That’s not true, as any discerning reader of this blog will know. I love new words and new ways of using words, and I try to engage readers while informing and persuading them. But I do those things within the expansive boundaries of prescriptive grammar and usage. Those boundaries will change with time, as they have in the past. But they should change only when change serves understanding, not when it serves the whims of illiterates and language anarchists.

The Fallacy of Human Progress

Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined is cited gleefully by leftists and cockeyed optimists as evidence that human beings, on the whole, are becoming kinder and gentler because of:

  • The Leviathan – The rise of the modern nation-state and judiciary “with a monopoly on the legitimate use of force,” which “can defuse the [individual] temptation of exploitative attack, inhibit the impulse for revenge, and circumvent…self-serving biases.”
  • Commerce – The rise of “technological progress [allowing] the exchange of goods and services over longer distances and larger groups of trading partners,” so that “other people become more valuable alive than dead” and “are less likely to become targets of demonization and dehumanization.”
  • Feminization – Increasing respect for “the interests and values of women.”
  • Cosmopolitanism – The rise of forces such as literacy, mobility, and mass media, which “can prompt people to take the perspectives of people unlike themselves and to expand their circle of sympathy to embrace them.”
  • The Escalator of Reason – An “intensifying application of knowledge and rationality to human affairs,” which “can force people to recognize the futility of cycles of violence, to ramp down the privileging of their own interests over others’, and to reframe violence as a problem to be solved rather than a contest to be won.”

I can tell you that Pinker’s book is hogwash because two very bright leftists — Peter Singer and Will Wilkinson — have strongly and wrongly endorsed some of its key findings. Singer writes:

Pinker argues that enhanced powers of reasoning give us the ability to detach ourselves from our immediate experience and from our personal or parochial perspective, and frame our ideas in more abstract, universal terms. This in turn leads to better moral commitments, including avoiding violence. It is just this kind of reasoning ability that has improved during the 20th century. He therefore suggests that the 20th century has seen a “moral Flynn effect, in which an accelerating escalator of reason carried us away from impulses that lead to violence” and that this lies behind the long peace, the new peace, and the rights revolution. Among the wide range of evidence he produces in support of that argument is the tidbit that since 1946, there has been a negative correlation between an American president’s I.Q. and the number of battle deaths in wars involving the United States.

I disposed of this staggeringly specious correlation here:

There is the convenient cutoff point of 1946. Why 1946? Well, it enables Pinker-Singer to avoid the inconvenient fact that the Civil War, World War I, and World War II happened while the presidency was held by three men who [purportedly] had high IQs: Lincoln, Wilson, and FDR….

If you buy the brand of snake oil being peddled by Pinker-Singer, you must believe that the “dumbest” and “smartest” presidents are unlikely to get the U.S. into wars that result in a lot of battle deaths, whereas some (but, mysteriously, not all) of the “medium-smart” presidents (Lincoln, Wilson, FDR) are likely to do so….

Let us advance from one to two explanatory variables. The second explanatory variable that strongly suggests itself is political party. And because it is not good practice to omit relevant statistics (a favorite gambit of liars), I estimated an equation based on “IQ” and battle deaths for the 27 men who served as president from the first Republican presidency (Lincoln’s) through the presidency of GWB….

In other words, battle deaths rise at the rate of 841 per IQ point (so much for Pinker-Singer). But there will be fewer deaths with a Republican in the White House (so much for Pinker-Singer’s implied swipe at GWB)….

All of this is nonsense, of course, for two reasons: [the] estimates of IQ are hogwash, and the number of U.S. battle deaths is a meaningless number, taken by itself.

… [The] estimates of presidents’ IQs put every one of them — including the “dumbest,” U.S. Grant — in the top 2.3 percent of the population. And the mean of Simonton’s estimates puts the average president in the top 0.1 percent (one-tenth of one percent) of the population. That is literally incredible.
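
As an aside for the statistically minded, the two-variable equation described in the excerpt above is ordinary least squares with an intercept, an “IQ” term, and a party dummy. The sketch below shows only the mechanics; the numbers in it are illustrative placeholders, not Simonton’s IQ estimates or the actual battle-death counts, so its fitted coefficients mean nothing.

    # A minimal sketch (not the actual data or results) of an ordinary
    # least-squares regression with two explanatory variables: an estimated IQ
    # and a Republican-party dummy. All numbers below are illustrative placeholders.
    import numpy as np

    # Hypothetical inputs, one entry per president (placeholder values only).
    iq = np.array([130.0, 140.0, 125.0, 150.0, 135.0, 145.0, 128.0, 138.0])
    republican = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = Republican, 0 = Democrat
    battle_deaths = np.array([2000.0, 115000.0, 500.0, 290000.0, 30.0, 36000.0, 50000.0, 4500.0])

    # Design matrix with an intercept column.
    X = np.column_stack([np.ones_like(iq), iq, republican])

    # Solve the least-squares problem: battle_deaths ≈ X @ beta.
    beta, _, _, _ = np.linalg.lstsq(X, battle_deaths, rcond=None)
    intercept, iq_coef, party_coef = beta

    print(f"battle deaths per IQ point: {iq_coef:.1f}")
    print(f"shift with a Republican in the White House: {party_coef:.1f}")

With the real IQ estimates and battle-death counts substituted in, the same computation is what produces coefficients of the kind quoted above (841 battle deaths per IQ point, and fewer deaths under Republicans); the sketch claims nothing beyond the arithmetic.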

As for Wilkinson, he praises statistics adduced by Pinker that show a decline in the use of capital punishment:

In the face of such a decisive trend in moral culture, we can say a couple different things. We can say that this is just change and says nothing in particular about what is really right or wrong, good or bad. Or we can say this is evidence of moral progress, that we have actually become better. I prefer the latter interpretation for basically the same reasons most of us see the abolition of slavery and the trend toward greater equality between races and sexes as progress and not mere morally indifferent change. We can talk about the nature of moral progress later. It’s tricky. For now, I want you to entertain the possibility that convergence toward the idea that execution is wrong counts as evidence that it is wrong.

My observation:

I would count convergence toward the idea that execution is wrong as evidence that it is wrong, if … that idea were (a) increasingly held by individuals who (b) had arrived at their “enlightenment” uninfluenced by operatives of the state (legislatures and judges), who take it upon themselves to flout popular support of the death penalty. What we have, in the case of the death penalty, is moral regress, not moral progress.

Moral regress because the abandonment of the death penalty puts innocent lives at risk. Capital punishment sends a message, and the message is effective when it is delivered: it deters homicide. And even if it didn’t, it would at least remove killers from our midst, permanently. By what standard of morality can one claim that it is better to spare killers than to protect innocents? For that matter, by what standard of morality is it better to kill innocents (in the womb) than to spare killers? Proponents of abortion (like Singer and Wilkinson) — who by and large oppose capital punishment — are completely lacking in moral authority.

Returning to Pinker’s thesis that violence has declined, I quote a review at Foseti:

Pinker’s basic problem is that he essentially defines “violence” in such a way that his thesis that violence is declining becomes self-fulfilling. “Violence” to Pinker is fundamentally synonymous with behaviors of older civilizations. On the other hand, modern practices are defined to be less violent than older practices.

A while back, I linked to a story about a guy in my neighborhood who’s been arrested over 60 times for breaking into cars. A couple hundred years ago, this guy would have been killed for this sort of vandalism after he got caught the first time. Now, we feed him and shelter him for a while and then we let him back out to do this again. Pinker defines the new practice as a decline in violence – we don’t kill the guy anymore! Someone from a couple hundred years ago would be appalled that we let the guy continue destroying other peoples’ property without consequence. In the mind of those long dead, “violence” has in fact increased. Instead of a decline in violence, this practice seems to me like a decline in justice – nothing more or less.

Here’s another example: Pinker uses creative definitions to show that the conflicts of the 20th Century pale in comparison to previous conflicts. For example, all the Mongol Conquests are considered one event, even though they cover 125 years. If you lump all these various conquests together and you split up WWI, WWII, Mao’s takeover in China, the Bolshevik takeover of Russia, the Russian Civil War, and the Chinese Civil War (yes, he actually considers this a separate event from Mao), you unsurprisingly discover that the events of the 20th Century weren’t all that violent compared to events in the past! Pinker’s third most violent event is the “Mideast Slave Trade” which he says took place between the 7th and 19th Centuries. Seriously. By this standard, all the conflicts of the 20th Century are related. Is the Russian Revolution or the rise of Mao possible without WWII? Is WWII possible without WWI? By this consistent standard, the 20th Century wars of Communism would have seen the worst conflict by far. Of course, if you fiddle with the numbers, you can make any point you like.

There’s much more to the review, including some telling criticisms of Pinker’s five reasons for the (purported) decline in violence. That the reviewer somehow still wants to believe in the rightness of Pinker’s thesis says more about the reviewer’s optimism than it does about the validity of Pinker’s thesis.

That thesis is fundamentally flawed, as Robert Epstein points out in a review at Scientific American:

[T]he wealth of data [Pinker] presents cannot be ignored—unless, that is, you take the same liberties as he sometimes does in his book. In two lengthy chapters, Pinker describes psychological processes that make us either violent or peaceful, respectively. Our dark side is driven by an evolution-based propensity toward predation and dominance. On the angelic side, we have, or at least can learn, some degree of self-control, which allows us to inhibit dark tendencies.

There is, however, another psychological process—confirmation bias—that Pinker sometimes succumbs to in his book. People pay more attention to facts that match their beliefs than those that undermine them. Pinker wants peace, and he also believes in his hypothesis; it is no surprise that he focuses more on facts that support his views than on those that do not. The SIPRI arms data are problematic, and a reader can also cherry-pick facts from Pinker’s own book that are inconsistent with his position. He notes, for example, that during the 20th century homicide rates failed to decline in both the U.S. and England. He also describes in graphic and disturbing detail the savage way in which chimpanzees—our closest genetic relatives in the animal world—torture and kill their own kind.

Of greater concern is the assumption on which Pinker’s entire case rests: that we look at relative numbers instead of absolute numbers in assessing human violence. But why should we be content with only a relative decrease? By this logic, when we reach a world population of nine billion in 2050, Pinker will conceivably be satisfied if a mere two million people are killed in war that year.

The biggest problem with the book, though, is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Pinker’s belief that violence is on the decline reminds me of “it’s different this time,” a phrase that was on the lips of hopeful stock-pushers, stock-buyers, and pundits during the stock-market bubble of the late 1990s. That bubble ended, of course, in the spectacular crash of 2000.

Predictions about the future of humankind are better left in the hands of writers who see human nature whole, and who are not out to prove that it can be shaped or contained by the kinds of “liberal” institutions that Pinker so obviously favors.

Consider this, from an article by Robert J. Samuelson at The Washington Post:

[T]he Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar. Amid the controversy over leaks from the National Security Agency, this looms as an even bigger downside.

By cyberwarfare, I mean the capacity of groups — whether nations or not — to attack, disrupt and possibly destroy the institutions and networks that underpin everyday life. These would be power grids, pipelines, communication and financial systems, business record-keeping and supply-chain operations, railroads and airlines, databases of all types (from hospitals to government agencies). The list runs on. So much depends on the Internet that its vulnerability to sabotage invites doomsday visions of the breakdown of order and trust.

In a report, the Defense Science Board, an advisory group to the Pentagon, acknowledged “staggering losses” of information involving weapons design and combat methods to hackers (not identified, but probably Chinese). In the future, hackers might disarm military units. “U.S. guns, missiles and bombs may not fire, or may be directed against our own troops,” the report said. It also painted a specter of social chaos from a full-scale cyberassault. There would be “no electricity, money, communications, TV, radio or fuel (electrically pumped). In a short time, food and medicine distribution systems would be ineffective.”

But Pinker wouldn’t count the resulting chaos as violence, as long as human beings were merely starving and dying of various diseases. That violence would ensue, of course, is another story, which is told by John Gray in The Silence of Animals: On Progress and Other Modern Myths. Gray’s book — published 18 months after Better Angels — could be read as a refutation of Pinker’s book, though Gray doesn’t mention Pinker or his book.

The gist of Gray’s argument is faithfully recounted in a review of Gray’s book by Robert W. Merry at The National Interest:

The noted British historian J. B. Bury (1861–1927) … wrote, “This doctrine of the possibility of indefinitely moulding the characters of men by laws and institutions . . . laid a foundation on which the theory of the perfectibility of humanity could be raised. It marked, therefore, an important stage in the development of the doctrine of Progress.”

We must pause here over this doctrine of progress. It may be the most powerful idea ever conceived in Western thought—emphasizing Western thought because the idea has had little resonance in other cultures or civilizations. It is the thesis that mankind has advanced slowly but inexorably over the centuries from a state of cultural backwardness, blindness and folly to ever more elevated stages of enlightenment and civilization—and that this human progression will continue indefinitely into the future…. The U.S. historian Charles A. Beard once wrote that the emergence of the progress idea constituted “a discovery as important as the human mind has ever made, with implications for mankind that almost transcend imagination.” And Bury, who wrote a book on the subject, called it “the great transforming conception, which enables history to define her scope.”

Gray rejects it utterly. In doing so, he rejects all of modern liberal humanism. “The evidence of science and history,” he writes, “is that humans are only ever partly and intermittently rational, but for modern humanists the solution is simple: human beings must in future be more reasonable. These enthusiasts for reason have not noticed that the idea that humans may one day be more rational requires a greater leap of faith than anything in religion.” In an earlier work, Straw Dogs: Thoughts on Humans and Other Animals, he was more blunt: “Outside of science, progress is simply a myth.”

…Gray has produced more than twenty books demonstrating an expansive intellectual range, a penchant for controversy, acuity of analysis and a certain political clairvoyance.

He rejected, for example, Francis Fukuyama’s heralded “End of History” thesis—that Western liberal democracy represents the final form of human governance—when it appeared in this magazine in 1989. History, it turned out, lingered long enough to prove Gray right and Fukuyama wrong….

Though for decades his reputation was confined largely to intellectual circles, Gray’s public profile rose significantly with the 2002 publication of Straw Dogs, which sold impressively and brought him much wider acclaim than he had known before. The book was a concerted and extensive assault on the idea of progress and its philosophical offspring, secular humanism. The Silence of Animals is in many ways a sequel, plowing much the same philosophical ground but expanding the cultivation into contiguous territory mostly related to how mankind—and individual humans—might successfully grapple with the loss of both metaphysical religion of yesteryear and today’s secular humanism. The fundamentals of Gray’s critique of progress are firmly established in both books and can be enumerated in summary.

First, the idea of progress is merely a secular religion, and not a particularly meaningful one at that. “Today,” writes Gray in Straw Dogs, “liberal humanism has the pervasive power that was once possessed by revealed religion. Humanists like to think they have a rational view of the world; but their core belief in progress is a superstition, further from the truth about the human animal than any of the world’s religions.”

Second, the underlying problem with this humanist impulse is that it is based upon an entirely false view of human nature—which, contrary to the humanist insistence that it is malleable, is immutable and impervious to environmental forces. Indeed, it is the only constant in politics and history. Of course, progress in scientific inquiry and in resulting human comfort is a fact of life, worth recognition and applause. But it does not change the nature of man, any more than it changes the nature of dogs or birds. “Technical progress,” writes Gray, again in Straw Dogs, “leaves only one problem unsolved: the frailty of human nature. Unfortunately that problem is insoluble.”

That’s because, third, the underlying nature of humans is bred into the species, just as the traits of all other animals are. The most basic trait is the instinct for survival, which is placed on hold when humans are able to live under a veneer of civilization. But it is never far from the surface. In The Silence of Animals, Gray discusses the writings of Curzio Malaparte, a man of letters and action who found himself in Naples in 1944, shortly after the liberation. There he witnessed a struggle for life that was gruesome and searing. “It is a humiliating, horrible thing, a shameful necessity, a fight for life,” wrote Malaparte. “Only for life. Only to save one’s skin.” Gray elaborates:

Observing the struggle for life in the city, Malaparte watched as civilization gave way. The people the inhabitants had imagined themselves to be—shaped, however imperfectly, by ideas of right and wrong—disappeared. What were left were hungry animals, ready to do anything to go on living; but not animals of the kind that innocently kill and die in forests and jungles. Lacking a self-image of the sort humans cherish, other animals are content to be what they are. For human beings the struggle for survival is a struggle against themselves.

When civilization is stripped away, the raw animal emerges. “Darwin showed that humans are like other animals,” writes Gray in Straw Dogs, expressing in this instance only a partial truth. Humans are different in a crucial respect, captured by Gray himself when he notes that Homo sapiens inevitably struggle with themselves when forced to fight for survival. No other species does that, just as no other species has such a range of spirit, from nobility to degradation, or such a need to ponder the moral implications as it fluctuates from one to the other. But, whatever human nature is—with all of its capacity for folly, capriciousness and evil as well as virtue, magnanimity and high-mindedness—it is embedded in the species through evolution and not subject to manipulation by man-made institutions.

Fourth, the power of the progress idea stems in part from the fact that it derives from a fundamental Christian doctrine—the idea of providence, of redemption….

“By creating the expectation of a radical alteration in human affairs,” writes Gray, “Christianity . . . founded the modern world.” But the modern world retained a powerful philosophical outlook from the classical world—the Socratic faith in reason, the idea that truth will make us free; or, as Gray puts it, the “myth that human beings can use their minds to lift themselves out of the natural world.” Thus did a fundamental change emerge in what was hoped of the future. And, as the power of Christian faith ebbed, along with its idea of providence, the idea of progress, tied to the Socratic myth, emerged to fill the gap. “Many transmutations were needed before the Christian story could renew itself as the myth of progress,” Gray explains. “But from being a succession of cycles like the seasons, history came to be seen as a story of redemption and salvation, and in modern times salvation became identified with the increase of knowledge and power.”

Thus, it isn’t surprising that today’s Western man should cling so tenaciously to his faith in progress as a secular version of redemption. As Gray writes, “Among contemporary atheists, disbelief in progress is a type of blasphemy. Pointing to the flaws of the human animal has become an act of sacrilege.” In one of his more brutal passages, he adds:

Humanists believe that humanity improves along with the growth of knowledge, but the belief that the increase of knowledge goes with advances in civilization is an act of faith. They see the realization of human potential as the goal of history, when rational inquiry shows history to have no goal. They exalt nature, while insisting that humankind—an accident of nature—can overcome the natural limits that shape the lives of other animals. Plainly absurd, this nonsense gives meaning to the lives of people who believe they have left all myths behind.

In The Silence of Animals, Gray explores all this through the works of various writers and thinkers. In the process, he employs history and literature to puncture the conceits of those who cling to the progress idea and the humanist view of human nature. Those conceits, it turns out, are easily punctured when subjected to Gray’s withering scrutiny….

And yet the myth of progress is so powerful in part because it gives meaning to modern Westerners struggling, in an irreligious era, to place themselves in a philosophical framework larger than just themselves….

Much of the human folly catalogued by Gray in The Silence of Animals makes a mockery of the earnest idealism of those who later shaped and molded and proselytized humanist thinking into today’s predominant Western civic philosophy.

There was an era of realism, but it was short-lived:

But other Western philosophers, particularly in the realm of Anglo-Saxon thought, viewed the idea of progress in much more limited terms. They rejected the idea that institutions could reshape mankind and usher in a golden era of peace and happiness. As Bury writes, “The general tendency of British thought was to see salvation in the stability of existing institutions, and to regard change with suspicion.” With John Locke, these thinkers restricted the proper role of government to the need to preserve order, protect life and property, and maintain conditions in which men might pursue their own legitimate aims. No zeal here to refashion human nature or remake society.

A leading light in this category of thinking was Edmund Burke (1729–1797), the British statesman and philosopher who, writing in his famous Reflections on the Revolution in France, characterized the bloody events of the Terror as “the sad but instructive monuments of rash and ignorant counsel in time of profound peace.” He saw them, in other words, as reflecting an abstractionist outlook that lacked any true understanding of human nature. The same skepticism toward the French model was shared by many of the Founding Fathers, who believed with Burke that human nature isn’t malleable but rather potentially harmful to society. Hence, it needed to be checked. The central distinction between the American and French revolutions, in the view of conservative writer Russell Kirk, was that the Americans generally held a “biblical view of man and his bent toward sin,” whereas the French opted for “an optimistic doctrine of human goodness.” Thus, the American governing model emerged as a secular covenant “designed to restrain the human tendencies toward violence and fraud . . . [and] place checks upon will and appetite.”

Most of the American Founders rejected the French philosophes in favor of the thought and history of the Roman Republic, where there was no idea of progress akin to the current Western version. “Two thousand years later,” writes Kirk, “the reputation of the Roman constitution remained so high that the framers of the American constitution would emulate the Roman model as best they could.” They divided government powers among men and institutions and created various checks and balances. Even the American presidency was modeled generally on the Roman consular imperium, and the American Senate bears similarities to the Roman version. Thus did the American Founders deviate from the French abstractionists and craft governmental structures to fit humankind as it actually is—capable of great and noble acts, but also of slipping into vice and treachery when unchecked. That ultimately was the genius of the American system.

But, as the American success story unfolded, a new collection of Western intellectuals, theorists and utopians—including many Americans—continued to toy with the idea of progress. And an interesting development occurred. After centuries of intellectual effort aimed at developing the idea of progress as an ongoing chain of improvement with no perceived end into the future, this new breed of “Progress as Power” thinkers began to declare their own visions as the final end point of this long progression.

Gray calls these intellectuals “ichthyophils,” which he defines as “devoted to their species as they think it ought to be, not as it actually is or as it truly wants to be.” He elaborates: “Ichthyophils come in many varieties—the Jacobin, Bolshevik and Maoist, terrorizing humankind in order to remake it on a new model; the neo-conservative, waging perpetual war as a means to universal democracy; liberal crusaders for human rights, who are convinced that all the world longs to become as they imagine themselves to be.” He includes also “the Romantics, who believe human individuality is everywhere repressed.”

Throughout American politics, as indeed throughout Western politics, a large proportion of major controversies ultimately are battles between the ichthyophils and the Burkeans, between the sensibility of the French Revolution and the sensibility of the American Revolution, between adherents of the idea of progress and those skeptical of that potent concept. John Gray has provided a major service in probing with such clarity and acuity the impulses, thinking and aims of those on the ichthyophil side of that great divide. As he sums up, “Allowing the majority of humankind to imagine they are flying fish even as they pass their lives under the waves, liberal civilization rests on a dream.”

And so it goes. On the left there are the ichthyophils of America, represented in huge numbers by “progressives” and their constituents and dupes (i.e., a majority of the public). They are given aid and comfort by a small but vociferous number of pseudo-libertarians (as discussed here, for example). On the right stands a throng of pseudo-conservatives — mainly identified with the Republican Party — who are prone to adopt the language and ideals of progressivism, out of power-lust and ignorance. Almost entirely muted by the sound and fury emanating from left and right — and relatively few in number — are the true libertarians: Burkean conservatives.

And so Leviathan grows, crushing the liberty envisioned by our Burkean Founders in the name of “progress” (i.e., social and economic engineering). And as Robert Samuelson points out, the growth of Leviathan doesn’t ensure our immunity to chaos and barbarity in the event of a debilitating attack on our fragile infrastructure. It is ironic that we would be better able to withstand such an attack without descending into chaos and barbarity had Leviathan not weakened and sundered so many true social bonds, in the name of “progress.”

Our thralldom to an essentially impotent Leviathan is of no importance to Pinker, to “progressives,” or to the dupes and constituents of “progressivism.” They have struck their Faustian bargain with Leviathan, and they will pay the price, sooner or later. Unfortunately, all of us will pay the price — even those of us who despise and resist Leviathan.

*     *     *

Related reading: Wesley Morganston, “The Long, Slow Collapse: What Whig History Can’t Explain,” Theden, October 26, 2014

Related posts:
Democracy vs. Liberty
Something Controversial
More about Democracy and Liberty
Yet Another Look at Democracy
Law, Liberty, and Abortion
Abortion and the Slippery Slope
Privacy: Variations on the Theme of Liberty
An Immigration Roundup
Illogic from the Pro-Immigration Camp
The Ruinous Despotism of Democracy
On Liberty
Illegal Immigration: A Note to Libertarian Purists
Inside-Outside
A Moralist’s Moral Blindness
Pseudo-Libertarian Sophistry vs. True Libertarianism
The Folly of Pacifism
Positivism, “Natural Rights,” and Libertarianism
What Are “Natural Rights”?
The Golden Rule and the State
Libertarian Conservative or Conservative Libertarian?
Bounded Liberty: A Thought Experiment
Evolution, Human Nature, and “Natural Rights”
More Pseudo-Libertarianism
More about Conservative Governance
The Meaning of Liberty
Positive Liberty vs. Liberty
On Self-Ownership and Desert
In Defense of Marriage
Understanding Hayek
Rethinking the Constitution: Freedom of Speech and of the Press
The Golden Rule as Beneficial Learning
Why I Am Not an Extreme Libertarian
Facets of Liberty
Burkean Libertarianism
Rights: Source, Applicability, How Held
The Folly of Pacifism, Again
What Is Libertarianism?
True Libertarianism, One More Time
Human Nature, Liberty, and Rationalism
Utilitarianism and Psychopathy
Privacy Is Not Sacred
A Declaration and Defense of My Prejudices about Governance
The Libertarian-Conservative Fusion Is Alive and Well
Libertarianism and Morality
Libertarianism and Morality: A Footnote
Merit Goods, Positive Rights, and Cosmic Justice
More about Merit Goods
What Is Bleeding-Heart Libertarianism?
Society and the State
Prohibition, Abortion, and “Progressivism”
Liberty, Negative Rights, and Bleeding Hearts
Cato, the Kochs, and a Fluke
Conservatives vs. “Liberals”
Not-So-Random Thoughts (II)
Why Conservatism Works
The Pool of Liberty and “Me” Libertarianism
Bleeding-Heart Libertarians = Left-Statists
Enough with the Bleeding Hearts, Already
Not Guilty of Libertarian Purism
Liberty and Society
The Eclipse of “Old America”
Genetic Kinship and Society
Liberty as a Social Construct: Moral Relativism?
A Contrarian View of Universal Suffrage
Well-Founded Pessimism
Defending Liberty against (Pseudo) Libertarians