Is Taxation Slavery?

Yes, given the purpose of much taxation.

Thomas Nagel writes:

Some would describe taxation as a form of theft and conscription as a form of slavery — in fact some would prefer to describe taxation as slavery too, or at least as forced labor. Much might be said against these descriptions, but that is beside the point. For within proper limits, such practices when engaged in by governments are acceptable, whatever they are called. If someone with an income of $2000 a year trains a gun on someone with an income of $100000 a year and makes him hand over his wallet, that is robbery. If the federal government withholds a portion of the second person’s salary (enforcing the laws against tax evasion with threats of imprisonment under armed guard) and gives some of it to the first person in the form of welfare payments, food stamps, or free health care, that is taxation. In the first case it is (in my opinion) an impermissible use of coercive means to achieve a worthwhile end. In the second case the means are legitimate, because they are impersonally imposed by an institution designed to promote certain results. Such general methods of distribution are preferable to theft as a form of private initiative and also to individual charity. This is true not only for reasons of fairness and efficiency, but also because both theft and charity are disturbances of the relations (or lack of them) between individuals and involve their individual wills in a way that an automatic, officially imposed system of taxation does not. [Mortal Questions, “Ruthlessness in Public Life,” pp. 87-88]

How many logical and epistemic errors can a supposedly brilliant philosopher make in one (long) paragraph? Too many:

  • “For within proper limits” means that Nagel is about to beg the question by shaping an answer that fits his idea of proper limits.

  • Nagel then asserts that the use by government of coercive means to achieve the same end as robbery is “legitimate, because [those means] are impersonally imposed by an institution designed to promote certain results.” Balderdash! Nagel’s vision of government as some kind of omniscient, benevolent arbiter is completely at odds with reality.  The “certain results” (redistribution of income) are achieved by functionaries, armed or backed with the force of arms, who themselves share in the spoils of coercive redistribution. Those functionaries act under the authority of bare majorities of elected representatives, who are chosen by bare majorities of voters. And those bare majorities are themselves coalitions of interested parties — hopeful beneficiaries of redistributionist policies, government employees, government contractors, and arrogant statists — who believe, without justification, that forced redistribution is a proper function of government.

  • On the last point, Nagel ignores the sordid history of the unconstitutional expansion of the powers of government. Without justification, he aligns himself with proponents of the “living Constitution.”

  • Nagel’s moral obtuseness is fully revealed when he equates coercive redistribution with “fairness and efficiency”, as if property rights and liberty were of no account.

  • The idea that coercive redistribution fosters efficiency is laughable. It does quite the opposite because it removes resources from productive uses — including job-creating investments. The poor are harmed by coercive redistribution because it drastically curtails economic growth, from which they would benefit as job-holders and (where necessary) recipients of private charity (the resources for which would be vastly greater in the absence of coercive redistribution).

  • Finally (though not exhaustively), Nagel’s characterization of private charity as a “disturbance of the relations … between individuals” is so wrong-headed that it leaves me dumbstruck. Private charity arises from real relations among individuals — from a sense of community and feelings of empathy. It is the “automatic, officially imposed system of taxation” that distorts and thwarts (“disturbs”) the social fabric.

In any event, the answer to the question posed in the title of this post is “yes”; taxation for the purpose of redistribution is slavery (see number 2 in the second set of definitions). Taxation amounts to the subjection of most taxpayers — those who are not deadbeats, do-gooders, demagogues, and government drones — to persons who are such things (even if they pay taxes).

If “slavery” is too strong a word, “theft” will do quite well.

Social Norms and Liberty

Leftist “wokeism” and big government are destroying both.

Libertarianism, as it is usually explained and presented, lacks an essential ingredient: morality. Yes, libertarians espouse a superficially plausible version of morality — the harm principle:

[T]he sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. [John Stuart Mill, On Liberty (1869), Chapter I, paragraph 9.]

This is empty rhetoric. Harm must be defined, and its definition must arise from social norms.

Liberty is not an abstraction; it is the scope of action that is allowed by socially agreed-upon rights. It is that restrained scope of action which enables a people to coexist willingly, peacefully, and cooperatively for their mutual benefit. Such coexistence depends greatly on mutual aid, trust, respect, and forbearance. Liberty is therefore necessarily degraded when courts — at the behest of “liberals” and so-called libertarians — sunder social restraints in the name of liberty.

Social norms have changed for the worse since the days of my Midwestern upbringing in the 1940s and 1950s. Many of those norms have gone to the great graveyard of quaint ideas; for example:

  • Behavior is shaped by social norms, like those listed here. The norms are rooted in the Ten Commandments and time-tested codes of behavior. The norms aren’t altered willy-nilly in accordance with the wishes of “activists”, as amplified through the megaphone of the mass media.

  • Rules of grammar serve the useful purpose of enabling people to understand each other easily. The flouting of grammatical rules in everyday conversation is a sign of ignorance and ill-breeding, not originality.

  • Dead, white, European males produced some of the greatest works of art, music, literature, philosophy, science, and political theory. Those dead, white, European males are to be celebrated for their accomplishments, not derided just because they are dead or were not black/brown/tan, female, of confused gender, or inhabitants of non-European places.

  • Marriage is a union of one man and one woman. Nothing else is marriage, despite legislative, executive, and judicial decrees that substitute brute force for the wisdom of the ages.

  • Marriage comes before children. This is not because people are pure at heart, but because it is the responsible way to start life together and to ensure that one’s children enjoy a stable, nurturing home life.

  • Marriage is until “death do us part”. Divorce is a recourse of last resort, not an easy way out of marital and familial responsibilities or the first recourse when one spouse disappoints or angers the other.

  • Children are disciplined — sometimes spanked — when they do wrong. They aren’t given long, boring, incomprehensible lectures about why they’re doing wrong. Why not? Because they usually know that they’re doing wrong and are just trying to see what they can get away with.

  • Gentlemen don’t swear in front of ladies, and ladies don’t swear in front of gentlemen; discourse is therefore more likely to be rational, and certainly more bearable to those within earshot.

  • A person’s “space” is respected, as long as that person is being respectful of others. A person’s space is not invaded by a loud conversation of no interest to anyone but the conversants.

  • A person grows old gracefully and doesn’t subject others to the sight of flabby, wrinkled tattoos (unless he is a sailor who has one tattoo on one arm).

  • Drugs are taken for the treatment of actual illnesses, not for recreational purposes.

  • Income is earned, not “distributed”. Persons who earn a lot of money are to be respected (unless they are criminals or crony capitalists). If you envy them to the point of wanting to take their money, you’re a pinko-commie-socialist (no joke).

  • People should work, save, and pay for their own housing. The prospect of owning one’s own home, by dint of one’s own labor, is an incentive to work hard and to advance oneself through the acquisition of marketable skills.

  • Welfare is a gift that one accepts as a last resort; it is not a right or an entitlement, and it is not bestowed on persons with convenient disabilities.

  • Regarding work and welfare, a person who lacks work, is seriously ill, or is disabled should be helped by his family, friends, neighbors, co-religionists, and other organs of civil society with which he is affiliated or which come to know of his plight.

  • A man holds a door open for a woman out of courtesy, and he does the same for anyone who is obviously weak or laden with packages.

  • Sexism (though it wasn’t called that) is nothing more than the understanding — shared by men and women — that women are members of a different sex (the only different one); are usually weaker than men; are endowed with different brain chemistry and physical skills than men (still a fact); and enjoy discreet admiration (flirting) if they’re passably good-looking (or better). Women who reject those propositions — and who try to enforce modes of behavior that assume differently — are embittered and twisted.

  • A mother who devotes time and effort to the making of a good home and the proper rearing of her children is a pillar of civilized society. Her life is to be celebrated, not condemned as “a waste”.

  • Homosexuality is a rare, aberrant kind of behavior. (And this was before AIDS proved it to be aberrant.) It’s certainly not a “lifestyle” to be celebrated and shoved down the throats of all who object to it.

  • Privacy is a constrained right. It doesn’t trump moral obligations, among which are the obligations to refrain from spreading a deadly disease and to preserve innocent life.

  • Addiction isn’t a disease; it’s a surmountable failing.

  • Justice is for victims. Victims are persons to whom actual harm has been done by way of fraud, theft, bodily harm, murder, and suchlike. A person with a serious disease or handicap isn’t a victim, nor is a person with a drinking or drug problem.

  • Justice is a dish best served hot, so that would-be criminals can connect the dots between crime and punishment. Swift and sure punishment is the best deterrent of crime. Capital punishment is the ultimate deterrent because an executed killer can’t kill again.

  • Peace is the result of preparedness for war; lack of preparedness invites war.

The list isn’t exhaustive, but it’s certainly representative. The themes are few and simple: respect others, respect tradition, restrict government to the defense of society from predators foreign and domestic. The result is liberty: A regime of mutually beneficial coexistence based on trust. That’s all it takes — not big government bent on dictating how Americans live their lives.

Economic and social liberty are indivisible. The extent of liberty is inversely proportional to the power of government.


See also:

On Liberty

Natural Rights, Liberty, the Golden Rule, and Leviathan

Is the Police State Here?

It’s beginning to look that way.

From “What Happened to America?”:

How … did America come to be run by a cabal of super-rich “oligarchs”, politicians, bureaucrats, academics, and “journalists” who sneer at the list [of traditional American values] and reject it, in deed if not in word?

It happened one step backward at a time. America’s old culture, along with much of its liberty and (less visibly) its prosperity, was lost step by step through a combination of chicanery (by the left) and compromise (by “centrists” and conservative dupes). The process — the culmination of which is “wokeness” — has a long history and deep roots. Those roots are not in Marxism, socialism, atheism, or any of the other left-wing “isms” (appalling and dangerous though they may be). They are, as I explain here, in (classical) liberalism, the supposed bulwark of liberty and prosperity.

An “ism” is only as effective as its adherents. The adherents of (classical) liberalism are especially ineffective in the defense of liberty because they are blinded by their own rhetoric. Take Deirdre McCloskey, for example, whom Arnold Kling quotes approvingly in a piece that I eviscerated….

That drips with smugness and condescension. And it wildly mischaracterizes the wealthy “elites” who have taken charge in the West….

All that McCloskey has told us is that she (formerly he) views his/her way of life as superior to that of the unwashed masses, living and dead. Further, holding that view — which is typical of liberals classical and modern (i.e., statists) — he/she obviously believes that the superior way of life should be adopted by the unwashed — for their own good, of course. (If this isn’t to be accomplished by force, as statists would prefer, then by education and example. This would include, but not be limited to, choosing a new sexual identity if one is deluded enough to believe that he/she was “assigned” the wrong one at birth.)

It is hard to tell McCloskey’s attitude from that of a member of the “woke” elite, though he/she would undoubtedly deny being such a person. I am willing to bet, however, that most of McCloskey’s ilk (if not he/she him/herself) voted enthusiastically for “moderate” Joe Biden because rude, crude Donald Trump offended their tender sensibilities (and threatened their statist agenda). And they did so knowing that Biden, despite his self-proclaimed “moderation”, was and is allied with leftists whose statist ambitions for the United States are an affront to every tenet of classical liberalism, not the least of which is freedom of speech.

Thus the unrelenting attacks on Donald Trump — the leading symbol of Americanism, anti-”wokeism”, and anti-statism — which began before he became president, persisted throughout his presidency, and have intensified since he left the presidency. The attacks, it should be emphasized, have been and are being conducted by officials and agencies of the federal government.

If Trump is to be silenced, so are his followers. Bill Vallicella depicts the horror that is to come:

With their hands on the levers of power, the Democrats can keep the borders open, empty the prisons, defund the police except for the state police, confiscate the firearms of law-abiding citizens, do away with the filibuster, give felons the right to vote while in prison, outlaw home schooling, alter curricula to promote the ‘progressive’ worldview (by among other things injecting 1619 Project fabrications into said curricula), infiltrate and ultimately destroy the institutions of civil society, pass ‘hate speech’ laws to squelch dissent, suppress religion, and so on into the abyss of leftist nihilism.

To which I would add: overriding and penalizing objections to allowing transgendered “men” into girls’ and women’s bathrooms, locker rooms, and sports; suppressing parents who object to such things and to the teaching of critical race theory; penalizing small businesses who object to forced participation in the “celebration” of LGBTQ-ness; the raising and arming of a vast cadre of IRS auditors and enforcers; and on and on.

A leading example of the oppression is the elite and official response to Covid: mask mandates. Martin Gurri spells it out:

Three days after the mask mandate was struck down [by U.S. District Judge Kathryn Kimball Mizelle], on April 21 [2022], Barack Obama delivered the bad news about “disinformation” to a Stanford University forum on that subject. His unacknowledged theme, too, was the crisis of elite authority, which he explained with a history lesson. The twentieth century, Obama said, may have excluded “women and people of color,” but it was a time of information sanity, when the masses gathered in the great American family room to receive the news from Walter Cronkite…. Those were the days when a “shared culture” could operate on a “shared set of facts.”

The digital age has battered that peaceable kingdom to bits… [w]ho had the authority to make projections and recommendations[?]

Online, everyone did. People with opinions that the former president found toxic—nationalists, white supremacists, unhinged Republicans, Vladimir Putin and his gang of Russian hackers—could say anything they wished on the Web, no matter how irresponsible, including lies. A defenseless public, sunk in ignorance, could be deceived into voting against enlightened Democrats.

Total blindness to the other side of the story is a partisan affliction that Obama makes no attempt to overcome…. [H]e never mentioned the most effective disinformation campaign of recent times, conducted against Trump by the Hillary Clinton campaign, in which members of his own administration participated. He simply doesn’t believe that it works that way. Disinformation, for him, is a form of lèse-majesté—any insult to the progressive ruling class.

How are we to deal with this “tumultuous, dangerous moment in history”? Obama was clear about the answer: we must recover the power to exclude certain voices, this time through regulation. The government must assume control over disorderly online speech. First Amendment guarantees of freedom of speech don’t apply to private companies like Facebook and Twitter, he noted. At the same time, since these companies “play a unique role in how we . . . are consuming information,” the state must impose “accountability.” The examples he provided betray nostalgia for a lost era: the “meat inspector,” who would presumably check on how the algorithmic sausage is made; and the Fairness Doctrine, which somehow would be applied to an information universe virtually infinite in volume.

Obama views disinformation much as Fauci does Covid-19: as a lever of authority in the hands of the guardian class. Democracy, he tells us over and again, must be protected from “toxic content.” But by democracy, he means the rule of the righteous, a group that coincides exactly with his partisan inclinations. By toxic, he means anything that smacks of Trumpism. The former president’s speech was vague on details, but it left all options open. Who can say what pretext will be needed to expel the next rough beast from social media, tomorrow or the day after?…

Obama’s speech, in turn, took place four days before the apparent sale of Twitter to Elon Musk—at which point elite despair, always volatile, at last exploded in a fireball of rage and panic….

For a considerable number of agitated people, [Musk’s] goal of neutrality [on Twitter] was an abomination. Suddenly, “free speech” became a code for something dark and evil—racism, white nationalism, oligarchy, transphobia, “extremist rightwing Nazis”—all the phantoms and goblins that inhabit the nightmares of the progressive mind….

Following the Obama formula, the itch to control what Americans can say online was equated with the defense of freedom. Granting unfettered speech to the rabble, as Musk intended, would be “dangerous to our democracy,” Elizabeth Warren said. “For democracy to survive we need more content moderation, not less,” was how Max Boot, Washington Post columnist, put it. “We must pass laws to protect privacy and promote algorithmic justice for internet users,” was the bizarre formulation of Ed Markey, junior senator from Massachusetts. The Biden White House, never a hotbed of originality, recited the Obama refrain about holding the digital platforms “accountable” for the “harm” they inflict on us….

The second theme follows from the first. The elites are convinced that their control over American society is slipping away. They have conquered the presidency, both houses of Congress, and the entirety of our culture; yet their mood is one of panic and resentment….

[But] Starting with the onset of Covid-19 in the spring of 2020, elite fortunes took an almost magical turn. The pandemic frightened the public into docility. The Black Lives Matter riots enshrined racial doctrines that demanded constant state interference as not only legitimate but mandatory in every corner of American culture. The malevolent Trump went down to defeat, and the presidency passed to Biden, a hollow man easily led by the progressive zealots around him. The Senate flipped Democratic….

From the scientific establishment through the corporate boardroom all the way to Hollywood, elite keepers of our culture speak with a single, shrill voice—and the script always follows the dogmas of one particular war-band—the cult of identity—and the politics of one specific partisan flavor, that of progressive Democrats….

Are we on the cusp … of an anti-elite cultural revolution? I still wouldn’t bet on it. For obscure reasons of psychology, creative minds incline to radical politics. A kulturkampf directed from Tallahassee, Florida, or even Washington, D.C., won’t budge that reality much. The group portrait of American culture will continue to tilt left indefinitely.

But that’s not the question at hand. What terrifies elites is the loss of their cultural monopoly in the face of a foretold political disaster. They fear diversity of any kind, with good cause: to the extent that the public enjoys a variety of choices in cultural products, elite control will be proportionately diluted.

And the left, in its panic about the possible loss of control, is trying to tighten its grip on ideas and on its ability to make the “masses” do its bidding.

It is happening here. And it may be unstoppable.

Writing: A Guide (Part IV)

I am repeating the introduction for those readers who haven’t seen parts I, II, and III, which are here, here, and here.

This series is aimed at writers of non-fiction works, but writers of fiction may also find it helpful. There are four parts:

I. Some Writers to Heed and Emulate

A. The Essentials: Lucidity, Simplicity, Euphony
B. Writing Clearly about a Difficult Subject
C. Advice from an American Master
D. Also Worth a Look

II. Step by Step

A. The First Draft

1. Decide — before you begin to write — on your main point and your purpose for making it.
2. Avoid wandering from your main point and purpose; use an outline.
3. Start by writing an introductory paragraph that summarizes your “story line”.
4. Lay out a straight path for the reader.
5. Know your audience, and write for it.
6. Facts are your friends — unless you’re trying to sell a lie, of course.
7. Momentum is your best friend.

B. From First Draft to Final Version

1. Your first draft is only that — a draft.
2. Where to begin? Stand back and look at the big picture.
3. Nit-picking is important.
4. Critics are necessary, even if not mandatory.
5. Accept criticism gratefully and graciously.
6. What if you’re an independent writer and have no one to turn to?
7. How many times should you revise your work before it’s published?

III. Reference Works

A. The Elements of Style
B. Eats, Shoots & Leaves
C. Follett’s Modern American Usage
D. Garner’s Modern American Usage
E. A Manual of Style and More

IV. Notes about Grammar and Usage

A. Stasis, Progress, Regress, and Language
B. Illegitimi Non Carborundum Lingo

1. Eliminate filler words.
2. Don’t abuse words.
3. Punctuate properly.
4. Why ‘s matters, or how to avoid ambiguity in possessives.
5. Stand fast against political correctness.
6. Don’t split infinitives.
7. It’s all right to begin a sentence with “And” or “But” — in moderation.
8. There’s no need to end a sentence with a preposition.

Some readers may conclude that I prefer stodginess to liveliness. That’s not true, as any discerning reader of this blog will know. I love new words and new ways of using words, and I try to engage readers while informing and persuading them. But I do those things within the expansive boundaries of prescriptive grammar and usage. Those boundaries will change with time, as they have in the past. But they should change only when change serves understanding, not when it serves the whims of illiterates and language anarchists.


IV. NOTES ABOUT GRAMMAR AND USAGE

This part delivers some sermons about practices to follow if you wish to communicate effectively and be taken seriously, and if you wish not to be thought of as a semi-literate, self-indulgent, faddish dilettante. Section A is a defense of prescriptivism in language. Section B (the title of which is mock-Latin for “Don’t Let the Bastards Wear Down the Language”) counsels steadfastness in the face of political correctness and various sloppy usages.

A. Stasis, Progress, Regress, and Language

To every thing there is a season, and a time to every purpose under the heaven…. — Ecclesiastes 3:1 (King James Bible)

Nothing man-made is permanent; consider, for example, the list of empires here. In spite of the history of empires — and other institutions and artifacts of human endeavor — most people seem to believe that the future will be much like the present. And if the present embodies progress of some kind, most people seem to expect that progress to continue.

Things do not simply go on as they have been without the expenditure of requisite effort. Take the Constitution’s broken promises of liberty, about which I have written so much. Take the resurgence of Russia as a rival for international influence. This has been in the works for about 25 years, but didn’t register on most Americans until the Crimean crisis and the invasion of Ukraine. What did Americans expect? That the U.S. could remain the unchallenged superpower while reducing its armed forces to the point that they were strained by relatively small wars in Afghanistan and Iraq? That Vladimir Putin would be cowed by American presidents who had so blatantly advertised their hopey-changey attitudes toward Iran and Islam, while snubbing a traditional ally like Israel, failing to beef up America’s armed forces, and allowing America’s industrial capacity to wither in the name of “globalism” (or whatever the current catch-phrase may be)?

Turning to naïveté about progress, I offer Steven Pinker’s fatuous The Better Angels of Our Nature: Why Violence Has Declined. Pinker tries to show that human beings are becoming kinder and gentler. I have much to say in another post about Pinker’s thesis. One of my sources is Robert Epstein’s review of Pinker’s book. This passage is especially apt:

The biggest problem with the book … is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Less important in the grand scheme, but no less wrong-headed, is the idea of limitless progress in the arts. To quote myself:

In the early decades of the twentieth century, the visual, auditory, and verbal arts became an “inside game”. Painters, sculptors, composers (of “serious” music), choreographers, and writers of fiction began to create works not for the enjoyment of audiences but for the sake of exploring “new” forms. Given that the various arts had been perfected by the early 1900s, the only way to explore “new” forms was to regress toward primitive ones — toward a lack of structure…. Aside from its baneful influence on many true artists, the regression toward the primitive has enabled persons of inferior talent (and none) to call themselves “artists”. Thus modernism is banal when it is not ugly.

Painters, sculptors, etc., have been encouraged in their efforts to explore “new” forms by critics, by advocates of change and rebellion for its own sake (e.g., “liberals” and “bohemians”), and by undiscriminating patrons, anxious to be au courant. Critics have a special stake in modernism because they are needed to “explain” its incomprehensibility and ugliness to the unwashed.

The unwashed have nevertheless rebelled against modernism, and so its practitioners and defenders have responded with condescension, one form of which is the challenge to be “open minded” (i.e., to tolerate the second-rate and nonsensical). A good example of condescension is heard on Composers Datebook, a syndicated feature that runs on some NPR stations. Every Composers Datebook program closes by “reminding you that all music was once new.” As if to lump Arnold Schoenberg and John Cage with Johann Sebastian Bach and Ludwig van Beethoven.

All music, painting, sculpture, dance, and literature was once new, but not all of it is good. Much (most?) of what has been produced since 1900 is inferior, self-indulgent crap.

And most of the ticket-buying public knows it. Take opera, for example. An article by Christopher Ingraham purports to show that “Opera is dead, in one chart” (The Washington Post, October 31, 2014). Here is Ingraham’s interpretation of the chart:

The chart shows that opera ceased to exist as a contemporary art form roughly around 1970. It’s from a blog post by composer and programmer Suby Raman, who scraped the Met’s public database of performances going back to the 19th century. As Raman notes, 50 years is an insanely low bar for measuring the “contemporary” – in pop music terms, it would be like considering The Beatles’ I Wanna Hold Your Hand as cutting-edge.

Back at the beginning of the 20th century, anywhere from 60 to 80 percent of Met performances were of operas composed in some time in the 50 years prior. But since 1980, the share of contemporary performances has surpassed 10 percent only once.

Opera, as a genre, is essentially frozen in amber – Raman found that the median year of composition of pieces performed at the Met has always been right around 1870. In other words, the Met is essentially performing the exact same pieces now that it was 100 years ago….

Contrary to Ingraham, opera isn’t dead; for example, there are more than 220 active opera companies in the U.S. It’s just that there’s little demand for operatic works written after the late 1800s. Why? Because most opera-lovers don’t want to hear the strident, discordant, unmelodic trash that came later. Giacomo Puccini, who wrote melodic crowd-pleasers until his death in 1924, is an exception that proves the rule.

Language is in the same parlous state as the arts. Written and spoken English improved steadily as Americans became more educated — and as long as that education included courses which prescribed rules of grammar and usage. By “improved” I mean that communication became easier and more effective; specifically:

  • A larger fraction of Americans followed the same rules in formal communications (e.g., speeches, business documents, newspapers, magazines, and books).

  • Movies and radio and TV shows also tended to follow those rules, thereby reaching vast numbers of Americans who did little or no serious reading.

  • There was a “trickle down” effect on Americans’ written and spoken discourse, especially where it involved mere acquaintances or strangers. Standard American English became a kind of lingua franca, which enabled the speaker or writer to be understood and taken seriously.

I call that progress.

There is, however, an (unfortunately) influential attitude toward language known as descriptivism. It is distinct from (and often opposed to) rule-setting (prescriptivism). Consider this passage from the first chapter of an online text:

Prescriptive grammar is based on the idea that there is a single right way to do things. When there is more than one way of saying something, prescriptive grammar is generally concerned with declaring one (and only one) of the variants to be correct. The favored variant is usually justified as being better (whether more logical, more euphonious, or more desirable on some other grounds) than the deprecated variant. In the same situation of linguistic variability, descriptive grammar is content simply to document the variants – without passing judgment on them.

This misrepresents the role of prescriptive grammar. It’s widely understood that there’s more than one way of expressing an idea and more than one way in which the idea can be made understandable to others. The rules of prescriptive grammar, when followed, improve understanding in two ways: first, by avoiding utterances that would be incomprehensible or, at least, very hard to understand; second, by ensuring that utterances aren’t simply ignored or rejected out of hand because their form indicates that the writer or speaker is either ill-educated or stupid.

What, then, is the role of descriptive grammar? The authors offer this:

[R]ules of descriptive grammar have the status of scientific observations, and they are intended as insightful generalizations about the way that speakers use language in fact, rather than about the way that they ought to use it. Descriptive rules are more general and more fundamental than prescriptive rules in the sense that all sentences of a language are formed in accordance with them, not just a more or less arbitrary subset of shibboleth sentences. A useful way to think about the descriptive rules of a language … is that they produce, or generate, all the sentences of a language. The prescriptive rules can then be thought of as filtering out some (relatively minute) portion of the entire output of the descriptive rules as socially unacceptable.

Let’s consider the assertion that descriptive rules produce all the sentences of a language. What does that mean? It seems to mean that the actual rules of a language can be inferred by examining sentences uttered or written by users of the language. But which users? Native users? Adults? Adults who have graduated from high school? Users with IQs of at least 100?

Pushing on, let’s take a closer look at descriptive rules and their utility. The authors say that

we adopt a resolutely descriptive perspective concerning language. In particular, when linguists say that a sentence is grammatical, we don’t mean that it is correct from a prescriptive point of view, but rather that it conforms to descriptive rules….

The descriptive rules amount to this: they conform to practices that speakers and writers actually use in an attempt to convey ideas, whether or not those practices state the ideas clearly and concisely. Thus the authors approve of these sentences because they’re of a type that might well occur in colloquial speech:

  • Over there is the guy who I went to the party with.

  • Over there is the guy with whom I went to the party.

(Both are clumsy ways of saying “I went to the party with that person.”)

  • Bill and me went to the store.

(Alternatively: “Bill and I went to the store.” “Bill went to the store with me.” “I went to the store with Bill.” Aha! Three ways to say it correctly, not just one way.)

But the authors label the following sentences as ungrammatical because they don’t comport with colloquial speech:

  • Over there is guy the who I went to party the with.

  • Over there is the who I went to the party with guy.

  • Bill and me the store to went.

In other words, the authors accept as grammatical anything that a speaker or writer is likely to say, according to the “rules” that can be inferred from colloquial speech and writing. It follows that whatever is is right, even “Bill and me to the store went” or “Went to the store Bill and me”, which aren’t far-fetched variations on “Bill and me went to the store.” (Yoda-isms they read like.) They’re understandable, but only with effort. And further evolution would obliterate their meaning.

The fact is that the authors of the online text — like descriptivists generally — don’t follow their own anarchistic prescription. Follett puts it this way in Modern American Usage:

It is … one of the striking features of the libertarian position [with respect to language] that it preaches an unbuttoned grammar in a prose style that is fashioned with the utmost grammatical rigor. H.L. Mencken’s two thousand pages on the vagaries of the American language are written in the fastidious syntax of a precisian. If we go by what these men do instead of by what they say, we conclude that they all believe in conventional grammar, practice it against their own preaching, and continue to cultivate the elegance they despise in theory….

[T]he artist and the user of language for practical ends share an obligation to preserve against confusion and dissipation the powers that over the centuries the mother tongue has acquired. It is a duty to maintain the continuity of speech that makes the thought of our ancestors easily understood, to conquer Babel every day against the illiterate and the heedless, and to resist the pernicious and lulling dogma that in language … whatever is is right and doing nothing is for the best. [Pp. 30-31]

Follett also states the true purpose of prescriptivism, which isn’t to prescribe rules for their own sake:

[This book] accept[s] the long-established conventions of prescriptive grammar … on the theory that freedom from confusion is more desirable than freedom from rule…. [P. 243]

E.B. White says it more colorfully in his introduction to The Elements of Style. Writing about William Strunk Jr., author of the original version of the book, White says:

All through The Elements of Style one finds evidence of the author’s deep sympathy for the reader. Will felt that the reader was in serious trouble most of the time, a man floundering in a swamp, and that it was the duty of anyone attempting to write English to drain this swamp quickly and get his man up on dry ground, or at least throw him a rope. In revising the text, I have tried to hold steadily in mind this belief of his, this concern for the bewildered reader. [P. xvi, third edition]

Descriptivists would let readers founder in the swamp of incomprehensibility. If descriptivists had their way — or what they claim to be their way — American English would, like the arts, recede into formless primitivism.

Eternal vigilance about language is the price of comprehensibility.

B. Illegitimi Non Carborundum Lingo

The vigilant are sorely tried these days. What follows are several restrained rants about some practices that should be resisted and repudiated.

1. Eliminate filler words.

When I was a child, most parents and all teachers told children to desist from saying “uh” between words. “Uh” was then the filler word favored by children, adolescents, and even adults. The resort to “uh” meant that the speaker was stalling because he had opened his mouth without having given enough thought to what he meant to say.

Next came “you know”. It has been displaced, in the main, by “like”, where it hasn’t been joined to “like” in the formation “like, you know”.

The need of a filler word (or phrase) seems ineradicable. Too many people insist on opening their mouths before thinking about what they’re about to say. Given that, I urge Americans in need of a filler word to use “uh” and eschew “like” and “like, you know”. “Uh” is far less distracting and irritating than the rat-a-tat of “like-like-like-like”.

Of course, it may be impossible to return to “uh”. Its brevity may not give the users of “like” enough time to organize their TV-smart-phone-video-game-addled brains and deliver coherent speech.

In any event, speech influences writing. Sloppy speech begets sloppy writing, as I know too well. I have spent the past 60 years of my life trying to undo habits of speech acquired in my childhood and adolescence — habits that still creep into my writing if I drop my guard.

2. Don’t abuse words.

How am I supposed to know what you mean if you abuse perfectly good words? Here I discuss four prominent examples of abuse.

Anniversary

Too many times in recent years I’ve heard or read something like this: “Sally and me are celebrating our one-year anniversary.” The “and me” is bad enough; “one-year anniversary” (or any variation of it) is truly egregious.

The word “anniversary” means “the annually recurring date of a past event”. To write or say “x-year anniversary” is redundant as well as graceless. Just write or say “first anniversary”, “two hundred fiftieth anniversary”, etc., as befits the occasion.

To write or say “x-month anniversary” is nonsensical. Something that happened less than a year ago can’t have an anniversary. What is meant is that such-and-such happened “x” months ago. Just say it.

Data

A person who writes or says “data is” is at best an ignoramus and at worst a Philistine.

Language, above all else, should be used to make one’s thoughts clear to others. The pairing of a plural noun and a singular verb form is distracting, if not confusing. Even though datum is seldom used by Americans, it remains the singular form of data, which is the plural form. Data, therefore, never “is”; data always “are”.

H.W. Fowler says:

Latin plurals sometimes become singular English words (e.g., agenda, stamina) and data is often so treated in U.S.; in Britain this is still considered a solecism… [A Dictionary of Modern English Usage, p. 119, second edition]

But Follett’s Modern American Usage is better on the subject:

Those who treat data as a singular doubtless think of it as a generic noun, comparable to knowledge or information [a generous interpretation]…. The rationale of agenda as a singular is its use to mean a collective program of action, rather than separate items to be acted on. But there is as yet no obligation to change the number of data under the influence of error mixed with innovation. [Pp. 130-131]

Hopefully and Its Brethren

Mark Liberman of Language Log discusses

the AP Style Guide’s decision to allow the use of hopefully as a sentence adverb, announced on Twitter at 6:22 a.m. on 17 April 2012:

Hopefully, you will appreciate this style update, announced at ‪#aces2012‬. We now support the modern usage of hopefully: it’s hoped, we hope.

Liberman, who is a descriptivist, defends AP’s egregious decision. His defense consists mainly of citing noted writers who have used “hopefully” where they meant “it is to be hoped”. I suppose that if those same noted writers had chosen to endanger others by driving on the wrong side of the road, Liberman would praise them for their “enlightened” approach to driving.

Geoff Nunberg also defends “hopefully” in “The Word ‘Hopefully’ Is Here to Stay, Hopefully”, which appears at npr.org. Nunberg (or the headline writer) may be right in saying that “hopefully” is here to stay. But that does not excuse the widespread use of the word in ways that are imprecise and meaningless.

The crux of Nunberg’s defense is that “hopefully” conveys a nuance that “language snobs” (e.g., me) are unable to grasp:

Some critics object that [“hopefully” is] a free-floating modifier (a Flying Dutchman adverb, James Kilpatrick called it) that isn’t attached to the verb of the sentence but rather describes the speaker’s attitude. But floating modifiers are mother’s milk to English grammar — nobody objects to using “sadly,” “mercifully,” “thankfully” or “frankly” in exactly the same way.

Or people complain that “hopefully” doesn’t specifically indicate who’s doing the hoping. But neither does “It is to be hoped that,” which is the phrase that critics like Wilson Follett offer as a “natural” substitute. That’s what usage fetishism can drive you to — you cross out an adverb and replace it with a six-word impersonal passive construction, and you tell yourself you’ve improved your writing.

But the real problem with these objections is their tone-deafness. People get so worked up about the word that they can’t hear what it’s really saying. The fact is that “I hope that” doesn’t mean the same thing that “hopefully” does. The first just expresses a desire; the second makes a hopeful prediction. I’m comfortable saying, “I hope I survive to 105” — it isn’t likely, but hey, you never know. But it would be pushing my luck to say, “Hopefully, I’ll survive to 105,” since that suggests it might actually be in the cards.

Floating modifiers may be common in English, but that does not excuse them. Given Nunberg’s evident attachment to them, I am unsurprised by his assertion that “nobody objects to using ‘sadly,’ ‘mercifully,’ ‘thankfully’ or ‘frankly’ in exactly the same way.”

Nobody, Mr. Nunberg? Hardly. Anyone who cares about clarity and precision in the expression of ideas will object to such usages. A good editor would rewrite any sentence that begins with a free-floating modifier — no matter which one of them it is.

Nunberg’s defense against such rewriting is that Wilson Follett offers “It is to be hoped that” as a cumbersome, wordy substitute for “hopefully”. I assume that Nunberg refers to Follett’s discussion of “hopefully” in Modern American Usage. If so, Nunberg once again proves himself an adherent of imprecision, for this is what Follett actually says about “hopefully”:

The German language is blessed with an adverb, hoffentlich, that affirms the desirability of an occurrence that may or may not come to pass. It is generally to be translated by some such periphrasis as it is to be hoped that; but hack translators and persons more at home in German than in English persistently render it as hopefully. Now, hopefully and hopeful can indeed apply to either persons or affairs. A man in difficulty is hopeful of the outcome, or a situation looks hopeful; we face the future hopefully, or events develop hopefully. What hopefully refuses to convey in idiomatic English is the desirability of the hoped-for event. College, we read, is a place for the development of habits of inquiry, the acquisition of knowledge and, hopefully, the establishment of foundations of wisdom. Such a hopefully is un-English and eccentric; it is to be hoped is the natural way to express what is meant. The underlying mentality is the same—and, hopefully, the prescription for cure is the same (let us hope) / With its enlarged circulation–and hopefully also increased readership–[a periodical] will seek to … (we hope) / Party leaders had looked confidently to Senator L. to win . . . by a wide margin and thus, hopefully, to lead the way to victory for. . . the Presidential ticket (they hoped) / Unfortunately–or hopefully, as you prefer it–it is none too soon to formulate the problems as swiftly as we can foresee them. In the last example, hopefully needs replacing by one of the true antonyms of unfortunately–e.g. providentially.

The special badness of hopefully is not alone that it strains the sense of -ly to the breaking point, but that it appeals to speakers and writers who do not think about what they are saying and pick up VOGUE WORDS [another entry in Modern American Usage] by reflex action. This peculiar charm of hopefully accounts for its tiresome frequency. How readily the rotten apple will corrupt the barrel is seen in the similar use of transferred meaning in other adverbs denoting an attitude of mind. For example: Sorrowfully (regrettably), the officials charged with wording such propositions for ballot presentation don’t say it that way / the “suicide needle” which–thankfully–he didn’t see fit to use (we are thankful to say). Adverbs so used lack point of view; they fail to tell us who does the hoping, the sorrowing, or the being thankful. Writers who feel the insistent need of an English equivalent for hoffentlich might try to popularize hopingly, but must attach it to a subject capable of hoping. [Op. cit., pp. 178-179]

Follett, contrary to Nunberg’s assertion, does not offer “It is to be hoped that” as a substitute for “hopefully”, which would “cross out an adverb and replace it with a six-word impersonal passive construction”. Follett gives “it is to be hoped that” as the sense of “hopefully”. But, as the preceding quotation attests, Follett is able to replace “hopefully” (where it is misused) with a few short words that take no longer to write or say than “hopefully”, and which convey the writer’s or speaker’s intended meaning more clearly. And if it does take a few extra words to say something clearly, why begrudge those words?

What about the other floating modifiers — such as “sadly”, “mercifully”, “thankfully”, and “frankly” — which Nunberg defends with much passion and no logic? Follett addresses those others in the last paragraph quoted above, but he does not dispose of them properly. For example, I would not simply substitute “regrettably” for “sorrowfully”; neither is adequate. What is wanted is something like this: “The officials who write propositions for ballots should not have said … , which is misleading (vague/ambiguous).” More words? Yes, but so what? (See above.)

In any event, a writer or speaker who is serious about expressing himself clearly to an audience will never say things like “Sadly (regrettably), the old man died” when he means either “I am (we are/they are/everyone who knew him is) saddened by (regrets) the old man’s dying”, or (less probably) “The old man grew sad as he died” or “The old man regretted dying.” I leave “mercifully”, “thankfully”, “frankly”, and the rest of the overused “-ly” words as an exercise for the reader.

The aims of a writer or speaker ought to be clarity and precision, not a stubborn, pseudo-logical insistence on using a word or phrase merely because it is in vogue or (more likely) because it irritates so-called language snobs. I doubt that even the pseudo-logical “language slobs” of Nunberg’s ilk condone “like” and “you know” as interjections. But by Nunberg’s “logic” those interjections should be condoned — nay, encouraged — because “everyone” knows what someone who uses them is “really saying”, namely, “I am too stupid or lazy to express myself clearly and precisely.”

Literally

This is from Dana Coleman’s article “According to the Dictionary, ‘Literally’ Also Now Means ‘Figuratively’” (Salon, August 22, 2013):

Literally, of course, means something that is actually true: “Literally every pair of shoes I own was ruined when my apartment flooded.”

When we use words not in their normal literal meaning but in a way that makes a description more impressive or interesting, the correct word, of course, is “figuratively”.

But people increasingly use “literally” to give extreme emphasis to a statement that cannot be true, as in: “My head literally exploded when I read Merriam-Webster, among others, is now sanctioning the use of literally to mean just the opposite.”

Indeed, Ragan’s PR Daily reported last week that Webster, Macmillan Dictionary and Google have added this latter informal use of “literally” as part of the word’s official definition. The Cambridge Dictionary has also jumped on board….

Webster’s first definition of literally is, “in a literal sense or manner; actually”. Its second definition is, “in effect; virtually”. In addressing this seeming contradiction, its authors comment:

“Since some people take sense 2 to be the opposite of sense 1, it has been frequently criticized as a misuse. Instead, the use is pure hyperbole intended to gain emphasis, but it often appears in contexts where no additional emphasis is necessary.”…

The problem is that a lot of people use “literally” when they mean “figuratively” because they don’t know better. It’s literally* incomprehensible to me that the editors of dictionaries would suborn linguistic anarchy. Hopefully,** they’ll rethink their rashness.
_________
* “Literally” is used correctly, though it’s superfluous here.
** “Hopefully” is used incorrectly, but in the spirit of the times.

3. Punctuate properly.

I can’t compete with Lynne Truss’s Eats, Shoots & Leaves, so I won’t try. Just read it and heed it.

But I must address the use of the hyphen in compound adjectives, and the serial comma.

Regarding the hyphen, David Bernstein of The Volokh Conspiracy writes:

I frequently have disputes with law review editors over the use of dashes. Unlike co-conspirator Eugene, I’m not a grammatical expert, or even someone who has much of an interest in the subject.

But I do feel strongly that I shouldn’t use a dash between words that constitute a phrase, as in “hired gun problem”, “forensic science system”, or “toxic tort litigation.” Law review editors seem to generally want to change these to “hired-gun problem”, “forensic-science system”, and “toxic-tort litigation.” My view is that “hired” doesn’t modify “gun”; rather “hired gun” is a self-contained phrase. The same with “forensic science” and “toxic tort.”

Most of the commenters are right to advise Bernstein that the “dashes” — he means hyphens — are necessary. Why? To avoid confusion as to what is modifying the noun “problem”.

In “hired gun”, for example, “hired” (adjective) modifies “gun” (noun) to convey the idea of a bodyguard or paid assassin. But in “hired-gun problem”, “hired-gun” is a compound adjective which requires both of its parts to modify “problem”. It’s not a “hired problem” or a “gun problem”, it’s a “hired-gun problem”. The function of the hyphen is to indicate that “hired” and “gun”, taken separately, are meaningless as modifiers of “problem”, that is, to ensure that the meaning of the adjective-noun phrase is not misread.

A hyphen isn’t always necessary in such instances, but its consistent use avoids confusion and the possibility of misinterpretation.

The consistent use of the hyphen to form a compound adjective has a counterpart in the consistent use of the serial comma, which is the comma that precedes the last item in a list of three or more items (e.g., the red, white, and blue). Newspapers (among other sinners) eschew the serial comma for reasons too arcane to pursue here. Thoughtful counselors advise its use. (See, for example, Modern American Usage at pp. 422-423.) Why? Because the serial comma, like the hyphen in a compound adjective, averts ambiguity. It isn’t always necessary, but if it’s used consistently, ambiguity can be avoided. (Here’s a great example, from the Wikipedia article linked to in the first sentence of this paragraph: “To my parents, Ayn Rand and God”. The writer means, of course, “To my parents, Ayn Rand, and God”.)

A little punctuation goes a long way.

Addendum

I have reverted to the British style of punctuating in-line quotations, which I followed 45 years ago when I published a weekly newspaper. The British style is to enclose within quotation marks only (a) the punctuation that appears in quoted text or (b) the title of a work (e.g., a blog post) that is usually placed within quotation marks.

I have reverted because of the confusion and unsightliness caused by the American style. It calls for the placement of periods and commas within quotation marks, even if the periods and commas don’t occur in the quoted material or title. Also, if there is a question mark at the end of quoted material, it replaces the comma or period that might otherwise be placed there.

If I had continued to follow American style, I would have referred to a series of blog posts in this way:

… “A New (Cold) Civil War or Secession?” “The Culture War,” “Polarization and De-facto Partition,” and “Civil War?”

What a hodge-podge. There’s no comma between the first two entries, and the sentence ends with an inappropriate question mark. With two titles ending in question marks, there was no way for me to avoid a series in which a comma is lacking. I could have avoided the sentence-ending question mark by recasting the list, but the items are listed chronologically, which is how they should be read.

I solved these problems easily by reverting to the British style:

… “A New (Cold) Civil War or Secession?”, “The Culture War”, “Polarization and De-facto Partition”, and “Civil War?”.

This not only eliminates the hodge-podge, but is also more logical and accurate. All items are separated by commas, commas aren’t displaced by question marks, and the declarative sentence ends with a period instead of a question mark.

4. Why “‘s” matters, or how to avoid ambiguity in possessives.

Most newspapers and magazines follow the convention of forming the possessive of a word ending in “s” by putting an apostrophe after the “s”; for example:

  • Dallas’ (for Dallas’s)

  • Texas’ (for Texas’s)

  • Jesus’ (for Jesus’s)

This may work on a page or screen, but it can cause ambiguity if carried over into speech*. (If I were the kind of writer who condescends to his readers, I would at this point insert the following “trigger warning”: I am about to take liberties with the name of Jesus and the New Testament, about which I will write as if it were a contemporary document. Read no further if you are easily offended.)

What sounds like “Jesus walks on water” could mean just what it sounds like: a statement about a feat of which Jesus is capable or is performing. But if Jesus walks on water more than once, the same sounds could refer to his plural perambulations: “Jesus’ walks on water”**, as it would appear in a newspaper.

The simplest and best way to avoid the ambiguity is to insist on “Jesus’s walks on water”** for the possessive case, and to inculcate the practice of saying it as it reads. How else can the ambiguity be avoided, in the likely event that the foregoing advice will be ignored?

If what is meant is “Jesus walks on water”, one could say “Jesus can [is able to] walk on water” or “Jesus is walking on water”, according to the situation.

If what is meant is that Jesus walks on water more than once, “Jesus’s walks on water” is unambiguous (assuming, of course, that one’s listeners have an inkling about the standard formation of a singular possessive). There’s no need to work around it, as there is in the non-possessive case. But if you insist on avoiding the “‘s” formation, you can write or say “the water-walks of Jesus”.

I now take it to the next level.

What if there’s more than one Jesus who walks on water? Well, if they all can walk on water and the idea is to say so, it’s “the Jesuses walk on water”. And if they all walk on water and the idea is to refer to those outings as the outings of them all, it’s “the water-walks of the Jesuses”.

Why? Because the standard formation of the plural possessive of Jesus is Jesuses’. Jesuses’s would be too hard to say or comprehend. But Jesuses’ sounds the same as Jesuses, and must therefore be avoided in speech, and in writing intended to be read aloud. Thus “the water-walks of the Jesuses” instead of “the Jesuses’ walks on water”, which is ambiguous to a listener.
__________
* A good writer will think about the effect of his writing if it is read aloud.

** “Jesus’ walks on water” and “Jesus’s walks on water” misuse the possessive case, though it’s a standard kind of misuse that is too deeply entrenched to be eradicated. Strictly speaking, Jesus doesn’t own walks on water; he does them. The alternative construction, “the water-walks of Jesus”, is better; “the water-walks by Jesus” is best.

5. Stand fast against political correctness.

As a result of political correctness, some words and phrases have gone out of favor, needlessly. Others clutter the language, also needlessly. Political correctness manifests itself in euphemisms, verboten words, and what I call gender preciousness.

Euphemisms

These are much favored by persons of the left, who seem unable to face reality. Thus, for example:

  • “Crippled” became “handicapped”, which became “disabled” and then “differently-abled” or “mobility-challenged”.

  • “Stupid” and “slow” became “learning disabled”, which became “special needs” (a euphemistic category that houses more than the stupid).

  • “Poor” became “underprivileged”, which became “economically disadvantaged”, which became “entitled” (to other people’s money).

  • “Colored persons” became “Negroes”, who became “blacks”, “Blacks”, and “African-Americans”. They are now often called “persons of color”. That looks like a variant of “colored persons”, but it also refers to a segment of humanity that includes almost everyone but white persons who are descended solely from long-time inhabitants of the British Isles and continental Europe.

How these linguistic contortions have helped the crippled, stupid, poor, and colored is a mystery to me. Tact is admirable, but euphemisms aren’t tactful. They’re insulting because they’re condescending.

Verboten Words

The list of such words is long and growing longer. Words become verboten for the same reason that euphemisms arise: to avoid giving offense, even where offense wouldn’t or shouldn’t be taken.

David Bernstein, writing at TCS Daily several years ago, recounted some tales about political correctness (source no longer available online). This one struck close to home:

One especially merit-less [hostile work environment] claim that led to a six-figure verdict involved Allen Fruge, a white Department of Energy employee based in Texas. Fruge unwittingly spawned a harassment suit when he followed up a southeast Texas training session with a bit of self-deprecating humor. He sent several of his colleagues who had attended the session with him gag certificates anointing each of them as an honorary Coon Ass — usually spelled coonass — a mildly derogatory slang term for a Cajun. The certificate stated that “[y]ou are to sing, dance, and tell jokes and eat boudin, cracklins, gumbo, crawfish etouffe and just about anything else.” The joke stemmed from the fact that southeast Texas, the training session location, has a large Cajun population, including Fruge himself.

An African American recipient of the certificate, Sherry Reid, chief of the Nuclear and Fossil Branch of the DOE in Washington, D.C., apparently missed the joke and complained to her supervisors that Fruge had called her a coon. Fruge sent Reid a formal (and humble) letter of apology for the inadvertent offense, and explained what Coon Ass actually meant. Reid nevertheless remained convinced that Coon Ass was a racial pejorative, and demanded that Fruge be fired. DOE supervisors declined to fire Fruge, but they did send him to diversity training. They also reminded Reid that the certificate had been meant as a joke, that Fruge had meant no offense, that Coon Ass was slang for Cajun, and that Fruge sent the certificates to people of various races and ethnicities, so he clearly was not targeting African Americans. Reid nevertheless sued the DOE, claiming that she had been subjected to a racial epithet that had created a hostile environment, a situation made worse by the DOE’s failure to fire Fruge.

Reid’s case was seemingly frivolous. The linguistics expert her attorney hired was unable to present evidence that Coon Ass meant anything but Cajun, or that the phrase had racist origins, and Reid presented no evidence that Fruge had any discriminatory intent when he sent the certificate to her. Moreover, even if Coon Ass had been a racial epithet, a single instance of being given a joke certificate, even one containing a racial epithet, by a non-supervisory colleague who works 1,200 miles away does not seem to remotely satisfy the legal requirement that harassment must be severe and pervasive for it to create hostile environment liability. Nevertheless, a federal district court allowed the case to go to trial, and the jury awarded Reid $120,000, plus another $100,000 in attorneys’ fees. The DOE settled the case before its appeal could be heard for a sum very close to the jury award.

I had a similar though less costly experience some years ago, when I was chief financial and administrative officer of a defense think-tank. In the course of discussing the company’s budget during a meeting with employees from across the company, I uttered “niggardly” (meaning stingy or penny-pinching). The next day a fellow vice president informed me that some of the black employees from her division had been offended by “niggardly”. I suggested that she give her employees remedial training in English vocabulary. That should have been the verdict in the Reid case.

Gender Preciousness

It has become fashionable for academicians and pseudo-serious writers to use “she” where “he” long served as the generic (and sexless) reference to a singular third person. Here is an especially grating passage from an article by Oliver Cussen:

What is a historian of ideas to do? A pessimist would say she is faced with two options. She could continue to research the Enlightenment on its own terms, and wait for those who fight over its legacy—who are somehow confident in their definitions of what “it” was—to take notice. Or, as [Jonathan] Israel has done, she could pick a side, and mobilise an immense archive for the cause of liberal modernity or for the cause of its enemies. In other words, she could join Moses Herzog, with his letters that never get read and his questions that never get answered, or she could join Sandor Himmelstein and the loud, ignorant bastards. [“The Trouble with the Enlightenment”, Prospect, May 5, 2013]

I don’t know about you, but I’m distracted by the use of the generic “she”, especially by a male. First, it’s not the norm (or wasn’t the norm until the thought police made it so). Thus my first reaction to reading it in place of “he” is to wonder who this “she” is, whereas the function of “he” as a stand-in for anyone (regardless of gender) was always well understood. Second, the usage is so obviously meant to mark the writer as “sensitive” and “right thinking” that it calls into question his sincerity and objectivity. I cite this in evidence of my position:

The use of the traditional inclusive generic pronoun “he” is a decision of language, not of gender justice. There are only six alternatives. (1) We could use the grammatically misleading and numerically incorrect “they.” But when we say “one baby was healthier than the others because they didn’t drink that milk,” we do not know whether the antecedent of “they” is “one” or “others,” so we don’t know whether to give or take away the milk. Such language codes could be dangerous to baby’s health. (2) Another alternative is the politically intrusive “in-your-face” generic “she,” which I would probably use if I were an angry, politically intrusive, in-your-face woman, but I am not any of those things. (3) Changing “he” to “he or she” refutes itself in such comically clumsy and ugly revisions as the following: “What does it profit a man or woman if he or she gains the whole world but loses his or her own soul? Or what shall a man or woman give in exchange for his or her soul?” The answer is: he or she will give up his or her linguistic sanity. (4) We could also be both intrusive and clumsy by saying “she or he.” (5) Or we could use the neuter “it,” which is both dehumanizing and inaccurate. (6) Or we could combine all the linguistic garbage together and use “she or he or it,” which, abbreviated, would sound like “sh . . . it.” I believe in the equal intelligence and value of women, but not in the intelligence or value of “political correctness,” linguistic ugliness, grammatical inaccuracy, conceptual confusion, or dehumanizing pronouns. [Peter Kreeft, Socratic Logic, 3rd ed., p. 36, n. 1, as quoted by Bill Vallicella, Maverick Philosopher, May 9, 2015]

I could go on about the use of “he or she” in place of “he” or “she”. But it should be enough to call it what it is: verbal clutter. (As for “they”, “them”, and “their” in place of singular pronouns, see “Encounters with Pronouns” at Imlac’s Journal.)

Then there is “man”, which for ages was well understood (in the proper context) as referring to persons in general, not to male persons in particular. (“Mankind” merely adds a superfluous syllable.)

The short, serviceable “man” has been replaced, for the most part, by “humankind”. I am baffled by the need to replace one syllable with three. I am baffled further by the persistence of “man” — a “sexist” term — in the three-syllable substitute. But it gets worse when writers strain to avoid the solo use of “man” by resorting to “human beings” and the “human species”. These are longer than “humankind”, and both retain the accursed “man”.

6. Don’t split infinitives.

Just don’t do it, regardless of the pleadings of descriptivists. Even Follett counsels the splitting of infinitives, when the occasion demands it. I part ways with Follett in this matter, and stand ready to be rebuked for it.

Consider the case of Eugene Volokh, a known grammatical relativist, who scoffs at “to increase dramatically” — as if “to dramatically increase” would be better. The meaning of “to increase dramatically” is clear. The only reason to write “to dramatically increase” would be to avoid the appearance of stuffiness; that is, to pander to the least cultivated of one’s readers.

Seeming unstuffy (i.e., without standards) is neither a necessary nor a sufficient reason to split an infinitive. The rule against splitting infinitives, like most other grammatical rules, serves the valid and useful purpose of keeping English from sliding yet further down the slippery slope of incomprehensibility than it already has.

If an unsplit infinitive makes a clause or sentence seem awkward, the clause or sentence should be recast to avoid the awkwardness. Better that than make an exception that leads to further exceptions — and thence to Babel.

Modern English Usage counsels splitting an infinitive where recasting doesn’t seem to work:

We admit that separation of to from its infinitive is not in itself desirable, and we shall not gratuitously say either ‘to mortally wound’ or ‘to mortally be wounded’…. We maintain, however, that a real [split infinitive], though not desirable in itself, is preferable to either of two things, to real ambiguity, and to patent artificiality…. We will split infinitives sooner than be ambiguous or artificial; more than that, we will freely admit that sufficient recasting will get rid of any [split infinitive] without involving either of those faults, and yet reserve to ourselves the right of deciding in each case whether recasting is worth while. Let us take an example: ‘In these circumstances, the Commission … has been feeling its way to modifications intended to better equip successful candidates for careers in India and at the same time to meet reasonable Indian demands.’… What then of recasting? ‘intended to make successful candidates fitter for’ is the best we can do if the exact sense is to be kept… [P. 581]

Good try, but not good enough. This would do: “In these circumstances, the Commission … has been considering modifications that would better equip successful candidates for careers in India and at the same time meet reasonable Indian demands.”

Enough said? I think so.

7. It’s all right to begin a sentence with “And” or “But” — in moderation.

It has been a very long time since a respected grammarian railed against the use of “And” or “But” at the start of a sentence. But if you have been warned against such usage, ignore the warning and heed Follett:

A prejudice lingers from the days of schoolmarmish rhetoric that a sentence should not begin with and. The supposed rule is without foundation in grammar, logic, or art. And can join separate sentences and their meanings just as well as but can both join sentences and disjoin meanings. The false rule used to apply to but equally; it is now happily forgotten. What has in fact happened is that the traditionally acceptable but after a semicolon has been replaced by the same but after a period. Let us do the same thing with and, taking care, of course, not to write long strings of sentences each headed by And or by But.

8. There’s no need to end a sentence with a preposition.

Garner says this:

The spurious rule about not ending sentences with prepositions is a remnant of Latin grammar, in which a preposition was the one word that a writer could not end a sentence with….

The idea that a preposition is ungrammatical at the end of a sentence is often attributed to 18th-century grammarians. But that idea is greatly overstated. Bishop Robert Lowth, the most prominent 18th-century grammarian, wrote that the final preposition “is an idiom, which our language is strongly inclined to: it prevails in common conversation, and suits very well with the familiar style in writing.”…

Perfectly natural-sounding sentences end with prepositions, particularly when a verb with a preposition-particle appears at the end (as in follow up or ask for). E.g.: “The act had no causal connection with the injury complained of.”

Garner goes on to warn against “such … constructions as of which, on which, and for which” that are sometimes used to avoid the use of a preposition at the end of a sentence. He argues that

“This is a point on which I must insist” becomes far more natural as “This is a point that I must insist on.”

Better yet: “I must insist on the point.”

Avoiding the sentence-ending preposition really isn’t difficult (as I just showed), unnatural, or “bad”. Benjamin Dreyer, in “Three Writing Rules to Disregard”, acknowledges as much:

Ending a sentence with a preposition (as, at, by, for, from, of, etc.) isn’t always such a hot idea, mostly because a sentence should, when it can, aim for a powerful finale and not simply dribble off like an old man’s unhappy micturition. A sentence that meanders its way to a prepositional finish is often, I find, weaker than it ought to or could be.

What did you do that for?

is passable, but

Why did you do that?

has some snap to it.

Exactly.

Dreyer tries to rescue the sentence-ending preposition by adding this:

But to tie a sentence into a strangling knot to avoid a prepositional conclusion is unhelpful and unnatural, and it’s something no good writer should attempt and no eager reader should have to contend with.

He should have followed his own advice, and written this:

But to tie a sentence into a strangling knot to avoid a prepositional conclusion is unhelpful and unnatural. It’s something that no good writer should attempt, nor foist upon the eager reader.

See? No preposition at the end, and a punchier paragraph (especially with the elimination of Dreyer’s run-on sentence).

I remain convinced that the dribbly, sentence-ending preposition is easily avoided. And, by avoiding it, the writer or speaker conveys his meaning more clearly and forcefully.

Writing: A Guide (Part III)

I am repeating the introduction for those readers who haven’t seen part I, which is here. Parts II and IV are here and here.

This series is aimed at writers of non-fiction works, but writers of fiction may also find it helpful. There are four parts:

I. Some Writers to Heed and Emulate

A. The Essentials: Lucidity, Simplicity, Euphony
B. Writing Clearly about a Difficult Subject
C. Advice from an American Master
D. Also Worth a Look

II. Step by Step

A. The First Draft

1. Decide — before you begin to write — on your main point and your purpose for making it.
2. Avoid wandering from your main point and purpose; use an outline.
3. Start by writing an introductory paragraph that summarizes your “story line”.
4. Lay out a straight path for the reader.
5. Know your audience, and write for it.
6. Facts are your friends — unless you’re trying to sell a lie, of course.
7. Momentum is your best friend.

B. From First Draft to Final Version

1. Your first draft is only that — a draft.
2. Where to begin? Stand back and look at the big picture.
3. Nit-picking is important.
4. Critics are necessary, even if not mandatory.
5. Accept criticism gratefully and graciously.
6. What if you’re an independent writer and have no one to turn to?
7. How many times should you revise your work before it’s published?

III. Reference Works

A. The Elements of Style
B. Eats, Shoots & Leaves
C. Follett’s Modern American Usage
D. Garner’s Modern American Usage
E. A Manual of Style and More

IV. Notes about Grammar and Usage

A. Stasis, Progress, Regress, and Language
B. Illegitimi Non Carborundum Lingo

1. Eliminate filler words.
2. Don’t abuse words.
3. Punctuate properly.
4. Why ‘s matters, or how to avoid ambiguity in possessives.
5. Stand fast against political correctness.
6. Don’t split infinitives.
7. It’s all right to begin a sentence with “And” or “But” — in moderation.
8. There’s no need to end a sentence with a preposition.

Some readers may conclude that I prefer stodginess to liveliness. That’s not true, as any discerning reader of this blog will know. I love new words and new ways of using words, and I try to engage readers while informing and persuading them. But I do those things within the expansive boundaries of prescriptive grammar and usage. Those boundaries will change with time, as they have in the past. But they should change only when change serves understanding, not when it serves the whims of illiterates and language anarchists.


III. REFERENCE WORKS

A. The Elements of Style

If you could have only one book to help you write better, it would be The Elements of Style. Admittedly, Strunk & White, as the book is also known, has a vociferous critic, one Geoffrey K. Pullum. But Pullum documents only one substantive flaw: an apparent mischaracterization of what constitutes the passive voice. What Pullum doesn’t say is that the book correctly flays the kind of writing that it calls passive (correctly or not). Further, Pullum derides the book’s many banal headings, while ignoring what follows them: sound advice, backed by concrete examples. (There’s a nice rebuttal of Pullum here.) It’s evident that the book’s real sin — in Pullum’s view — is “bossiness” (prescriptivism), which is no sin at all, as I’ll explain in part IV.

There are so many good writing tips in Strunk & White that it was hard for me to choose a sample. I randomly chose “Omit Needless Words” (one of the headings derided by Pullum), which opens with a statement of principles:

Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all of his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell. [P. 23]

That would be empty rhetoric, were it not followed by further discussion and 17 specific examples. Here are a few:

  • the question as to whether should be replaced by whether or the question whether

  • the reason why is that should be replaced by because

  • I was unaware of the fact that should be replaced by I was unaware that or I did not know that

  • His brother, who is a member of the same firm should be replaced by His brother, a member of the same firm [P. 24]

There’s much more than that to Strunk & White, of course. (Go here to see the table of contents.) You’ll become a better writer — perhaps an excellent one — if you carefully read Strunk & White, re-read it occasionally, and apply the principles that it espouses and illustrates.

B. Eats, Shoots & Leaves

After Strunk & White, my favorite instructional work is Lynne Truss’s Eats, Shoots & Leaves: The Zero-Tolerance Approach to Punctuation. I vouch for the accuracy of this description of the book (Publishers Weekly via Amazon.com):

Who would have thought a book about punctuation could cause such a sensation? Certainly not its modest if indignant author, who began her surprise hit motivated by “horror” and “despair” at the current state of British usage: ungrammatical signs (“BOB,S PETS”), headlines (“DEAD SONS PHOTOS MAY BE RELEASED”) and band names (“Hear’Say”) drove journalist and novelist Truss absolutely batty. But this spirited and wittily instructional little volume, which was a U.K. #1 bestseller, is not a grammar book, Truss insists; like a self-help volume, it “gives you permission to love punctuation.” Her approach falls between the descriptive and prescriptive schools of grammar study, but is closer, perhaps, to the latter. (A self-professed “stickler,” Truss recommends that anyone putting an apostrophe in a possessive “its” — as in “the dog chewed it’s bone” — should be struck by lightning and chopped to bits.) Employing a chatty tone that ranges from pleasant rant to gentle lecture to bemused dismay, Truss dissects common errors that grammar mavens have long deplored (often, as she readily points out, in isolation) and makes elegant arguments for increased attention to punctuation correctness: “without it there is no reliable way of communicating meaning.” Interspersing her lessons with bits of history (the apostrophe dates from the 16th century; the first semicolon appeared in 1494) and plenty of wit, Truss serves up a delightful, unabashedly strict and sometimes snobby little book, with cheery Britishisms (“Lawks-a-mussy!”) dotting pages that express a more international righteous indignation.

C. Follett’s Modern American Usage

Next up is Wilson Follett’s Modern American Usage: A Guide. The link points to a newer edition than the one that I’ve relied on for about 50 years. Reviews of the newer edition, edited by one Erik Wensberg, are mixed but generally favorable. However, the newer edition seems to lack Follett’s “Introductory”, which is divided into “Usage, Purism, and Pedantry” and “The Need of an Orderly Mind”. If that is so, the newer edition is likely to be more compromising toward language relativists like Geoffrey Pullum. The following quotations from Follett’s “Introductory” (one from each section) will give you an idea of Follett’s stand on relativism:

[F]atalism about language cannot be the philosophy of those who care about language; it is the illogical philosophy of their opponents. Surely the notion that, because usage is ultimately what everybody does to words, nobody can or should do anything about them is self-contradictory. Somebody, by definition, does something, and this something is best done by those with convictions and a stake in the outcome, whether the stake of private pleasure or of professional duty or both does not matter. Resistance always begins with individuals. [Pp. 12-3]

*     *     *

A great deal of our language is so automatic that even the thoughtful never think about it, and this mere not-thinking is the gate through which solecisms or inferior locutions slip in. Some part, greater or smaller, of every thousand words is inevitably parroted, even by the least parrotlike. [P. 14]

(A reprint of the original edition is available here.)

D. Garner’s Modern American Usage

I also like Garner’s Modern American Usage, by Bryan A. Garner. Though Garner doesn’t write as elegantly as Follett, he is just as tenacious and convincing as Follett in defense of prescriptivism. And Garner’s book far surpasses Follett’s in scope and detail; it’s twice the length, and the larger pages are set in smaller type.

E. A Manual of Style and More

I have one more book to recommend: The Chicago Manual of Style. Though the book is a must-have for editors, serious writers should also own a copy and consult it often. If you’re unfamiliar with the book, you can get an idea of its vast range and depth of coverage by following the preceding link, clicking on “Look inside”, and perusing the table of contents, first pages, and index.

Every writer should have a good dictionary and thesaurus at hand. I use The Free Dictionary, and am seldom disappointed by it. These also look promising: Dictionary.com and Merriam-Webster.

The Mar-a-Lago Affair

What is it?

A distraction from Hunter’s laptop. If Hunter is going down in flames — with Joe to follow — Trump will go down, too.

Peak Civilization

It’s in the rear-view mirror.

Here is an oft-quoted observation, spuriously attributed to Socrates, about youth:

The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannize their teachers.

Even though Socrates didn’t say it, the sentiment has been stated and restated since 1907, when the observation was concocted, and it probably had been shared widely for decades, even centuries, before that. I use a form of it when I discuss the spoiled children of capitalism (e.g., here).

Is there something to it? No and yes.

No, because rebelliousness and disrespect for elders and old ways seem to be part of the natural processes of physical and mental maturation.

Not all adolescents and young adults are rebellious and disrespectful. But many rebellious and disrespectful adolescents and young adults carry their attitudes with them through life, even if less obviously than in youth, as they climb the ladders of various callings. The callings that seem to be most attractive to the rebellious are the arts (especially the written, visual, thespian, terpsichorean, musical, and cinematic ones), the professoriate, the punditocracy, journalism, and politics.

Which brings me to the yes answer, and to the spoiled children of capitalism. Rebelliousness, though in some persons never entirely outgrown or suppressed by maturity, will more often be outgrown or suppressed in economically tenuous conditions, the challenges of which almost fully occupy people’s bodies and minds. (Opinionizers and sophists were accordingly much thinner on the ground in the parlous days of yore.)

However, as economic growth and concomitant technological advances have yielded abundance far beyond the necessities of life for most inhabitants of the Western world, the beneficiaries of that abundance have acquired yet another luxury: the luxury of learning about and believing in systems that, in the abstract, seem to offer vast improvements on current conditions. It is the old adage “Idle hands are the devil’s tools” brought up to date, with “minds” joining “hands” in the devilishness.

Among many bad things that result from such foolishness (e.g., the ascendancy of ideologies that crush liberty and, ironically, economic growth) is the loss of social cohesion. I was reminded of this by Noah Smith’s fatuous article, “The 1950s Are Greatly Overrated”.

Smith is an economist who blogs and writes an opinion column for Bloomberg News. My impression of him is that he is a younger version of Paul Krugman, the former economist who has become a leftist whiner. The difference between them is that Krugman remembers the 1950s fondly, whereas Smith does not.

I once said this about Krugman’s nostalgia for the 1950s, a decade during which he was a mere child:

[The nostalgia] is probably rooted in golden memories of his childhood in a prosperous community, though he retrospectively supplies an economic justification. The 1950s were (according to him) an age of middle-class dominance before the return of the Robber Barons who had been vanquished by the New Deal. This is zero-sum economics and class warfare on steroids — standard Krugman fare.

Smith, a mere toddler relative to Krugman and a babe in arms relative to me, takes a dim view of the 1950s:

For all the rose-tinted sentimentality, standards of living were markedly lower in the ’50s than they are today, and the system was riddled with vast injustice and inequality.

Women and minorities are less likely to have a wistful view of the ’50s, and with good reason. Segregation was enshrined in law in much of the U.S., and de facto segregation was in force even in Northern cities. Black Americans, crowded into ghettos, were excluded from economic opportunity by pervasive racism, and suffered horrendously. Even at the end of the decade, more than half of black Americans lived below the poverty line.

Women, meanwhile, were forced into a narrow set of occupations, and few had the option of pursuing fulfilling careers. This did not mean, however, that a single male breadwinner was always able to provide for an entire family. About a third of women worked in the ’50s, showing that many families needed a second income even if it defied the gender roles of the day.

For women who didn’t work, keeping house was no picnic. Dishwashers were almost unheard of in the 1950s, few families had a clothes dryer, and fewer than half had a washing machine.

But even beyond the pervasive racism and sexism, the 1950s wasn’t a time of ease and plenty compared to the present day. For example, by the end of the decade, even after all of that robust 1950s growth, the white poverty rate was still 18.1%, more than double that of the mid-1970s.

Nor did those above the poverty line enjoy the material plenty of later decades. Much of the nation’s housing stock in the era was small and cramped. The average floor area of a new single-family home in 1950 was only 983 square feet, just a bit bigger than the average one-bedroom apartment today.

To make matters worse, households were considerably larger in the ’50s, meaning that big families often had to squeeze into those tight living spaces. Those houses also lacked many of the things that make modern homes comfortable and convenient — not just dishwashers and clothes dryers, but air conditioning, color TVs and in many cases washing machines.

And those who did work had to work significantly more hours per year. Those jobs were often difficult and dangerous. The Occupational Safety and Health Administration wasn’t created until 1971. As recently as 1970, the rate of workplace injury was several times higher than now, and that number was undoubtedly even higher in the ’50s. Pining for those good old factory jobs is common among those who have never had to stand next to a blast furnace or work on an unautomated assembly line for eight hours a day.

Outside of work, the environment was in much worse shape than today. There was no Environmental Protection Agency, no Clean Air Act or Clean Water Act, and pollution of both air and water was horrible. The smog in Pittsburgh in the 1950s blotted out the sun. In 1952 the Cuyahoga River in Cleveland caught fire. Life expectancy at the end of the ’50s was only 70 years, compared to more than 78 today.

So life in the 1950s, though much better than what came before, wasn’t comparable to what Americans enjoyed even two decades later. In that space of time, much changed because of regulations and policies that reduced or outlawed racial and gender discrimination, while a host of government programs lowered poverty rates and cleaned up the environment.

But on top of these policy changes, the nation benefited from rapid economic growth both in the 1950s and in the decades after. Improved production techniques and the invention of new consumer products meant that there was much more wealth to go around by the 1970s than in the 1950s. Strong unions and government programs helped spread that wealth, but growth is what created it.

So the 1950s don’t deserve much of the nostalgia they receive. Though the decade has some lessons for how to make the U.S. economy more equal today with stronger unions and better financial regulation, it wasn’t an era of great equality overall. And though it was a time of huge progress and hope, the point of progress and hope is that things get better later. And by most objective measures they are much better now than they were then.

See? A junior Krugman who sees the same decade as a glass half-empty instead of half-full.

In the end, Smith admits the irrelevance of his irreverence for the 1950s when he says that “the point of progress and hope is that things get better later.” In other words, if there is progress the past will always look inferior to the present. (And, by the same token, the present will always look inferior to the future when it becomes the present.)

I could quibble with some of Smith’s particulars (e.g., racism may be less overt than it was in the 1950s, but it still boils beneath the surface, and isn’t confined to white racism; stronger unions and stifling financial regulations hamper economic growth, which Smith prizes so dearly). But I will instead take issue with his assertion, which precedes the passages quoted above, that “few of those who long for a return to the 1950s would actually want to live in those times.”

It’s not that anyone yearns for a return to the 1950s as it was in all respects, but for a return to the 1950s as it was in some crucial ways:

There is … something to the idea that the years between the end of World War II and the early 1960s were something of a Golden Age…. But it was that way for reasons other than those offered by Krugman [and despite Smith’s demurrer].

Civil society still flourished through churches, clubs, civic associations, bowling leagues, softball teams and many other voluntary organizations that (a) bound people and (b) promulgated and enforced social norms.

Those norms proscribed behavior considered harmful — not just criminal, but harmful to the social fabric (e.g., divorce, unwed motherhood, public cursing and sexuality, overt homosexuality). The norms also prescribed behavior that signaled allegiance to the institutions of civil society (e.g., church attendance, veterans’ organizations), thereby helping to preserve them and the values that they fostered.

Yes, it was an age of “conformity”, as sneering sophisticates like to say, even as they insist on conformity to reigning leftist dogmas that are destructive of the social fabric. But it was also an age of widespread mutual trust, respect, and forbearance.

Those traits, as I have said many times (e.g., here), are the foundations of liberty, which is a modus vivendi, not a mystical essence. The modus vivendi that arises from the foundations is peaceful, willing coexistence and its concomitant: beneficially cooperative behavior — liberty, in other words.

The decade and a half after the end of World War II wasn’t an ideal world of utopian imagining. But it approached a realizable ideal. That ideal — for the nation as a whole — has been put beyond reach by the vast, left-wing conspiracy that has subverted almost every aspect of life in America.

What happened was the 1960s — and its long aftermath — which saw the rise of capitalism’s spoiled children (of all ages), who have spat on and shredded the very social norms that in the 1940s and 1950s made the United States of America as united as they ever would be. Actual enemies of the nation — communists — were vilified and ostracized, and that’s as it should have been. And people weren’t banned and condemned by “friends”, “followers”, Facebook, Twitter, etc., etc., for the views that they held. Not even on college campuses, on radio and TV shows, in the print media, or in Hollywood movies.

What do the spoiled children have to show for their rejection of social norms — other than economic progress that is actually far less robust than it would have been were it not for the interventions of their religion-substitute, the omnipotent central government? Omnipotent at home, that is, and impotent (or drastically weakened) abroad, thanks to rounds of defense cuts and perpetual hand-wringing about what the “world” might think, or what some militarily inferior opponents might do, if the U.S. government were to defend Americans and protect their interests abroad.

The list of the spoiled children’s “accomplishments” is impossibly long to recite here, so I will simply offer a very small sample of things that come readily to mind:

  • California wildfires caused by misguided environmentalism.

  • The excremental wasteland that is San Francisco. (And Blue cities, generally.)

  • Flight from California wildfires, high taxes, excremental streets, and an anti-business environment.

  • The killing of small businesses, especially restaurants, by imbecilic Blue-State minimum wage laws.

  • The killing of businesses, period, by oppressive Blue-State regulations.

  • The killing of jobs for people who need them the most, by ditto and ditto.

  • Bloated pension schemes for Blue-State (and city) employees, which are bankrupting those States (and cities) and penalizing their citizens who aren’t government employees.

  • The hysteria (and even punishment) that follows from drawing a gun or admitting gun ownership.

  • The idea that men can become women and should be allowed to compete with women in athletic competitions because the men in question have endured some surgery and taken some drugs.

  • The idea that it doesn’t and shouldn’t matter to anyone that a self-identified “woman” uses women’s rest-rooms, where real women and girls become prey for prying eyes and worse.

  • Mass murder on a Hitlerian-Stalinist scale in the name of a “woman’s right to choose”, when she made that choice (in almost every case) by engaging in consensual sex.

  • Disrespect for the police and military personnel who keep them safe in their cosseted existences.

  • Applause for attacks on the same.

  • Applause for America’s enemies, which the delusional, spoiled children won’t recognize as their enemies until it’s too late.

  • Longing for impossible utopias (e.g., “true” socialism) because they promise what is actually impossible in the real world — and result in actual dystopias (e.g., the USSR, Cuba, Britain’s National Health Service).

Noah Smith is far too young to remember an America in which such things were almost unthinkable — rather than routine. People then didn’t have any idea how prosperous they would become, or how morally bankrupt and divided.

Every line of human endeavor reaches a peak, from which decline is sure to follow if the things that caused it to peak are mindlessly rejected for the sake of novelty (i.e., rejection of old norms just because they are old). This is nowhere more obvious than in the arts.

It should be equally obvious to anyone who takes an objective look at the present state of American society and is capable of comparing it with American society of the 1940s and 1950s. For all of its faults, it was a golden age. Unfortunately, most Americans now living (Noah Smith definitely included) are too young and too fixated on material things to understand what has been lost — irretrievably, I fear.

See also “1963: The Year Zero” and “Whither (Wither) America?”.

Whither (Wither) America?

You don’t want to know the answer.

John O. McGinnis, in “The Waning Fortunes of Classical Liberalism”, bemoans the state of the ideology which was born in the Enlightenment, came to maturity in the writings of J.S. Mill, had its identity stolen by modern “liberalism”, and was reborn (in the U.S.) as the leading variety of classical liberalism (i.e., minarchism). McGinnis says, for example, that

the greatest danger to classical liberalism is the sharp left turn of the Democratic Party. This has been the greatest ideological change of any party since at least the Goldwater revolution in the Republican Party more than a half a century ago….

It is certainly possible that such candidates [as Bernie Sanders, Elizabeth Warren, and Pete Buttigieg] will lose to Joe Biden or that they will not win against Trump. But they are transforming the Democratic Party just as Goldwater did the Republican Party. And the Democratic Party will win the presidency at some time in the future. Recessions and voter fatigue guarantee rotation of parties in office….

Old ideas of individual liberty are under threat in the culture as well. On the left, identity politics continues its relentless rise, particularly on university campuses. For instance, history departments, like that at my own university, hire almost exclusively those who promise to impose a gender, race, or colonial perspective on the past. The history that our students hear will be one focused on the West’s oppression of the rest rather than the reality that its creation of the institutions of free markets and free thought has brought billions of people out of poverty and tyranny that was their lot before….

And perhaps most worrying of all, both the political and cultural move to the left has come about when times are good. Previously, pressure on classical liberalism most often occurred when times were bad. The global trend to more centralized forms of government and indeed totalitarian ones in Europe occurred in the 1920s and 1930s in the midst of a global depression. The turbulent 1960s with its celebration of social disorder came during a period of hard economic times. Moreover, in the United States, young men feared they might be killed in faraway land for little purpose.

But today the economy is good, the best it has been in at least a decade. Unemployment is at a historical low. Wages are up along with the stock market. No Americans are dying in a major war. And yet both here and abroad parties that want to fundamentally shackle the market economy are gaining more adherents. If classical liberalism seems embattled now, its prospects are likely far worse in the next economic downturn or crisis of national security.

McGinnis is wrong about the 1960s being “a period of hard economic times” — in America, at least. The business cycle that began in 1960 and ended in 1970 produced the second-highest rate of growth in real GDP since the end of World War II. (The 1949-1954 cycle produced the highest rate of growth.)

But in being wrong about that non-trivial fact, McGinnis inadvertently points to the reason that “the political and cultural move to the left has come about when times are good”. The reason is symbolized by the main cause of social disorder in the 1960s (and into the early 1970s), namely, that “young men feared they might be killed in faraway land for little purpose”.

The craven behavior of supposedly responsible adults like LBJ, Walter Cronkite, Clark Kerr, and many other well-known political, media, educational, and cultural leaders — who allowed themselves to be bullied by essentially selfish protests against the Vietnam War — revealed the greatest failing of the so-called greatest generation: a widespread failure to inculcate personal responsibility in their children. The same craven behavior legitimated the now-dominant tool of political manipulation: massive, boisterous, emotion-laden appeals for this, that, and the other privilege du jour — appeals that left-wing politicians encourage and often lead; appeals that nominal conservatives often accede to rather than seem “mean”.

The “greatest” generation spawned the first generation of the spoiled children of capitalism:

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it. And, sadly, they passed their principles, if one may use the term so loosely, down the generations to the point where young people today are scarcely worth using for fertilizer.

In 1919, or 1929, or especially 1939, the adolescents of 1969 would have had neither the leisure nor the money to create the Woodstock Nation. But mommy and daddy shelled out because they didn’t want their little darlings to be caught short, and consequently their little darlings became the worthless whiners who voted for people like Bill Clinton and Barack Obama [and who Bill Clinton and Barack Obama], with results as you see them. Now that history is catching up to them, a third generation of losers can think of nothing better to do than camp out on Wall Street in hopes that the Cargo will suddenly begin to arrive again.

Good luck with that.

[From “The Spoiled Children of Capitalism”, posted in October 2011 at Dyspepsia Generation but no longer available there.]

I have long shared that assessment of the Boomer generation, and subscribe to the view that the rot set in after World War II and became rampant after 1963, when the post-World War II children of the “greatest generation” came of age.

Which brings me to Bryan Caplan’s post, “Poverty, Conscientiousness, and Broken Families”. Caplan — who is all wet when it comes to pacifism and libertarianism — usually makes sense when he describes the world as it is rather than as he would like it to be. He writes:

[W]hen leftist social scientists actually talk to and observe the poor, they confirm the stereotypes of the harshest Victorian.  Poverty isn’t about money; it’s a state of mind.  That state of mind is low conscientiousness.

Case in point: Kathryn Edin and Maria Kefalas’ Promises I Can Keep: Why Poor Women Put Motherhood Before Marriage.  The authors spent years interviewing poor single moms.  Edin actually moved into their neighborhood to get closer to her subjects.  One big conclusion:

Most social scientists who study poor families assume financial troubles are the cause of these breakups [between cohabitating parents]… Lack of money is certainly a contributing cause, as we will see, but rarely the only factor.  It is usually the young father’s criminal behavior, the spells of incarceration that so often follow, a pattern of intimate violence, his chronic infidelity, and an inability to leave drugs and alcohol alone that cause relationships to falter and die.

Furthermore:

Conflicts over money do not usually erupt simply because the man cannot find a job or because he doesn’t earn as much as someone with better skills or education.  Money usually becomes an issue because he seems unwilling to keep at a job for any length of time, usually because of issues related to respect.  Some of the jobs he can get don’t pay enough to give him the self-respect he feels he needs, and others require him to get along with unpleasant customers and coworkers, and to maintain a submissive attitude toward the boss.

These passages focus on low male conscientiousness, but the rest of the book shows it’s a two-way street.  And even when Edin and Kefalas are talking about men, low female conscientiousness is implicit.  After all, conscientious women wouldn’t associate with habitually unemployed men in the first place – not to mention alcoholics, addicts, or criminals.

Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs”. It will be the bane of the Gen Yers and Zers who do the same thing. But, as usual, “society” will be expected to pick up the tab, with food stamps, subsidized housing, drug rehab programs, Medicaid, and so on.

Before the onset of America’s welfare state in the 1930s, there were two ways to survive: work hard or accept whatever charity came your way. And there was only one way for most persons to thrive: work hard. That all changed after World War II, when power-lusting politicians sold an all-too-willing-to-believe electorate a false and dangerous bill of goods, namely, that government is the source of prosperity — secular salvation. It is not, and never has been.

McGinnis is certainly right about the decline of classical liberalism and probably right about the rise of leftism. But why is he right? Leftism will continue to ascend as long as the children of capitalism are spoiled. Classical liberalism will continue to wither because it has no moral center. There is no there there to combat the allure of “free stuff”.

Scott Yenor, writing in “The Problem with the ‘Simple Principle’ of Liberty”, makes a point about J.S. Mill’s harm principle — the heart of classical liberalism — that I have made many times. Yenor begins by quoting the principle:

The sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. . . . The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. . . .The only part of the conduct of anyone, for which he is amenable to society, is that which concerns others. In the part that merely concerns himself, his independence is, of right, absolute. Over himself, over his own body and mind, the individual is sovereign.

This is the foundational principle of classical liberalism, and it is deeply flawed, as Yenor argues (successfully, in my view). He ends with this:

[T]he simple principle of [individual] liberty undermines community and compromises character by compromising the family. As common identity and the family are necessary for the survival of liberal society—or any society—I cannot believe that modes of thinking based on the “simple principle” alone suffice for a governing philosophy. The principle works when a country has a moral people, but it doesn’t make a moral people.

Conservatism, by contrast, is anchored in moral principles, which are reflected in deep-seated social norms, at the core of which are religious norms — a bulwark of liberty. But principled conservatism (as opposed to the attitudinal kind) isn’t a big seller in this age of noise:

I mean sound, light, and motion — usually in combination. There are pockets of serenity to be sure, but the amorphous majority wallows in noise: in homes with blaring TVs; in stores, bars, clubs, and restaurants with blaring music, TVs, and light displays; in movies (which seem to be dominated by explosive computer graphics); in sports arenas (from Olympic and major-league venues down to minor-league venues, universities, and schools); and on and on….

The prevalence of noise is telling evidence of the role of mass media in cultural change. Where culture is “thin” (the vestiges of the past have worn away) it is susceptible of outside influence…. Thus the ease with which huge swaths of the amorphous majority were seduced, not just by noise but by leftist propaganda. The seduction was aided greatly by the parallel, taxpayer-funded efforts of public-school “educators” and the professoriate….

Thus did the amorphous majority bifurcate. (I locate the beginning of the bifurcation in the 1960s.) Those who haven’t been seduced by leftist propaganda have instead become resistant to it. This resistance to nanny-statism — the real resistance in America — seems to be anchored by members of that rapidly dwindling lot: adherents and practitioners of religion, especially between the two Left Coasts.

That they are also adherents of traditional social norms (e.g., marriage can only be between a man and a woman), upholders of the Second Amendment, and (largely) “blue collar” makes them a target of sneering (e.g., Barack Obama called them “bitter clingers”; Hillary Clinton called them “deplorables”)….

[But as long] as a sizeable portion of the populace remains attached to traditional norms — mainly including religion — there will be a movement in search of and in need of a leader [after Trump]. But the movement will lose potency if such a leader fails to emerge.

Were that to happen, something like the old, amorphous society might re-form, but along lines that the remnant of the old, amorphous society wouldn’t recognize. In a reprise of the Third Reich, the freedoms of association, speech, and religion would have been bulldozed with such force that only the hardiest of souls would resist going over to the dark side. And their resistance would have to be covert.

Paradoxically, 1984 may lie in the not-too-distant future, not 36 years in the past. When the nation is ruled by one party (guess which one), foot-voting will no longer be possible and the nation will settle into a darker version of the Californian dystopia.

It is quite possible that — despite current disenchantment with Democrats and given the fickleness of the electorate’s “squishy center” — the election of 2024 will bring about the end of the great experiment in liberty that began in 1776. And with that end, the traces of classical liberalism will all but vanish, along with liberty. Unless something catastrophic shakes the spoiled children of capitalism so hard that their belief in salvation through statism is destroyed. Not just destroyed, but replaced by a true sense of fellowship with other Americans (including “bitter clingers” and “deplorables”) — not the ersatz fellowship with convenient objects of condescension that elicits virtue-signaling political correctness.

Writing: A Guide (Part II)

I am repeating the introduction for those readers who may not have seen part I, which is here. Parts III and IV are here and here.

This series is aimed at writers of non-fiction works, but writers of fiction may also find it helpful. There are four parts:

I. Some Writers to Heed and Emulate

A. The Essentials: Lucidity, Simplicity, Euphony
B. Writing Clearly about a Difficult Subject
C. Advice from an American Master
D. Also Worth a Look

II. Step by Step

A. The First Draft

1. Decide — before you begin to write — on your main point and your purpose for making it.
2. Avoid wandering from your main point and purpose; use an outline.
3. Start by writing an introductory paragraph that summarizes your “story line”.
4. Lay out a straight path for the reader.
5. Know your audience, and write for it.
6. Facts are your friends — unless you’re trying to sell a lie, of course.
7. Momentum is your best friend.

B. From First Draft to Final Version

1. Your first draft is only that — a draft.
2. Where to begin? Stand back and look at the big picture.
3. Nit-picking is important.
4. Critics are necessary, even if not mandatory.
5. Accept criticism gratefully and graciously.
6. What if you’re an independent writer and have no one to turn to?
7. How many times should you revise your work before it’s published?

III. Reference Works

A. The Elements of Style
B. Eats, Shoots & Leaves
C. Follett’s Modern American Usage
D. Garner’s Modern American Usage
E. A Manual of Style and More

IV. Notes about Grammar and Usage

A. Stasis, Progress, Regress, and Language
B. Illegitimi Non Carborundum Lingo

1. Eliminate filler words.
2. Don’t abuse words.
3. Punctuate properly.
4. Why ‘s matters, or how to avoid ambiguity in possessives.
5. Stand fast against political correctness.
6. Don’t split infinitives.
7. It’s all right to begin a sentence with “And” or “But” — in moderation.
8. There’s no need to end a sentence with a preposition.

Some readers may conclude that I prefer stodginess to liveliness. That’s not true, as any discerning reader of this blog will know. I love new words and new ways of using words, and I try to engage readers while informing and persuading them. But I do those things within the expansive boundaries of prescriptive grammar and usage. Those boundaries will change with time, as they have in the past. But they should change only when change serves understanding, not when it serves the whims of illiterates and language anarchists.


II. STEP BY STEP

A. The First Draft

1. Decide — before you begin to write — on your main point and your purpose for making it.

Can you state your main point in a sentence? If you can’t, you’re not ready to write about whatever it is that’s on your mind.

Your purpose for writing about a particular subject may be descriptive, explanatory, or persuasive. An economist may, for example, begin an article by describing the state of the economy, as measured by Gross Domestic Product (GDP). He may then explain that the rate of growth in GDP has receded since the end of World War II, because of greater government spending and the cumulative effect of regulatory activity. He is then poised to make a case for less spending and for the cancellation of regulations that impede economic growth.

2. Avoid wandering from your main point and purpose; use an outline.

You can get by with a bare outline, unless you’re writing a book, a manual, or a long article. Fill the outline as you go. Change the outline if you see that you’ve omitted a step or put some steps in the wrong order. But always work to an outline, however sketchy and malleable it may be. (The outline may be a mental one if you are deeply knowledgeable about the material you’re working with.)

3. Start by writing an introductory paragraph that summarizes your “story line”.

The introductory paragraph in a news story is known as “the lead” or “the lede” (a spelling that’s meant to convey the correct pronunciation). A classic lead gives the reader the who, what, why, when, where, and how of the story. As noted in Wikipedia, leads aren’t just for journalists:

Leads in essays summarize the outline of the argument and conclusion that follows in the main body of the essay. Encyclopedia leads tend to define the subject matter as well as emphasize the interesting points of the article. Features and general articles in magazines tend to be somewhere between journalistic and encyclopedian in style and often lack a distinct lead paragraph entirely. Leads or introductions in books vary enormously in length, intent and content.

Think of the lead as a target toward which you aim your writing. You should begin your first draft with a lead, even if you later decide to eliminate, prune, or expand it.

4. Lay out a straight path for the reader.

You needn’t fill your outline sequentially, but the outline should trace a linear progression from statement of purpose to conclusion or call for action. Trackbacks and detours can be effective literary devices in the hands of a skilled writer of fiction. But you’re not writing fiction, let alone mystery fiction. So just proceed in a straight line, from beginning to end.

Quips, asides, and anecdotes should be used sparingly, and only if they reinforce your message and don’t distract the reader’s attention from it.

5. Know your audience, and write for it.

I aim at readers who can grasp complex concepts and detailed arguments. But if you’re writing something like a policy manual for employees at all levels of your company, you’ll want to keep it simple and well-marked: short words, short sentences, short paragraphs, numbered sections and sub-sections, and so on.

6. Facts are your friends — unless you’re trying to sell a lie, of course.

Unsupported generalities will defeat your purpose, unless you’re writing for a gullible, uneducated audience. Give concrete examples and cite authoritative references. If your work is technical, show your data and calculations, even if you must put the details in footnotes or appendices to avoid interrupting the flow of your argument. Supplement your words with tables and graphs, if possible, but make them as simple as you can without distorting the underlying facts.

7. Momentum is your best friend.

Write a first draft quickly, even if you must leave holes to be filled later. I’ve always found it easier to polish a rough draft that spans the entire outline than to work from a well-honed but unaccompanied introductory section.

B. From First Draft to Final Version

1. Your first draft is only that — a draft.

Unless you’re a prodigy, you’ll have to do some polishing (probably a lot) before you have something that a reader can follow with ease.

2. Where to begin? Stand back and look at the big picture.

Is your “story line” clear? Are your points logically connected? Have you omitted key steps or important facts? If you find problems, fix them before you start nit-picking your grammar, syntax, and usage.

3. Nit-picking is important.

Errors of grammar, syntax, and usage can (and probably will) undermine your credibility. Thus, for example, subject and verb must agree (“he says” not “he say”); number must be handled correctly (“there are two” not “there is two”); tense must make sense (“the shirt shrank” not “the shirt shrunk”); usage must be correct (“its” is the possessive pronoun, “it’s” is the contraction for “it is”).

4. Critics are necessary, even if not mandatory.

Unless you’re a skilled writer and objective self-critic, you should ask someone to review your work before you publish it or submit it for publication. If your work must be reviewed by a boss or an editor, count yourself lucky. Your boss is responsible for the quality of your work; he therefore has a good reason to make it better (unless he’s a jerk or psychopath). If your editor isn’t qualified to do substantive editing, he can at least correct your syntax, grammar, and usage.

5. Accept criticism gratefully and graciously.

Bad writers don’t, which is why they remain bad writers. Yes, you should reject (or fight against) changes and suggestions if they are clearly wrong, and if you can show that they’re wrong. But if your critic tells you that your logic is muddled, your facts are inapt, and your writing stinks (in so many words), chances are that your critic is right. And you’ll know that your critic is dead right if your defense (perhaps unvoiced) is “That’s just my style of writing.”

6. What if you’re an independent writer and have no one to turn to?

Be your own worst critic. If you have the time, let your first draft sit for a day or two before you return to it. Then look at it as if you’d never seen it before, as if someone else had written it. Ask yourself if it makes sense, if every key point is well-supported, and if key points are missing. Look for glaring errors in syntax, grammar, and usage. (I’ll list and discuss some useful reference works in part III.) If you can’t find any problems, or only trivial ones, you shouldn’t be a self-critic — and you’re probably a terrible writer. If you make extensive revisions, you’re on the way to becoming an excellent writer.

7. How many times should you revise your work before it’s published?

That depends, of course, on the presence or absence of a deadline. The deadline may be a formal one, geared to a production schedule. Or it may be an informal but real one, driven by current events (e.g., the need to assess a new economics text while it’s in the news). But even without a deadline, two revisions of a rough draft should be enough. A piece that’s rewritten several times can lose its (possessive pronoun) edge. And unless you’re an amateur with time to spare (e.g., a blogger like me), every rewrite represents a forgone opportunity to begin a new work.


If you act on this advice you’ll become a better writer. But be patient with yourself. Improvement takes time, and perfection never arrives.

The State of the Economy and the Myth of the "Red Hot" Labor Market

Lying liars and the lies they tell.

I’ll start with a look at the state of the economy before turning to the labor market.

The Bureau of Economic Analysis (BEA) issues a quarterly estimate of constant-dollar (year 2009) GDP, from 1947 to the present. BEA’s numbers yield several insights about the course of economic growth in the U.S.

I begin with this graph:

FIGURE 1

The exponential trend line indicates a constant-dollar (real) growth rate for the entire period of 0.77 percent quarterly, or 3.1 percent annually. The actual beginning-to-end annual growth rate is 3.1 percent.

The red bands parallel to the trend line delineate the 95-percent (1.96 sigma) confidence interval around the trend. GDP has been below the confidence interval since the recession of 2020, which has been succeeded by the incipient recession of 2022.
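For readers who want to see how such a trend and band might be computed, here is a minimal sketch in Python. It assumes a hypothetical file, real_gdp.csv, holding the BEA quarterly series in columns named date and gdp; it fits a log-linear (exponential) trend by least squares, converts the quarterly growth rate to an annual rate, and builds a 1.96-sigma band around the trend. It illustrates the method described above; it is not the code behind figure 1.

```python
# Minimal sketch: exponential trend and 95-percent band for quarterly real GDP.
# Assumes a hypothetical CSV file "real_gdp.csv" with columns "date" and "gdp".
import numpy as np
import pandas as pd

data = pd.read_csv("real_gdp.csv")      # hypothetical extract of the BEA series
y = np.log(data["gdp"].to_numpy())      # log of real GDP
t = np.arange(len(y))                   # quarter index: 0, 1, 2, ...

# Least-squares fit of log GDP on time gives a constant-rate (exponential) trend.
slope, intercept = np.polyfit(t, y, 1)
quarterly_rate = np.exp(slope) - 1
annual_rate = (1 + quarterly_rate) ** 4 - 1   # e.g., 0.77% quarterly compounds to about 3.1% annually

# Band: trend plus or minus 1.96 standard deviations of the residuals.
trend = intercept + slope * t
sigma = (y - trend).std(ddof=2)
upper = np.exp(trend + 1.96 * sigma)
lower = np.exp(trend - 1.96 * sigma)

print(f"quarterly growth: {quarterly_rate:.2%}; annualized: {annual_rate:.2%}")
```

Plotting upper and lower alongside the actual series would reproduce bands of the sort described above, give or take differences in the underlying data vintage.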

Recessions are represented by the vertical gray bars in figure 1. Here’s my definition of a recession: two or more quarters in which real GDP (annualized) is below real GDP (annualized) for an earlier quarter.
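Stated as an algorithm, that definition flags any quarter in which real GDP sits below the highest level reached in an earlier quarter, and treats a run of two or more consecutive flagged quarters as a recession. Here is a minimal sketch of that reading of the rule (mine, not necessarily the exact procedure used for figure 1):

```python
# Sketch of the recession rule stated above: a quarter is "down" if real GDP is
# below the highest level reached in any earlier quarter; two or more consecutive
# down quarters constitute a recession.
from itertools import groupby

def recessions(gdp):
    """Return (start, end) index pairs of recessions in a list of quarterly real GDP levels."""
    down = []
    peak = float("-inf")
    for level in gdp:
        down.append(level < peak)   # below the running peak of earlier quarters?
        peak = max(peak, level)
    spans, i = [], 0
    for flag, run in groupby(down):
        n = len(list(run))
        if flag and n >= 2:         # two or more consecutive down quarters
            spans.append((i, i + n - 1))
        i += n
    return spans

# Toy example: a three-quarter dip beginning in the fourth quarter of the series.
print(recessions([100, 102, 104, 103, 101, 102, 105, 107]))  # [(3, 5)]
```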

Recessions as I define them don’t correspond exactly to recessions as defined by the National Bureau of Economic Research (NBER). NBER, for example, dates the Great Recession from December 2007 to June 2009 — 18 months in all; whereas I date it from the first quarter of 2008 through the second quarter of 2011 — 42 months in all. The higher figure seems right to me, and probably to most people who bore the brunt of the Great Recession (i.e., prolonged joblessness, loss of savings, foreclosure on home, bankruptcy).

My method of identifying recessions is more objective and consistent than the NBER’s method, which one economist describes as “The NBER will know it when it sees it.” Moreover, unlike the NBER, I would not presume to pinpoint the first and last months of a recession, given the volatility of GDP estimates.

The following graph illustrates that volatility, and something much worse — the downward drift of the rate of real economic growth:

FIGURE 2

It’s not a pretty picture. The dead hand of “tax, spend, and regulate” lies heavy on the economy. (See “The Bad News about Economic Growth”.)

Here’s another ugly picture:

FIGURE 3

Rates of growth (depicted by the exponential regression lines) clearly are lower in later cycles than in earlier ones, and lowest of all in the 2009-2020 cycle (the most recent of completed cycles).

In tabular form:

There is a statistically significant, negative relationship between the length of a cycle and the robustness of a recovery. But the 2009-2020 cycle (represented by the data point at 2 months and 3.0% growth) stands out as an exception:

FIGURE 4

Note: The first, and brief, post-World War II cycle is omitted.

By now, it should not surprise you to learn that the 2009-2020 cycle was the weakest of all post-war cycles (though the previous one took a dive when it ended in the Great Recession):

FIGURE 5

Which brings me to the labor market. How can it be “red hot” when the economy is obviously so weak? In a phrase, it isn’t.

The real unemployment rate is several percentage points above the nominal rate. Officially, the unemployment rate stood at 3.5 percent as of July 2022. Unofficially — but in reality — the unemployment rate was 10.8 percent.

How can I say that the real unemployment rate was 10.8 percent, even though the official rate was 3.5 percent? Easily. Just follow this trail of definitions, provided by the official purveyor of unemployment statistics, the Bureau of Labor Statistics:

Unemployed persons (Current Population Survey)
Persons aged 16 years and older who had no employment during the reference week, were available for work, except for temporary illness, and had made specific efforts to find employment sometime during the 4-week period ending with the reference week. Persons who were waiting to be recalled to a job from which they had been laid off need not have been looking for work to be classified as unemployed.

Unemployment rate
The unemployment rate represents the number unemployed as a percent of the labor force.

Labor force (Current Population Survey)
The labor force includes all persons classified as employed or unemployed in accordance with the definitions contained in this glossary.

Labor force participation rate
The labor force as a percent of the civilian noninstitutional population.

Civilian noninstitutional population (Current Population Survey)
Included are persons 16 years of age and older residing in the 50 States and the District of Columbia who are not inmates of institutions (for example, penal and mental facilities, homes for the aged), and who are not on active duty in the Armed Forces.

In short, if you are 16 years of age and older, not confined to an institution or on active duty in the armed forces, but have not recently made specific efforts to find employment, you are not (officially) a member of the labor force. And if you are not (officially) a member of the labor force because you have given up looking for work, you are not (officially) unemployed — according to the BLS. Of course, you are really unemployed, but your unemployment is well disguised by the BLS’s contorted definition of unemployment.

What has happened is this: After peaking at 67.3 percent in the first four months of 2000, the labor-force participation rate declined to 62.3 percent in 2015, then rose to 63.4 percent just before the pandemic wreaked havoc on the economy. It dropped to 60.2 percent during the COVID recession, recovered to a post-recession peak of 62.4 percent, and slipped to 62.1 percent in July 2022 (during what may prove to be another recession). The post-recession recovery still leaves the participation rate well below its (depressed but recovering) pre-recession level:

FIGURE 6

Source: See figure 7.

The decline that began in 2000 came to a halt in 2005, but resumed in late 2008. The economic slowdown in 2001 (which followed the bursting of the dot-com bubble) can account for the decline through 2005, as workers chose to withdraw from the labor force when faced with dimmer employment prospects. But what about the sharper decline that began near the end of Bush’s second term?

There we see not only the demoralizing effects of the Great Recession but also the growing allure of incentives to refrain from work, namely, disability payments, extended unemployment benefits, the relaxation of welfare rules, the aggressive distribution of food stamps, and “free” healthcare for an expanded Medicaid enrollment base and 20-somethings who live in their parents’ basements*. That’s on the supply side. On the demand side, there are the phony and even negative effects of “stimulus” spending; the chilling effects of regime uncertainty, which persisted beyond the official end of the Great Recession; and the expansion of government spending and regulation (e.g., Dodd-Frank), as discussed in Part III.

More recently, COVID caused many workers to withdraw from the labor force out of an abundance of caution, because they couldn’t work from home, and because of the resulting recession. As noted, the recovery has stalled, resulting in a low but phony unemployment rate.

I constructed the actual unemployment rate by adjusting the nominal rate for the change in the labor-force participation rate. The disparity between the actual and nominal unemployment rates is evident in this graph:

FIGURE 7

Derived from Series LNS12000000, Seasonally Adjusted Employment Level; Series LNS11000000, Seasonally Adjusted Civilian Labor Force Level; Series LNS11300000, Seasonally Adjusted Civilian Labor Force Participation Rate; and Series LNS12500000, Employed, Usually Work Full Time. All are available at BLS, Labor Force Statistics from the Current Population Survey.
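The adjustment described above can be approximated with simple arithmetic. The sketch below reflects one plausible reading of the method (not necessarily the exact calculation behind figure 7): treat everyone who would be in the labor force at the April 2000 peak participation rate of 67.3 percent, but isn't, as unemployed.

```python
# Rough sketch: adjust the official unemployment rate for the drop in labor-force
# participation. Assumption (mine): the labor force is rescaled to its 2000 peak
# participation rate, and the "missing" workers are counted as unemployed.
def adjusted_unemployment(official_rate, participation, peak_participation=0.673):
    employed_share = participation * (1 - official_rate)  # employed as a share of the civilian population
    return 1 - employed_share / peak_participation         # unemployed share of the rescaled labor force

# July 2022: official rate of 3.5 percent, participation rate of 62.1 percent.
print(f"{adjusted_unemployment(0.035, 0.621):.1%}")  # about 11 percent, near the 10.8 percent cited above
```

The small difference from the 10.8 percent figure would come from working with the underlying employment and labor-force levels (the BLS series cited above) rather than with rounded rates.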

So, forget the Biden administration’s hoopla about the “red hot” labor market. The real unemployment rate has actually risen recently and is about where it was at the beginning of the Trump administration.
_________
* Contrary to some speculation, the labor-force participation rate is not declining because older workers are retiring earlier. The participation rate among workers 55 and older rose between 2002 and 2012. The decline is concentrated among workers under the age of 55, and especially workers in the 16-24 age bracket. (See this table at BLS.gov.) Why? My conjecture: The Great Recession caused a shakeout of marginal (low-skill) workers, many of whom simply dropped out of the labor market. And it became easier for them to drop out because, under Obamacare, many of them became eligible for Medicaid and many others enjoy prolonged coverage (until age 26) under their parents’ health plans. For more on this point, see Salim Furth’s “In the Obama Economy, a Decline in Teen Workers” (The Daily Signal, April 11, 2015) and Stephen Moore’s “Why Are So Many Employers Unable to Fill Jobs?” (The Daily Signal, April 6, 2015). On the general issue of declining participation among males aged 25-54, see Timothy Taylor’s “Why Are Men Detaching from the Labor Force?” (The Conversable Economist, January 16, 2020), and follow the links therein. See also Scott Winship’s “Declining Prime-Age Male Labor Force Participation” (The Bridge, Mercatus Center, September 26, 2017). More recently, there have been rounds of “stimmies” issued by the federal government and some State governments in response to the COVID crisis (inflicted by the governments). The “stimmies” were topped off by extended, expanded, and downright outlandish unemployment benefits (e.g., these).

Writing: A Guide (Introduction and Part I)

This series is aimed at writers of non-fiction works, but writers of fiction may also find it helpful. There are four parts:

I. Some Writers to Heed and Emulate

A. The Essentials: Lucidity, Simplicity, Euphony
B. Writing Clearly about a Difficult Subject
C. Advice from an American Master
D. Also Worth a Look

II. Step by Step

A. The First Draft

1. Decide — before you begin to write — on your main point and your purpose for making it.
2. Avoid wandering from your main point and purpose; use an outline.
3. Start by writing an introductory paragraph that summarizes your “story line”.
4. Lay out a straight path for the reader.
5. Know your audience, and write for it.
6. Facts are your friends — unless you’re trying to sell a lie, of course.
7. Momentum is your best friend.

B. From First Draft to Final Version

1. Your first draft is only that — a draft.
2. Where to begin? Stand back and look at the big picture.
3. Nit-picking is important.
4. Critics are necessary, even if not mandatory.
5. Accept criticism gratefully and graciously.
6. What if you’re an independent writer and have no one to turn to?
7. How many times should you revise your work before it’s published?

III. Reference Works

A. The Elements of Style
B. Eats, Shoots & Leaves
C. Follett’s Modern American Usage
D. Garner’s Modern American Usage
E. A Manual of Style and More

IV. Notes about Grammar and Usage

A. Stasis, Progress, Regress, and Language
B. Illegitimi Non Carborundum Lingo

1. Eliminate filler words.
2. Don’t abuse words.
3. Punctuate properly.
4. Why ‘s matters, or how to avoid ambiguity in possessives.
5. Stand fast against political correctness.
6. Don’t split infinitives.
7. It’s all right to begin a sentence with “And” or “But” — in moderation.
8. There’s no need to end a sentence with a preposition.

Some readers may conclude that I prefer stodginess to liveliness. That’s not true, as any discerning reader of this blog will know. I love new words and new ways of using words, and I try to engage readers while informing and persuading them. But I do those things within the expansive boundaries of prescriptive grammar and usage. Those boundaries will change with time, as they have in the past. But they should change only when change serves understanding, not when it serves the whims of illiterates and language anarchists.

This guide is long because it covers a lot of topics aside from the essentials of clear writing. The best concise guide to clear writing that I’ve come across is by David Randall: “Twenty-five Guidelines for Writing Prose” (National Association of Scholars).

Parts II, III, and IV are here, here, and here.


I. SOME WRITERS TO HEED AND EMULATE

A. The Essentials: Lucidity, Simplicity, Euphony

I begin with the insights of a great writer, W. Somerset Maugham (English, 1874-1965). Maugham was a prolific and popular playwright, novelist, short-story writer, and author of non-fiction works. He reflected on his life and career as a writer in The Summing Up. It appeared in 1938, when Maugham was 64 years old and more than 40 years into his very long career. I first read The Summing Up about 40 years ago, and immediately became an admirer of Maugham’s candor and insight. This led me to become an avid reader of Maugham’s novels and short-story collections. And I have continued to consult The Summing Up for booster shots of Maugham’s wisdom.

Maugham’s advice to “write lucidly, simply, euphoniously and yet with liveliness” is well-supported by examples and analysis. I offer the following excerpts of the early pages of The Summing Up, where Maugham discusses the craft of writing:

I have never had much patience with the writers who claim from the reader an effort to understand their meaning…. There are two sorts of obscurity that you find in writers. One is due to negligence and the other to wilfulness. People often write obscurely because they have never taken the trouble to learn to write clearly. This sort of obscurity you find too often in modern philosophers, in men of science, and even in literary critics. Here it is indeed strange. You would have thought that men who passed their lives in the study of the great masters of literature would be sufficiently sensitive to the beauty of language to write if not beautifully at least with perspicuity. Yet you will find in their works sentence after sentence that you must read twice to discover the sense. Often you can only guess at it, for the writers have evidently not said what they intended.

Another cause of obscurity is that the writer is himself not quite sure of his meaning. He has a vague impression of what he wants to say, but has not, either from lack of mental power or from laziness, exactly formulated it in his mind and it is natural enough that he should not find a precise expression for a confused idea. This is due largely to the fact that many writers think, not before, but as they write. The pen originates the thought…. From this there is only a little way to go to fall into the habit of setting down one’s impressions in all their original vagueness. Fools can always be found to discover a hidden sense in them….

Simplicity is not such an obvious merit as lucidity. I have aimed at it because I have no gift for richness. Within limits I admire richness in others, though I find it difficult to digest in quantity. I can read one page of Ruskin with delight, but twenty only with weariness. The rolling period, the stately epithet, the noun rich in poetic associations, the subordinate clauses that give the sentence weight and magnificence, the grandeur like that of wave following wave in the open sea; there is no doubt that in all this there is something inspiring. Words thus strung together fall on the ear like music. The appeal is sensuous rather than intellectual, and the beauty of the sound leads you easily to conclude that you need not bother about the meaning. But words are tyrannical things, they exist for their meanings, and if you will not pay attention to these, you cannot pay attention at all. Your mind wanders…..

But if richness needs gifts with which everyone is not endowed, simplicity by no means comes by nature. To achieve it needs rigid discipline…. To my mind King James’s Bible has been a very harmful influence on English prose. I am not so stupid as to deny its great beauty, and it is obvious that there are passages in it of a simplicity which is deeply moving. But the Bible is an oriental book. Its alien imagery has nothing to do with us. Those hyperboles, those luscious metaphors, are foreign to our genius…. The plain, honest English speech was overwhelmed with ornament. Blunt Englishmen twisted their tongues to speak like Hebrew prophets. There was evidently something in the English temper to which this was congenial, perhaps a native lack of precision in thought, perhaps a naive delight in fine words for their own sake, an innate eccentricity and love of embroidery, I do not know; but the fact remains that ever since, English prose has had to struggle against the tendency to luxuriance…. It is obvious that the grand style is more striking than the plain. Indeed many people think that a style that does not attract notice is not style…. But I suppose that if a man has a confused mind he will write in a confused way, if his temper is capricious his prose will be fantastical, and if he has a quick, darting intelligence that is reminded by the matter in hand of a hundred things he will, unless he has great self-control, load his pages with metaphor and simile….

Whether you ascribe importance to euphony … must depend on the sensitiveness of your ear. A great many readers, and many admirable writers, are devoid of this quality. Poets as we know have always made a great use of alliteration. They are persuaded that the repetition of a sound gives an effect of beauty. I do not think it does so in prose. It seems to me that in prose alliteration should be used only for a special reason; when used by accident it falls on the ear very disagreeably. But its accidental use is so common that one can only suppose that the sound of it is not universally offensive. Many writers without distress will put two rhyming words together, join a monstrous long adjective to a monstrous long noun, or between the end of one word and the beginning of another have a conjunction of consonants that almost breaks your jaw. These are trivial and obvious instances. I mention them only to prove that if careful writers can do such things it is only because they have no ear. Words have weight, sound and appearance; it is only by considering these that you can write a sentence that is good to look at and good to listen to.

I have read many books on English prose, but have found it hard to profit by them; for the most part they are vague, unduly theoretical, and often scolding. But you cannot say this of Fowler’s Dictionary of Modern English Usage. It is a valuable work. I do not think anyone writes so well that he cannot learn much from it. It is lively reading. Fowler liked simplicity, straightforwardness and common sense. He had no patience with pretentiousness. He had a sound feeling that idiom was the backbone of a language and he was all for the racy phrase. He was no slavish admirer of logic and was willing enough to give usage right of way through the exact demesnes of grammar. English grammar is very difficult and few writers have avoided making mistakes in it….

But Fowler had no ear. He did not see that simplicity may sometimes make concessions to euphony. I do not think a far-fetched, an archaic or even an affected word is out of place when it sounds better than the blunt, obvious one or when it gives a sentence a better balance. But, I hasten to add, though I think you may without misgiving make this concession to pleasant sound, I think you should make none to what may obscure your meaning. Anything is better than not to write clearly. There is nothing to be said against lucidity, and against simplicity only the possibility of dryness. This is a risk that is well worth taking when you reflect how much better it is to be bald than to wear a curly wig. But there is in euphony a danger that must be considered. It is very likely to be monotonous…. I do not know how one can guard against this. I suppose the best chance is to have a more lively faculty of boredom than one’s readers so that one is wearied before they are. One must always be on the watch for mannerisms and when certain cadences come too easily to the pen ask oneself whether they have not become mechanical. It is very hard to discover the exact point where the idiom one has formed to express oneself has lost its tang….

If you could write lucidly, simply, euphoniously and yet with liveliness you would write perfectly: you would write like Voltaire. And yet we know how fatal the pursuit of liveliness may be: it may result in the tiresome acrobatics of Meredith. Macaulay and Carlyle were in their different ways arresting; but at the heavy cost of naturalness. Their flashy effects distract the mind. They destroy their persuasiveness; you would not believe a man was very intent on ploughing a furrow if he carried a hoop with him and jumped through it at every other step. A good style should show no sign of effort. What is written should seem a happy accident…. [Pp. 23-32, passim, Pocket Book edition, 1967]

You should also study Maugham’s The Summing Up for its straightforward style. I return to these opening sentences of a paragraph:

Another cause of obscurity is that the writer is himself not quite sure of his meaning. He has a vague impression of what he wants to say, but has not, either from lack of mental power or from laziness, exactly formulated it in his mind and it is natural enough that he should not find a precise expression for a confused idea. This is due largely to the fact that many writers think, not before, but as they write. The pen originates the thought…. [Pp. 23-24]

This is a classic example of good writing (and it conveys excellent advice). The first sentence states the topic of the paragraph. The following sentences elaborate it. Each sentence is just long enough to convey a single, complete thought. Because of that, even the rather long second sentence in the second block quotation should be readily understood by a high-school graduate (a graduate of a small-city high school in the 1950s, at least).

B. Writing Clearly about a Difficult Subject

I offer a great English mathematician, G.H. Hardy, as a second exemplar. In particular, I recommend Hardy’s A Mathematician’s Apology. (It’s an apology in the sense of “a formal written defense of something you believe in strongly”, where the something is the pursuit of pure mathematics.) The introduction by C.P. Snow is better than Hardy’s long essay, but then Snow was a published novelist as well as a trained scientist; Hardy’s publications, other than the essay, are mathematical. The essay is notable for its accessibility, even to non-mathematicians. Of its 90 pages, only 23 (clustered near the middle) require a reader to cope with mathematics, but it’s mathematics that shouldn’t daunt a person who has taken and passed high-school algebra.

Hardy’s prose is flawed, to be sure. He overuses shudder quotes, and occasionally gets tangled in a too-long sentence. But I’m taken by his exposition of the art of doing higher mathematics, and the beauty of doing it well. Hardy, in other words, sets an example to be followed by writers who wish to capture the essence of a technical subject and convey that essence to intelligent laymen.

Here are some samples:

There are many highly respectable motives which may lead men to prosecute research, but three which are much more important than the rest. The first (without which the rest must come to nothing) is intellectual curiosity, desire to know the truth. Then, professional pride, anxiety to be satisfied with one’s performance, the shame that overcomes any self-respecting craftsman when his work is unworthy of his talent. Finally, ambition, desire for reputation, and the position, even the power or the money, which it brings. It may be fine to feel, when you have done your work, that you have added to the happiness or alleviated the sufferings of others, but that will not be why you did it. So if a mathematician, or a chemist, or even a physiologist, were to tell me that the driving force in his work had been the desire to benefit humanity, then I should not believe him (nor should I think any better of him if I did). His dominant motives have been those which I have stated and in which, surely, there is nothing of which any decent man need be ashamed. [Pp. 78-79, 1979 paperback edition]

*     *     *

A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas. A painter makes patterns with shapes and colors, a poet with words. A painting may embody an ‘idea’, but the idea is usually commonplace and unimportant. In poetry, ideas count for a good deal more; but, as Housman insisted, the importance of ideas in poetry is habitually exaggerated…

… A mathematician, on the other hand, has no material to work with but ideas, and his patterns are likely to last longer, since ideas wear less with time than words. [Pp. 84-85]

C. Advice from an American Master

A third exemplar is E.B. White, a successful writer of fiction who is probably best known for The Elements of Style. (It’s usually called “Strunk & White” or “the little book”.) It’s an outgrowth of a slimmer volume of the same name by William Strunk Jr. (Strunk had been dead for 13 years when White produced the first edition of Strunk & White.)

I’ll address the little book’s authoritativeness in a later section. Here, I’ll highlight White’s style of writing. This is from the introduction to the third edition (the last one edited by White):

The Elements of Style, when I re-examined it in 1957, seemed to me to contain rich deposits of gold. It was Will Strunk’s parvum opus, his attempt to cut the vast tangle of English rhetoric down to size and write its rules and principles on the head of a pin. Will himself had hung the tag “little” on the book; he referred to it sardonically and with secret pride as “the little book,” always giving the word “little” a special twist, as though he were putting a spin on a ball. In its original form, it was a forty-three-page summation of the case for cleanliness, accuracy, and brevity in the use of English…. [P. xi]

Vivid, direct, and engaging. And the whole book reads like that.

D. Also Worth a Look

Read Steven Pinker’s essay, “Why Academics Stink at Writing” (The Chronicle Review, September 26, 2014). You may not be an academic, but I’ll bet that you sometimes lapse into academese. (I know that I sometimes do.) Pinker’s essay will help you to recognize academese, and to understand why it’s to be avoided.

Pinker’s essay also appears in a booklet, “Why Academics Stink at Writing–and How to Fix It”, which is available here in exchange for your name, your job title, the name of your organization, and your e-mail address. (Whether you wish to give true information is up to you.) Of the four essays that follow Pinker’s, I prefer the one by Michael Munger.

Beyond that, pick and choose by searching on “writers on writing”. Google gave me 391,000 hits on the day that I published this post. Hidden among the dross, I found this, which led me to this gem: “George Orwell on Writing, How to Counter the Mindless Momentum of Language, and the Four Questions a Great Writer Must Ask Herself”. (“Herself”? I’ll say something about gender in part IV.)

Jerks and Psychopaths

Crudeness vs. subtlety.

Regarding jerks, here’s Eric Schwitzgebel, writing in “How to Tell if You’re a Jerk” (Nautilus, November 16, 2017):

Jerks are people who culpably fail to appreciate the perspectives of the people around them, treating others as tools to be manipulated or fools to be dealt with, rather than as moral and epistemic peers….

Jerks see the world through goggles that dim others’ humanity. The server at the restaurant is not a potentially interesting person with a distinctive personality, life story, and set of goals to which you might possibly relate. Instead, he is merely a tool by which to secure a meal or a fool on which you can vent your anger.

Why is it jerky to view a server as a “tool” (loaded word) by which to secure a meal? That’s his job, just as it’s the job of a clerk to ring up your order, the job of a service advisor to see that your car is serviced, etc. Pleasantness and politeness are called for in dealings with people in service occupations — as in dealings with everyone — though it may be necessary to abandon them in the face of incompetence, rudeness, or worse.

What’s not called for is a haughty or dismissive air, as if the waiter, clerk, etc., were a lesser being. I finally drew a line (mentally) through a long-time friendship when the friend — a staunch “liberal” who, by definition, didn’t view people as mere tools — was haughty and dismissive toward a waiter, and a black one at that. His behavior exemplified jerkiness. Whatever he thought about the waiter as a human being (and I have no way of knowing that), he acted the way he did because he sees himself as a superior being — an attitude to which I can attest by virtue of long acquaintance. (When haughtiness wasn’t called for, condescension was. Here’s a perfect example of it.)

That’s what makes a jerk a jerk: an overt attitude of superiority. It usually comes out as rudeness, pushiness, or loudness — in short, dominating a situation by assertive behavior.

Does the jerk have an inferiority complex for which he is compensating? Was he a spoiled child? Is he a neurotic who tries to conquer his insecurity by behaving more assertively than necessary? Does he fail to appreciate the perspectives of other people, as Schwitzgebel puts it?

Who knows? And why does it matter? When confronted with a jerk, I deal with the behavior — or ignore or avoid it. The cause would matter only if I could do something about it. But jerks (like the relatively poor) are always with us.

So are psychopaths, though they must be dealt with differently.

Schwitzgebel addresses the connection between jerkiness and psychopathy, but gets it wrong:

People with psychopathic personalities are selfish and callous, as is the jerk, but they also incline toward impulsive risk-taking, while jerks can be calculating and risk-averse.

A jerk doesn’t care (or think) about his treatment of other people in mundane settings. He is just getting away with what he can get away with at the moment; that is, he is being impulsive. Nor is jerky behavior necessarily risk-averse; it often invites a punch in the mouth.

A psychopath, by contrast, is often calculating and ingratiating — especially when he is setting up a victim for whatever he has in mind, be it setting up a co-worker for termination or setting up an innocent for seduction and murder.

A psychopath doesn’t do such things because he is devoid of empathy. On the contrary, a successful psychopath is skilled at “reading” his victims — empathizing with them — in order to entice them into a situation where he gets what he wants from them.

In evidence, I turn to Paul Bloom’s “The Root of All Cruelty?” (The New Yorker, November 27, 2017):

The thesis that viewing others as objects or animals enables our very worst conduct would seem to explain a great deal. Yet there’s reason to think that it’s almost the opposite of the truth.

At some European soccer games, fans make monkey noises at African players and throw bananas at them. Describing Africans as monkeys is a common racist trope, and might seem like yet another example of dehumanization. But plainly these fans don’t really think the players are monkeys; the whole point of their behavior is to disorient and humiliate. To believe that such taunts are effective is to assume that their targets would be ashamed to be thought of that way—which implies that, at some level, you think of them as people after all.

Consider what happened after Hitler annexed Austria, in 1938. Timothy Snyder offers a haunting description in Black Earth: The Holocaust as History and Warning:

The next morning the “scrubbing parties” began. Members of the Austrian SA, working from lists, from personal knowledge, and from the knowledge of passersby, identified Jews and forced them to kneel and clean the streets with brushes. This was a ritual humiliation. Jews, often doctors and lawyers or other professionals, were suddenly on their knees performing menial labor in front of jeering crowds. Ernest P. remembered the spectacle of the “scrubbing parties” as “amusement for the Austrian population.” A journalist described “the fluffy Viennese blondes, fighting one another to get closer to the elevating spectacle of the ashen-faced Jewish surgeon on hands and knees before a half-dozen young hooligans with Swastika armlets and dog-whips.” Meanwhile, Jewish girls were sexually abused, and older Jewish men were forced to perform public physical exercise.

The Jews who were forced to scrub the streets—not to mention those subjected to far worse degradations—were not thought of as lacking human emotions. Indeed, if the Jews had been thought to be indifferent to their treatment, there would have been nothing to watch here; the crowd had gathered because it wanted to see them suffer. The logic of such brutality is the logic of metaphor: to assert a likeness between two different things holds power only in the light of that difference. The sadism of treating human beings like vermin lies precisely in the recognition that they are not.

As with jerkiness, I don’t care what motivates psychopathy. If jerks are to be avoided, psychopaths are to be punished — good and hard — by firing them (if they are workplace psychopaths) or jailing and executing them (if they are criminal psychopaths).

Come to think of it, if jerks were punched in the mouth more often, perhaps there would be less jerky behavior. And, for most of us, it is jerks — not psychopaths — who make life less pleasant than it could be.

1963: The Year Zero

From steady progress under sane governance to regress spurred by the counter-culture.

[A] long habit of not thinking a thing WRONG, gives it a superficial appearance of being RIGHT…. Time makes more converts than reason. — Thomas Paine, Common Sense

If ignorance and passion are the foes of popular morality, it must be confessed that moral indifference is the malady of the cultivated classes. The modern separation of enlightenment and virtue, of thought and conscience, of the intellectual aristocracy from the honest and common crowd is the greatest danger that can threaten liberty. — Henri Frédéric Amiel, Journal

The Summer of Love ignited the loose, Dionysian culture that is inescapable today. The raunch and debauchery, radical individualism, stylized non-conformity, the blitzkrieg on age-old authorities, eventually impaired society’s ability to function. — Gilbert T. Sewall, “Summer of Love, Winter of Decline

If, like me, you were an adult when John F. Kennedy was assassinated, you may think of his death as a watershed moment in American history. I say this not because I’m an admirer of Kennedy the man (I am not), but because American history seemed to turn a corner after Kennedy was murdered. To take the metaphor further, the corner marked the juncture of a sunny, tree-lined street (America from the end of World War II to November 22, 1963) and a dingy, littered street (America since November 22, 1963).

Changing the metaphor, I acknowledge that the first 18 years after V-J Day were by no means halcyon, but they were the spring that followed the long, harsh winter of the Great Depression and World War II. Yes, there was the Korean War, but that failure of political resolve was only a rehearsal for later debacles. McCarthyism, a political war waged (however clumsily) on America’s actual enemies, was benign compared with the war on civil society that began in the 1960s and continues to this day. The threat of nuclear annihilation, which those of you who were schoolchildren of the 1950s will remember well, had begun to subside with the advent of JFK’s military policy of flexible response, and seemed to evaporate with JFK’s resolution of the Cuban Missile Crisis (however poorly he managed it). And for all of his personal faults, JFK was a paragon of grace, wit, and charm — a movie-star president — compared with his many successors, with the possible exception of Ronald Reagan, who had been a real movie star.

What follows is an impression of America since November 22, 1963, when spring became a long, hot summer, followed by a dismal autumn and another long, harsh winter — not of deprivation, and perhaps not of war, but of rancor and repression.

This petite histoire begins with the Vietnam War and its disastrous mishandling by LBJ, its betrayal by the media, and its spawning of the politics of noise. “Protests” in public spaces (which spill destructively onto private property) are a main feature of the politics of noise. In the new age of instant and sympathetic media attention to “protests”, civil and university authorities often refuse to enforce order. The media portray obstructive and destructive disorder as “free speech”. Thus do “protestors” learn that they can, with impunity, inconvenience and cow the masses who simply want to get on with their lives and work.

Whether “protestors” learned from rioters, or vice versa, they learned the same lesson. Authorities, in the age of Dr. Spock, lack the guts to use force, as necessary, to restore civil order. (LBJ’s decision to escalate gradually in Vietnam — “signaling” to Hanoi — instead of waging all-out war was of a piece with the “understanding” treatment of demonstrators and rioters.) Rioters learned another lesson — if a riot follows the arrest, beating, or death of a black person, it’s a “protest” against something (usually white-racist oppression, regardless of the facts), not wanton mayhem. After a hiatus of 21 years, urban riots resumed in 1964, and continue to this day.

LBJ’s “Great Society” marked the resurgence of FDR’s New Deal — with a vengeance — and the beginning of a long decline of America’s economic vitality. The combination of the Great Society (and its later extensions, such as Medicare Part D and Obamacare) with the rampant growth of regulatory activity has cut the rate of real economic growth from more than 4 percent to less than percent (and falling). Work has been discouraged; dependency has been encouraged. America since 1963 has been visited by a perfect storm of economic destruction that seems to have been designed by America’s enemies.

The Civil Rights Act of 1964 unnecessarily crushed property rights, along with freedom of association. To what end? So that a violent, dependent, Democrat-voting underclass could arise from the Great Society? So that future generations of privilege-seekers could cry “discrimination” if anyone dares to denigrate their “lifestyles”? There was a time when immigrants and other persons who seemed “different” had the good sense to strive for success and acceptance as good neighbors, employees, and merchants. But the Civil Rights Act of 1964 and its various offspring — State and local as well as federal — are meant to short-circuit that striving and to force acceptance, whether or not a person has earned it. The vast, silent majority is caught between empowered privilege-seekers and powerful privilege-granters. The privilege-seekers and privilege-granters are abetted by dupes who have, as usual, succumbed to the people’s romance — the belief that government represents society.

Presidents, above all, like to think that they represent society. What they represent, of course, are their own biases and the interests to which they are beholden. Truman, Ike, and JFK were imperfect presidential specimens, but they are shining idols by contrast with most of their successors. The downhill slide from the Vietnam War and the Great Society to Obamacare, lawlessness on immigration, the bugout from Afghanistan, and the feckless war on “climate change” has been punctuated by many shameful episodes; for example:

  • LBJ — the botched war in Vietnam, repudiation of property rights and freedom of association (the Civil Rights Act)

  • Nixon — price controls, Watergate

  • Carter — dispiriting leadership and impotence in the Iran hostage crisis

  • Reagan — bugout from Lebanon, rescue of Social Security

  • Bush I — failure to oust Saddam when it could have been done easily, the broken promise about taxes

  • Clinton — bugout from Somalia, push for an early version of Obamacare, budget-balancing at the cost of defense, and perjury

  • Bush II — No Child Left Behind Act, Medicare Part D, the initial mishandling of Iraq, and Wall Street bailouts

  • Obama — stimulus spending, Obamacare, reversal of Bush II’s eventual success in Iraq, naive backing for the “Arab spring”,  acquiescence to Iran’s nuclear ambitions, unwillingness to acknowledge or do anything about the expansionist aims of Russia and China, neglect or repudiation of traditional allies (especially Israel), and refusal to take care that the immigration laws are executed faithfully

  • Trump — many good policies (e.g., immigration control, regulatory rollbacks, more defense spending, energy self-sufficiency) offset by reckless spending and a failure to conquer the “deep state”

  • Biden — Obama on steroids, with the reversal of Trump’s policies accompanied by the self-inflicted wound of energy dependence, socially divisive and wrong-headed policies, and the movement toward outlawing political opponents.

Only Reagan’s defense buildup — and its result, victory in the Cold War — stands out as a great accomplishment. But the victory was squandered: The “peace dividend” should have been peace through continued strength, not unpreparedness for the post-9/11 wars and the resurgence of Russia and China.

The war on defense has been accompanied by a war on science. The party that proclaims itself the party of science is anything but that. It is the party of superstitious, Luddite anti-science. Witness the embrace of extreme environmentalism, the arrogance of proclamations that AGW is “settled science”, unjustified fear of genetically modified foodstuffs, the implausible doctrine that race is nothing but a social construct, and on and on.

With respect to the nation’s moral well-being, the most destructive war of all has been the culture war, which assuredly began in the 1960s. Almost overnight, it seems, the nation was catapulted from the land of Ozzie and Harriet, Father Knows Best, and Leave It to Beaver to the land of the free-speech/filthy-speech movement, Altamont, Woodstock, Hair, and the unspeakably loud, vulgar, and violent offerings that are now plastered all over the airwaves, the internet, theater screens, and “entertainment” venues.

Adherents of the ascendant culture esteem protest for its own sake, and have stock explanations for all perceived wrongs (whether or not they are wrongs): racism, sexism, homophobia, Islamophobia, hate, white privilege, inequality (of any kind), Wall Street, climate change, Zionism, and so on.

Then there is the campaign to curtail freedom of speech. The purported beneficiaries of the campaign are the gender-confused and the easily offended (thus “microaggressions” and “trigger warnings”). The true beneficiaries are leftists. Free speech is all right if it’s acceptable to the left. Otherwise, it’s “hate speech,” and must be stamped out. This is McCarthyism on steroids. McCarthy, at least, was pursuing actual enemies of liberty; today’s leftists are the enemies of liberty.

There’s a lot more, unfortunately. The organs of the state have been enlisted in an unrelenting campaign against civilizing social norms. As I say here,

we now have not just easy divorce, subsidized illegitimacy, and legions of non-mothering mothers, but also abortion, concerted (and deluded) efforts to defeminize females and to neuter or feminize males, forced association (with accompanying destruction of property and employment rights), suppression of religion, absolution of pornography, and the encouragement of “alternative lifestyles” that feature disease, promiscuity, and familial instability. The state, of course, doesn’t act of its own volition. It acts at the behest of special interests — interests with a “cultural” agenda….  They are bent on the eradication of civil society — nothing less — in favor of a state-directed Rousseauvian dystopia from which morality and liberty will have vanished, except in Orwellian doublespeak.

If there are unifying themes in this petite histoire, they are the death of common sense and the rising tide of moral vacuity — thus the epigrams at the top of the post. The history of the United States since 1963 supports the proposition that the nation is indeed going to hell in a handbasket.



The British Roots of the Founding, and of Liberty in America

Both have withered and are almost dead.

Before America became purely a “proposition nation”, it was also an “ethnic nation”. As Malcolm Pollack puts it here:

Once upon a time, an ordinary understanding of nationalism embraced all of this: love for, and loyalty to, not only shared beliefs, but also for one’s people, their common heritage and traditions, and their homeland. But in these withered times, we must pry it all apart and pare away everything, no matter how common and natural and healthy, that violates our new ideological orthodoxy. We have to be content, now, with what our grandparents would surely have seen as a sad and shriveled “patriotism”: all that is left for us to love about our nation is a handful of philosophical postulates….

There should be no doubt that the founding of the United States rests upon a set of propositions that articulate a theory of natural law and natural rights, chief among which is the proposition that no human being is by nature rightfully sovereign over any other. (This, and pretty much only this, is what the Founders meant when they said “created equal”.) So in that sense it is correct to call the United States a “proposition nation”.

The problem is that nowadays it is all too common to stop there: to declare the United States to be a “proposition nation” and nothing more….

The founders knew very well that for a society based on natural liberty and limited government to flourish would require civic virtue, and a sense of civic duty, and that these in turn required commonality: not just the commonality of assent to a set of political abstracta, but also the natural cohesion of a community of people who share history, culture, traditions, and a broad sense of actual kinship.

John Jay wrote about this in Federalist 2 (my emphasis):

It has often given me pleasure to observe that independent America was not composed of detached and distant territories, but that one connected, fertile, widespreading country was the portion of our western sons of liberty. Providence has in a particular manner blessed it with a variety of soils and productions, and watered it with innumerable streams, for the delight and accommodation of its inhabitants. A succession of navigable waters forms a kind of chain round its borders, as if to bind it together; while the most noble rivers in the world, running at convenient distances, present them with highways for the easy communication of friendly aids, and the mutual transportation and exchange of their various commodities.

With equal pleasure I have as often taken notice that Providence has been pleased to give this one connected country to one united people–a people descended from the same ancestors, speaking the same language, professing the same religion, attached to the same principles of government, very similar in their manners and customs, and who, by their joint counsels, arms, and efforts, fighting side by side throughout a long and bloody war, have nobly established general liberty and independence.

This country and this people seem to have been made for each other, and it appears as if it was the design of Providence, that an inheritance so proper and convenient for a band of brethren, united to each other by the strongest ties, should never be split into a number of unsocial, jealous, and alien sovereignties.

… The American Founding could not have happened elsewhere: swap out the colonial population of 1776 with a random assortment of people from everywhere on Earth and it would quickly have failed. The particularities of the “matter” upon which the American propositions were to act were every bit as determining as the “form” — the propositions — themselves.

But the unique circumstances of the Founding could not be preserved against the onslaught of abstract legalisms, which put the “propositions” of the Founding ahead of its substance: the culture of America’s predominantly British Founders. Examine the lists of signatories of America’s Founding Documents in the table below and you will see, in addition to many duplicated names and family names, only a few obviously non-British persons among the 123 listed.

Substantive liberty in America — the true liberty of beneficial cooperation based on mutual trust, respect, and forbearance — could not withstand the onslaught of three forces: (1) cultural fragmentation, (2) the concomitant rise of legalistic abstraction (e.g., free-speech absolutism), and (3) the aggressive growth of the central government, which is both the beneficiary and initiator of the first and second forces.

Monarchs of Wessex, England, and the United Kingdom

From Cerdic (r. 519-534) to Charles III (r. 2022- )

Charles III is the 83rd monarch. His royal lineage goes back to Sweyn, the 34th monarch (b. 960, r. 1013-1014).


"Intelligence" as a Dirty Word …

… and other evasions of the truth.

Once upon a time I read a post, “The Nature of Intelligence”,  at a now-defunct blog called MBTI Truths. (MBTI refers to a controversial personality test: Myers-Briggs Type Indicator.) Here is the entire text of the post:

A commonly held misconception within the MBTI community is that iNtuitives are smarter than Sensors. They are thought to have higher intelligence, but this belief is misguided. In an assessment of famous people with high IQs, the vast majority of them are iNtuitive. However, IQ tests measure only two types of intelligences: linguistic and logical-mathematical. In addition to these, there are six other types of intelligence: spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalistic. Sensors would probably outscore iNtuitives in several of these areas. Perhaps MBTI users should come to see iNtuitives, who make up 25 percent of the population, as having a unique type of intelligence instead of superior intelligence.

The use of “intelligence” with respect to traits other than brain-power is misguided. “Intelligence” has a clear and unambiguous meaning in everyday language; for example:

The capacity to acquire, understand, and use knowledge.

That is the way in which I use “intelligence” in “Intelligence, Personality, Politics, and Happiness”, and it is the way in which the word is commonly understood. The application of “intelligence” to other kinds of ability — musical, interpersonal, etc. — is a fairly recent development that smacks of anti-elitism. It is a way of saying that highly intelligent individuals (where “intelligence” carries its traditional meaning) are not necessarily superior in all respects. No kidding!

As to the merits of the post at MBTI Truths, it is mere speculation to say that “Sensors would probably outscore iNtuitives in several of these” other types of ability. (And what is “naturalistic intelligence”, anyway?)

Returning to a key point of “Intelligence, Personality, Politics, and Happiness”, the claim that iNtuitives are generally smarter than Sensors is nothing but a claim about the relative capacity of iNtuitives to acquire and apply knowledge. It is quite correct to say that iNtuitives are not necessarily better than Sensors at, say, sports, music, glad-handing, and so on. It is also quite correct to say that iNtuitives generally are more intelligent than Sensors, in the standard meaning of “intelligence”.

Other so-called types of intelligence are not types of intelligence at all. They are simply other types of ability, each of which is (perhaps) valuable in its own way. But calling them types of intelligence is a transparent effort to denigrate the importance of real intelligence, which is an important determinant of significant life outcomes: learning, job performance, income, health, and criminality (in the negative).

It is a sign of the times that an important human trait is played down in an effort to inflate the egos of persons who are not well endowed with respect to that trait. The attempt to redefine or minimize intelligence is of a piece with the use of genteelisms, which Wilson Follett defines as

soft-spoken expressions that are either unnecessary or too regularly used. The modern world is much given to making up euphemisms that turn into genteelisms. Thus newspapers and politicians shirk speaking of the poor and the crippled. These persons become, respectively, the underprivileged (or disadvantaged) and the handicapped [and now -challenged and -abled: ED]. (Modern American Usage (1966), p. 169)

Finally:

Genteelisms may be of … the old-fashioned sort that will not name common things outright, such as the absurd plural bosoms for breasts, and phrases that try to conceal accidental associations of ideas, such as back of for behind. The advertiser’s genteelisms are too numerous to count. They range from the false comparative (e.g., the better hotels) to the soapy phrase (e.g., gracious living), which is supposed to poeticize and perfume the proffer of bodily comforts. (Ibid., p. 170)

And so it is that such traits as athleticism, musical virtuosity, and garrulousness become kinds of intelligence. Why? Because it is somehow inegalitarian — and therefore unmentionable — that some persons are smarter than others.

Life just isn’t fair, so get over it.

A closely related matter is the use of euphemisms. A euphemism is

an innocuous word or expression used in place of one that is deemed offensive or suggests something unpleasant.

The market in euphemisms has been cornered by politically correct leftists, who can’t confront reality and wish to erect a fantasy in its place. A case in point is a “bias-free language guide” that was posted on the website of the University of New Hampshire in 2013 and stayed there for a few years. The guide disappeared after Mark Huddleston, the university’s president, issued this statement:

While individuals on our campus have every right to express themselves, I want to make it absolutely clear that the views expressed in this guide are NOT the policy of the University of New Hampshire. I am troubled by many things in the language guide, especially the suggestion that the use of the term ‘American’ is misplaced or offensive. The only UNH policy on speech is that it is free and unfettered on our campuses. It is ironic that what was probably a well-meaning effort to be ‘sensitive’ proves offensive to many people, myself included. [Quoted in “University President Offended by Bias-Free Language Guide,” an Associated Press story published in USA Today, July 30, 2015]

The same story adds some detail about the contents of the guide:

One section warns against the terms “older people, elders, seniors, senior citizens.” It suggests “people of advanced age” as preferable, though it notes that some have “reclaimed” the term “old people.” Other preferred terms include “person of material wealth” instead of rich, “person who lacks advantages that others have” instead of poor and “people of size” to replace the word overweight.

There’s more from another source:

Saying “American” to reference Americans is also problematic. The guide encourages the use of the more inclusive substitutes “U.S. citizen” or “Resident of the U.S.”

The guide notes that “American” is problematic because it “assumes the U.S. is the only country inside [the continents of North and South America].” (The guide doesn’t address whether or not the terms “Canadians” and “Mexicans” should be abandoned in favor of “Residents of Canada” and “Residents of Mexico,” respectively.)

The guide clarifies that saying “illegal alien” is also problematic. While “undocumented immigrant” is acceptable, the guide recommends saying “person seeking asylum,” or “refugee,” instead. Even saying “foreigners” is problematic; the preferred term is “international people.”

Using the word “Caucasian” is considered problematic as well, and should be discontinued in favor of “European-American individuals.” The guide also states that the notion of race is “a social construct…that was designed to maintain slavery.”

The guide also discourages the use of “mothering” or “fathering,” so as to “avoid gendering a non-gendered activity.”

Even saying the word “healthy” is problematic, the university says. The “preferred term for people without disabilities,” the university says, is “non-disabled.” Similarly, saying “handicapped” or “physically-challenged” is also problematic. Instead, the university wants people to use the more inclusive “wheelchair user,” or “person who is wheelchair mobile.”

Using the words “rich” or “poor” is also frowned upon. Instead of saying “rich,” the university encourages people to say “person of material wealth.” Rather than saying a person is “poor,” the university encourages its members to substitute “person who lacks advantages that others have” or “low economic status related to a person’s education, occupation and income.”

Terms also considered problematic include: “elders,” “senior citizen,” “overweight” (which the guide says is “arbitrary”), “speech impediment,” “dumb,” “sexual preference,” “manpower,” “freshmen,” “mailman,” and “chairman,” in addition to many others. [Peter Hasson, “Bias-Free Language Guide Claims the Word ‘American’ Is ‘Problematic’,” Campus Reform, July 28, 2015]

And more, from yet another source:

Problematic: Opposite sex. Preferred: Other sex.

Problematic: Homosexual. Preferred: Gay, Lesbian, Same Gender Loving

Problematic: Normal … healthy or whole. Preferred: Non-disabled.

Problematic/Outdated: Mothering, fathering. Preferred: Parenting, nurturing. [Jennifer Kabbany, “University’s ‘Bias-Free Language Guide’ Criticizing Word ‘American’ Prompts Shock, Anger,” The College Fix, July 30, 2015]

The UNH students who concocted the guide — and the thousands (millions?) at other campuses who think similarly — must find it hard to express themselves clearly. Every word must be weighed before it is written or spoken, for fear of giving offense to a favored group or implying support of an idea, cause, institution, or group of which the left disapproves. (But it’s always open season on “fascist, capitalist pigs”.)

Gee, it must be nice to live in a fantasy world, where reality can be obscured or changed just by saying the right words. Here’s a thought for the fantasists of the left: You don’t need to tax, spend, and regulate Americans until they’re completely impoverished and subjugated. Just say that it’s so — and leave the rest of us alone.

Getting It Perfect

Wry commentary about the Constitution and other things.

Have you ever noticed that Americans are perfectionists? It’s true.

It all began with the U.S. Constitution. The preamble to the Constitution says it was “ordained and established” (a ringing phrase, that) “in order to form a more perfect union” — among other things. It’s been all downhill since then.

The Federalists (pro-Constitution) and anti-Federalists (anti-Constitution) continued to squabble for a decade or so after ratification of the Constitution. The anti-Federalists believed the union to be perfect enough under the Articles of Confederation. But those blasted perfectionist Federalists won the debate. So here we are.

The Federalists were such perfectionists that they left room in the Constitution for amending it. After all, a “more perfect union” can’t be attained in a day. Thus, in our striving toward perfection — Constitution-wise, that is — we have now amended it 27 times. We even adopted an amendment (XVIII, the Prohibition amendment, 1919) and only 14 years later amended it out of existence (XXI, the Repeal amendment, 1933).

But we can be very patient when it comes to perfecting the Constitution through amendments. Amendment XXVII (the most recent amendment) was submitted to the States on September 25, 1789, as part of the proposed Bill of Rights. It wasn’t ratified until May 7, 1992. Not to worry, though: Amendment XXVII isn’t about rights; it merely prevents a sitting Congress from raising its own pay:

No law, varying the compensation for the services of the Senators and Representatives, shall take effect, until an election of Representatives shall have intervened.

So now the only group of public servants that can vote itself a pay raise must wait out an election before a raise takes effect. Big deal. Most members of Congress get re-elected, anyway.

Where was I? Oh, yes, perfectionism. Well, after the Constitution was ratified, the next big squabble was about States’ rights. Some politicians from the North preferred to secede rather than remain in a union that permitted slavery. Some politicians from the South said the slavery issue was just a Northern excuse to bully the South; the South, they said, should secede from the union. The union, it seems, just wasn’t perfect enough for either the North or the South. Well, the South won that squabble by seceding first, which so ticked off the North that it dragged the South back into the union, kickin’ and hollerin’. The North had decided that the only perfect union was a whole union, rednecks and all.

The union didn’t get noticeably more perfect with the South back in the fold. Things just went squabbling along through the Spanish-American War and World War I. There was a lot more prosperity in the Roaring ’20s, but that was spoiled by Prohibition. It wasn’t hard to find a drink, but you never knew when your local speakeasy might be raided by Eliot Ness or when you might get caught in a shoot-out between rival bootleggers.

The Great Depression put an end to the Roaring ’20s, and that sent perfection for a real loop. But Franklin D. Roosevelt got the idea that he could help us out of the Depression by creating a bunch of alphabet-soup agencies, including the CCC, the PWA, the FSA, and the WPA. I guess he got his idea from his older cousin, Teddy, who created his own alphabet-soup agencies back in the early 1900s.

Well, Franklin really got the ball rolling, and it seems like almost every president since him has added a bunch of alphabet-soup agencies to the executive branch. And when a president has been unable to think of new alphabet-soup agencies, Congress has stepped in and helped him out. (It’s not yet necessary to say “him, her, or it”) It seems that our politicians think we’ll attain perfection when there are enough agencies to use every possible combination of three letters from the alphabet. (That’s only 17,576 agencies; we must be getting close to perfection by now.)

During the Great Depression some people began to think that criminals (especially juvenile delinquents) weren’t naturally bad. Nope, their criminality was “society’s fault” (not enough jobs), and juvenile delinquents could be rehabilitated — made more perfect, if you will — through “understanding”, which would make model citizens of psychopaths. That idea was put on hold during World War II because we needed those former juvenile delinquents and their younger brothers to kill Krauts and Japs. (Oops, spell-checker doesn’t like “Japs”; “Nips” is okay, though.)

The idea of rehabilitating juvenile delinquents through “understanding” took hold after the war. (There was no longer a Great Depression, but there still weren’t enough jobs because of the minimum wage, another great advance toward perfection.) In fact, the idea of “understanding” miscreants spread beyond the ranks of juvenile delinquents to encompass every tot and pre-adolescent in the land. Corporal punishment became a no-no. Giving in to Johnny and Jill’s every whim became a yes-yes. Guess what? What: Johnny and Jill grew up to be voters. Politicians quickly learned not to say “no” to Johnny and Jill’s demands for — whatever — otherwise Johnny and Jill would throw a fit (and throw a politician out of office). So, politicians just got in the habit of approving things Johnny and Jill asked for. In fact, they even got in the habit of approving things Johnny and Jill might ask for. (Better safe than out of office.) A perfect union, after all, is one that grants our every wish — isn’t it? We’re not there yet, but we’re trying like hell.

Sometimes you can’t attain perfection through legislation. Then you go to court. Remember a few years ago when an Alabama jury awarded millions (millions!) of dollars to the purchaser of a new BMW who discovered that its paint job was not pristine? Or how about the small machine-tool company that was sued by a workman who lost three fingers while using (or misusing) the company’s product, even though the machine had been rebuilt at least once and had changed hands four times? (Somebody’s gotta pay for my stupidity.) Then there was the infamous case in which a jury found in favor of a woman who had burned herself with hot coffee (what did she expect?) dispensed by a fast-food chain.

The upshot of our litigiousness? The politicians elected by Johnny and Jill — ever in the pursuit of more perfection — have mandated warning labels for everything. THIS SAW IS SHARP. THIS COFFEE IS HOT. DON’T PUT THIS PLASTIC BAG OVER YOUR HEAD, STUPID. DON’T STICK YOUR HAND DOWN THIS GARBAGE DISPOSAL, YOU MORON. THIS TOY GUN WON’T KILL AN ARMED INTRUDER (HA, HA, HA, YOU GUN NUT!).

You may have noticed a trend in my tale. Politicians quit trying some years ago to perfect the union; their aim is to perfect US (We the People). That’s why they keep raising cigarette and gasoline taxes. Everyone knows that smoking is a slovenly redneck habit (movie stars excepted, of course).

As for gasoline, it’s a fossil fuel. (Think of all the dinosaurs who gave their lives so that you can guzzle gas.) But lots of people — especially politicians — “know” (because “the science” says so) that the use of fossil fuels has caused a (rather erratic) rise in Earth’s average temperature (as measured mainly by thermometers situated in urban heat-islands) amounting to 2 degrees in 170 years. (You can get the same effect by sitting in the same place for a few minutes after the sun rises.)

According to “the science”, this rather puny (and mostly phony) phenomenon is due to something called the “greenhouse effect”, in which atmospheric gases capture heat and prevent it from escaping to outer space. One of the gases (a very minor one) is CO2, some of which is emitted by human activity (e.g., guzzling gas), although the amount of CO2 in the atmosphere keeps rising even when human activity slows down (as during economic recessions and pandemics). Nevertheless, politicians believe “the science” and therefore believe that the use of fossil fuels must be stopped (STOPPED) even if it means another Great Depression and mass starvation. Or, even better, stopping the use of fossil fuels will result in the near-extinction of the human race so that evil human beings will no longer be able to use fossil fuels, and good ones (who also use them but are excused because they believe in “the science”) will have all the fossil fuels to themselves.

Ah, perfection at last.

Cass Sunstein, Part 6

Farewell — for now — to the plausible authoritarian.

This is the sixth (and perhaps final) installment of a series about Cass Sunstein, whom I have dubbed the plausible authoritarian because of his ability to make authoritarian measures seem like reasonable ways of advancing democratic participation and social comity. The first five installments are here, here, here, here, and here.

Cass Sunstein (CS) became Barack Obama’s “regulatory czar” (Administrator of the Office of Information and Regulatory Affairs), following a prolonged delay in action on his nomination to the office because of his controversial views. This post draws on posts that I wrote during and after CS’s “czarship”, which lasted from September 2009 to August 2012.

Alec Rawls, writing at his blog, Error Theory, found CS on the wrong side of history (to borrow one of his boss’s favorite slogans):

As Congress considers vastly expanding the power of copyright holders to shut down fair use of their intellectual property, this is a good time to remember the other activities that Obama’s “regulatory czar” Cass Sunstein wants to shut down using the tools of copyright protection. For a couple of years now, Sunstein has been advocating that the “notice and take down” model from copyright law should be used against rumors and conspiracy theories, “to achieve the optimal chilling effect.”

Why?

[What] Sunstein seems most intent on suppressing is the accusation, leveled during the 2008 election campaign, that Barack Obama “pals around with terrorists.” (“Look Inside” page 3.) Sunstein fails to note that the “palling around with terrorists” language was introduced by the opposing vice presidential candidate, Governor Sarah Palin (who was implicating Obama’s relationship with domestic terrorist Bill Ayers). Instead Sunstein focuses his ire on “right wing websites” that make “hateful remarks about the alleged relationship between Barack Obama and the former radical Bill Ayers,” singling out Sean Hannity for making hay out of Obama’s “alleged associations” (op. cit., pages 13-14, no longer displayed).

What could possibly be more important than whether a candidate for president does indeed “pal around with terrorists”? Of all the subjects to declare off limits, this one is right up there with whether the anti-CO2 alarmists who are trying to unplug the modern world are telling the truth. And Sunstein’s own bias on the matter could hardly be more blatant. Bill Ayers is a “former” radical? Bill “I don’t regret setting bombs” Ayers? Bill “we didn’t do enough” Ayers?

For the facts of the Obama-Ayers relationship, Sunstein apparently accepts Obama’s campaign dismissal of Ayers as just “a guy who lives in my neighborhood.” In fact their relationship was long and deep. Obama’s political career was launched via a fundraiser in Bill Ayers’ living room; Obama was appointed the first chairman of the Ayers-founded Annenberg Challenge, almost certainly at Ayers’ request [link broken]; Ayers and Obama served together on the board of the Woods Foundation, distributing money to radical left-wing causes; and it has now been reported by full-access White House biographer Christopher Andersen (and confirmed by Bill Ayers) that Ayers actually ghost wrote Obama’s first book Dreams from My Father.

Whenever free speech is attacked, the real purpose is to cover up the truth. Not that Sunstein himself knows the truth about anything. He just knows what he wants to suppress, which is exactly why government must never have this power.

As Rawls further noted, CS also wanted to protect “warmists” from their critics, that is, to suppress science in the name of science:

In climate science, there is no avoiding “reference to the machinations of powerful people, who have also managed to conceal their role.” The Team has always been sloppy about concealing its machinations, but that doesn’t stop Sunstein from using climate skepticism as an exemplar of pernicious conspiracy theorizing, and his goal is perfectly obvious: he wants the state to take aggressive action that will make it easier for our powerful government funded scientists to conceal their machinations.

After CS returned to academe, his spirit lived on in the White House, particularly with regard to CS’s advocacy of thought control, which I exposed at length in part 4 of this series.

Thus:

[Obama] Administration officials have asked YouTube to review a controversial video that many blame for spurring a wave of anti-American violence in the Middle East.

The administration flagged the 14-minute “Innocence of Muslims” video and asked that YouTube evaluate it to determine whether it violates the site’s terms of service, officials said Thursday. The video, which has been viewed by nearly 1.7 million users, depicts Muhammad as a child molester, womanizer and murderer — and has been decried as blasphemous and Islamophobic.

“Review” it, or else. When the 500-pound gorilla speaks, you say “yes, sir.”

Way to go, O-blame-a. Do not stand up for Americans. Suppress them instead. It’s the CS way.

CS later regaled his followers with this:

Suppose that an authoritarian government decides to embark on a program of curricular reform, with the explicit goal of indoctrinating the nation’s high school students. Suppose that it wants to change the curriculum to teach students that their government is good and trustworthy, that their system is democratic and committed to the rule of law, and that free markets are a big problem.

Will such a government succeed? Or will high school students simply roll their eyes?

Questions of this kind have long been debated, but without the benefit of reliable evidence. New research, from Davide Cantoni of the University of Munich and several co-authors, shows that recent curricular reforms in China, explicitly designed to transform students’ political views, have mostly worked….

… [G]overnment planners were able to succeed in altering students’ views on fundamental questions about their nation. As Cantoni and his co-authors summarize their various findings, “the state can effectively indoctrinate students.” To be sure, families and friends matter, as do economic incentives, but if an authoritarian government is determined to move students in major ways, it may well be able to do so.

Is this conclusion limited to authoritarian nations? In a democratic country with a flourishing civil society, a high degree of pluralism, and ample room for disagreement and dissent — like the U.S. — it may well be harder to use the curriculum to change the political views of young people. But even in such societies, high schools probably have a significant ability to move students toward what they consider “a correct worldview, a correct view on life, and a correct value system.” That’s an opportunity, to be sure, but it is also a warning. [“Open Brain, Insert Ideology,” Bloomberg View, May 20, 2014]

Where had CS been? He seemed unaware of the left-wing ethos that has long prevailed in most of America’s so-called institutions of learning. It doesn’t take an authoritarian government (well, not one as authoritarian as China’s) to indoctrinate students in “a correct worldview, a correct view on life, and a correct value system”. All it takes is the spread of left-wing “values” by the media and legions of pedagogues, most of them financed (directly and indirectly) by a thoroughly subverted government. It’s almost a miracle — and something of a moral victory — that there are still tens of millions of Americans who resist and oppose left-wing “values”.

Moving on, I found CS arguing circularly in his contribution to a collection of papers entitled “Economists on the Welfare State and the Regulatory State: Why Don’t Any Argue in Favor of One and Against the Other?” (Econ Journal Watch, Volume 12, Issue 1, January 2015):

… [I]t seems unhelpful, even a recipe for confusion, to puzzle over the question whether economists (or others) ‘like,’ or ‘lean toward,’ both the regulatory state and the welfare state, or neither, or one but not the other. But there is a more fine-grained position on something like that question, and I believe that many (not all) economists would support it. The position is this: The regulatory state should restrict itself to the correction of market failures, and redistributive goals are best achieved through the tax system. Let’s call this (somewhat tendentiously) the Standard View….

My conclusion is that it is not fruitful to puzzle over the question whether economists and others ‘favor’ or ‘lean’ toward the regulatory or welfare state, and that it is better to begin by emphasizing that the first should be designed to handle market failures, and that the second should be designed to respond to economic deprivation and unjustified inequality…. [Sunstein, “Unhelpful Abstractions and the Standard View,” op cit.]

“Market failures” and “unjustified inequality” are the foundation stones of what passes for economic and social thought on the left. Every market outcome that falls short of the left’s controlling agenda is a “failure”. And market and social outcomes that fall short of the left’s illusory egalitarianism are “unjustified”. CS, in other words, couldn’t (and probably still can’t) see that he is a typical leftist who (implicitly) favors both the regulatory state and the welfare state. He is like a fish in water.

Along came a writer who seemed bent on garnering sympathy for CS. I am referring to Andrew Marantz, who wrote “How a Liberal Scholar of Conspiracy Theories Became the Subject of a Right-Wing Conspiracy Theory” (New Yorker, December 27, 2017):

In 2010, Marc Estrin, a novelist and far-left activist from Vermont, found an online version of a paper by Cass Sunstein, a professor at Harvard Law School and the most frequently cited legal scholar in the world. The paper, called “Conspiracy Theories,” was first published in 2008, in a small academic journal called the Journal of Political Philosophy. In it, Sunstein and his Harvard colleague Adrian Vermeule attempted to explain how conspiracy theories spread, especially online. At one point, they made a radical proposal: “Our main policy claim here is that government should engage in cognitive infiltration of the groups that produce conspiracy theories.” The authors’ primary example of a conspiracy theory was the belief that 9/11 was an inside job; they defined “cognitive infiltration” as a program “whereby government agents or their allies (acting either virtually or in real space, and either openly or anonymously) will undermine the crippled epistemology of believers by planting doubts about the theories and stylized facts that circulate within such groups.”

Nowhere in the final version of the paper did Sunstein and Vermeule state the obvious fact that a government ban on conspiracy theories would be unconstitutional and possibly dangerous. (In a draft that was posted online, which remains more widely read, they emphasized that censorship is “inconsistent with principles of freedom of expression,” although they “could imagine circumstances in which a conspiracy theory became so pervasive, and so dangerous, that censorship would be thinkable.”)* “I was interested in the mechanisms by which information, whether true or false, gets passed along and amplified,” Sunstein told me recently. “I wanted to know how extremists come to believe the warped things they believe, and, to a lesser extent, what might be done to interrupt their radicalization. But I suppose my writing wasn’t very clear.”

On the contrary, CS’s writing was quite clear. So clear that even leftists were alarmed by it. Returning to Marantz’s account:

When Barack Obama became President, in 2009, he appointed Sunstein, his friend and former colleague at the University of Chicago Law School, to be the administrator of the Office of Information and Regulatory Affairs. The O.I.R.A. reviews drafts of federal rules, and, using tools such as cost-benefit analysis, recommends ways to make them more efficient. O.I.R.A. administrator is the sort of in-the-weeds post that even lifelong technocrats might find unglamorous; Sunstein had often described it as his “dream job.” He took a break from academia and moved to Washington, D.C. It soon became clear that some of his published views, which he’d thought of as “maybe a bit mischievous, but basically fine, within the context of an academic journal,” could seem far more nefarious in the context of the open Internet.

Estrin, who seems to have been the first blogger to notice the “Conspiracy Theories” paper, published a post in January, 2010, under the headline “Got Fascism?” “Put into English, what Sunstein is proposing is government infiltration of groups opposing prevailing policy,” he wrote on the “alternative progressive” Web site the Rag Blog. Three days later, the journalist Daniel Tencer (Twitter bio: “Lover of great narratives in all their forms”) expanded on Estrin’s post, for Raw Story. Two days after that, the civil-libertarian journalist Glenn Greenwald wrote a piece for Salon headlined “Obama Confidant’s Spine-Chilling Proposal.” Greenwald called Sunstein’s paper “truly pernicious,” concluding, “The reason conspiracy theories resonate so much is precisely that people have learned—rationally—to distrust government actions and statements. Sunstein’s proposed covert propaganda scheme is a perfect illustration of why that is.” Sunstein’s “scheme,” as Greenwald put it, wasn’t exactly a government action or statement. Sunstein wasn’t in government when he wrote it, in 2008; he was in the academy, where his job was to invent thought experiments, including provocative ones. But Greenwald was right that not all skepticism is paranoia.

And then:

Three days after Estrin’s post was published on the Rag Blog, the fire jumped to the other side of the road. Paul Joseph Watson, writing for the libertarian conspiracist outfit InfoWars, linked to Estrin’s post and riffed on it, in a free-associative mode, for fifteen hundred words. “It is a firmly established fact that the military-industrial complex which also owns the corporate media networks in the United States has numerous programs aimed at infiltrating prominent Internet sites and spreading propaganda to counter the truth,” Watson wrote. His boss at InfoWars, Alex Jones, began expanding on this talking point on his daily radio show: “Cass Sunstein says ban conspiracy theories, and that’s whatever he says it is. That’s on record.”

At the time, Glenn Beck hosted both a daily TV show on Fox News and a syndicated radio show; according to a Harris poll, he was the country’s second-favorite TV personality, after Oprah Winfrey. Beck had been delivering impassioned rants against Sunstein for months, calling him “the most dangerous man in America.” Now he added the paper about conspiracy theories to his litany of complaints. In one typical TV segment, in April of 2010, he devoted several minutes to a close reading of the paper, which lists five possible ways that a government might respond to conspiracy theories, including banning them outright. “The government should ban them,” Beck said, over-enunciating to express his incredulity. “How a government with an amendment guaranteeing freedom of speech bans a conspiracy theory is absolutely beyond me, but it’s not beyond a great mind and a great thinker like Cass Sunstein.” In another show, Beck insinuated that Sunstein had been inspired by Edward Bernays, the author of a 1928 book called “Propaganda.” “I got a flood of messages that night, saying, ‘You should be ashamed of yourself, you’re a disciple of Bernays,’ ” Sunstein recalled. “The result was that I was led to look up this interesting guy Bernays, whom I might not have heard of otherwise.”

For much of 2010 and 2011, Sunstein was such a frequent target on right-wing talk shows that some Tea Party-affiliated members of Congress started to invoke his name as a symbol of government overreach. Earlier in the Obama Administration, Beck had targeted Van Jones, now of CNN, who was then a White House adviser on green jobs. After a few weeks of Beck’s attacks, Jones resigned. “Then Beck made it sort of clear that he wanted me to be next,” Sunstein said. “It wasn’t a pleasant fact, but I didn’t see what I could do about it. So I put it out of my mind.”

Sunstein was never asked to resign. He served as the head of O.I.R.A. for three years, then returned to Harvard, in 2012. Two years later, he published an essay collection called “Conspiracy Theories and Other Dangerous Ideas.” The first chapter was a revised version of the “Conspiracy Theories” paper, with several qualifications added and with Vermeule’s name removed. But the revisions did nothing to improve Sunstein’s standing on far-right talk shows, where he had already earned a place, along with Saul Alinsky and George Soros and Al Gore, in the pantheon of globalist bogeymen. Beck referred to Sunstein as recently as last year, on his radio show, while discussing the Obama Administration’s “propaganda” in favor of the Iran nuclear deal. “We no longer have Jefferson and Madison leading us,” Beck said. “We have Saul Alinsky and Cass Sunstein. Whatever it takes to win, you do.” Last December, Alex Jones—who is, improbably, now taken more seriously than Beck by many conservatives, including some in the White House—railed against a recent law, the Countering Foreign Propaganda and Disinformation Act, claiming, speciously, that it would “completely federalize all communications in the United States” and “put the C.I.A. in control of media.” According to Jones, blame for the law rested neither with the members of Congress who wrote it nor with President Obama, who signed it. “I was sitting here this morning . . . And I keep thinking, What are you looking at that’s triggered a memory here?” Jones said. “And then I remembered, Oh, my gosh! It’s Cass Sunstein.”

Cue the tears for Sunstein:

Recently, on the Upper East Side, Sunstein stood behind a Lucite lectern and gave a talk about “#Republic.” Attempting to end on a hopeful note, he quoted John Stuart Mill: “It is hardly possible to overrate the value . . . of placing human beings in contact with persons dissimilar to themselves.” He then admitted, with some resignation, that this describes the Internet we should want, not the Internet we have.

After the talk, we sat in a hotel restaurant and ordered coffee. Sunstein has a sense of humor about his time in the spotlight—what he calls not his fifteen minutes of fame but his Two Minutes Hate, an allusion to “1984”—and yet he wasn’t sure what lessons he had learned from the experience, if any. “I can’t say I spent much time thinking about it, then or now,” he said. “The rosy view would be that it says something hopeful about us—about Americans, that is. We’re highly distrustful of anything that looks like censorship, or spying, or restriction of freedom in any way. That’s probably a good impulse.” He folded his hands on the table, as if to signal that he had phrased his thoughts as diplomatically as possible.

I’m not buying it. CS deserved (and deserves) every bit of blame that has come his way, and I certainly wouldn’t buy a car or house from him. He was attacked from the left and right for good reason, and portraying his attackers as kooks and extremists doesn’t change the facts of the matter. Sunstein’s 2010 article wasn’t a one-off thing. Six years earlier he published “The Future of Free Speech”, which I quoted from and analyzed in part 4 of this series. I ended with this:

[T]he fundamental reason to reject [CS’s] scheme is its authoritarianism. It would effectively bring the broadcast media and the internet under the control of a government bureaucracy. Any bureaucracy that is empowered to insist upon “completeness”, “fairness”, and “balance” in the exposition of ideas is thereby empowered to define and enforce its conception of those attributes. It is easy to imagine how a bureaucracy that is dominated by power-crazed zealots who espouse socialism, gender fluidity, “equity”, etc., etc., would deploy its power.

In an earlier post I said that Cass Sunstein is to the integrity of constitutional law as Pete Rose was to the integrity of baseball. It’s worse than that: Sunstein’s willingness to abuse constitutional law in the advancement of a statist agenda reminds me of Hitler’s abuse of German law to advance his repugnant agenda.

There is remorse for having done something wrong, and there is chagrin at having been caught doing something wrong. CS’s conversation-over-coffee with Marantz reads very much like the latter.

It remains a mystery to me why CS has been called a “legal Olympian.” Then, again, if there were a legal Olympics, its main events would be Obfuscation and Casuistry, and CS would be a formidable contestant in both events.

How's Your Implicit Attitude?

Mine’s just fine, thank you.

I was unaware of the Implicit Association Test (IAT) until a few years ago, when I took a test at YourMorals.Org that purported to measure my implicit racial preferences. I’ll say more about that after discussing the IAT, which has been exposed as junk. That’s what John J. Ray calls it:

Psychologists are well aware that people often do not say what they really think.  It is therefore something of a holy grail among them to find ways that WILL detect what people really think. A very popular example of that is the Implicit Associations test (IAT).  It supposedly measures racist thoughts whether you are aware of them or not.  It sometimes shows people who think they are anti-racist to be in fact secretly racist.

I dismissed it as a heap of junk long ago (here and here) but it has remained very popular and is widely accepted as revealing truth.  I am therefore pleased that a very long and thorough article has just appeared which comes to the same conclusion that I did. [“Psychology’s Favorite Tool for Measuring Racism Isn’t Up to the Job“, Political Correctness Watch, September 6, 2017]

The article in question (which has the same title as Ray’s post) is by Jesse Singal. It appeared at Science of Us on January 11, 2017. Here are some excerpts:

Perhaps no new concept from the world of academic psychology has taken hold of the public imagination more quickly and profoundly in the 21st century than implicit bias — that is, forms of bias which operate beyond the conscious awareness of individuals. That’s in large part due to the blockbuster success of the so-called implicit association test, which purports to offer a quick, easy way to measure how implicitly biased individual people are….

Since the IAT was first introduced almost 20 years ago, its architects, as well as the countless researchers and commentators who have enthusiastically embraced it, have offered it as a way to reveal to test-takers what amounts to a deep, dark secret about who they are: They may not feel racist, but in fact, the test shows that in a variety of intergroup settings, they will act racist….

[The] co-creators are Mahzarin Banaji, currently the chair of Harvard University’s psychology department, and Anthony Greenwald, a highly regarded social psychology researcher at the University of Washington. The duo introduced the test to the world at a 1998 press conference in Seattle — the accompanying press release noted that they had collected data suggesting that 90–95 percent of Americans harbored the “roots of unconscious prejudice.” The public immediately took notice: Since then, the IAT has been mostly treated as a revolutionary, revelatory piece of technology, garnering overwhelmingly positive media coverage….

Maybe the biggest driver of the IAT’s popularity and visibility, though, is the fact that anyone can take the test on the Project Implicit website, which launched shortly after the test was unveiled and which is hosted by Harvard University. The test’s architects reported that, by October 2015, more than 17 million individual test sessions had been completed on the website. As will become clear, learning one’s IAT results is, for many people, a very big deal that changes how they view themselves and their place in the world.

Given all this excitement, it might feel safe to assume that the IAT really does measure people’s propensity to commit real-world acts of implicit bias against marginalized groups, and that it does so in a dependable, clearly understood way….

Unfortunately, none of that is true. A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior — even the test’s creators have now admitted as such.

How does the IAT work? Singal summarizes:

You sit down at a computer where you are shown a series of images and/or words. First, you’re instructed to hit ‘i’ when you see a “good” term like pleasant, or to hit ‘e’ when you see a “bad” one like tragedy. Then, hit ‘i’ when you see a black face, and hit ‘e’ when you see a white one. Easy enough, but soon things get slightly more complex: Hit ‘i’ when you see a good word or an image of a black person, and ‘e’ when you see a bad word or an image of a white person. Then the categories flip to black/bad and white/good. As you peck away at the keyboard, the computer measures your reaction times, which it plugs into an algorithm. That algorithm, in turn, generates your score.

If you were quicker to associate good words with white faces than good words with black faces, and/or slower to associate bad words with white faces than bad words with black ones, then the test will report that you have a slight, moderate, or strong “preference for white faces over black faces,” or some similar language. You might also find you have an anti-white bias, though that is significantly less common. By the normal scoring conventions of the test, positive scores indicate bias against the out-group, while negative ones indicate bias against the in-group.

The rough idea is that, as humans, we have an easier time connecting concepts that are already tightly linked in our brains, and a tougher time connecting concepts that aren’t. The longer it takes to connect “black” and “good” relative to “white” and “good,” the thinking goes, the more your unconscious biases favor white people over black people.

Singal continues (at great length) to pile up the mountain of evidence against IAT, and to caution against reading anything into the results it yields.
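
To make the scoring idea concrete, here is a minimal sketch, in Python, of a difference-over-spread score in the spirit of Singal’s description. It is not the algorithm Project Implicit actually uses (the real procedure adds error penalties, latency trimming, and multiple blocks); the function name and the reaction-time numbers below are hypothetical, purely for illustration.

```python
from statistics import mean, stdev

def simplified_iat_score(congruent_rts, incongruent_rts):
    """Illustrative, simplified IAT-style score.

    congruent_rts:   reaction times (ms) when the pairings match the
                     hypothesized association (e.g., white/good).
    incongruent_rts: reaction times (ms) for the reversed pairings.

    A positive result means the respondent was slower on the
    incongruent pairings, which the IAT interprets as an implicit
    preference for the first category. This captures only the basic
    difference-over-spread idea, not the actual scoring procedure.
    """
    pooled = list(congruent_rts) + list(incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / stdev(pooled)

# Hypothetical reaction times, purely for illustration.
congruent = [620, 580, 650, 605, 590]
incongruent = [700, 660, 720, 690, 675]
print(round(simplified_iat_score(congruent, incongruent), 2))
```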

Having become aware of the debunking of the IAT, I went to the website of Project Implicit. I was surprised to learn that I could not only find out whether I’m a closet racist but also whether I prefer dark or light skin tones, Asians or non-Asians, Trump or a previous president, and several other things or their opposites. I chose to discover my true feelings about Trump vs. a previous president, and was faced with a choice between Trump and Clinton.

What was the result of my several minutes of tapping “e” and “i” on the keyboard of my PC? This:

Your data suggest a moderate automatic preference for Bill Clinton over Donald Trump.

Balderdash! Though Trump is obviously not of better character than Clinton, he’s obviously not of worse character. And insofar as policy goes, the difference between Trump and Clinton is somewhat like the difference between a non-silent Calvin Coolidge and an FDR without the patriotism. (With apologies to the memory of Coolidge, my favorite president.)

Now, what did IAT say about my racism, or lack thereof? For years I proudly posted these results at the bottom of my “About” page and in the accompanying moral profile:

The study you just completed is an Implicit Association Test (IAT) that compares the strength of automatic mental associations. In this version of the IAT, we investigated positive and negative associations with the categories of “African Americans” and “European Americans”.

The idea behind the IAT is that concepts with very closely related (vs. unrelated) mental representations are more easily and quickly responded to as a single unit. For example, if “European American” and “good” are strongly associated in one’s mind, it should be relatively easy to respond quickly to this pairing by pressing the “E” or “I” key. If “European American” and “good” are NOT strongly associated, it should be more difficult to respond quickly to this pairing. By comparing reaction times on this test, the IAT gives a relative measure of how strongly associated the two categories (European Americans, African Americans) are to mental representations of “good” and “bad”. Each participant receives a single score, and your score appears below.

Your score on the IAT was 0.07.

Positive scores indicate a greater implicit preference for European Americans relative to African Americans, and negative scores indicate an implicit preference for African Americans relative to European Americans.

Your score appears in the graph below in green. The score of the average Liberal visitor to this site is shown in blue and the average Conservative visitor’s score is shown in red.

[Graph: moral profile, Implicit Association Test scores]

It should be noted that my slightly positive score probably was influenced by the order in which choices were presented to me. Initially, pleasant concepts were associated with photos of European-Americans. I became used to that association, and so found that it affected my reaction time when I was faced with pairings of pleasant concepts and photos of African-Americans. The bottom line: My slight preference for European-Americans probably is an artifact of test design.

In other words, I believed that my very low score, despite the test set-up, “proved” that I am not a racist. But thanks (or no thanks) to John Ray and Jesse Singal, I must conclude, sadly, that I have no “official” proof of my non-racism.

I suspect that I am not a racist. I don’t despise blacks as a group, nor do I believe that they should have fewer rights and privileges than whites. (Neither do I believe that they should have more rights and privileges than whites or persons of Asian or Ashkenazi Jewish descent — but they certainly do when it comes to college admissions, hiring, and firing.) It isn’t racist to understand that race isn’t a social construct (except in a meaningless way) and that there are general differences between races (see many of the posts listed here). That’s just a matter of facing facts, not ducking them, as leftists are wont to do.

What have I learned from the IAT? I must have very good reflexes. A person who processes information rapidly and then almost instantly translates it into a physical response should be able to “beat” the IAT. And that’s probably what I did in the Trump vs. Clinton test, if not in the racism test. I’m a fast typist and very quick at catching dropped items before they hit the floor. (My IQ, or what’s left of it, isn’t bad either; go here and scroll down to the section headed “Intelligence, Temperament, and Beliefs”.)

Perhaps the IAT for racism could be used to screen candidates for fighter-pilot training. Only “non-racists” would be admitted. Anyone who isn’t quick enough to avoid the “racist” label isn’t quick enough to win a dogfight.

The Iraq War in Retrospect

Hold your moral judgments.

The Iraq War has been called many things, “immoral” being among the leading adjectives for it. Was it altogether immoral? Was it immoral to remain in favor of the war after it was (purportedly) discovered that Saddam Hussein didn’t have an active program for the production of weapons of mass destruction? Or was the war simply misdirected from its proper — and moral — purpose: the service of Americans’ interests by stabilizing the Middle East? I address those and other questions about the war in what follows.

THE WAR-MAKING POWER AND ITS PURPOSE

The sole justification for the United States government is the protection of Americans’ interests. Those interests are spelled out broadly in the Preamble to the Constitution: justice, domestic tranquility, the common defense, the general welfare, and the blessings of liberty.

Contrary to leftist rhetoric, the term “general welfare” in the Preamble (and in Article I, Section 8) doesn’t grant broad power to the national government to do whatever it deems to be “good”. “General welfare” — general well-being, not the well-being of particular regions or classes — is merely one of the intended effects of the enumerated and limited powers granted to the national government by conventions of the States.

One of the national government’s specified powers is the making of war. In the historical context of the adoption of the Constitution, it is clear that the purpose of the war-making power is to defend Americans and their legitimate interests: liberty generally and, among other things, the free flow of trade between American and foreign entities. The war-making power carries with it the implied power to do harm to foreigners in the course of waging war. I say that because the Framers, many of whom fought for independence from Britain, knew from experience that war, of necessity, must sometimes cause damage to the persons and property of non-combatants.

In some cases, the only way to serve the interests of Americans is to inflict deliberate damage on non-combatants. That was the case, for example, when U.S. air forces dropped atomic bombs on Hiroshima and Nagasaki to force Japan’s surrender and avoid the deaths and injuries of perhaps a million Americans. Couldn’t Japan have been “quarantined” instead, once its forces had been driven back to the homeland? Perhaps, but at great cost to Americans. Luckily, in those days American leaders understood that the best way to ensure that an enemy didn’t resurrect its military power was to defeat it unconditionally and to occupy its homeland. You will have noticed that as a result, Germany and Japan are no longer military threats to the U.S., whereas Iraq remained one after the Gulf War of 1990-1991 because Saddam wasn’t deposed. Russia, which the U.S. didn’t defeat militarily — only symbolically — is resurgent militarily. China, which wasn’t even defeated symbolically in the Cold War, is similarly resurgent, and bent on regional if not global hegemony, necessarily to the detriment of Americans’ interests. To paraphrase: There is no substitute for unconditional military victory.

That is a hard and unfortunate truth, but it eludes many persons, especially those of the left. They suffer under dual illusions, namely, that the Constitution is an outmoded document and that “world opinion” trumps the Constitution and the national sovereignty created by it. Neither illusion is shared by Americans who want to live in something resembling liberty and to enjoy the advantages pertaining thereto, including prosperity.

CASUS BELLI

The invasion of Iraq in 2003 by the armed forces of the U.S. government (and those of other nations) had explicit and implicit justifications. The explicit justifications for the U.S. government’s actions are spelled out in the Authorization for Use of Military Force Against Iraq of 2002 (AUMF). It passed the House by a vote of 296 – 133 and the Senate by a vote of 77 – 23, and was signed into law by President George W. Bush on October 16, 2002.

There are some who focus on the “weapons of mass destruction” (WMD) justification, which figures prominently in the “whereas” clauses of the AUMF. But the war, as it came to pass when Saddam failed to respond to legitimate demands spelled out in the AUMF, had a broader justification than whatever Saddam was (or wasn’t) doing with WMD. The final “whereas” puts it succinctly: it is in the national security interests of the United States to restore international peace and security to the Persian Gulf region.

An unstated but clearly understood implication of “peace and security in the Persian Gulf region” was the security of the region’s oil supply against Saddam’s capriciousness. The mantra “no blood for oil” to the contrary notwithstanding, it is just as important to defend the livelihoods of Americans as it is to defend their lives — and in many instances it comes to the same thing.

In sum, I disregard the WMD rationale for the Iraq War. The real issue is whether the war secured the stability of the Persian Gulf region (and the Middle East in general). And if it didn’t, why did it fail to do so?

ROADS TAKEN AND NOT TAKEN

One can only speculate about what might have happened in the absence of the Iraq War. For instance, how many more Iraqis might have been killed and tortured by Saddam’s agents? How many more terrorists might have been harbored and financed by Saddam? How long might it have taken him to re-establish his WMD program or build a nuclear weapons program? Saddam, who started it all with the invasion of Kuwait, wasn’t a friend of the U.S. or the West in general. The U.S. isn’t the world’s policeman, but the U.S. government has a moral obligation to defend the interests of Americans, preemptively if necessary.

By the same token, one can only speculate about what might have happened if the U.S. government had prosecuted the war differently than it did, which was “on the cheap”. There weren’t enough boots on the ground to maintain order in the way that it was maintained by the military occupations of Germany and Japan after World War II. Had there been, there wouldn’t have been a kind of “civil war” or general chaos in Iraq after Saddam was deposed. (It was those things, as much as the supposed absence of a WMD program, that turned many Americans against the war.)

Speculation aside, I supported the invasion of Iraq, the removal of Saddam, and the rout of Iraq’s armed forces with the following results in mind:

  • A firm military occupation of Iraq, for some years to come.

  • The presence in Iraq and adjacent waters and airspace of U.S. forces in enough strength to control Iraq and deter misadventures by other nations in the region (e.g., Iran and Syria) and prospective interlopers (e.g., Russia).

  • Israel’s continued survival and prosperity under the large shadow cast by U.S. forces in the region.

  • Secure production and shipment of oil from Iraq and other oil-producing nations in the region.

All of that would have happened but for (a) too few boots on the ground (later remedied in part by the “surge”); (b) premature “nation-building”, which helped to stir up various factions in Iraq; (c) Obama’s premature surrender, which he was shamed into reversing; and (d) Obama’s deal with Iran, with its bundles of cash and blind-eye enforcement that supported Iran’s rearmament and growing boldness in the region. (The idea that Iraq, under Saddam, had somehow contained Iran is baloney; Iran was contained only until its threat to go nuclear found a sucker in Obama.)

In sum, the war was only a partial success because (once again) U.S. leaders failed to wage it fully and resolutely. This was due in no small part to incessant criticism of the war, stirred up and sustained by Democrats and the media.

WHO HAD THE MORAL HIGH GROUND?

In view of the foregoing, the correct answer is: the U.S. government, or those of its leaders who approved, funded, planned, and executed the war with the aim of bringing peace and security to the Persian Gulf region for the sake of Americans’ interests.

The moral high ground was shared by those Americans who, understanding the war’s justification on grounds broader than WMD, remained steadfast in support of the war despite the tumult and shouting that arose from its opponents.

There were Americans whose support of the war was based on the claim that Saddam had or was developing WMD, and whose support ended or became less ardent when WMD seemed not to be in evidence. I wouldn’t presume to judge them harshly for withdrawing their support, but I would judge them myopic for basing it solely on the WMD predicate. And I would judge them harshly if they joined the outspoken opponents of the war, whose opposition I address below.

What about those Americans who supported the war simply because they believed that President Bush and his advisers “knew what they were doing” or out of a sense of patriotism? That is to say, they had no particular reason for supporting the war other than a general belief that its successful execution would be a “good thing”. None of those Americans deserves moral approbation or moral blame. They simply had better things to do with their lives than to parse the reasons for going to war and for continuing it. And it is no one’s place to judge them for not having wasted their time in thinking about something that was beyond their ability to influence. (See the discussion of “public opinion” below.)

What about those Americans who publicly opposed the war, either from the beginning or later? I cannot fault all of them for their opposition — and certainly not those who considered the costs (human and monetary) and deemed them not worth the possible gains.

But there were (and are) others whose opposition to the war was and is problematic:

  • Critics of the apparent absence of an active WMD program in Iraq, who seized on the WMD justification and ignored (or failed to grasp) the war’s broader justification.

  • Political opportunists who simply wanted to discredit President Bush and his party: a group that eventually included most Democrats, effete elites generally, and particularly most members of the academic-media-information technology complex.

  • An increasingly large share of the impressionable electorate who could not (and cannot) resist a bandwagon.

  • The young, given to reflexive pro-peace/anti-war posturing, who are prone to oppose “the establishment” and to do so loudly and often violently.

The moral high ground isn’t gained by misguided criticism, posturing, joining a bandwagon, or hormonal emotionalism.

WHAT ABOUT “PUBLIC OPINION”?

Suppose you had concluded that the Iraq War was wrong because the WMD justification seemed to have been proven false as the war went on. Perhaps even worse than false: a fraud perpetrated by officials of the Bush administration, if not by the president himself, to push Congress and “public opinion” toward support for an invasion of Iraq.

If your main worry about Iraq, under Saddam, was the possibility that WMD would be used against Americans, the apparent falsity of the WMD claim — perhaps fraudulent falsity — might well have turned you against the war. Suppose that there were many millions of Americans like you, whose initial support of the war turned to disillusionment as evidence of an active WMD program failed to materialize. Would voicing your opinion on the matter have helped to end the war? Did you have a moral obligation to voice your opinion? And, in any event, should wars be ended because of “public opinion”? I will try to answer those questions in what follows.

The strongest case to be made for the persuasive value of voicing one’s opinion might be found in the median-voter theorem. According to Wikipedia, the median-voter theorem

states that “a majority rule voting system will select the outcome most preferred by the median voter”….

The median voter theorem rests on two main assumptions, with several others detailed below. The theorem is assuming [sic] that voters can place all alternatives along a one-dimensional political spectrum. It seems plausible that voters could do this if they can clearly place political candidates on a left-to-right continuum, but this is often not the case as each party will have its own policy on each of many different issues. Similarly, in the case of a referendum, the alternatives on offer may cover more than one issue. Second, the theorem assumes that voters’ preferences are single-peaked, which means that voters have one alternative that they favor more than any other. It also assumes that voters always vote, regardless of how far the alternatives are from their own views. The median voter theorem implies that voters have an incentive to vote for their true preferences. Finally, the median voter theorem applies best to a majoritarian election system.

The article later specifies seven assumptions underlying the theorem. None of them is satisfied in the real world of American politics. Complexity never favors the truth of a proposition; when a proposition rests on the joint truth of many assumptions, as this one does, every added assumption is one more way for the proposition to fail.
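
A back-of-the-envelope calculation makes the point. The figures below are purely illustrative assumptions, not estimates of how plausible the theorem’s actual assumptions are; the point is only that a conjunction of seven requirements fails far more easily than any one of them.

```python
# Illustrative arithmetic only: 0.8 is an assumed, made-up probability,
# not an assessment of the median-voter theorem's real-world assumptions.
p_each = 0.8        # assumed probability that any single assumption holds
n_assumptions = 7   # the Wikipedia article lists seven
p_all = p_each ** n_assumptions
print(f"Chance that all {n_assumptions} assumptions hold at once: {p_all:.2f}")  # ~0.21
```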

There is a weak form of the theorem, which says that

the median voter always casts his or her vote for the policy that is adopted. If there is a median voter, his or her preferred policy will beat any other alternative in a pairwise vote.

That still leaves the crucial assumption that voters are choosing between two options. This is superficially true in the case of a two-person race for office or a yes-no referendum. But, even then, a binary option usually masks non-binary ramifications that voters take into account.

In any case, it is trivially true to say that the preference of the median voter foretells the outcome of a binary election, if the outcome is decided by majority vote and there isn’t a complicating factor like the electoral college. One could say, with equal banality, that the stronger man wins the weight-lifting contest, the outcome of which determines who is the stronger man. (The sketch below shows how the result holds in the theorem’s idealized setting, and only there.)
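
Here is a minimal sketch, in Python, of the idealized setting that the weak form assumes: a one-dimensional spectrum, single-peaked (distance-based) preferences, and full turnout. The voter positions and rival policies are invented for illustration.

```python
import random

# A minimal sketch of the weak form of the median-voter theorem under its own
# idealized assumptions. Voter positions and rival policies are made up.
random.seed(1)
voters = sorted(random.uniform(0.0, 1.0) for _ in range(101))  # ideal points on a 1-D spectrum
median = voters[len(voters) // 2]                               # the median voter's ideal point

def pairwise_winner(policy_a: float, policy_b: float) -> float:
    """Majority vote between two policies, each voter backing the closer one."""
    votes_a = sum(1 for v in voters if abs(v - policy_a) < abs(v - policy_b))
    return policy_a if votes_a > len(voters) / 2 else policy_b

# The median voter's preferred policy defeats every rival tested.
for rival in (0.1, 0.3, 0.7, 0.9):
    assert pairwise_winner(median, rival) == median

print(f"Median ideal point {median:.2f} beats each rival in a pairwise vote.")
```

The result is baked into the assumptions, which is precisely why it is trivial; real elections do not satisfy them.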

Why am I giving so much attention to the median-voter theorem? Because, according to a blogger whose intellectual prowess I respect, if enough Americans believe a policy of the U.S. government to be wrong, the policy might well be rescinded if the responsible elected officials (or, presumably, their prospective successors) believe that the median voter wants the policy rescinded. How would that work?

The following summary of the blogger’s case is what I gleaned from his original post on the subject and several comments and replies. I have inserted parenthetical commentary throughout.

  • The pursuit of the Iraq War after the WMD predicate for it was (seemingly) falsified — hereinafter policy X — was immoral because X led unnecessarily to casualties, devastation, and other costs. (As discussed above, there were other predicates for X and other consequences of X, some of them good, but they don’t seem to matter to the blogger.)

  • Because X was immoral (in the blogger’s reckoning), X should have been rescinded.

  • Rescission would have (might have?/should have?) occurred through the operation of the median-voter theorem if enough persons had made known their opposition to X. (How might the median-voter theorem have applied when X wasn’t on a ballot? See below.)

  • Any person who had taken the time to consider X (taking into account only the WMD predicate and unequivocally bad consequences) could only have deemed it immoral. (The blogger originally excused persons who deemed X proper, but later made a statement equivalent to the preceding sentence. This is a variant of “heads, I win; tails, you lose”.)

  • Having deemed X immoral, a person (i.e., a competent, adult American) would have been morally obliged to make known his opposition to X. Even if the person didn’t know of the spurious median-voter theorem, his opposition to X (which wasn’t on a ballot) would somehow have become known and counted (perhaps in a biased opinion poll conducted by an entity opposed to X) and would therefore have helped to move the median stance of the (selectively) polled fragment of the populace toward opposition to X, whereupon X would be rescinded, according to the median-voter theorem. (Or perhaps vociferous opposition, expressed in public protests, would be reported by the media — especially by those already opposed to X — as indicative of public opinion, whether or not it represented a median view of X.)

  • Further, any competent, adult American who didn’t bother to take the time to evaluate X would have been morally complicit in the continuation of X. (This must be the case because the blogger says so, without knowing each person’s assessment of the slim chance that his view of the matter would affect X, or the opportunity costs of evaluating X and expressing his view of it.)

  • So the only moral course of action, according to the blogger, was for every competent, adult American to have taken the time to evaluate X (in terms of the WMD predicate), to have deemed it immoral (there being no other choice given the constraint just mentioned), and to have made known his opposition to the policy. (This despite the fact that most competent, adult Americans know viscerally or from experience that the median-voter theorem is hooey — more about that below — and that it would therefore have been a waste of their time to get worked up about a policy that wasn’t unambiguously immoral. Further, they were and are rightly reluctant to align themselves with howling mobs and biased media — even by implication, as in a letter to the editor — in protest of a policy that wasn’t unambiguously immoral.)

  • Then, X (which wasn’t on a ballot) would have been rescinded, pursuant to the median-voter theorem (or, properly, the outraged/vociferous-pollee/protester-biased pollster/media theorem). (Except that X wasn’t, in fact, rescinded despite massive outpourings of outrage by small fractions of the populace, which were gleefully reflected in biased polls and reported by biased media. Nor was it rescinded by implication when President Bush was up for re-election — he won. It might have been rescinded by implication when Bush was succeeded by Obama — an opponent of X — but there were many reasons other than X for Obama’s victory: mainly the financial crisis, McCain’s lame candidacy, and a desire by many voters to signal — to themselves, at least — their non-racism by voting for Obama. And X wasn’t doing all that badly at the time of Obama’s election because of the troop “surge” authorized by Bush. Further, Obama’s later attempt to rescind X had consequences that caused him to reverse his attempted rescission, regardless of any lingering opposition to X.)

What about other salient, non-ballot issues? Does “public opinion” make a difference? Sometimes yes, sometimes no. Obamacare, for example, was widely opposed until it was enacted by Congress and signed into law by Obama. It suddenly became popular because much of the populace wants to be on the “winning side” of an issue. (So much for the moral value of public opinion.) Similarly, abortion was widely deemed to be immoral until the Supreme Court legalized it. Only then did it begin to gain acceptance according to “public opinion”. I could go on and on, but you get the idea: Public opinion often follows policy rather than leading it, and its moral value is dubious in any event.

But what about cases where government policy shifted in the aftermath of widespread demonstrations and protests? Did demonstrations and protests lead to the enactment of the Civil Rights Acts of the 1960s? Did they cause the U.S. government to surrender, in effect, to North Vietnam? No and no. From where I sat — and I was a politically aware, voting-age, adult American of the “liberal” persuasion at the time of those events — public opinion had little effect on the officials who were responsible for the Civil Rights Acts or the bug-out from Vietnam.

The civil-rights movement of the 1950s and 1960s and the anti-war movement of the 1960s and 1970s didn’t yield results until years after their inception. And those results didn’t (at the time, at least) represent the views of most Americans who (I submit) were either indifferent or hostile to the advancement of blacks and to the anti-patriotic undertones of the anti-war movement. In both cases, mass protests were used by the media (and incited by the promise of media attention) to shame responsible officials into acting as media elites wanted them to.

Further, it is a mistake to assume that the resulting changes in law (writ broadly to include policy) were necessarily good changes. The stampede to enact civil-rights laws in the 1960s, which hinged not so much on mass protests as on LBJ’s “white guilt” and powers of persuasion, resulted in the political suppression of an entire region, the loss of property rights, and the denial of freedom of association. (See, for example, Christopher Caldwell’s “The Roots of Our Partisan Divide”, Imprimis, February 2020.)

The bug-out from Vietnam foretold the U.S. government’s fecklessness in the Iran hostage crisis; the withdrawal of U.S. forces from Lebanon after the bombing of the Marine barracks there; the failure of G.H.W. Bush to depose Saddam when it would have been easy to do so; the legalistic response to the World Trade Center bombing; the humiliating affair in Somalia; Clinton’s failure to take out Osama bin Laden; Clinton’s tepid response to Saddam’s provocations; nation-building (vice military occupation) in Iraq; and Obama’s attempt to pry defeat from the jaws of something resembling victory in Iraq.

All of that, and more, is symptomatic of the influence that “liberal” elites came to exert on American foreign and defense policy after World War II. Public opinion has been a side show, and protestors have been useful idiots to the cause of “liberal internationalism”, that is, the surrender of Americans’ economic and security interests for the sake of various rapprochements toward “allies” who scorn America when it veers ever so slightly from the road to serfdom, and enemies — Russia and China — who have never changed their spots, despite “liberal” wishful thinking. Handing America’s manufacturing base to China in the name of free trade is of a piece with all the rest.

IN CONCLUSION . . .

It is irresponsible to call a policy immoral without evaluating all of its predicates and consequences. One might as well call the Allied leaders of World War II immoral because they chose war — with all of its predictably terrible consequences — rather than abject surrender.

It is fatuous to ascribe immorality to anyone who was supportive of or indifferent to the war. One might as well ascribe immorality to the economic and political ignoramuses who failed to see that FDR’s policies would prolong the Great Depression, that Social Security and its progeny (Medicare and Medicaid) would become entitlements that paved the way for the central government’s commandeering of vast portions of the economy, or that the so-called social safety net would discourage work and permanently depress economic growth in America.

If I were in the business of issuing moral judgments about the Iraq War, I would condemn the strident anti-war faction for its perfidy.