Culture – Language – The Arts

Self-Made Victims

The author of Imlac’s Journal quotes Malcolm Muggeridge on George Bernard Shaw:

He wanted to make a lot of money without being considered rich.

Here is Theodore Dalrymple, writing in the same vein:

[D]uring the early years of the AIDS epidemic … it was demanded of us that we should believe incompatible things simultaneously, for example that it was simply a disease like any other and that it was a disease of unprecedented importance and unique significance; that it could strike anybody but that certain groups were martyrs to it; that it must be normalized and yet treated differently…. It was a bit like living under a small version of a communist dictatorship, in which the law of noncontradiction had been abrogated in favor of dialectics, under which all contradictions were compatible, but which contradictions had to be accepted was a matter of the official policy of the moment….

The demand for recognition and nonrecognition at the same time is surely one of the reasons for the outbreak of mass self-mutilation in the Western world in an age of celebrity. A person who treats his face and body like an ironmongery store can hardly desire or expect that you fail to notice it, but at the same time demands that you make no comment about it, draw no conclusions from it, express no aversion toward it, and treat him no differently because of it. You must accept him as he is, however he is, because he has an inalienable right to such acceptance….

I think the same dynamic (if I may call it such) is at work in the current vogue for transsexualism: “You must recognize me and not recognize me at the same time.” In this way, people can simultaneously enjoy the fruits of being normal and very different. To be merely the same as others is a wound to the ego in an age of celebrity, and yet we are herd animals who do not want to wander too far from the herd. And in an age of powerlessness we want to exert power.

What will be the next attempted reconciliation of our incompatible desires? [“Everyday Snowflakes”, Taki’s Magazine, July 15, 2017]

Good question. I don’t have a ready answer, but I have some other examples of incompatible desiderata. Each entry in the list below has two parts: (on the left) an objective that most leftists would claim to support and (on the right) the left-wing policy that hinders attainment of the objective.

Ample employment opportunities for low-skill workers – Minimum wage

Vigorous economic growth – Regulation

Property rights* and freedom of association – Public-accommodation laws

Less crime – Strict gun control or confiscation of guns*

Peace – Less defense spending (and therefore lack of deterrence)

The result of each left-wing policy is to create victims, ranging from young black men to law-abiding citizens to most Americans. The left’s constant search for “victims” is evidently hindered by intellectual myopia.

Moreover, in many cases leftists are actual or potential victims of their own policy preferences. But their magical thinking (unconstrained vision) blinds them to the incompatibility of their desires.

* There are many hypocrites on the left (like Shaw) who would vigorously defend their property rights while proclaiming their attachment to socialism, and who employ guards (with guns) to protect their property.

More posts about the left and magical thinking:
The Left and Its Delusions
A Keynesian Fantasy Land
The Keynesian Fallacy and Regime Uncertainty
America: Past, Present, and Future
IQ, Political Correctness, and America’s Present Condition
The Barbarians Within and the State of the Union
The Pretence of Knowledge
“The Science Is Settled”
The Harmful Myth of Inherent Equality
“And the Truth Shall Set You Free”
The Transgender Fad and Its Consequences

Death of a Nation

More than 50 years ago I heard a white woman say of blacks, “They’re not Americans.” I was appalled by that statement, for it contradicted what I had been taught to believe about America, namely, this:

“America is not just a country,” said the rock singer Bono, in Pennsylvania in 2004: “It’s an idea.”

That’s the opening of John O’Sullivan’s essay, “A People, Not Just an Idea” (National Review, November 19, 2015).

Bono is a decent, thoughtful, and public-spirited man. I didn’t choose his quotation to suggest that this view of America is a kind of pop opinion. It just happened that in my Google search his name came ahead of many others, from George Will to Irving Kristol to almost every recent presidential candidate, all of whom had described America either as an idea or as a “proposition nation,” to distinguish it from dynastic realms or “blood and soil” ethnicities. This philosophical definition of America is now the conventional wisdom of Left and Right, at least among people who write and talk of such things.

Indeed, we have heard variations on Bono’s formulation so many times that we probably fail to notice how paradoxical it is. But listen to how it sounds when reversed: “America is not just an idea; it is a nation.” Surely that version has much more of the ring of common sense. For a nation is plainly something larger, more complex, and richer than an idea. A nation may include ideas. It may have evolved under the influence of a particular set of ideas. But because it encompasses so many other things — notably the laws, institutions, language of the nation; the loyalties, stories, and songs of the people; and above all Lincoln’s “mystic chords of memory” — the nation becomes more than an idea with every election, every battle, every hero, every heroic tale, every historical moment that millions share.

That is not to deny that the United States was founded on some very explicit political ideas, notably liberty and equality, which Jefferson helpfully wrote down in the Declaration of Independence. To be founded on an idea, however, is not the same thing as to be an idea. A political idea is not a destination or a conclusion but the starting point of an evolution — and, in the case of the U.S., not really a starting point, either. The ideas in the Declaration on which the U.S. was founded were not original to this country but drawn from the Anglo-Scottish tradition of Whiggish liberalism. Not only were these ideas circulating well before the Revolution, but when the revolutionaries won, they succeeded not to a legal and political wasteland but to the institutions, traditions, and practices of colonial America — which they then reformed rather than abolished….

As John Jay pointed out, Americans were fortunate in having the same religion (Protestantism), the same language, and the same institutions from the first. Given the spread of newspapers, railways, and democratic debate, that broad common culture would intensify the sense of a common American identity over time. It was a cultural identity more than an ethnic one, and one heavily qualified by regional loyalties… And the American identity might have become an ethnic one in time if it had not been for successive waves of immigration that brought other ethnicities into the nation.

That early American identity was robust enough to absorb these new arrivals and to transform them into Americans. But it wasn’t an easy or an uncomplicated matter. America’s emerging cultural identity was inevitably stretched by the arrivals of millions of people from different cultures. The U.S. government, private industry, and charitable organizations all set out to “Americanize” them. It was a great historical achievement and helped to create a new America that was nonetheless the old America in all essential respects….

By World War II, … all but the most recent migrants had become culturally American. So when German commandos were wandering behind American lines in U.S. uniforms during the Battle of the Bulge, the G.I.s testing their identity asked not about … the First Amendment but questions designed to expose their knowledge (or ignorance) of American life and popular culture….

Quite a lot flows from this history. Anyone can learn philosophical Americanism in a civics class; for a deeper knowledge and commitment, living in America is a far surer recipe…. Americans are a distinct and recognizable people with their own history, culture, customs, loyalties, and other qualities that are wider and more various than the most virtuous summary of liberal values….

… If Americans are a distinct people, with their own history, traditions, institutions, and common culture, then they can reasonably claim that immigrants should adapt to them and to their society rather than the reverse. For most of the republic’s history, that is what happened. And in current circumstances, it would imply that Muslim immigrants should adapt to American liberty as Catholic immigrants once did.

If America is an idea, however, then Americans are not a particular people but simply individuals or several different peoples living under a liberal constitution.

For a long time the “particular people” were not just Protestants but white Protestants of European descent. As O’Sullivan points out, Catholics (of European descent) eventually joined the ranks of “particular people”. But there are others — mostly blacks and Hispanics — who never did and never will join those ranks. Whatever the law may say about equality, access to housing, access to public accommodations, and so on, membership in the ranks of “particular people” is up to those who are already members.

The woman who claimed that blacks weren’t Americans was a member. She was a dyed-in-the-wool Southerner, but her attitude wasn’t untypical of the attitudes of many white Americans — Northern and Southern, past and present. Like it or not, the attitude remains prevalent in the country. (Don’t believe polls that purport to demonstrate racial comity; there’s a well-known aversion to giving a “wrong” answer to a pollster.)

The revealed preference of most whites (a preference shared by most blacks) is for racial segregation. Aggregate statistics hide the real story, which is the gentrification of some parts of inner cities (i.e., the creation of white enclaves) and “white flight” from suburbs to which inner-city blacks are fleeing. (See this article, for instance.)

The taste for segregation shows up in statistics about public-school enrollment. (See this article, for instance.) White parents (and affluent blacks) are more often keeping their children out of local public schools with large “minority” enrollments by choosing one of the alternatives legally available to them (e.g., home schooling). (Presidents with school-age children — including Barack Obama — have done the same thing to avoid sending their children to the public schools of the District of Columbia, whose students are predominantly black and Hispanic.)

I have focused on voluntary racial segregation because it underscores the fact — not lost on the white, Southern woman of my acquaintance — that the United States was once built upon the “blood and soil” ethnicity of whites whose origins lay in Europe. Blacks can never be part of that nation. Neither can Hispanics, who now outnumber blacks in America. Blacks and Hispanics belong to the “proposition” nation.

They have been joined by the large numbers of Americans who no longer claim allegiance to the “blood and soil” nation, regardless of their race or ethnicity — leftists, in other words. Since the 1960s leftists have played an ever-larger, often dominant, role in the governance of America. They have rejected the “history, culture, customs, [and] loyalties” which once bound most Americans. In fact they are working daily — through the academy, the media, and the courts — to transform America fundamentally by erasing the “history, culture, customs, [and] loyalties” of Americans from the people’s consciousness and the nation’s laws.

Pat Buchanan, who is usually too strident for my taste, hits it on the head:

In Federalist No. 2, John Jay writes of them as “one united people . . . descended from the same ancestors, speaking the same language, professing the same religion, attached to the same principles of government, very similar in their manners and customs . . .”

If such are the elements of nationhood and peoplehood, can we still speak of Americans as one nation and one people?

We no longer have the same ancestors. They are of every color and from every country. We do not speak one language, but rather English, Spanish and a host of others. We long ago ceased to profess the same religion. We are Evangelical Christians, mainstream Protestants, Catholics, Jews, Mormons, Muslims, Hindus and Buddhists, agnostics and atheists.

Federalist No. 2 celebrated our unity. Today’s elites proclaim that our diversity is our strength. But is this true or a tenet of trendy ideology?

After the attempted massacre of Republican Congressmen at that ball field in Alexandria, Fareed Zakaria wrote: “The political polarization that is ripping this country apart” is about “identity . . . gender, race, ethnicity, sexual orientation (and) social class.” He might have added — religion, morality, culture and history.

Zakaria seems to be tracing the disintegration of our society to that very diversity that its elites proclaim to be its greatest attribute: “If the core issues are about identity, culture and religion … then compromise seems immoral. American politics is becoming more like Middle Eastern politics, where there is no middle ground between being Sunni or Shiite.”

Among the issues on which we Americans are at war with one another — abortion, homosexuality, same-sex marriage, white cops, black crime, Confederate monuments, LGBT rights, affirmative action.

America is no longer a nation whose inhabitants are bound mainly by “blood and soil”. Worse than that, it was — until the election of 2016 — fast becoming a nation governed by the proposition that liberty is only what leftists say it is: the liberty not to contradict the left’s positions on climate, race, intelligence, economics, religion, marriage, the right to life, and government’s intrusive role in all of those things and more. The resistance to Donald Trump is fierce and unforgiving because his ascendancy threatens what leftists have worked so hard to achieve in the last 50 years: the de-Americanization of America.

Is all of this just the grumbling of white men of European descent? I think not. Measures of national unity are hard to come by. Opinion polls, aside from their relatively brief history (compared with the age of the Union), are notoriously unreliable. Presidential elections are more meaningful because (some degree of chicanery aside) they reflect voters’ feelings about the state of the Union. Regardless of the party affiliation of the winning candidate, a strong showing usually reflects broad satisfaction with the nation’s direction; a weak showing usually reflects the opposite.

Popular votes were first recorded in the election of 1824. Here is a graphical history of the winning candidate’s percentages of the vote in each election from 1824 through 2016 (with the exclusion of 1864, when the South wasn’t in the Union):

Derived from this table in this article at Wikipedia.

Election-to-election variations reflect the personal popularity of some candidates, the strength of third-party movements, and various other transitory factors. The 5-election average smooths those effects and reveals what is (to me) an obvious story: national disunity in the years before and after the Civil War; growing unity during the first half of the 20th century, peaking during the Great Depression and World War II; modest post-war decline followed by stability through the 1980s; and rapid decline since then because of the left’s growing power and the rapid rise of the Hispanic population.

The graph underscores what I already knew: The America in which I was born and raised — the America of the 1940s and 1950s — has been beaten down. It is more likely to die than it is to revive. And even if it revives to some degree, it will never be the same.

Related posts:
Academic Bias
Intellectuals and Capitalism
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
Liberty and Society
The Eclipse of “Old America”
Genetic Kinship and Society
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Retrospective Virtue-Signalling
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
Class in America
A Word of Warning to Leftists (and Everyone Else)
Another Thought or Two about Class
The Vast Left-Wing Conspiracy
The Left and Evergreen State: Reaping What Was Sown

Academic Freedom, Freedom of Speech, and the Demise of Civility

A professor has been suspended after claiming that anyone witnessing white people in mortal danger should “let them fucking die.”

Daniel Payne, The College Fix (June 27, 2017)

Predictably, the suspension of the professor — one Johnny Eric Williams of Trinity College in Hartford, Connecticut — and a similar case at Essex County (New Jersey) College caused the usual (left-wing) suspects to defend the offenders and claim that their “academic freedom” and “freedom of speech” were being violated. (Boo hoo.) I will be unsurprised if the ACLU weighs in on the side of hate.

This is what happens when the law becomes an abstraction, separate and apart from social norms. There is no better example of the degradation of the law, and of public discourse, than the case of Snyder v. Phelps, which I addressed in “Rethinking the Constitution: ‘Freedom of Speech and of the Press’”. What follows is based on that post.

Contrary to the current state of constitutional jurisprudence, freedom of speech and freedom of the press — and, by implication, academic freedom — do not comprise an absolute license to “express” almost anything, regardless of the effects on the social fabric.

One example of misguided absolutism is found in Snyder v. Phelps, a case recently and wrongly decided by the U.S. Supreme Court. This is from “The Burkean Justice” (The Weekly Standard, July 18, 2011):

When the Supreme Court convened for oral argument in Snyder v. Phelps, judicial formalities only thinly veiled the intense bitterness smoldering among the parties and their supporters. At one table sat counsel for Albert Snyder, father of the late Marine Lance Corporal Matthew Snyder, who was killed in al Anbar Province, Iraq. At the other sat Margie Phelps, counsel for (and daughter of) Fred Phelps, whose notorious Westboro Baptist Church descended upon Snyder’s Maryland funeral, waving signs bearing such startlingly offensive slogans as “Thank God for IEDs,” “God Hates Fags,” and “Thank God for Dead Soldiers.” A federal jury had awarded Snyder nearly $11 million for the “severe depression” and “exacerbated preexisting health conditions” that Phelps’s protest had caused him.

In the Supreme Court, Phelps argued that the jury’s verdict could not stand because the First Amendment protected Westboro’s right to stage their protest outside the funeral. As the Court heard the case on a gray October morning, Westboro protesters marched outside the courthouse, informing onlookers that God still “Hates Fags” and advising them to “Pray for More Dead Soldiers.”

Amidst that chaos, the Court found not division, but broad agreement. On March 2, 2011, it held that Westboro’s slurs were protected by the First Amendment, and that Snyder would receive no compensation, let alone punitive damages, for the emotional injuries that he had suffered. Chief Justice John Roberts wrote the Court’s opinion, speaking for all of his brethren, conservatives and liberals alike—except one.

Justice Samuel Alito rejected the Court’s analysis and wrote a stirring lone dissent. “The Court now holds that the First Amendment protected respondents’ right to brutalize Mr. Snyder. I cannot agree.” Repeatedly characterizing Westboro’s protest as not merely speech but “verbal assaults” that “brutally attacked” the fallen Snyder and left the father with “wounds that are truly severe and incapable of healing themselves,” Justice Alito concluded that the First Amendment’s text and precedents did not bar Snyder’s lawsuit. “In order to have a society in which public issues can be openly and vigorously debated, it is not necessary to allow the brutalization of innocent victims. . . . I therefore respectfully dissent.”

There is more:

Snyder v. Phelps would not be the last time that Alito stood nearly alone in a contentious free speech case this term. Just weeks ago, as the Court issued its final decisions of the term, Alito rejected the Court’s broad argument that California could not ban the distribution of violent video games without parental consent. Although he shared the Court’s bottom-line conclusion that the particular statute at issue was unconstitutional, he criticized the majority’s analysis in Brown v. Entertainment Merchants Association as failing to give states and local communities latitude to promote parental control over children’s video-game habits. The states, he urged, should not be foreclosed from passing better-crafted statutes achieving that legitimate end.

Moreover, Alito’s opinions in those cases followed a solo dissent late in the previous term, in United States v. Stevens, where eight of the nine justices struck down a federal law barring the distribution of disturbing “crush videos” in which, for example, a woman stabs a kitten through the eye with her high heel, all for the gratification of anonymous home audiences.

The source of Alito’s positions:

[T]hose speculating as to the roots of Alito’s jurisprudence need look no further than his own words—in public documents, at his confirmation hearing, and elsewhere. Justice Alito is uniquely attuned to the space that the Constitution preserves for local communities to defend the vulnerable and to protect traditional values. In these three new opinions, more than any others, he has emerged as the Court’s Burkean justice….

A review of Alito’s Snyder, Brown, and Stevens opinions quickly suggests the common theme: Alito, more than any of his colleagues, would not allow broad characterizations of the freedom of speech effectively to immunize unlawful actions. He sharply criticized the Court for making generalized pronouncements on the First Amendment’s reach, when the Court’s reiterations of theory glossed over the difficult factual questions that had given rise to regulation in the first place—whether in grouping brutal verbal attacks with protected political speech; or in equating interactive Duke Nukem games with the text of Grimm’s Fairy Tales; or in extending constitutional protection to the video of women illegally crushing animals. And Alito was particularly sensitive to the Court’s refusal to grant at least a modicum of deference to the local communities and state officials who were attempting to protect their populations against actions that they found so injurious as to require state intervention.

A general and compelling case against the current reign of absolutism is made by David Lowenthal in No Liberty for License: The Forgotten Logic of the First Amendment. My copy is now in someone else’s hands, so I must rely on Edward J. Erler’s review of the book:

Liberty is lost when the law allows “freedom of speech, and of the press” to undermine the social norms that enable liberty. Liberty is not an abstraction; it is the scope of action that is allowed by socially agreed upon rights. It is that restrained scope of action which enables people to coexist willingly, peacefully, and cooperatively for their mutual benefit. Such coexistence depends greatly on mutual trust, respect, and forbearance. Liberty is therefore necessarily degraded when courts sunder social restraints in the name of liberty.

Other related posts:
On Liberty
Line-Drawing and Liberty
The Meaning of Liberty
Positive Liberty vs. Liberty
Facets of Liberty
Burkean Libertarianism
What Is Libertarianism?
True Libertarianism, One More Time
Human Nature, Liberty, and Rationalism
Liberty, Negative Rights, and Bleeding Hearts
Why Conservatism Works
Liberty and Society
Liberty as a Social Construct: Moral Relativism?
Defending Liberty against (Pseudo) Libertarians
Defining Liberty
The Futile Search for “Natural Rights”
The Pseudo-Libertarian Temperament
Parsing Political Philosophy (II)
Getting Liberty Wrong
Libertarianism and the State
“Liberalism” and Personal Responsibility
My View of Libertarianism
More About Social Norms and Liberty
The War on Conservatism
Social Justice vs. Liberty
Economically Liberal, Socially Conservative
The Harm Principle Revisited: Mill Conflates Society and State
Liberty and Social Norms Re-examined
Natural Law, Natural Rights, and the Real World
Rescuing Conservatism
If Men Were Angels
The Left and Evergreen State: Reaping What Was Sown

Suicidal Despair and the “War on Whites”

This entry is prompted by a recent spate of posts and articles about the rising mortality rate among non-Hispanic whites without a college degree (hereinafter working-class whites, for convenience). Thomas Lifson characterizes the trend as “a spiritual crisis”, after saying this:

White males, in large numbers, are simply losing their will to live, and as a result, they are dying so prematurely and in such large numbers that a startling demographic gap has emerged. [“Stunning Evidence that the Left Has Won its War on White Males“, American Thinker, March 26, 2017]

Later in the piece, Lifson gets to the “war” on white males:

For at least four decades, white males have been under continuous assault as bearers of “white privilege” and beneficiaries of sexism. Special preferences and privileges have been granted to other groups, but that is the least of it. More importantly, the very basis of the psychological self-worth of white males has been under attack. White males are frequently instructed by authority figures in education and the media that they are responsible for most of the evils of the modern world, that the achievements of Euro-American civilization are a net loss for humanity, stained by exploitation, racism, unfairness, and every other collective evil the progressive mind can manufacture.

Some white males are relatively unscathed by the psychological warfare, but others are more vulnerable. Those who have educational, financial, or employment achievements that have rewarded their efforts may be able to keep going as productive members of society, their self-esteem resting on tangible fruits of their work and social position. But other white males, especially those who work with their hands and have been seeing job opportunities contract or disappear, have been losing the basis for a robust sense of self-worth as their job opportunities disappear.

We now have statistical evidence that political correctness kills.

We have no such thing. The recent trend isn’t yet significant. But it is real, and government is the underlying cause.

To begin at the beginning, the source of the spate of articles about the rising mortality rate of working-class whites is Anne Case and Angus Deaton’s “Mortality and Morbidity in the 21st Century” (Brookings Institution, Brookings Papers on Economic Activity (conference edition), March 17, 2017). Three of the paper’s graphs set the scene. This one shows mortality trends in the United States:

The next figure indicates that the phenomenon isn’t unique to non-Hispanic whites in the age 50-54 bracket:

But the trend among American whites defies the trends in several other Western nations:

Whence the perverse trend? It seems due mainly to suicidal despair:

How do these recent trends stack up against the long view? I couldn’t find a long time series for drug, alcohol, and suicide mortality. But I did find a study by Feijun Luo et al. that traces suicide rates from just before the onset of the Great Depression to just before the onset of the Great Recession — “Impact of Business Cycles on US Suicide Rates, 1928–2007” (American Journal of Public Health, June 2011). Here are two key graphs from the report:

The graphs don’t reproduce well, so the following quotations will be of help:

The overall suicide rate fluctuated from 10.4 to 22.1 over the 1928–2007 period. It peaked in 1932, the last full year of the Great Depression, and bottomed in 2000. The overall suicide rate decreased from 18.0 in 1928 to 11.2 in 2007. However, most of the decline occurred before 1945; after that it fluctuated until the mid-1950s, and then it gradually moved up until the late 1970s. The overall suicide rate resumed its downward trend from the mid-1980s to 2000, followed by a trend reversal in the new millennium.

Figure 1a [top] shows that the overall suicide rate generally increased in recessions, especially in severe recessions that lasted longer than 1 year. The largest increase in the overall suicide rate occurred during the Great Depression (1929–1933), when it surged from 18.0 in 1928 to 22.1 (the all-time high) in 1932, the last full year of the Great Depression. [The Great Depression actually lasted until 1940: TEA.] This increase of 22.8% was the highest recorded for any 4-year interval during the study period. The overall suicide rate also rose during 3 other severe recessions: [the recession inside the Great Depression] (1937–1938), the oil crisis (1973–1975), and the double-dip recession (1980–1982). Not only did the overall suicide rate generally rise during recessions; it also mostly fell during expansions…. However, the overall suicide rate did not fall during the 1960s (i.e., 1961–1969), a notable phenomenon that will be explained by the different trends of age-specific suicide rates.

The age-specific suicide rates displayed more variations than did the overall suicide rate, and the trends of those age-specific suicide rates were largely different. As shown in Figure 1b [bottom], from 1928–2007, the suicide rates of the 2 elderly groups (65–74 years and 75 years and older) and the oldest middle-age group (55–64 years) experienced the most remarkable decline. The suicide rates of those groups declined in both pre- and postwar periods. The suicide rates of the other 2 middle-aged groups (45–54 years and 35–44 years) also declined from 1928–2007, which we attributed to the decrease during the war period more than offsetting the increase in the postwar period. In contrast with the declining suicide rates of the 2 elderly and 3 middle-age groups, the suicide rates of the 2 young groups (15–24 years and 25–34 years) increased or just marginally decreased from 1928–2007. The 2 young groups experienced a marked increase in suicide rates in the postwar period. The suicide rate of the youngest group (5–14 years) also increased from 1928–2007. However, because of its small magnitude, we do not include this increase in the subsequent discussion.

We noted that the suicide rate of the group aged 65–74 years, the highest of all age groups until 1936, declined the most from 1928 to 2007. That rate started at 41.2 in 1928 and dropped to 12.6 in 2007, peaking at 52.3 in 1932 and bottoming at 12.3 in 2004. By contrast, the suicide rate of the group aged 15–24 years increased from 6.7 in 1928 to 9.7 in 2007. That rate peaked at 13.7 in 1994 and bottomed at 3.8 in 1944, and it generally trended upward from the late 1950s to the mid-1990s. The suicide rate differential between the group aged 65–74 years and the group aged 15–24 years generally decreased until 1994, from 34.5 in 1928 to 1.6 in 1994.

All age groups experienced a substantial increase in their suicide rates during the Great Depression, and most groups (35–44 years, 45–54 years, 55–64 years, 65–74 years, and 75 years and older) set record-high suicide rates in 1932; but they reacted differently to many other recessions, including severe recessions such as the [1937-1938 recession] and the oil crisis. Their reactions were different during expansions as well, most notably in the 1960s, when the suicide rates of the 3 oldest groups (75 years and older, 65–74 years, and 55–64 years) declined moderately, and those of the 3 youngest groups (15–24 years, 25–34 years, and 35–44 years) rose noticeably….

[T]he overall suicide rate and the suicide rate of the group aged 45–54 years were associated with business cycles at the significance level of 1%; the suicide rates of the groups aged 25–34 years, 35–44 years, and 55–64 years were associated with business cycles at the significance level of 5%; and the suicide rates of the groups aged 15–24 years, 65–74 years, and 75 years and older were associated with business cycles at nonsignificant levels. To summarize, the overall suicide rate was significantly countercyclical; the suicide rates of the groups aged 25–34 years, 35–44 years, 45–54 years, and 55–64 years were significantly countercyclical; and the suicide rates of the groups aged 15–24 years, 65–74 years, and 75 years and older were not significantly countercyclical.

The following graph, obtained from the website of the American Foundation for Suicide Prevention, extends the age-related analysis to 2015:

And this graph, from the same source, shows that the rising suicide rate is concentrated among whites and American Indians:

Though this graph encompasses deaths from all causes, the opposing trends for blacks and whites suggest strongly that working-class whites in all age groups have become much more prone to suicidal despair in the past 20 years. Moreover, the despair has persisted through periods of economic decline and economic growth (slow as it has been).

Why? Case and Deaton opine:

[S]ome of the most convincing discussions of what has happened to working class whites emphasize a long-term process of decline, or of cumulative deprivation, rooted in the steady deterioration in job opportunities for people with low education…. This process … worsened over time, and caused, or at least was accompanied by, other changes in society that made life more difficult for less-educated people, not only in their employment opportunities, but in their marriages, and in the lives of and prospects for their children. Traditional structures of social and economic support slowly weakened; no longer was it possible for a man to follow his father and grandfather into a manufacturing job, or to join the union. Marriage was no longer the only way to form intimate partnerships, or to rear children. People moved away from the security of legacy religions or the churches of their parents and grandparents, towards churches that emphasized seeking an identity, or replaced membership with the search for connections…. These changes left people with less structure when they came to choose their careers, their religion, and the nature of their family lives. When such choices succeed, they are liberating; when they fail, the individual can only hold him or herself responsible….

As technical change and globalization reduced the quantity and quality of opportunity in the labor market for those with no more than a high school degree, a number of things happened that have been documented in an extensive literature. Real wages of those with only a high school degree declined, and the college premium increased….

Lower wages made men less marriageable, marriage rates declined, and there was a marked rise in cohabitation, then much less frowned upon than had been the case a generation before…. [B]eyond the cohort of 1940, men and women with less than a BA degree are less likely to have ever been married at any given age. Again, this is not occurring among those with a four-year degree. Unmarried cohabiting partnerships are less stable than marriages. Moreover, among those who do marry, those without a college degree are also much more likely to divorce than are those with a degree….

These accounts share much, though not all, with Murray’s … account [in Coming Apart] of decline among whites in his fictional “Fishtown.” Murray argues that traditional American virtues are being lost among working-class white Americans, especially the virtue of industriousness. The withdrawal of men from the labor force reflects this loss of industriousness; young men in particular prefer leisure—which is now more valuable because of video games … —though much of the withdrawal of young men is for education…. The loss of virtue is supported and financed by government payments, particularly disability payments….

In our account here, we emphasize the labor market, globalization and technical change as the fundamental forces, and put less focus on any loss of virtue, though we certainly accept that the latter could be a consequence of the former. Virtue is easier to maintain in a supportive environment. Yet there is surely general agreement on the roles played by changing beliefs and attitudes, particularly the acceptance of cohabitation, and of the rearing of children in unstable cohabiting unions.

These slow-acting and cumulative social forces seem to us to be plausible candidates to explain rising morbidity and mortality, particularly their role in suicide, and with the other deaths of despair, which share much with suicides. As we have emphasized elsewhere, … purely economic accounts of suicide have consistently failed to explain the phenomenon. If they work at all, they work through their effects on family, on spiritual fulfillment, and on how people perceive meaning and satisfaction in their lives in a way that goes beyond material success. At the same time, cumulative distress, and the failure of life to turn out as expected is consistent with people compensating through other risky behaviors such as abuse of alcohol, overeating, or drug use that predispose towards the outcomes we have been discussing….

What our data show is that the patterns of mortality and morbidity for white non-Hispanics without a college degree move together over lifetimes and birth cohorts, and that they move in tandem with other social dysfunctions, including the decline of marriage, social isolation, and detachment from the labor force…. Whether these factors (or factor) are “the cause” is more a matter of semantics than statistics. The factor could certainly represent some force that we have not identified, or we could try to make a case that declining real wages is more fundamental than other forces. Better, we can see globalization and automation as the underlying deep causes. Ultimately, we see our story as about the collapse of the white, high school educated, working class after its heyday in the early 1970s, and the pathologies that accompany that decline. [Op. cit., pp. 29-38]

The seemingly rigorous and well-reasoned analyses reported by Case-Deaton and Luo et al. are seriously flawed, for these reasons:

  • Case and Deaton’s focus on events since 1990 is analogous to a search for lost keys under a street lamp because that’s where the light is. As shown in the graphs taken from Luo et al., suicide rates have at various times risen (and dropped) as sharply as they have in recent years.
  • Luo et al. address a much longer time span but miss an important turning point, which came during World War II. Because of that, they resort to a strained, non-parametric analysis of the relationship between the suicide rate and business cycles.
  • It is misleading to focus on age groups, as opposed to birth-year cohorts. For example, persons in the 50-54 age group in 1990 were born between 1936 and 1940, but in 2010 persons in the 50-54 age group were born between 1956 and 1960. The groups, in other words, don’t represent the same cohort. The only meaningful suicide rate for a span of more than a few years is the rate for the entire population.
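The cohort-shift point in the last bullet is simple arithmetic, and can be sketched as follows (a minimal illustration, not part of either study's methodology):

```python
# Tiny sketch of the cohort-shift point: the same age bracket samples
# a different birth cohort in different survey years.

def birth_cohort(survey_year, age_low, age_high):
    """Return (earliest, latest) birth years for an age bracket in a given year."""
    return survey_year - age_high, survey_year - age_low

# Age 50-54 in 1990 vs. in 2010:
print(birth_cohort(1990, 50, 54))  # (1936, 1940)
print(birth_cohort(2010, 50, 54))  # (1956, 1960)
```

Two surveys of "50–54-year-olds" taken 20 years apart are thus describing entirely different people, which is why a long-run, age-group comparison can mislead.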

I took a fresh look at the overall suicide rate and its relationship to the state of the economy. First, I extended the overall, age-adjusted suicide rate for 1928-2007 provided by Luo et al. in Supplementary Table B (purchase required) by splicing it with a series for 1999-2014 from the Centers for Disease Control and Prevention's National Center for Health Statistics (Data Brief 241). I then drew on the database at Measuring Worth to derive year-over-year changes in real GDP for 1928-2014. Here's an overview of the two time series:
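The splicing and differencing steps just described can be sketched in a few lines. The figures below are illustrative placeholders, not the actual Luo et al., NCHS, or Measuring Worth numbers:

```python
# Minimal sketch of (1) splicing two overlapping rate series and
# (2) deriving year-over-year changes in real GDP.
# All values are illustrative placeholders.

def splice(series_a, series_b):
    """Append series_b to series_a, keeping series_a's values for
    any overlapping years."""
    merged = dict(series_b)
    merged.update(series_a)  # series_a wins on overlap
    return dict(sorted(merged.items()))

def yoy_change(series):
    """Year-over-year percentage change, keyed by the later year."""
    years = sorted(series)
    return {y1: 100.0 * (series[y1] - series[y0]) / series[y0]
            for y0, y1 in zip(years, years[1:])}

# Illustrative suicide rates per 100,000 (placeholder values).
luo_rates = {2005: 11.0, 2006: 11.1, 2007: 11.3}
nchs_rates = {2006: 11.0, 2007: 11.2, 2008: 11.6, 2009: 11.8}
spliced = splice(luo_rates, nchs_rates)

# Illustrative real GDP in trillions (placeholder values).
real_gdp = {2005: 14.9, 2006: 15.3, 2007: 15.6}
gdp_growth = yoy_change(real_gdp)
```

The result is one continuous rate series and a matching series of annual real-GDP growth rates that can be compared year by year.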

The suicide rate doesn’t drop below 15 until 1942. From 1943 through 2014 it fluctuates within a narrow range, between 10.4 (2000) and 13.6 (1975). Despite the rise since 2000, the overall rate still hasn’t returned to the 1975 peak. And only in recent years has the overall rate returned to figures that occurred often between 1943 and 1994.

Moreover, the suicide rate from 1928 through 1942 is strongly correlated with changes in real GDP. But the rate from 1943 through 2014 is not:

Something happened during the war years to loosen the connection between the state of the economy and the suicide rate. That something was the end of the pervasive despair that the Great Depression inflicted on huge numbers of Americans. It’s as if America had a mood transplant, one which has lasted for more than 70 years. The recent uptick in the rate of suicide (and the accompanying rise in slow-motion suicide) is sad because it represents wasted lives. But it is within one standard deviation of the 1943-2014 average of 12.2 suicides per 100,000 persons:
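The two checks underlying these claims — correlating the suicide rate with real-GDP growth over a subperiod, and testing whether a recent value falls within one standard deviation of a long-run mean — can be sketched as follows. The numbers are illustrative, not the actual series:

```python
# Sketch of (1) a subperiod correlation between the suicide rate and
# year-over-year real-GDP change, and (2) a within-one-standard-deviation
# check against a long-run mean. All figures are illustrative.

from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative Depression-era-style subperiod: high rates in contraction
# years, lower rates in expansion years.
rates = [17.0, 18.1, 16.5, 15.2, 14.0]
gdp_growth = [-8.0, -13.0, -1.3, 10.8, 8.9]
r = pearson_r(rates, gdp_growth)  # strongly negative => countercyclical

# Is a recent value within one standard deviation of the long-run mean?
longrun = [12.0, 11.5, 13.0, 12.5, 11.8, 12.4]  # illustrative 1943-2014-style series
m, s = mean(longrun), pstdev(longrun)
recent = 12.6
within_one_sd = abs(recent - m) <= s
```

A strongly negative `r` in the prewar subperiod but a near-zero `r` postwar would be the pattern described above; the one-standard-deviation test is what puts the recent uptick in long-run perspective.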

(It seems to me that researchers ought to be asking why the rate was so low for about 20 years, beginning in the 1980s.)

Perhaps the recent uptick among working-class whites can be blamed, in part, on loss of “virtue”, technological change, and globalization — as Case and Deaton claim. But they fail to notice the bigger elephant in the room: the destructive role of government.

Technological change and globalization simply reinforce the disemployment effects of the long-term decline in the rate of economic growth. I’ve addressed the decline many times, most recently in “Presidents and Economic Growth“. The decline has three main causes, all attributable to government action, which I’ve assessed in “The Rahn Curve Revisited“: the rise in government spending as a fraction of GDP, the rise in the number of regulations on the books, and the (unsurprising) effect of those variables on private business investment. The only silver lining has been a decline in the rate of inflation, which is unsurprising in view of the general slow-down of economic growth. Many jobs may have disappeared because of technological change and many jobs may have been “shipped overseas”, but there would be a lot more jobs if government had kept its hands off the economy and out of Americans’ wallets.

Moreover, the willingness of Americans — especially low-skill Americans — to seek employment has been eroded by various government programs: Aid to Families with Dependent Children (a boon to unwed mothers and a bane to family stability), food stamps, disability benefits, the expansion of Medicaid, subsidized health-care for “children” under the age of 26, and various programs that encourage women to work outside the home, thus fostering male unemployment.

Economist Edward Glaeser puts it this way:

The rise of joblessness—especially among men—is the great American domestic crisis of the twenty-first century. It is a crisis of spirit more than of resources. The jobless are far more prone to self-destructive behavior than are the working poor. Proposed solutions that focus solely on providing material benefits are a false path. Well-meaning social policies—from longer unemployment insurance to more generous disability diagnoses to higher minimum wages—have only worsened the problem; the futility of joblessness won’t be solved with a welfare check….

The New Deal saw the rise of public programs that worked against employment. Wage controls under the National Recovery Act made it difficult for wages to fall enough to equilibrate the labor market. The Wagner Act strengthened the hand of unions, which kept pay up and employment down. Relief efforts for the unemployed, including federal make-work jobs, eased the pressure on the jobless to find private-sector work….

… In 2011, more than one in five prime-age men were out of work, a figure comparable with the Great Depression. But while employment came back after the Depression, it hasn’t today. The unemployment rate may be low, but many people have quit the labor force entirely and don’t show up in that number. As of December 2016, 15.2 percent of prime-age men were jobless—a figure worse than at any point between World War II and the Great Recession, except during the depths of the early 1980s recession….

Joblessness is disproportionately a condition of the poorly educated. While 72 percent of college graduates over age 25 have jobs, only 41 percent of high school dropouts are working. The employment-rate gap between the most and least educated groups has widened from about 6 percent in 1977 to almost 15 percent today….

Both Franklin Roosevelt and Lyndon Johnson aggressively advanced a stronger safety net for American workers, and other administrations largely supported these efforts. The New Deal gave us Social Security and unemployment insurance, which were expanded in the 1950s. National disability insurance debuted in 1956 and was made far more accessible to people with hard-to-diagnose conditions, like back pain, in 1984. The War on Poverty delivered Medicaid and food stamps. Richard Nixon gave us housing vouchers. During the Great Recession, the federal government temporarily doubled the maximum eligibility time for receiving unemployment insurance.

These various programs make joblessness more bearable, at least materially; they also reduce the incentives to find work. Consider disability insurance. Industrial work is hard, and plenty of workers experience back pain. Before 1984, however, that pain didn’t mean a disability check for American workers. After 1984, though, millions went on the disability rolls. And since disability payments vanish if the disabled person starts earning more than $1,170 per month, the disabled tend to stay disabled…. Disability insurance alone doesn’t entirely explain the rise of long-term joblessness—only one-third or so of jobless males get such benefits. But it has surely played a role.

Other social-welfare programs operate in a similar way. Unemployment insurance stops completely when someone gets a job, … [thus] the unemployed tend to find jobs just as their insurance payments run out. Food-stamp and housing-voucher payments drop 30 percent when a recipient’s income rises past a set threshold by just $1. Elementary economics tells us that paying people to be or stay jobless will increase joblessness….

The rise of joblessness among the young has been a particularly pernicious effect of the Great Recession. Job loss was extensive among 25–34-year-old men and 35–44-year-old men between 2007 and 2009. The 25–34-year-olds have substantially gone back to work, but the number of employed 35–44-year-olds, which dropped by 2 million at the start of the Great Recession, hasn’t recovered. The dislocated workers in this group seem to have left the labor force permanently.

Unfortunately, policymakers seem intent on making the joblessness crisis worse. The past decade or so has seen a resurgent progressive focus on inequality—and little concern among progressives about the downsides of discouraging work. Advocates of a $15 minimum hourly wage, for example, don’t seem to mind, or believe, that such policies deter firms from hiring less skilled workers. The University of California–San Diego’s Jeffrey Clemens examined states where higher federal minimum wages raised the effective state-level minimum wage during the last decade. He found that the higher minimum “reduced employment among individuals ages 16 to 30 with less than a high school education by 5.6 percentage points,” which accounted for “43 percent of the sustained, 13 percentage point decline in this skill group’s employment rate.”

The decision to prioritize equality over employment is particularly puzzling, given that social scientists have repeatedly found that unemployment is the greater evil…. One recent study estimated that unemployment leads to 45,000 suicides worldwide annually. Jobless husbands have a 50 percent higher divorce rate than employed husbands. The impact of lower income on suicide and divorce is much smaller. The negative effects of unemployment are magnified because it so often becomes a semipermanent state.

Time-use studies help us understand why the unemployed are so miserable. Jobless men don’t do a lot more socializing; they don’t spend much more time with their kids. They do spend an extra 100 minutes daily watching television, and they sleep more. The jobless also are more likely to use illegal drugs….

Joblessness and disability are also particularly associated with America’s deadly opioid epidemic…. The strongest correlate of those deaths is the share of the population on disability. That connection suggests a combination of the direct influence of being disabled, which generates a demand for painkillers; the availability of the drugs through the health-care system; and the psychological misery of having no economic future.

Increasing the benefits received by nonemployed persons may make their lives easier in a material sense but won’t help reattach them to the labor force. It won’t give them the sense of pride that comes from economic independence. It won’t give them the reassuring social interactions that come from workplace relationships. When societies sacrifice employment for a notion of income equality, they make the wrong choice. [“The War on Work — And How to End It“, City Journal, special issue: The Shape of Work to Come 2017]

In sum, the rising suicide rate — whatever its significance — is a direct and indirect result of government policies. “We’re from the government and we’re here to help” is black humor, at best. The left is waging a war on white males. But the real war — the war that kills — is hidden from view behind the benign facade of governmental “compassion”.

Having said all of that, I will end on a cautiously positive note. There still is upward mobility in America. Not all working-class people are destined for suicidal despair, only those at the margin who have pocketed the fool’s gold of government handouts.

Other related posts:
Bubbling Along
Economic Mobility Is Alive and Well in America
H.L. Mencken’s Final Legacy
The Problem with Political Correctness
“They Deserve to Die”?
Mencken’s Pearl of Wisdom
Class in America
Another Thought or Two about Class
The Midwest Is a State of Mind

A Personality Test: Which Antagonist Do You Prefer?

1. Archangel Michael vs. Lucifer (good vs. evil)

2. David vs. Goliath (underdog vs. bully)

3. Alexander Hamilton vs. Aaron Burr (a slippery politician vs. a slippery politician-cum-traitor)

4. Richard Nixon vs. Alger Hiss (a slippery politician vs. a traitorous Soviet spy)

5. Sam Ervin vs. Richard Nixon (an upholder of the Constitution vs. a slippery politician)

6. Kenneth Starr vs. Bill Clinton (a straight arrow vs. a slippery politician)

7. Elmer Fudd vs. Bugs Bunny (a straight arrow with a speech impediment vs. a rascally rabbit)

8. Jerry vs. Tom (a clever mouse vs. a dumb but determined cat)

9. Tweety Bird vs. Sylvester the Cat (a devious bird vs. a predatory cat)

10. Road Runner vs. Wile E. Coyote (a devious bird vs. a stupid canine)

11. Rocky & Bullwinkle vs. Boris & Natasha (fun-loving good guys vs. funny bad guys)

12. Dudley Do-Right vs. Snidely Whiplash (a straight arrow vs. a stereotypical villain)

Summarize and explain your choices in the comments. Suggestions for other pairings are welcome.

The Midwest Is a State of Mind

I am a son of the Middle Border,* now known as the Midwest. I left the Midwest, in spirit, almost 60 years ago, when I matriculated at a decidedly cosmopolitan State university. It was in my home State, but not much of my home State.

Where is the Midwest? According to Wikipedia, the U.S. Census Bureau defines the Midwest as comprising the 12 States shaded in red:

They are, from north to south and west to east, North Dakota, South Dakota, Nebraska, Kansas, Minnesota, Iowa, Missouri, Wisconsin, Illinois, Michigan, Indiana, and Ohio.

In my experience, the Midwest really begins on the west slope of the Appalachians and includes much of New York State and Pennsylvania. I have lived and traveled in that region, and found it, culturally, to be much like the part of “official” Midwest where I was born and raised.

I am now almost 60 years removed from the Midwest (except for a three-year sojourn in the western part of New York State, near the Pennsylvania border). Therefore, I can’t vouch for the currency of a description that appears in Michael Dirda’s review of Jon K. Lauck’s From Warm Center to Ragged Edge: The Erosion of Midwestern Literary and Historical Regionalism, 1920-1965 (Iowa and the Midwest Experience). Dirda writes:

[Lauck] surveys “the erosion of Midwestern literary and historical regionalism” between 1920 and 1965. This may sound dull as ditch water to those who believe that the “flyover” states are inhabited largely by clodhoppers, fundamentalist zealots and loudmouthed Babbitts. In fact, Lauck’s aim is to examine “how the Midwest as a region faded from our collective imagination” and “became an object of derision.” In particular, the heartland’s traditional values of hard work, personal dignity and loyalty, the centrality it grants to family, community and church, and even the Jeffersonian ideal of a democracy based on farms and small land-holdings — all these came to be deemed insufferably provincial by the metropolitan sophisticates of the Eastern Seaboard and the lotus-eaters of the West Coast.

That was the Midwest of my childhood and adolescence. I suspect that the Midwest of today is considerably different. American family life is generally less stable than it was 60 years ago; Americans generally are less church-going than they were 60 years ago; and social organizations are less robust than they were 60 years ago. The Midwest cannot have escaped two generations of social and cultural upheaval fomented by the explosion of mass communications, the debasement of mass culture, the rise of the drugs-and-rock culture, the erasure of social norms by government edicts, and the creation of a culture of dependency on government.

I nevertheless believe that there is a strong, residual longing for and adherence to the Midwestern culture of 60 years ago — though it’s not really unique to the Midwest. It’s a culture that persists throughout America, in rural areas, villages, towns, small cities, and even exurbs of large cities.

The results of last year’s presidential election bear me out. Hillary Clinton represented the “sophisticates” of the Eastern Seaboard and the lotus-eaters of the West Coast. She represented the supposed superiority of technocracy over the voluntary institutions of civil society. She represented a kind of smug pluralism and internationalism that smirks at traditional values and portrays as clodhoppers and fundamentalist zealots those who hold such values. Donald Trump, on the other hand (and despite his big-city roots and great wealth), came across as a man of the people who hold such values.

What about Clinton’s popular-vote “victory”? Nationally, she garnered 2.9 million more votes than Trump. But the manner of Clinton’s “victory” underscores the nation’s cultural divide and the persistence of a Midwestern state of mind. Clinton’s total margin of victory in California, New York, and the District of Columbia was 6.3 million votes. That left Trump ahead of Clinton by 3.4 million votes in the other 48 States, and even farther ahead in non-metropolitan areas. Clinton’s “appeal” (for want of a better word) was narrow; Trump’s was much broader (e.g., winning a higher percentage of the two-party vote than Romney did in 39 States). Arguably, it was broader than that of every Republican presidential candidate since Ronald Reagan won a second term in 1984.
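The margin arithmetic in the preceding paragraph checks out directly (figures in millions, as given in the text):

```python
# Checking the vote-margin arithmetic (figures in millions, from the text).
clinton_national_margin = 2.9       # Clinton's nationwide popular-vote margin
clinton_margin_ca_ny_dc = 6.3       # her combined margin in CA, NY, and DC
trump_margin_elsewhere = clinton_margin_ca_ny_dc - clinton_national_margin
print(round(trump_margin_elsewhere, 1))  # 3.4
```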

The Midwestern state of mind, however much it has weakened in the last 60 years, remains geographically dominant. In the following map, counties won by Clinton are shaded in blue; counties won by Trump are shaded in red:

Source: Wikipedia article about the 2016 presidential election.

* This is an allusion to Hamlin Garland‘s novel, A Son of the Middle Border. Garland, a native of Wisconsin, was himself a son of the Middle Border.

Related posts:
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
A Dose of Reality
God-Like Minds
An Addendum to (Asymmetrical) Ideological Warfare
Khizr Khan’s Muddled Logic
A Lesson in Election-Rigging
My Platform (which reflects a Midwestern state of mind)
Polarization and De-facto Partition
H.L. Mencken’s Final Legacy
The Shy Republican Supporters
Roundup (see “Civil War II”)
Retrospective Virtue-Signalling
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
You Can’t Go Home Again
Class in America
A Word of Warning to Leftists (and Everyone Else)
Another Thought or Two about Class

Punctuation within Quotation Marks: British vs. American Style

I’ve added this to my page “On Writing“:

I have reverted to the British style of punctuating in-line quotations, which I followed 40 years ago when I published a weekly newspaper. The British style is to enclose within quotation marks only (a) the punctuation that appears in quoted text or (b) the title of a work (e.g., a blog post) that is usually placed within quotation marks.

I have reverted because of the confusion and unsightliness caused by the American style. It calls for the placement of periods and commas within quotation marks, even if the periods and commas don’t occur in the quoted material or title. Also, if there is a question mark at the end of quoted material, it replaces the comma or period that might otherwise be placed there.

If I had continued to follow American style, I would have ended a sentence in a recent post with this:

… “A New (Cold) Civil War or Secession?” “The Culture War,” “Polarization and De-facto Partition,” and “Civil War?

What a hodge-podge. There’s no comma between the first two entries, and the sentence ends with an inappropriate question mark. With two titles ending in question marks, there was no way for me to avoid a series in which a comma is lacking. I could have avoided the sentence-ending question mark by recasting the list, but the items are listed chronologically, which is how they should be read.

I solved these problems easily by reverting to the British style:

… “A New (Cold) Civil War or Secession?”, “The Culture War“, “Polarization and De-facto Partition“, and “Civil War?“.

This not only eliminates the hodge-podge, but is also more logical and accurate. All items are separated by commas, commas aren’t displaced by question marks, and the declarative sentence ends with a period instead of a question mark.

Another Thought or Two about Class

My recent post, “Class in America”, offers a straightforward taxonomy of the socioeconomic pecking order in the United States. The post doesn’t address the dynamics of movement between classes, so I want to say something about dynamics. And I want to address the inevitability of class-like distinctions, despite the avowed (but hypocritical) goals of leftists to erase such distinctions.

With respect to dynamics, I begin with these observations from “Class in America”:

Class in America isn’t a simple thing. It has something to do with one’s inheritance, which is not only (or mainly) wealth. It has mainly to do with one’s intelligence (which is largely of genetic origin) and behavior (which also has a genetic component). Class also has a lot to do with what one does with one’s genetic inheritance, however rich or sparse it is. Class still depends a lot on acquired skills, drive, and actual achievements — even dubious ones like opining, acting, and playing games — and the income and wealth generated by them.

Class distinctions depend on the objective facts (whether observable or not) about genetic inheritance and one’s use (or not) thereof. Class distinctions also depend on broadly shared views about the relative prestige of various combinations of wealth, income (which isn’t the same as wealth), power, influence, and achievement. Those broadly shared views shift over time.

For example, my taxonomy includes three “suspect” classes whose denizens are athletes and entertainers. There were relatively few highly paid entertainers and almost no highly paid athletes in the late 1800s, when some members of today’s old-wealth aristocracy (e.g., Rockefeller and Ford) had yet to rise to that pinnacle. Even those few athletes and entertainers, unless they had acquired a patina of “culture,” would have been considered beyond the pale of class distinctions — oddities to be applauded (or not) and rewarded for the exercise of their talents, but not to be emulated by socially striving youngsters.

How the world has changed. Now that sports and entertainment have become much more visible and higher-paying than they were in the Gilded Age, there are far more Americans who accord high status to the practitioners in those fields. This is not only a matter of income, but also a matter of taste. If the American Dream of the late 19th century was dominated by visions of rising to the New-Wealth Aristocracy, the American Dream of the early 21st century gives a place of prominence to visions of becoming the next LeBron James or Lady Gaga.

I should qualify the preceding analysis by noting that it applies mainly to whites of European descent and those blacks who are American-born or more than a generation removed from foreign shores. I believe that the old American Dream still prevails among Americans of Asian descent and blacks who are less than two generations removed from Africa or the Caribbean. The Dream prevails to a lesser extent among Latinos — who have enjoyed great success in baseball — but probably more than it does among the aforementioned whites and blacks. As a result, the next generations of upper classes (aside from the Old-Wealth Aristocracy) will become increasingly Asian and Latino in complexion.

Yes, there are millions of white and black Americans (of non-recent vintage) who still share The Dream, though millions more have abandoned it. Their places will be taken by Americans of Asian descent, Latinos, and African-Americans of recent vintage. (I should add that, in any competition based on intellectual merit, Asians generally have the advantage of above-average-to-high intelligence.)

Which brings me to my brief and unduly dismissive rant about the predominantly white and

growing mob of whiny, left-wing fascists[.] For now, they’re sprinkled among the various classes depicted in the table, even classes at or near the top. In their vision of a “classless” society, they would all be at the top, of course, flogging conservatives, plutocrats, malefactors of great wealth, and straight, white (non-Muslim, non-Hispanic), heterosexual males — other than those of their whiny, fascist ilk.

The whiny left is not only predominantly white but also predominantly college-educated, and therefore probably of above-average intelligence. Though there is a great deal of practiced glibness at work among the left-wingers who dominate the professoriate and punditocracy, the generally high intelligence of the whiny class can’t be denied. But the indisputable fact of its class-ness testifies to an inconvenient truth: It is natural for people to align themselves in classes.

Class distinctions are status distinctions. But they can also connote the solidarity of an in-group that is united by a worldview of some kind. The worldview is usually of a religious character, where “religious” means a cult-like devotion to certain beliefs that are taken on faith. Contemporary leftists signal their solidarity — and class superiority — in several ways:

They proclaim themselves on the side of science, though most of them aren’t scientists and wouldn’t know real science if it bit them in the proverbial hindquarters.

There are certain kinds of “scientific” dangers and catastrophes that attract leftists because they provide a pretext for shaping people’s lives in puritanical ways: catastrophic anthropogenic global warming; extreme environmentalism, which stretches to the regulation of mud puddles; second-hand smoking as a health hazard; the “evident” threat posed by the mere depiction or mention of guns; “overpopulation” (despite two centuries of it); obesity (a result, God forbid, of market forces that result in the greater nourishment of poor people); many claims about the ill effects of alcohol, salt, butter, fats, etc., that have been debunked; any number of regulated risks that people would otherwise treat as IQ tests thrown up by life and opportunities to weed out the gene pool; and on and on.

They are in constant search of victims to free from oppression, whether it is the legal oppression of the Jim Crow South or simply the “oppression” of hurt feelings inflicted on the left itself by those who dare to hold different views. (The left isn’t always wrong about the victims it claims to behold, but it has been right only when its tender sensibilities have been confirmed by something like popular consensus.)

Their victim-olatry holds no place, however, for the white working class, whose degree of “white privilege” is approximately zero. To earn one’s daily bread by sweating seems to be honorable only for those whose skin isn’t white or whose religion isn’t Christian.

They are astute practitioners of moral relativism. The inferior status of women in Islam is evidently of little or no account to them. Many of them were even heard to say, in the wake of 9/11, that “we had it coming,” though they were not among the “we.” And “we had it coming” for what, the audacity of protecting access to a vital resource (oil) that helps to drive an economy whose riches subsidize their juvenile worldview? It didn’t occur to those terrorists manqué that it was Osama bin Laden who had it coming. (And he finally “got” it, but Obama — one of their own beneath his smooth veneer — was too sensitive to the feelings of our Muslim enemies to show the proof that justice was done. This was also done to spite Americans who, rightly, wanted more than a staged photo of Obama and his stooges watching the kill operation unfold.)

To their way of thinking, justice — criminal and “social” — consists of outcomes that favor certain groups. For example, it is prima facie wrong that blacks are disproportionately convicted of criminal offenses, especially violent crimes, because … well, just because. It is right (“socially just”) that blacks and other “protected” groups get jobs, promotions, and university admissions for which they are less-qualified than whites and Asians because slavery happened more than 160 years ago and blacks still haven’t recovered from it. (It is, of course, futile and “racist” to mention that blacks are generally less intelligent than whites and Asians.)

Their economic principles (e.g., “helping” the poor through minimum wage and “living wage” laws, buying local because … whatever, promoting the use of bicycles to reduce traffic congestion, favoring strict zoning laws while bemoaning a lack of “affordable” housing) are anti-scientific but virtuous. With leftists, the appearance of virtuousness always trumps science.

All of this mindless posturing has only two purposes, as far as I can tell. The first is to make leftists feel good about themselves, which is important because most of them are white and therefore beneficiaries of “white privilege.” (They are on a monumental guilt-trip, in other words.) The second, as I have said, is to signal their membership in a special class that is bound by attitudes rather than wealth, income, tastes, and other signals that have deep roots in social evolution.

I now therefore conclude that the harsh, outspoken, virulent, violence-prone left is a new class unto itself, though some of its members may retain the outward appearance of belonging to other classes.

Related posts:
Academic Bias
Intellectuals and Capitalism
The Cocoon Age
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
Tolerance on the Left
The Eclipse of “Old America”
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Leftist Condescension
Beating Religion with the Wrong End of the Stick
Psychological Insights into Leftism
Nature, Nurture, and Leniency
Red-Diaper Babies and Enemies Within
A Word of Warning to Leftists (and Everyone Else)

Class in America

I often refer to class — or socioeconomic status (SES) — as do many other writers. SES is said to be a function of “a person’s work experience and of an individual’s or family’s economic and social position in relation to others, based on income, education, and occupation.” Wealth counts, too. As do race and ethnicity, to be candid.

Attempts to quantify SES are pseudo-scientific, so I won’t play that game. It’s obvious that class distinctions are subtle and idiosyncratic. Class is in the eye of the beholder.

I am a beholder, and what I behold is parsed in the table below. There I have sorted Americans into broad, fuzzy, and overlapping classes, in roughly descending order of prestige. The indented entries pertain to a certain “type” of person who doesn’t fit neatly into the usual taxonomy of class. What is the type? You’ll see as you read the table.

(To enlarge the image, open it in a new tab and use your browser’s “magnifying glass.”)

What about retirees? If their financial status or behavioral traits don’t change much after retirement, they generally stay in the class to which they belonged at retirement.

Where are the “rednecks”? Most of them are probably in the bottom six rungs, but so are huge numbers of other Americans who (mostly) escape opprobrium for being there. Many “rednecks” have risen to higher classes, especially but not exclusively the indented ones.

What about the growing mob of whiny, left-wing fascists? For now, they’re sprinkled among the various classes depicted in the table, even classes at or near the top. In their vision of a “classless” society, they would all be at the top, of course, flogging conservatives, plutocrats, malefactors of great wealth, and straight, white (non-Muslim, non-Hispanic), heterosexual males — other than those of their whiny, fascist ilk.

Here’s what I make of all this. Class in America isn’t a simple thing. It has something to do with one’s inheritance, which is not only (or mainly) wealth. It has mainly to do with one’s intelligence (which is largely of genetic origin) and behavior (which also has a genetic component). Class also has a lot to do with what one does with one’s genetic inheritance, however rich or sparse it is. Class still depends a lot on acquired skills, drive, and actual achievements — even dubious ones like opining, acting, and playing games — and the income and wealth generated by them. Some would call that quintessentially American.

My family background, on both sides, is blue-collar. I wound up on the senior manager-highly educated rung. That’s quintessentially American.

There’s a lot here to quibble with. Have at it.

If comments are closed by the time you read this post, you may send them to me by e-mail at the Germanic nickname for Friedrich followed by the last name of the great Austrian economist and Nobel laureate whose first name is Friedrich followed by the 3rd and 4th digits of his birth year followed by the usual typographic symbol followed by the domain and extension for Google’s e-mail service — all run together.

Related posts:
Are You in the Bubble?
Race and Reason: The Achievement Gap — Causes and Implications
Not-So-Random Thoughts (X) (last item)
Privilege, Power, and Hypocrisy
Thinkers vs. Doers
Bubbling Along
Intelligence, Assortative Mating, and Social Engineering
“They Deserve to Die”?
More about Intelligence

Language Peeves

Maverick Philosopher has many language peeves, as do I. Our peeves overlap considerably. Here are some of mine:


Today is the ~~ten-year~~ tenth anniversary of our wedding.

Anniversary means the annually recurring date of a past event. To write or say “x-year anniversary” is redundant as well as graceless. A person who says or writes “x-month” anniversary is probably a person whose every sentence includes “like.”


The data ~~is~~ are conclusive.

“Data” is a plural noun. A person who writes or says “data is” is at best an ignoramus and at worst a Philistine.


Would ~~you guys~~ you like to order now?

Regarding this egregious usage, I admit to occasionally slipping from my high horse — an event that is followed immediately by self-censure. (Bonus observation: “Now” is often superfluous, as in the present example.)


~~Hopefully,~~ I expect the shipment ~~will~~ to arrive today.

~~Hopefully,~~ I hope that the shipment will arrive today.

I say a lot about “hopefully” and other floating modifiers (e.g., sadly, regrettably, thankfully) under the heading “Hopefully and Its Brethren” at “On Writing.”


My head ~~literally~~ figuratively exploded when I read Pope Francis’s recent statement about economic policy.

See “Literally” at “On Writing.”

No problem

Me: Thank you.

Waiter: ~~No problem.~~ You’re welcome.

“No problem” suggests that the person saying it might have been inconvenienced by doing what was expected of him, such as placing a diner’s food on the table.

Reach out/reached out

We ~~reached out to him for comment~~ asked him to comment / called him and asked him to comment / sent him a message asking for a comment.

“Reach out” sometimes properly refers to the act of reaching for a physical object, though “out” is usually redundant.


I ~~shared a story~~ told ~~with~~ her a story.

To share is to allow someone to partake of or temporarily use something of one’s own, not to impart information to someone.

That (for who)

Josh Hamilton was the last player ~~that~~ who hit four home runs in a game.

Better: Josh Hamilton was the last player to hit four home runs in a game.

Their (for “his” or “hers”), etc.

An employee forfeits ~~their~~ his/her accrued vacation time if ~~they are~~ he is/she is fired for cause.

Better: An employee who is fired for cause forfeits accrued vacation time.

Where the context calls for a singular pronoun, “he” and its variants are time-honored, gender-neutral choices. There is no need to add “or her” (or a variant), unless the context demands it. “Her” (or a variant) will be the obvious and correct choice in some cases.

Malapropisms and solecisms peeve me as well. Here are some corrected examples:

I will try ~~and~~ to find it.

He took it for ~~granite~~ granted.

She comes here once ~~and~~ in a while.

At “On Writing” you will also find my reasoned commentary about filler words (e.g., like), punctuation, the corruptions wrought by political correctness and the euphemisms which serve it, and the splitting of infinitives.

Not-So-Random Thoughts (XX)

An occasional survey of web material that’s related to subjects about which I’ve posted. Links to the other posts in this series may be found at “Favorite Posts,” just below the list of topics.

In “The Capitalist Paradox Meets the Interest-Group Paradox,” I quote from Frédéric Bastiat’s “What Is Seen and What Is Not Seen”:

[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.

This might also be called the law of unintended consequences. It explains why so much “liberal” legislation is passed: the benefits are focused on a particular group and are obvious (if overestimated); the costs are borne by taxpayers in general, many of whom fail to see that the sum of “liberal” legislation is a huge tax bill.

Ross Douthat understands:

[A] new paper, just released through the National Bureau of Economic Research, that tries to look at the Affordable Care Act in full. Its authors find, as you would expect, a substantial increase in insurance coverage across the country. What they don’t find is a clear relationship between that expansion and, again, public health. The paper shows no change in unhealthy behaviors (in terms of obesity, drinking and smoking) under Obamacare, and no statistically significant improvement in self-reported health since the law went into effect….

[T]he health and mortality data [are] still important information for policy makers, because [they] indicate[] that subsidies for health insurance are not a uniquely death-defying and therefore sacrosanct form of social spending. Instead, they’re more like other forms of redistribution, with costs and benefits that have to be weighed against one another, and against other ways to design a safety net. Subsidies for employer-provided coverage crowd out wages, Medicaid coverage creates benefit cliffs and work disincentives…. [“Is Obamacare a Lifesaver?” The New York Times, March 29, 2017]

So does Roy Spencer:

In a theoretical sense, we can always work to make the environment “cleaner”, that is, reduce human pollution. So, any attempts to reduce the EPA’s efforts will be viewed by some as just cozying up to big, polluting corporate interests. As I heard one EPA official state at a conference years ago, “We can’t stop making the environment ever cleaner”.

The question no one is asking, though, is “But at what cost?”

It was relatively inexpensive to design and install scrubbers on smokestacks at coal-fired power plants to greatly reduce sulfur emissions. The cost was easily absorbed, and electricity rates were not increased that much.

The same is not true of carbon dioxide emissions. Efforts to remove CO2 from combustion byproducts have been extremely difficult, expensive, and with little hope of large-scale success.

There is a saying: don’t let perfect be the enemy of good enough.

In the case of reducing CO2 emissions to fight global warming, I could discuss the science which says it’s not the huge problem it’s portrayed to be — how warming is only progressing at half the rate forecast by those computerized climate models which are guiding our energy policy; how there have been no obvious long-term changes in severe weather; and how nature actually enjoys the extra CO2, with satellites now showing a “global greening” phenomenon with its contribution to increases in agricultural yields.

But it’s the economics which should kill the Clean Power Plan and the alleged Social “Cost” of Carbon. Not the science.

There is no reasonable pathway by which we can meet more than about 20% of global energy demand with renewable energy…the rest must come mostly from fossil fuels. Yes, renewable energy sources are increasing each year, usually because rate payers or taxpayers are forced to subsidize them by the government or by public service commissions. But global energy demand is rising much faster than renewable energy sources can supply. So, for decades to come, we are stuck with fossil fuels as our main energy source.

The fact is, the more we impose high-priced energy on the masses, the more it will hurt the poor. And poverty is arguably the biggest threat to human health and welfare on the planet. [“Trump’s Rollback of EPA Overreach: What No One Is Talking About,” Roy Spencer, Ph.D. [blog], March 29, 2017]

*     *     *

I mentioned the Benedict Option in “Independence Day 2016: The Way Ahead,” quoting Bruce Frohnen in tacit agreement:

[Rod] Dreher has been writing a good deal, of late, about what he calls the Benedict Option, by which he means a tactical withdrawal by people of faith from the mainstream culture into religious communities where they will seek to nurture and strengthen the faithful for reemergence and reengagement at a later date….

The problem with this view is that it underestimates the hostility of the new, non-Christian society [e.g., this and this]….

Leaders of this [new, non-Christian] society will not leave Christians alone if we simply surrender the public square to them. And they will deny they are persecuting anyone for simply applying the law to revoke tax exemptions, force the hiring of nonbelievers, and even jail those who fail to abide by laws they consider eminently reasonable, fair, and just.

Exactly. John Horvat II makes the same point:

For [Dreher], the only response that still remains is to form intentional communities amid the neo-barbarians to “provide an unintentional political witness to secular culture,” which will overwhelm the barbarian by the “sheer humanity of Christian compassion, and the image of human dignity it honors.” He believes that setting up parallel structures inside society will serve to protect and preserve Christian communities under the new neo-barbarian dispensation. We are told we should work with the political establishment to “secure and expand the space within which we can be ourselves and our own institutions” inside an umbrella of religious liberty.

However, barbarians don’t like parallel structures; they don’t like structures at all. They don’t co-exist well with anyone. They don’t keep their agreements or respect religious liberty. They are not impressed by the holy lives of the monks whose monastery they are plundering. You can trust barbarians to always be barbarians. [“Is the Benedict Option the Answer to Neo-Barbarianism?” Crisis Magazine, March 29, 2017]

As I say in “The Authoritarianism of Modern Liberalism, and the Conservative Antidote,”

Modern liberalism attracts persons who wish to exert control over others. The stated reasons for exerting control amount to “because I know better” or “because it’s good for you (the person being controlled)” or “because ‘social justice’ demands it.”

Leftists will not countenance a political arrangement that allows anyone to escape the state’s grasp — unless, of course, the state is controlled by the “wrong” party, in which case leftists (or many of them) would like to exercise their own version of the Benedict Option. See “Polarization and De Facto Partition.”

*     *     *

Theodore Dalrymple understands the difference between terrorism and accidents:

Statistically speaking, I am much more at risk of being killed when I get into my car than when I walk in the streets of the capital cities that I visit. Yet this fact, no matter how often I repeat it, does not reassure me much; the truth is that one terrorist attack affects a society more deeply than a thousand road accidents….

Statistics tell me that I am still safe from it, as are all my fellow citizens, individually considered. But it is precisely the object of terrorism to create fear, dismay, and reaction out of all proportion to its volume and frequency, to change everyone’s way of thinking and behavior. Little by little, it is succeeding. [“How Serious Is the Terrorist Threat?” City Journal, March 26, 2017]

Which reminds me of several things I’ve written, beginning with this entry from “Not-So-Random Thoughts (VI)”:

Cato’s loony libertarians (on matters of defense) once again trot out Herr Doktor Professor John Mueller. He writes:

We have calculated that, for the 12-year period from 1999 through 2010 (which includes 9/11, of course), there was one chance in 22 million that an airplane flight would be hijacked or otherwise attacked by terrorists. (“Serial Innumeracy on Homeland Security,” Cato@Liberty, July 24, 2012)

Mueller’s “calculation” consists of a recitation of known terrorist attacks pre-Benghazi and speculation about the status of Al-Qaeda. Note to Mueller: It is the unknown unknowns that kill you. I refer Herr Doktor Professor to “Riots, Culture, and the Final Showdown” and “Mission Not Accomplished.”

See also my posts “Getting It All Wrong about the Risk of Terrorism” and “A Skewed Perspective on Terrorism.”

*     *     *

This is from my post, “A Reflection on the Greatest Generation”:

The Greatest tried to compensate for their own privations by giving their children what they, the parents, had never had in the way of material possessions and “fun”. And that is where the Greatest Generation failed its children — especially the Baby Boomers — in large degree. A large proportion of Boomers grew up believing that they should have whatever they want, when they want it, with no strings attached. Thus many of them divorced, drank, and used drugs almost wantonly….

The Greatest Generation — having grown up believing that FDR was a secular messiah, and having learned comradeship in World War II — also bequeathed us governmental self-indulgence in the form of the welfare-regulatory state. Meddling in others’ affairs seems to be a predilection of the Greatest Generation, a predilection that the Millennials may be shrugging off.

We owe the Greatest Generation a great debt for its service during World War II. We also owe the Greatest Generation a reprimand for the way it raised its children and kowtowed to government. Respect forbids me from delivering the reprimand, but I record it here, for the benefit of anyone who has unduly romanticized the Greatest Generation.

There’s more in “The Spoiled Children of Capitalism”:

This is from Tim [of Angle’s] “The Spoiled Children of Capitalism”:

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it….

I have long shared Tim’s assessment of the Boomer generation. Among the corroborating data are my sister and my wife’s sister and brother — Boomers all….

Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs.”…

Now comes this:

According to writer and venture capitalist Bruce Gibney, baby boomers are a “generation of sociopaths.”

In his new book, he argues that their “reckless self-indulgence” is in fact what set the example for millennials.

Gibney describes boomers as “acting without empathy, prudence, or respect for facts – acting, in other words, as sociopaths.”

And he’s not the first person to suggest this.

Back in 1976, journalist Tom Wolfe dubbed the young adults then coming of age the “Me Generation” in the New York Times, which is a term now widely used to describe millennials.

But the baby boomers grew up in a very different climate to today’s young adults.

When the generation born after World War Two were starting to make their way in the world, it was a time of economic prosperity.

“For the first half of the boomers particularly, they came of age in a time of fairly effortless prosperity, and they were conditioned to think that everything gets better each year without any real effort,” Gibney explained to The Huffington Post.

“So they really just assume that things are going to work out, no matter what. That’s unhelpful conditioning.

“You have 25 years where everything just seems to be getting better, so you tend not to try as hard, and you have much greater expectations about what society can do for you, and what it owes you.”…

Gibney puts forward the argument that boomers – specifically white, middle-class ones – tend to have genuine sociopathic traits.

He backs up his argument with mental health data which appears to show that this generation have more anti-social characteristics than others – lack of empathy, disregard for others, egotism and impulsivity, for example. [Rachel Hosie, “Baby Boomers Are a Generation of Sociopaths,” Independent, March 23, 2017]

That’s what I said.

The Internet-Media-Academic Complex vs. Real Life

I spend an inordinate share of my time at my PC. (Unlike smart-phone and tablet users, I prefer to be seated in a comfortable desk chair, viewing a full-size screen, and typing on a real keyboard.) When I’m not composing a blog post or playing spider solitaire, I’m reading items from several dozen RSS feeds.

My view of the world is shaped, for the worse, by what I read. If it’s not about leftist cant and scientific fraud, it’s about political warfare on many levels. But my view of the world is more sanguine when I reflect on real life as I experience it when I’m away from my PC.

When the subject isn’t politics, and the politics of the other person are hidden from view, I experience a world of politeness, competence (even unto excellence), and intelligence. Most of the people in that world are owners of small businesses, their employees, and the employees of larger businesses.

In almost every case, their attitude of friendliness is sincere — and I’ve been around the block enough times to spot insincerity. There’s an innate goodness in most people, regardless of their political views, that comes out when you’re interacting with them as a “real” human being.

The exception to the rule, in my experience, is the highly educated analyst or academic — regardless of political outlook — who thinks he is smarter than everyone else. And it shows in his abrupt, superior attitude toward others, especially if they are strangers whom he is unlikely to encounter again.

The moral of the story: If government were far less powerful, and if it were kept that way, the political noise level would be much reduced and the world would be a far more pleasant place.

Thoughts for the Day

Excerpts of recent correspondence.

Robots, and their functional equivalents in specialized AI systems, can either replace people or make people more productive. I suspect that the latter has been true in the realm of medicine — so far, at least. But I have seen reportage of robotic units that are beginning to perform routine, low-level work in hospitals. So, as usual, the first people to be replaced will be those with rudimentary skills, not highly specialized training. Will it go on from there? Maybe, but the crystal ball is as cloudy as an old-time London fog.

In any event, I don’t believe that automation is inherently a job-killer. The real job-killer consists of government programs that subsidize non-work — early retirement under Social Security, food stamps and other forms of welfare, etc. Automation has been in progress for eons, and with a vengeance since the second industrial revolution. But, on balance, it hasn’t killed jobs. It just pushes people toward new and different jobs that fit the skills they have to offer. I expect nothing different in the future, barring government programs aimed at subsidizing the “victims” of technological displacement.

*      *      *

It’s civil war by other means (so far): David Wasserman, “Purple America Has All but Disappeared” (The New York Times, March 8, 2017).

*      *      *

I know that most of what I write (even the non-political stuff) has a combative edge, and that I’m therefore unlikely to persuade people who disagree with me. I do it my way for two reasons. First, I’m too old to change my ways, and I’m not going to try. Second, in a world that’s seemingly dominated by left-wing ideas, it’s just plain fun to attack them. If what I write happens to help someone else fight the war on leftism — or if it happens to make a young person re-think a mindless commitment to leftism — that’s a plus.

*     *     *

I am pessimistic about the likelihood of cultural renewal in America. The populace is too deeply saturated with left-wing propaganda, which is injected from kindergarten through graduate school, with constant reinforcement via the media and popular culture. There are broad swaths of people — especially in low-income brackets — whose lives revolve around mindless escape from the mundane via drugs, alcohol, promiscuous sex, etc. Broad swaths of the educated classes have abandoned erudition and contemplation and taken up gadgets and entertainment.

The only hope for conservatives is to build their own “bubbles,” like those of effete liberals, and live within them. Even that will prove difficult as long as government (especially the Supreme Court) persists in storming the ramparts in the name of “equality” and “self-creation.”

*     *     *

I correlated Austin’s average temperatures in February and August. Here are the correlation coefficients for the following periods:

1854-2016 = 0.001
1875-2016 = -0.007
1900-2016 = 0.178
1925-2016 = 0.161
1950-2016 = 0.191
1975-2016 = 0.126

Of these correlations, only the one for 1900-2016 is statistically significant at the 0.05 level (less than a 5-percent chance of a random relationship). The correlations for 1925-2016 and 1950-2016 are fairly robust, and almost significant at the 0.05 level. The relationship for 1975-2016 is statistically insignificant. I conclude that there’s a positive relationship between February and August temperatures, but a weak one. A warm winter doesn’t necessarily presage an extra-hot summer in Austin.
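For readers who want to check this sort of claim themselves, here is a minimal, stdlib-only Python sketch of the standard calculation: the Pearson correlation coefficient and the t statistic used to test it against zero. (The Austin temperature series are not reproduced here, so the closing comment uses only the reported 1900-2016 figure for illustration.)

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_stat(r, n):
    """t statistic for testing whether r differs from zero.

    With more than 100 observations, the 0.05 critical value is
    about 1.98 two-tailed, or about 1.66 one-tailed.
    """
    return r * math.sqrt((n - 2) / (1.0 - r * r))

# Illustration with the reported 1900-2016 figure (117 paired years):
# t_stat(0.178, 117) is roughly 1.94, right at the edge of significance.
```

Whether a given r clears the bar depends on the number of years in the period and on whether the test is one- or two-tailed, which is why correlations of similar size can be significant for some periods and not for others.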

The Age of Noise

Aldous Huxley says this in The Perennial Philosophy:

The twentieth century is, among other things, the Age of Noise. Physical noise, mental noise and noise of desire — we hold history’s record for all of them. And no wonder; for all the resources of our almost miraculous technology have been thrown into the current assault against silence. That most popular and influential of all recent inventions, the radio, is nothing but a conduit through which pre-fabricated din can flow into our homes. And this din goes far deeper, of course, than the ear-drums. It penetrates the mind, filling it with a babel of distractions – news items, mutually irrelevant bits of information, blasts of corybantic or sentimental music, continually repeated doses of drama that bring no catharsis, but merely create a craving for daily or even hourly emotional enemas. And where, as in most countries, the broadcasting stations support themselves by selling time to advertisers, the noise is carried from the ears, through the realms of phantasy, knowledge and feeling to the ego’s central core of wish and desire.

Mr. Huxley would hate the twenty-first century. The noise is beyond deafening. And it’s everywhere: beeping cell phones; loud one-sided conversations into cell phones; talking automobiles; ear-shattering “music” blasting from nearby automobiles, stadium loudspeakers, computers, TVs, and (yes) radios; screeching Motown (or whatever it’s now called) blasting away in grocery stores (at the request of employees, I suspect); movie soundtracks worthy of the Siege of Stalingrad; and on and on.

I’m glad that my hearing aids have a “mute” function. When engaged, it takes the sharp edges off the knives of sound that assail me whenever I venture into the outside world.

Sound has become a substitute for the absorption and processing of information, that is, for thought. The decades-long crescendo in the West’s sound track lends support to Richard Lynn’s hypothesis that intelligence is on the decline.

*     *     *

Related posts:
In Praise of Solitude
There’s Always Solitude
Intelligence, Personality, Politics, and Happiness

Retrospective Virtue-Signalling

I’ve become fond of “virtue-signalling” — the phrase, that is, not the hypocritical act itself, which is

the conspicuous expression of moral values by an individual done primarily to enhance [his] standing within a social group.

A minor act of virtue-signalling is the grammatically incorrect use of “their” (a plural pronoun) to avoid the traditional “his” (sexist!) or the also correct but awkward “his or her.” That’s why you see “[his]” in the quotation, which replaces the abominable “their.”

Anyway, virtue-signalling is almost exclusively a left-wing phenomenon. And it often takes more acidic forms than the prissy use of “their” to avoid an imaginary lapse into sexism. For example:

An aging, has-been singer shrieks her fantasy of blowing up the White House. Later, she walks it back, saying she was quoted “out of context,” though her words are perfectly clear….

A multimillionaire actress/talk show host not only pretends to believe that Republicans want to return her race to picking cotton, but now wonders aloud just how different we Trump voters are from the Taliban….

…These awards shows now look more like Comintern speeches with competition to see which androgynous little wussie-pants can sound more macho encouraging “fighting” in the streets, “punching people in the face,” or “Resistance” to frenzied applause from the gathered trained seals. Remember, guys, in a real fight that’s not choreographed there are no stuntmen.

But perhaps the nastiest attacks of all have been on Trump’s family. There were the horrible Tweets about his blameless young son. There were the charming “rape Melania” banners, so odious I thought for sure they had to be photoshopped. They were not. There is an alleged comedienne I had never heard of named Chelsea Handler who announced apropos of nothing that she won’t have Melania on her talk show because “she can’t speak English well enough to be understood.”…

So this is the genius who won’t interview Melania, who speaks five languages. I hope our elegant First Lady can recover from the snub. None of these Mean Girls of any gender would dream of criticizing a Spanish speaker trying to speak English, or correct a black person who said “axe” for “ask” (something I believe they do just to be annoying…). But a beautiful immigrant woman whose English is not perfect deserves mocking. Because liberals are so nice. Virtuous, even. Not Taliban-y at all.

My strongest scorn is reserved for the hordes that have decided to change the names of streets and buildings, to tear down statues, and to remove paintings from public view because the persons thus named or depicted are heroes of the past who committed acts that leftists must deplore — or risk being targeted by other left-wing virtue-signalers.

Retrospective virtue-signalling has been going on for several years now. I’m not sure, but I think it blossomed into an epidemic in the aftermath of the despicable shooting of blacks at a prayer meeting in a Charleston church. A belated example is the recent 3-2 vote by the city council of Charlottesville, Virginia, to remove a statue of Robert E. Lee from a city park.

What can such actions accomplish other than raising the level of smugness of their perpetrators? If they have any effect on racism in the United States, it’s probably to raise its level, too. If I were a racist, I’d be outraged by the cumulative effect of such actions, of which there have been dozens or hundreds in recent years. I’d certainly be a more committed racist than I was before, just as a matter of psychological self-defense.

But such things don’t matter to virtue-signalers. It’s all a matter of reinforcing their feelings of superiority and flaunting their membership in the left-wing goody-two-shoes club — the club that proudly boasts of wanting to “blow up the White House” and “rape Melania.”

And the post-election riots prove that the club has some members who are more than prissy, armchair radicals. They’re the same kind of people who used to wear brown shirts and beat up Jews.

A Nation of Immigrants, a Nation of Enemies

I’m sick and tired of hearing that the United States is a nation of immigrants. So what if the United States is a nation of immigrants? The real issue is whether immigrants wish to become Americans in spirit, not in name only — loyal to the libertarian principles of the Constitution or cynical abusers of it.

I understand and sympathize with the urge to live among people with whom one shares a religion, a language, and customs. Tribalism is a deeply ingrained trait. It is not necessarily a precursor to aggression, contrary to the not-so-subtle message (aimed at white Americans) of the UN propaganda film that I was subjected to in high school. And the kind of tribalism found in many American locales, from the barrios of Los Angeles to the disappearing German communities of Texas to the Orthodox Jewish enclaves of New York City, is harmless compared with Reconquista and Sharia.

Proponents of such creeds don’t want to become Americans whose allegiance is to the liberty promised by the Constitution. They are cynical abusers of that liberty, whose insidious rhetoric is evidence against free-speech absolutism.

But they are far from the only abusers of that liberty. It is unnecessary to import enemies when there is an ample supply of them among native-born Americans. Well, they are Americans in name because they were born in the United States and (in most cases) haven’t formally renounced their allegiance to the Constitution. But they are its enemies, no matter how cleverly they twist its meaning to support their anti-libertarian creed.

I am speaking of the left, of course. Lest we forget, the real threat to liberty in America is home-grown. The left’s recent hysterical hypocrisy leads me to renounce my naive vow to be a kinder, gentler critic of the left’s subversive words and deeds.

*     *     *

Related posts:
IQ, Political Correctness, and America’s Present Condition
Greed, Conscience, and Big Government
Privilege, Power, and Hypocrisy
Thinkers vs. Doers
Society, Polarization, and Dissent
Another Look at Political Labels
Individualism, Society, and Liberty
Social Justice vs. Liberty
My Platform
Polarization and De-facto Partition
How America Has Changed
The Left and “the People”
Why Conservatives Shouldn’t Compromise
Liberal Nostrums
Politics, Personality, and Hope for a New Era

A Non-Tribute to Fidel

The death of Fidel Castro, which came 90 years too late for the suffering people of Cuba, rates a musical celebration. Here are links to some snappy tunes of the 1920s and 1930s that have “Cuba” or “Cuban” in the title (RealPlayer required):

H.L. Mencken’s Final Legacy

I used to think of H.L. Mencken as a supremely witty person. My intellectual infatuation began with his Chrestomathy, which I read with relish many years ago.

In recent decades my infatuation with Mencken’s acerbic wit dimmed and died, for the reason given by Fred Siegel in The Revolt Against the Masses: How Liberalism Has Undermined the Middle Class. There, Siegel rightly observes that Mencken “learned from [George Bernard] Shaw how to be narrow-minded in a witty, superior way.”

I was reminded of that passage by Peter Berger’s recent account of Mencken’s role in the marginalization of Evangelicals:

The Evangelical sense of marginalization can be conveniently dated—1925. Until then Evangelical Protestantism was at the core of American culture. Think of the role it played in the anti-slavery and temperance movements. Between 1910 and 1915 a series of four books was published under the title The Fundamentals: A Testimony to the Truth. The term “fundamentalism” derives from this title—today a pejorative term applied to all kinds of religious extremes. The aforementioned books were hardly extreme. They came out of the heart of mainline Protestantism, which today would be called Evangelical. Many of the authors were orthodox Presbyterians, then-centered at Princeton Theological Seminary, which in the 1920s split into an orthodox Calvinist and a “modernist” faculty. What happened in 1925 was a watershed in the history of American Evangelicalism—the so-called “monkey trial.”

Under the influence of a conservative Protestant/Evangelical lobby the state of Tennessee passed a law prohibiting the teaching of evolution in public schools. John Scopes, a school teacher in Dayton, Tennessee, was charged with having violated the law. The trial turned into a celebrity event. William Jennings Bryan, former presidential candidate and prominent Evangelical leader, volunteered to act for the prosecution, and the famous trial lawyer Clarence Darrow defended Scopes. The trial had virtually nothing to do with the offence in question (which was not in doubt). Bryan used it to defend his literal understanding of the Bible, Darrow to make Bryan ridiculous. In this he succeeded, reducing Bryan to petulant babbling. Both men were propagandists for two forms of “fundamentalism,” a primitive view of the Bible against a primitive view of science. Unfortunately for Bryan’s reputation, the brilliant satirist H.L. Mencken covered the trial for the Baltimore Sun. His account was widely reprinted and read. He was contemptuous not only of Bryan but of Christianity and of the local people (he called them “yokels”). The event had an enormous effect on American Evangelicals. It demoralized them, making them feel marginalized in a hostile environment. The result was an Evangelical subculture, turned inward and defensive in its relation to the outside society. Mark Noll sums this up in the title of one of his books, The Closing of the Evangelical Mind. [“Religion, Class, and the Evangelical Vote,” The American Interest, November 23, 2016]

I would have to read and consider Noll’s book before I sign on to Berger’s claim that it was Mencken’s account of the “monkey trial” which demoralized and marginalized Evangelicals. But it didn’t help, and it ushered in 90 years of Mencken-like portrayals of Evangelicals and, more generally, of the mid-to-low-income whites who populate much of what’s referred to sneeringly as flyover country. As Berger observes,

During the 2008 campaign Obama slipped out this description of people in economically deprived small towns: “They get bitter, they cling to guns or religion or antipathy to people who aren’t like them.” And during the just-concluded presidential campaign Clinton described Trump voters as a “basket of deplorables.”

Is it any surprise that Trump — who appealed strongly to the kinds of people disparaged by Mencken, Obama, and Clinton — carried these States?

  • Florida — won by Obama in 2008 and 2012
  • Pennsylvania — the first time for a GOP presidential candidate since 1988
  • Ohio — won by Obama in 2008 and 2012
  • Michigan — the first GOP presidential win since 1988
  • Wisconsin — last won by a GOP candidate in 1984
  • Iowa — won by the Democrat presidential candidate in every election (but one) since 1984.

And how did Trump do it? Mainly by running strongly in the areas outside big cities. It’s true that Clinton outpolled Trump nationally, but so what? It’s the electoral vote that matters, and that’s what the candidates strive to win. Trump won it on the strength of his appeal to the descendants of Mencken’s yokels: Obama’s gun-clingers and Clinton’s deplorables.

A digression about election statistics is in order:

Based on total popular votes cast, 2016 surpasses all previous elections by more than 5 million votes (they’re still being counted in some places). Trump now holds the record for the most votes cast for a GOP presidential candidate. Clinton, however, probably won’t match Obama’s 2012 total, and certainly won’t match his 2008 total (the size of which testifies to the gullibility of a large fraction of the electorate).

Did the big turnout for Gary Johnson (pseudo-libertarian) and the somewhat-better-than 2012 turnout for Jill Stein (socialist crank) take votes that “should have been” Clinton’s? Obviously not. Those who cast their ballots for Johnson and Stein were, by definition, voting against Clinton (and Trump).

But what if Johnson and Stein hadn’t been on the ballot and some of the votes that went to them had gone instead to Clinton and Trump? My analyses of several polls lead me to the conclusion that the presence of Johnson and Stein hurt Trump more than Clinton. Johnson voters would have defected to Trump more often than to Clinton. Stein voters would have defected to Clinton more often than to Trump. On balance, because there were three times as many Johnson voters as Stein voters, Trump (not Clinton) would have done better if the election had been a two-person race. Moreover, Trump improved slightly on recent GOP showings among blacks and Hispanics.

What about Clinton’s popular-vote “victory”? As of today (11/24/16) she’s running ahead of Trump by 2.1 million votes nationally, and by 3.8 million votes in California and 1.5 million votes in New York. That leaves Trump ahead of Clinton by 3.2 million votes in the other 48 States and D.C. I could go on about D.C. and the Northeast in general, but you get the idea. Clinton’s “appeal” (for want of a better word) was narrow; Trump’s was much broader (e.g., winning a higher percentage than Romney did of the two-party vote in 39 States). Arguably, it was broader than that of every Republican presidential candidate since Ronald Reagan won a second term in 1984.

The election of 2016 probably rang down the final curtain on the New Deal alliance of white Southerners (long-since defected), union members (a dying breed), and other denizens of the mid-to-low-income brackets. The alliance was built on the illusory success of FDR’s New Deal, which prolonged the Great Depression by several years. But FDR, his henchmen, his sycophants in the media and academe, and those tens of millions who were gulled by him didn’t know that. And so the Democrat Party became the majority party for most of the final eight decades of the 20th century, and has enjoyed periods of resurgence in the 21st century.

The modern Democrat Party — the one that arose in the 1950s with Adlai Stevenson at its helm — long held the allegiance of the yokels, even as it was betraying them by buying the votes of blacks and Hispanics and trolling for the votes of marginal groups (queers, Muslims, and “liberal arts” majors) in order to wear the mantle of moral superiority. The yokels were taken for granted. Worse than that, they were openly disdained in Menckenian language.

Trump wisely avoided the Democrat-lite stance of recent GOP candidates — the two Bushes, McCain, and Romney (Dole was simply a ballot-filler) — and went after the modern descendants of the yokels. And in response to that unaccustomed attention, huge numbers of mid-to-low-income voters — joined by those traditional Republicans who wisely refused to abandon Trump — produced a stunning electoral upset that encompassed most of the country.

As for Mencken, where he is remembered at all it is mainly as a curmudgeonly quipster with views that wouldn’t pass muster among today’s smart set, though his flirtation with anti-Semitism might commend him to the alt-left.

Here, then, is H.L. Mencken’s lasting legacy: There has arisen a huge bloc of voters whose members are through with being ridiculed and ignored by the pseudo-sophisticates who lead and populate the Democrat Party. It is now up to Trump and the Republican Party to retain the allegiance of that bloc. And if they do not, a third party will arise, and — for the first time in American history — it will be a third party with long-lasting clout. Think of it as a more muscular incarnation of the Tea Party, which was its vanguard.

*     *     *

Related reading:
Mike Lee, “Conservatives Should Embrace Principled Populism,” National Review, November 24, 2016
Yuval Levin, “The New Republican Coalition,” National Review, November 17, 2016
Henry Olsen, “For Trump Voters There Is No Left or Right,” The Washington Post, November 18, 2016
Fred Reed, “Uniquely Talented: Only the Democrats Could Have Lost to Trump,” Fred on Everything, November 24, 2016 (Published after this post, and eerily similar, in keeping with the adage that great minds think alike.)

*     *     *

Related posts:
1963: The Year Zero
How Democracy Works
“Cheerful” Thoughts
How Government Subverts Social Norms
Turning Points
The Twilight’s Last Gleaming?
Winners and Losers
Pontius Pilate: Modern Politician
Should You Vote for a Third-Party Candidate?
My Platform
How America Has Changed
Civil War?