Suicidal Despair and the “War on Whites”

THREE GRAPHS AND ONE LINK UPDATED ON 06/08/18. SUBSTANCE OF TEXT UNCHANGED.

This entry is prompted by a recent spate of posts and articles about the rising mortality rate among non-Hispanic whites without a college degree (hereinafter working-class whites, for convenience). Thomas Lifson characterizes the trend as “a spiritual crisis”, after saying this:

White males, in large numbers, are simply losing their will to live, and as a result, they are dying so prematurely and in such large numbers that a startling demographic gap has emerged. [“Stunning Evidence that the Left Has Won its War on White Males“, American Thinker, March 26, 2017]

Later in the piece, Lifson gets to the “war” on white males:

For at least four decades, white males have been under continuous assault as bearers of “white privilege” and beneficiaries of sexism. Special preferences and privileges have been granted to other groups, but that is the least of it.  More importantly, the very basis of the psychological self-worth of white males have been under attack.  White males are frequently instructed by authority figures in education and the media that they are responsible for most of the evils of the modern world, that the achievements of Euro-American civilization are a net loss for humanity, stained by exploitation, racism, unfairness, and every other collective evil the progressive mind can manufacture.

Some white males are relatively unscathed by the psychological warfare, but others are more vulnerable. Those who have educational, financial, or employment achievements that have rewarded their efforts may be able to keep going as productive members of society, their self-esteem resting on tangible fruits of their work and social position. But other white males, especially those who work with their hands and have been seeing job opportunities contract or disappear, have been losing the basis for a robust sense of self-worth as their job opportunities disappear.

We now have statistical evidence that political correctness kills.

We have no such thing. The recent trend isn’t yet significant. But it is real, and government is the underlying cause.

To begin at the beginning, the source of the spate of articles about the rising mortality rate of working-class whites is Anne Case and Angus Deaton’s “Mortality and Morbidity in the 21st Century” (Brookings Institution, Brookings Papers on Economic Activity (conference edition), March 17, 2017). Three of the paper’s graphs set the scene. This one shows mortality trends in the United States:

The next figure indicates that the phenomenon isn’t unique to non-Hispanic whites in the age 50-54 bracket:

But the trend among American whites defies the trends in several other Western nations:

Whence the perverse trend? It seems due mainly to suicidal despair:

How do these recent trends stack up against the long view? I couldn’t find a long time series for drug, alcohol, and suicide mortality. But I did find a study by Feijun Luo et al. that traces suicide rates from just before the onset of the Great Depression to just before the onset of the Great Recession — “Impact of Business Cycles on US Suicide Rates, 1928–2007” (American Journal of Public Health, June 2011). Here are two key graphs from the report:

The graphs don’t reproduce well, so the following quotations will be of help:

The overall suicide rate fluctuated from 10.4 to 22.1 over the 1928–2007 period. It peaked in 1932, the last full year of the Great Depression, and bottomed in 2000. The overall suicide rate decreased from 18.0 in 1928 to 11.2 in 2007. However, most of the decline occurred before 1945; after that it fluctuated until the mid-1950s, and then it gradually moved up until the late 1970s. The overall suicide rate resumed its downward trend from the mid-1980s to 2000, followed by a trend reversal in the new millennium.

Figure 1a [top] shows that the overall suicide rate generally increased in recessions, especially in severe recessions that lasted longer than 1 year. The largest increase in the overall suicide rate occurred during the Great Depression (1929–1933), when it surged from 18.0 in 1928 to 22.1 (the all-time high) in 1932, the last full year of the Great Depression. [The Great Depression actually lasted until 1940: TEA.] This increase of 22.8% was the highest recorded for any 4-year interval during the study period. The overall suicide rate also rose during 3 other severe recessions: [the recession inside the Great Depression] (1937–1938), the oil crisis (1973–1975), and the double-dip recession (1980–1982). Not only did the overall suicide rate generally rise during recessions; it also mostly fell during expansions…. However, the overall suicide rate did not fall during the 1960s (i.e., 1961–1969), a notable phenomenon that will be explained by the different trends of age-specific suicide rates.

The age-specific suicide rates displayed more variations than did the overall suicide rate, and the trends of those age-specific suicide rates were largely different. As shown in Figure 1b [bottom], from 1928–2007, the suicide rates of the 2 elderly groups (65–74 years and 75 years and older) and the oldest middle-age group (55–64 years) experienced the most remarkable decline. The suicide rates of those groups declined in both pre- and postwar periods. The suicide rates of the other 2 middle-aged groups (45–54 years and 35–44 years) also declined from 1928–2007, which we attributed to the decrease during the war period more than offsetting the increase in the postwar period. In contrast with the declining suicide rates of the 2 elderly and 3 middle-age groups, the suicide rates of the 2 young groups (15–24 years and 25–34 years) increased or just marginally decreased from 1928–2007. The 2 young groups experienced a marked increase in suicide rates in the postwar period. The suicide rate of the youngest group (5–14 years) also increased from 1928–2007. However, because of its small magnitude, we do not include this increase in the subsequent discussion.

We noted that the suicide rate of the group aged 65–74 years, the highest of all age groups until 1936, declined the most from 1928 to 2007. That rate started at 41.2 in 1928 and dropped to 12.6 in 2007, peaking at 52.3 in 1932 and bottoming at 12.3 in 2004. By contrast, the suicide rate of the group aged 15–24 years increased from 6.7 in 1928 to 9.7 in 2007. That rate peaked at 13.7 in 1994 and bottomed at 3.8 in 1944, and it generally trended upward from the late 1950s to the mid-1990s. The suicide rate differential between the group aged 65–74 years and the group aged 15–24 years generally decreased until 1994, from 34.5 in 1928 to 1.6 in 1994.

All age groups experienced a substantial increase in their suicide rates during the Great Depression, and most groups (35–44 years, 45–54 years, 55–64 years, 65–74 years, and 75 years and older) set record-high suicide rates in 1932; but they reacted differently to many other recessions, including severe recessions such as the [1937-1938 recession] and the oil crisis. Their reactions were different during expansions as well, most notably in the 1960s, when the suicide rates of the 3 oldest groups (75 years and older, 65–74 years, and 55–64 years) declined moderately, and those of the 3 youngest groups (15–24 years, 25–34 years, and 35–44 years) rose noticeably….

[T]he overall suicide rate and the suicide rate of the group aged 45–54 years were associated with business cycles at the significance level of 1%; the suicide rates of the groups aged 25–34 years, 35–44 years, and 55–64 years were associated with business cycles at the significance level of 5%; and the suicide rates of the groups aged 15–24 years, 65–74 years, and 75 years and older were associated with business cycles at nonsignificant levels. To summarize, the overall suicide rate was significantly countercyclical; the suicide rates of the groups aged 25–34 years, 35–44 years, 45–54 years, and 55–64 years were significantly countercyclical; and the suicide rates of the groups aged 15–24 years, 65–74 years, and 75 years and older were not significantly countercyclical.

The following graph, obtained from the website of the American Foundation for Suicide Prevention, extends the age-related analysis to 2015:

And this graph, from the same source, shows that the rising suicide rate is concentrated among whites and American Indians:

Though this graph encompasses deaths from all causes, the opposing trends for blacks and whites suggest strongly that working-class whites in all age groups have become much more prone to suicidal despair in the past 20 years. Moreover, the despair has persisted through periods of economic decline and economic growth (slow as it has been).

Why? Case and Deaton opine:

[S]ome of the most convincing discussions of what has happened to working class whites emphasize a long-term process of decline, or of cumulative deprivation, rooted in the steady deterioration in job opportunities for people with low education…. This process … worsened over time, and caused, or at least was accompanied by, other changes in society that made life more difficult for less-educated people, not only in their employment opportunities, but in their marriages, and in the lives of and prospects for their children. Traditional structures of social and economic support slowly weakened; no longer was it possible for a man to follow his father and grandfather into a manufacturing job, or to join the union. Marriage was no longer the only way to form intimate partnerships, or to rear children. People moved away from the security of legacy religions or the churches of their parents and grandparents, towards churches that emphasized seeking an identity, or replaced membership with the search for connections…. These changes left people with less structure when they came to choose their careers, their religion, and the nature of their family lives. When such choices succeed, they are liberating; when they fail, the individual can only hold him or herself responsible….

As technical change and globalization reduced the quantity and quality of opportunity in the labor market for those with no more than a high school degree, a number of things happened that have been documented in an extensive literature. Real wages of those with only a high school degree declined, and the college premium increased….

Lower wages made men less marriageable, marriage rates declined, and there was a marked rise in cohabitation, then much less frowned upon than had been the case a generation before…. [B]eyond the cohort of 1940, men and women with less than a BA degree are less likely to have ever been married at any given age. Again, this is not occurring among those with a four-year degree. Unmarried cohabiting partnerships are less stable than marriages. Moreover, among those who do marry, those without a college degree are also much more likely to divorce than are those with a degree….

These accounts share much, though not all, with Murray’s … account [in Coming Apart] of decline among whites in his fictional “Fishtown.” Murray argues that traditional American virtues are being lost among working-class white Americans, especially the virtue of industriousness. The withdrawal of men from the labor force reflects this loss of industriousness; young men in particular prefer leisure—which is now more valuable because of video games … —though much of the withdrawal of young men is for education…. The loss of virtue is supported and financed by government payments, particularly disability payments….

In our account here, we emphasize the labor market, globalization and technical change as the fundamental forces, and put less focus on any loss of virtue, though we certainly accept that the latter could be a consequence of the former. Virtue is easier to maintain in a supportive environment. Yet there is surely general agreement on the roles played by changing beliefs and attitudes, particularly the acceptance of cohabitation, and of the rearing of children in unstable cohabiting unions.

These slow-acting and cumulative social forces seem to us to be plausible candidates to explain rising morbidity and mortality, particularly their role in suicide, and with the other deaths of despair, which share much with suicides. As we have emphasized elsewhere, … purely economic accounts of suicide have consistently failed to explain the phenomenon. If they work at all, they work through their effects on family, on spiritual fulfillment, and on how people perceive meaning and satisfaction in their lives in a way that goes beyond material success. At the same time, cumulative distress, and the failure of life to turn out as expected is consistent with people compensating through other risky behaviors such as abuse of alcohol, overeating, or drug use that predispose towards the outcomes we have been discussing….

What our data show is that the patterns of mortality and morbidity for white non-Hispanics without a college degree move together over lifetimes and birth cohorts, and that they move in tandem with other social dysfunctions, including the decline of marriage, social isolation, and detachment from the labor force…. Whether these factors (or factor) are “the cause” is more a matter of semantics than statistics. The factor could certainly represent some force that we have not identified, or we could try to make a case that declining real wages is more fundamental than other forces. Better, we can see globalization and automation as the underlying deep causes. Ultimately, we see our story as about the collapse of the white, high school educated, working class after its heyday in the early 1970s, and the pathologies that accompany that decline. [Op. cit., pp. 29-38]

The seemingly rigorous and well-reasoned analyses reported by Case-Deaton and Luo et al. are seriously flawed, for these reasons:

  • Case and Deaton’s focus on events since 1990 is analogous to a search for lost keys under a street lamp because that’s where the light is. As shown in the graphs taken from Luo et al., suicide rates have at various times risen (and dropped) as sharply as they have in recent years.
  • Luo et al. address a much longer time span but miss an important turning point, which came during World War II. Because of that, they resort to a strained, non-parametric analysis of the relationship between the suicide rate and business cycles.
  • It is misleading to focus on age groups, as opposed to birth-year cohorts. For example, persons in the 50-54 age group in 1990 were born between 1936 and 1940, but in 2010 persons in the 50-54 age group were born between 1956 and 1960. The groups, in other words, don’t represent the same cohort. The only meaningful suicide rate for a span of more than a few years is the rate for the entire population.

I took a fresh look at the overall suicide rate and its relationship to the state of the economy. First, I extended the overall, age-adjusted suicide rate for 1928-2007 provided by Luo et al. in Supplementary Table B (purchase required) by splicing it with a series for 1981-2016 from the Centers for Disease Control and Prevention. I then drew on the database at Measuring Worth to derive year-over-year changes in real GDP for 1928-2016. Here’s an overview of the two time series:

Suicide rate and change in real GDP by year
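For readers who want to check the mechanics, here is a minimal sketch of the splice and the year-over-year computation, in Python with pandas. The file names, column names, and the 1980/1981 splice point are my assumptions for illustration, not details taken from the sources.

```python
import pandas as pd

# Hypothetical file names; each CSV is assumed to have a "year" column plus the value column noted.
luo = pd.read_csv("luo_suicide_1928_2007.csv")    # columns: year, rate (age-adjusted, per 100,000)
cdc = pd.read_csv("cdc_suicide_1981_2016.csv")    # columns: year, rate
gdp = pd.read_csv("measuringworth_real_gdp.csv")  # columns: year, real_gdp

# Splice the two suicide-rate series: Luo et al. through 1980, CDC from 1981 onward.
suicide = pd.concat([luo[luo.year <= 1980], cdc[cdc.year >= 1981]], ignore_index=True)

# Year-over-year percentage change in real GDP.
gdp = gdp.sort_values("year")
gdp["gdp_change_pct"] = gdp["real_gdp"].pct_change() * 100

# One table, 1928-2016, for plotting and for the correlation checks below.
data = suicide.merge(gdp[["year", "gdp_change_pct"]], on="year")
data.to_csv("merged.csv", index=False)
```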

The suicide rate doesn’t drop below 15 until 1942. From 1943 through 2016 it fluctuates in a narrow range between 10.4 (2000) and 13.7 (1977). Despite the rise since 2000, the overall rate still hasn’t returned to the 1977 peak. And only in recent years has the overall rate climbed back to figures that were reached often between 1943 and 1994.

Moreover, the suicide rate from 1928 through 1942 is strongly correlated with changes in real GDP. But the rate from 1943 through 2016 (or any significant subset of those years) is not:

Suicide rate vs. change in real GDP
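The claim about the two sub-periods amounts to two correlation coefficients. A sketch, using the merged table built above (the same hypothetical file and column names):

```python
import pandas as pd

# "merged.csv" is the suicide-rate / GDP-change table from the earlier sketch
# (columns: year, rate, gdp_change_pct).
data = pd.read_csv("merged.csv")

early = data[data.year.between(1928, 1942)]
late = data[data.year.between(1943, 2016)]

# Pearson correlations; a strong negative value means the rate rises when GDP falls.
print("1928-1942: r =", round(early["rate"].corr(early["gdp_change_pct"]), 2))
print("1943-2016: r =", round(late["rate"].corr(late["gdp_change_pct"]), 2))
```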

Something happened during the war years to loosen the connection between the state of the economy and the suicide rate. That something was the end of the pervasive despair that the Great Depression inflicted on huge numbers of Americans. It’s as if America had a mood transplant, one which has lasted for more than 70 years. The recent uptick in the rate of suicide (and the accompanying rise in slow-motion suicide) is sad because it represents wasted lives. But, as noted above, it hasn’t returned to its 1977 peak, and is just slightly more than one standard deviation above the 1943-2016 average of  12.3 suicides per 100,000 persons:

Age-adjusted suicide rate 1943-2016
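The mean and standard deviation cited above take only a few more lines, on the same assumptions:

```python
import pandas as pd

data = pd.read_csv("merged.csv")  # same hypothetical merged table as in the sketches above
late = data[data.year.between(1943, 2016)]

mean_rate = late["rate"].mean()   # about 12.3, per the text
sd_rate = late["rate"].std()
rate_2016 = late.loc[late["year"] == 2016, "rate"].iloc[0]
print(f"1943-2016: mean = {mean_rate:.1f}, sd = {sd_rate:.1f}")
print(f"2016 rate is {(rate_2016 - mean_rate) / sd_rate:.1f} sd above the mean")
```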

(It seems to me that researchers ought to be asking why the rate was so low for about 20 years, beginning in the 1980s.)

Perhaps the recent uptick among working-class whites can be blamed, in part, on loss of “virtue”, technological change, and globalization — as Case and Deaton claim. But they fail to notice the bigger elephant in the room: the destructive role of government.

Technological change and globalization simply reinforce the disemployment effects of the long-term decline in the rate of economic growth. I’ve addressed the decline many times, most recently in “Presidents and Economic Growth“. The decline has three main causes, all attributable to government action, which I’ve assessed in “The Rahn Curve Revisited“: the rise in government spending as a fraction of GDP, the rise in the number of regulations on the books, and the (unsurprising) effect of those variables on private business investment. The only silver lining has been a decline in the rate of inflation, which is unsurprising in view of the general slow-down of economic growth. Many jobs may have disappeared because of technological change and many jobs may have been “shipped overseas”, but there would be a lot more jobs if government had kept its hands off the economy and out of Americans’ wallets.

Moreover, the willingness of Americans — especially low-skill Americans — to seek employment has been eroded by various government programs: aid to families with dependent children (a boon to unwed mothers and a bane to family stability), food stamps, disability benefits, the expansion of Medicaid, subsidized health-care for “children” under the age of 26, and various programs that encourage women to work outside the home, thus fostering male unemployment.

Economist Edward Glaeser puts it this way:

The rise of joblessness—especially among men—is the great American domestic crisis of the twenty-first century. It is a crisis of spirit more than of resources. The jobless are far more prone to self-destructive behavior than are the working poor. Proposed solutions that focus solely on providing material benefits are a false path. Well-meaning social policies—from longer unemployment insurance to more generous disability diagnoses to higher minimum wages—have only worsened the problem; the futility of joblessness won’t be solved with a welfare check….

The New Deal saw the rise of public programs that worked against employment. Wage controls under the National Recovery Act made it difficult for wages to fall enough to equilibrate the labor market. The Wagner Act strengthened the hand of unions, which kept pay up and employment down. Relief efforts for the unemployed, including federal make-work jobs, eased the pressure on the jobless to find private-sector work….

… In 2011, more than one in five prime-age men were out of work, a figure comparable with the Great Depression. But while employment came back after the Depression, it hasn’t today. The unemployment rate may be low, but many people have quit the labor force entirely and don’t show up in that number. As of December 2016, 15.2 percent of prime-age men were jobless—a figure worse than at any point between World War II and the Great Recession, except during the depths of the early 1980s recession….

Joblessness is disproportionately a condition of the poorly educated. While 72 percent of college graduates over age 25 have jobs, only 41 percent of high school dropouts are working. The employment-rate gap between the most and least educated groups has widened from about 6 percent in 1977 to almost 15 percent today….

Both Franklin Roosevelt and Lyndon Johnson aggressively advanced a stronger safety net for American workers, and other administrations largely supported these efforts. The New Deal gave us Social Security and unemployment insurance, which were expanded in the 1950s. National disability insurance debuted in 1956 and was made far more accessible to people with hard-to-diagnose conditions, like back pain, in 1984. The War on Poverty delivered Medicaid and food stamps. Richard Nixon gave us housing vouchers. During the Great Recession, the federal government temporarily doubled the maximum eligibility time for receiving unemployment insurance.

These various programs make joblessness more bearable, at least materially; they also reduce the incentives to find work. Consider disability insurance. Industrial work is hard, and plenty of workers experience back pain. Before 1984, however, that pain didn’t mean a disability check for American workers. After 1984, though, millions went on the disability rolls. And since disability payments vanish if the disabled person starts earning more than $1,170 per month, the disabled tend to stay disabled…. Disability insurance alone doesn’t entirely explain the rise of long-term joblessness—only one-third or so of jobless males get such benefits. But it has surely played a role.

Other social-welfare programs operate in a similar way. Unemployment insurance stops completely when someone gets a job, … [thus] the unemployed tend to find jobs just as their insurance payments run out. Food-stamp and housing-voucher payments drop 30 percent when a recipient’s income rises past a set threshold by just $1. Elementary economics tells us that paying people to be or stay jobless will increase joblessness….

The rise of joblessness among the young has been a particularly pernicious effect of the Great Recession. Job loss was extensive among 25–34-year-old men and 35–44-year-old men between 2007 and 2009. The 25–34-year-olds have substantially gone back to work, but the number of employed 35–44-year-olds, which dropped by 2 million at the start of the Great Recession, hasn’t recovered. The dislocated workers in this group seem to have left the labor force permanently.

Unfortunately, policymakers seem intent on making the joblessness crisis worse. The past decade or so has seen a resurgent progressive focus on inequality—and little concern among progressives about the downsides of discouraging work. Advocates of a $15 minimum hourly wage, for example, don’t seem to mind, or believe, that such policies deter firms from hiring less skilled workers. The University of California–San Diego’s Jeffrey Clemens examined states where higher federal minimum wages raised the effective state-level minimum wage during the last decade. He found that the higher minimum “reduced employment among individuals ages 16 to 30 with less than a high school education by 5.6 percentage points,” which accounted for “43 percent of the sustained, 13 percentage point decline in this skill group’s employment rate.”

The decision to prioritize equality over employment is particularly puzzling, given that social scientists have repeatedly found that unemployment is the greater evil…. One recent study estimated that unemployment leads to 45,000 suicides worldwide annually. Jobless husbands have a 50 percent higher divorce rate than employed husbands. The impact of lower income on suicide and divorce is much smaller. The negative effects of unemployment are magnified because it so often becomes a semipermanent state.

Time-use studies help us understand why the unemployed are so miserable. Jobless men don’t do a lot more socializing; they don’t spend much more time with their kids. They do spend an extra 100 minutes daily watching television, and they sleep more. The jobless also are more likely to use illegal drugs….

Joblessness and disability are also particularly associated with America’s deadly opioid epidemic…. The strongest correlate of those deaths is the share of the population on disability. That connection suggests a combination of the direct influence of being disabled, which generates a demand for painkillers; the availability of the drugs through the health-care system; and the psychological misery of having no economic future.

Increasing the benefits received by nonemployed persons may make their lives easier in a material sense but won’t help reattach them to the labor force. It won’t give them the sense of pride that comes from economic independence. It won’t give them the reassuring social interactions that come from workplace relationships. When societies sacrifice employment for a notion of income equality, they make the wrong choice. [“The War on Work — And How to End It“, City Journal, special issue: The Shape of Work to Come 2017]

In sum, the rising suicide rate — whatever its significance — is a direct and indirect result of government policies. “We’re from the government and we’re here to help” is black humor, at best. The left is waging a war on white males. But the real war — the war that kills — is hidden from view behind the benign facade of governmental “compassion”.

Having said all of that, I will end on a cautiously positive note. There still is upward mobility in America. Not all working-class people are destined for suicidal despair, only those at the margin who have pocketed the fool’s gold of government handouts.


Other related posts:
Bubbling Along
Economic Mobility Is Alive and Well in America
H.L. Mencken’s Final Legacy
The Problem with Political Correctness
“They Deserve to Die”?
Mencken’s Pearl of Wisdom
Class in America
Another Thought or Two about Class
The Midwest Is a State of Mind

A Personality Test: Which Antagonist Do You Prefer?

1. Archangel Michael vs. Lucifer (good vs. evil)

2. David vs. Goliath (underdog vs. bully)

3. Alexander Hamilton vs. Aaron Burr (a slippery politician vs. a slippery politician-cum-traitor)

4. Richard Nixon vs. Alger Hiss (a slippery politician vs. a traitorous Soviet spy)

5. Sam Ervin vs. Richard Nixon (an upholder of the Constitution vs. a slippery politician)

6. Kenneth Starr vs. Bill Clinton (a straight arrow vs. a slippery politician)

7. Elmer Fudd vs. Bugs Bunny (a straight arrow with a speech impediment vs. a rascally rabbit)

8. Jerry vs. Tom (a clever mouse vs. a dumb but determined cat)

9. Tweety Bird vs. Sylvester the Cat (a devious bird vs. a predatory cat)

10. Road Runner vs. Wile E. Coyote (a devious bird vs. a stupid canine)

11. Rocky & Bullwinkle vs. Boris & Natasha (fun-loving good guys vs. funny bad guys)

12. Dudley Do-Right vs. Snidely Whiplash (a straight arrow vs. a stereotypical villain)

Summarize and explain your choices in the comments. Suggestions for other pairings are welcome.

The Midwest Is a State of Mind

I am a son of the Middle Border,* now known as the Midwest. I left the Midwest, in spirit, almost 60 years ago, when I matriculated at a decidedly cosmopolitan State university. It was in my home State, but not much of my home State.

Where is the Midwest? According to Wikipedia, the U.S. Census Bureau defines the Midwest as comprising the 12 States shaded in red:

They are, from north to south and west to east, North Dakota, South Dakota, Nebraska, Kansas, Minnesota, Iowa, Missouri, Wisconsin, Illinois, Michigan, Indiana, and Ohio.

In my experience, the Midwest really begins on the west slope of the Appalachians and includes much of New York State and Pennsylvania. I have lived and traveled in that region, and found it, culturally, to be much like the part of the “official” Midwest where I was born and raised.

I am now almost 60 years removed from the Midwest (except for a three-year sojourn in the western part of New York State, near the Pennsylvania border). Therefore, I can’t vouch for the currency of a description that appears in Michael Dirda’s review of Jon K. Lauck’s From Warm Center to Ragged Edge: The Erosion of Midwestern Literary and Historical Regionalism, 1920-1965 (Iowa and the Midwest Experience). Dirda writes:

[Lauck] surveys “the erosion of Midwestern literary and historical regionalism” between 1920 and 1965. This may sound dull as ditch water to those who believe that the “flyover” states are inhabited largely by clodhoppers, fundamentalist zealots and loudmouthed Babbitts. In fact, Lauck’s aim is to examine “how the Midwest as a region faded from our collective imagination” and “became an object of derision.” In particular, the heartland’s traditional values of hard work, personal dignity and loyalty, the centrality it grants to family, community and church, and even the Jeffersonian ideal of a democracy based on farms and small land-holdings — all these came to be deemed insufferably provincial by the metropolitan sophisticates of the Eastern Seaboard and the lotus-eaters of the West Coast.

That was the Midwest of my childhood and adolescence. I suspect that the Midwest of today is considerably different. American family life is generally less stable than it was 60 years ago; Americans generally are less church-going than they were 60 years ago; and social organizations are less robust than they were 60 years ago. The Midwest cannot have escaped two generations of social and cultural upheaval fomented by the explosion of mass communications, the debasement of mass culture, the rise of the drugs-and-rock culture, the erasure of social norms by government edicts, and the creation of a culture of dependency on government.

I nevertheless believe that there is a strong, residual longing for and adherence to the Midwestern culture of 60 years ago — though it’s not really unique to the Midwest. It’s a culture that persists throughout America, in rural areas, villages, towns, small cities, and even exurbs of large cities.

The results of last year’s presidential election bear me out. Hillary Clinton represented the “sophisticates” of the Eastern Seaboard and the lotus-eaters of the West Coast. She represented the supposed superiority of technocracy over the voluntary institutions of civil society. She represented a kind of smug pluralism and internationalism that smirks at traditional values and portrays as clodhoppers and fundamentalist zealots those who hold such values. Donald Trump, on the other hand (and despite his big-city roots and great wealth), came across as a man of the people who hold such values.

What about Clinton’s popular-vote “victory”? Nationally, she garnered 2.9 million more votes than Trump. But the manner of Clinton’s “victory” underscores the nation’s cultural divide and the persistence of a Midwestern state of mind. Clinton’s total margin of victory in California, New York, and the District of Columbia was 6.3 million votes. That left Trump ahead of Clinton by 3.4 million votes in the other 48 States, and even farther ahead in non-metropolitan areas. Clinton’s “appeal” (for want of a better word) was narrow; Trump’s was much broader (e.g., winning a higher percentage than Romney did of the two-party vote in 39 States). Arguably, it was broader than that of every Republican presidential candidate since Ronald Reagan won a second term in 1984.
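For readers who want the arithmetic spelled out, the decomposition works like this (margins expressed as Clinton’s votes minus Trump’s, in millions, using the rounded figures above):

\[
\underbrace{2.9}_{\text{national margin}} \;-\; \underbrace{6.3}_{\text{margin in CA, NY, and DC}} \;=\; -3.4 \quad \text{(Trump ahead by 3.4 million in the other 48 States)}
\]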

The Midwestern state of mind, however much it has weakened in the last 60 years, remains geographically dominant. In the following map, counties won by Clinton are shaded in blue; counties won by Trump are shaded in red:


Source: Wikipedia article about the 2016 presidential election.


* This is an allusion to Hamlin Garland’s memoir, A Son of the Middle Border. Garland, a native of Wisconsin, was himself a son of the Middle Border.


Related posts:
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
Superiority
Whiners
A Dose of Reality
God-Like Minds
An Addendum to (Asymmetrical) Ideological Warfare
Khizr Khan’s Muddled Logic
A Lesson in Election-Rigging
My Platform (which reflects a Midwestern state of mind)
Polarization and De-facto Partition
H.L. Mencken’s Final Legacy
The Shy Republican Supporters
Roundup (see “Civil War II”)
Retrospective Virtue-Signalling
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
You Can’t Go Home Again
Class in America
A Word of Warning to Leftists (and Everyone Else)
Another Thought or Two about Class

Punctuation within Quotation Marks: British vs. American Style

I’ve added this to my page “On Writing“:

I have reverted to the British style of punctuating in-line quotations, which I followed 40 years ago when I published a weekly newspaper. The British style is to enclose within quotation marks only (a) the punctuation that appears in quoted text or (b) the title of a work (e.g., a blog post) that is usually placed within quotation marks.

I have reverted because of the confusion and unsightliness caused by the American style. It calls for the placement of periods and commas within quotation marks, even if the periods and commas don’t occur in the quoted material or title. Also, if there is a question mark at the end of quoted material, it replaces the comma or period that might otherwise be placed there.

If I had continued to follow American style, I would have ended a sentence in a recent post with this:

… “A New (Cold) Civil War or Secession?” “The Culture War,” “Polarization and De-facto Partition,” and “Civil War?

What a hodge-podge. There’s no comma between the first two entries, and the sentence ends with an inappropriate question mark. With two titles ending in question marks, there was no way for me to avoid a series in which a comma is lacking. I could have avoided the sentence-ending question mark by recasting the list, but the items are listed chronologically, which is how they should be read.

I solved these problems easily by reverting to the British style:

… “A New (Cold) Civil War or Secession?”, “The Culture War“, “Polarization and De-facto Partition“, and “Civil War?“.

This not only eliminates the hodge-podge, but is also more logical and accurate. All items are separated by commas, commas aren’t displaced by question marks, and the declarative sentence ends with a period instead of a question mark.

Another Thought or Two about Class

My recent post, “Class in America,” offers a straightforward taxonomy of the socioeconomic pecking order in the United States. The post doesn’t address the dynamics of movement between classes, so I want to say something about dynamics. And I want to address the inevitability of class-like distinctions, despite the avowed (but hypocritical) goals of leftists to erase such distinctions.

With respect to dynamics, I begin with these observations from “Class in America”:

Class in America isn’t a simple thing. It has something to do with one’s inheritance, which is not only (or mainly) wealth. It has mainly to do with one’s intelligence (which is largely of genetic origin) and behavior (which also has a genetic component). Class also has a lot to do with what one does with one’s genetic inheritance, however rich or sparse it is. Class still depends a lot on acquired skills, drive, and actual achievements — even dubious ones like opining, acting, and playing games — and the income and wealth generated by them.

Class distinctions depend on the objective facts (whether observable or not) about genetic inheritance and one’s use (or not) thereof. Class distinctions also depend on broadly shared views about the relative prestige of various combinations of wealth, income (which isn’t the same as wealth), power, influence, and achievement. Those broadly shared views shift over time.

For example, my taxonomy includes three “suspect” classes whose denizens are athletes and entertainers. There were relatively few highly paid entertainers and almost no highly paid athletes in the late 1800s, when some members of today’s old-wealth aristocracy (e.g., Rockefeller and Ford) had yet to rise to that pinnacle. Even those few athletes and entertainers, unless they had acquired a patina of “culture,” would have been considered beyond the pale of class distinctions — oddities to be applauded (or not) and rewarded for the exercise of their talents, but not to be emulated by socially striving youngsters.

How the world has changed. Now that sports and entertainment have become much more visible and higher-paying than they were in the Gilded Age, there are far more Americans who accord high status to the practitioners in those fields. This is not only a matter of income, but also a matter of taste. If the American Dream of the late 19th century was dominated by visions of rising to the New-Wealth Aristocracy, the American Dream of the early 21st century gives a place of prominence to visions of becoming the next LeBron James or Lady Gaga.

I should qualify the preceding analysis by noting that it applies mainly to whites of European descent and those blacks who are American-born or more than a generation removed from foreign shores. I believe that the old American Dream still prevails among Americans of Asian descent and blacks who are less than two generations removed from Africa or the Caribbean. The Dream prevails to a lesser extent among Latinos — who have enjoyed great success in baseball — but probably more than it does among the aforementioned whites and blacks. As a result, the next generations of upper classes (aside from the Old-Wealth Aristocracy) will become increasingly Asian and Latino in complexion.

Yes, there are millions of white and black Americans (of non-recent vintage) who still share The Dream, though millions more have abandoned it. Their places will be taken by Americans of Asian descent, Latinos, and African-Americans of recent vintage. (I should add that, in any competition based on intellectual merit, Asians generally have the advantage of above-average-to-high intelligence.)

Which brings me to my brief and unduly dismissive rant about the predominantly white and

growing mob of whiny, left-wing fascists[.] For now, they’re sprinkled among the various classes depicted in the table, even classes at or near the top. In their vision of a “classless” society, they would all be at the top, of course, flogging conservatives, plutocrats, malefactors of great wealth, and straight, white (non-Muslim, non-Hispanic), heterosexual males — other than those of their whiny, fascist ilk.

The whiny left is not only predominantly white but also predominantly college-educated, and therefore probably of above-average intelligence. Though there is a great deal of practiced glibness at work among the left-wingers who dominate the professoriate and punditocracy, the generally high intelligence of the whiny class can’t be denied. But the indisputable fact of its class-ness testifies to an inconvenient truth: It is natural for people to align themselves in classes.

Class distinctions are status distinctions. But they can also connote the solidarity of an in-group that is united by a worldview of some kind. The worldview is usually of a religious character, where “religious” means a cult-like devotion to certain beliefs that are taken on faith. Contemporary leftists signal their solidarity — and class superiority — in several ways:

They proclaim themselves on the side of science, though most of them aren’t scientists and wouldn’t know real science if it bit them in the proverbial hindquarters.

There are certain kinds of “scientific” dangers and catastrophes that attract leftists because they provide a pretext for shaping people’s lives in puritanical ways: catastrophic anthropogenic global warming; extreme environmentalism, which stretches to the regulation of mud puddles; second-hand smoking as a health hazard; the “evident” threat posed by the mere depiction or mention of guns; “overpopulation” (despite two centuries of it); obesity (a result, God forbid, of market forces that have led to the greater nourishment of poor people); many claims about the ill effects of alcohol, salt, butter, fats, etc., that have been debunked; any number of regulated risks that people would otherwise treat as IQ tests thrown up by life and opportunities to weed out the gene pool; and on and on.

They are in constant search of victims to free from oppression, whether it is the legal oppression of the Jim Crow South or simply the “oppression” of hurt feelings inflicted on the left itself by those who dare to hold different views. (The left isn’t always wrong about the victims it claims to behold, but it has been right only when its tender sensibilities have been confirmed by something like popular consensus.)

Their victim-olatry holds no place, however, for the white working class, whose degree of “white privilege” is approximately zero. To earn one’s daily bread by sweating seems to be honorable only for those whose skin isn’t white or whose religion isn’t Christian.

They are astute practitioners of moral relativism. The inferior status of women in Islam is evidently of little or no account to them. Many of them were even heard to say, in the wake of 9/11, that “we had it coming,” though they were not among the “we.” And “we had it coming” for what, the audacity of protecting access to a vital resource (oil) that helps to drive an economy whose riches subsidize their juvenile worldview? It didn’t occur to those terrorists manqué that it was Osama bin Laden who had it coming. (And he finally “got” it, but Obama — one of their own beneath his smooth veneer — was too sensitive to the feelings of our Muslim enemies to show the proof that justice was done. This was also done to spite Americans who, rightly, wanted more than a staged photo of Obama and his stooges watching the kill operation unfold.)

To their way of thinking, justice — criminal and “social” — consists of outcomes that favor certain groups. For example, it is prima facie wrong that blacks are disproportionately convicted of criminal offenses, especially violent crimes, because … well, just because. It is right (“socially just”) that blacks and other “protected” groups get jobs, promotions, and university admissions for which they are less-qualified than whites and Asians because slavery happened more than 160 years ago and blacks still haven’t recovered from it. (It is, of course, futile and “racist” to mention that blacks are generally less intelligent than whites and Asians.)

Their economic principles (e.g., “helping” the poor through minimum wage and “living wage” laws, buying local because … whatever, promoting the use of bicycles to reduce traffic congestion, favoring strict zoning laws while bemoaning a lack of “affordable” housing) are anti-scientific but virtuous. With leftists, the appearance of virtuousness always trumps science.

All of this mindless posturing has only two purposes, as far as I can tell. The first is to make leftists feel good about themselves, which is important because most of them are white and therefore beneficiaries of “white privilege.” (They are on a monumental guilt-trip, in other words.) The second, as I have said, is to signal their membership in a special class that is bound by attitudes rather than wealth, income, tastes, and other signals that have deep roots in social evolution.

I now therefore conclude that the harsh, outspoken, virulent, violence-prone left is a new class unto itself, though some of its members may retain the outward appearance of belonging to other classes.


Related posts:
Academic Bias
Intellectuals and Capitalism
The Cocoon Age
Inside-Outside
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
Tolerance on the Left
The Eclipse of “Old America”
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
Superiority
Whiners
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Leftist Condescension
Beating Religion with the Wrong End of the Stick
Psychological Insights into Leftism
Nature, Nurture, and Leniency
Red-Diaper Babies and Enemies Within
A Word of Warning to Leftists (and Everyone Else)

Class in America

I often refer to class — or socioeconomic status (SES) — as do many other writers. SES is said to be a function of “a person’s work experience and of an individual’s or family’s economic and social position in relation to others, based on income, education, and occupation.” Wealth counts, too. As do race and ethnicity, to be candid.

Attempts to quantify SES are pseudo-scientific, so I won’t play that game. It’s obvious that class distinctions are subtle and idiosyncratic. Class is in the eye of the beholder.

I am a beholder, and what I behold is parsed in the table below. There I have sorted Americans into broad, fuzzy, and overlapping classes, in roughly descending order of prestige. The indented entries pertain to a certain “type” of person who doesn’t fit neatly into the usual taxonomy of class. What is the type? You’ll see as you read the table.

(To enlarge the image, open it in a new tab and use your browser’s “magnifying glass.”)

What about retirees? If their financial status or behavioral traits don’t change much after retirement, they generally stay in the class to which they belonged at retirement.

Where are the “rednecks”? Most of them are probably in the bottom six rungs, but so are huge numbers of other Americans who (mostly) escape opprobrium for being there. Many “rednecks” have risen to higher classes, especially but not exclusively the indented ones.

What about the growing mob of whiny, left-wing fascists? For now, they’re sprinkled among the various classes depicted in the table, even classes at or near the top. In their vision of a “classless” society, they would all be at the top, of course, flogging conservatives, plutocrats, malefactors of great wealth, and straight, white (non-Muslim, non-Hispanic), heterosexual males — other than those of their whiny, fascist ilk.

Here’s what I make of all this. Class in America isn’t a simple thing. It has something to do with one’s inheritance, which is not only (or mainly) wealth. It has mainly to do with one’s intelligence (which is largely of genetic origin) and behavior (which also has a genetic component). Class also has a lot to do with what one does with one’s genetic inheritance, however rich or sparse it is. Class still depends a lot on acquired skills, drive, and actual achievements — even dubious ones like opining, acting, and playing games — and the income and wealth generated by them. Some would call that quintessentially American.

My family background, on both sides, is blue-collar. I wound up on the senior manager-highly educated rung. That’s quintessentially American.

There’s a lot here to quibble with. Have at it.

If comments are closed by the time you read this post, you may send them to me by e-mail at the Germanic nickname for Friedrich followed by the last name of the great Austrian economist and Nobel laureate whose first name is Friedrich followed by the 3rd and 4th digits of his birth year followed by the usual typographic symbol followed by the domain and extension for Google’s e-mail service — all run together.


Related posts:
Are You in the Bubble?
Race and Reason: The Achievement Gap — Causes and Implications
Not-So-Random Thoughts (X) (last item)
Privilege, Power, and Hypocrisy
Thinkers vs. Doers
Bubbling Along
Intelligence, Assortative Mating, and Social Engineering
“They Deserve to Die”?
More about Intelligence

Language Peeves

Maverick Philosopher has many language peeves, as do I. Our peeves overlap considerably. Here are some of mine:

Anniversary

Today is the ten-year anniversary of our wedding.

Better: Today is the tenth anniversary of our wedding.

Anniversary means the annually recurring date of a past event. To write or say “x-year anniversary” is redundant as well as graceless. A person who says or writes “x-month” anniversary is probably a person whose every sentence includes “like.”

Data

The data is conclusive.

Better: The data are conclusive.

“Data” is a plural noun. A person who writes or says “data is” is at best an ignoramus and at worst a Philistine.

Guy/guys

Would you guys like to order now?

Better: Would you like to order now?

Regarding this egregious usage, I admit to occasionally slipping from my high horse — an event that is followed immediately by self-censure. (Bonus observation: “Now” is often superfluous, as in the present example.)

Hopefully

Hopefully, the shipment will arrive today.

Better: I expect the shipment to arrive today. Or: I hope that the shipment will arrive today.

I say a lot about “hopefully” and other floating modifiers (e.g., sadly, regrettably, thankfully) under the heading “Hopefully and Its Brethren” at “On Writing.”

Literally

My head literally exploded when I read Pope Francis’s recent statement about economic policy.

Better: My head figuratively exploded when I read Pope Francis’s recent statement about economic policy.

See “Literally” at “On Writing.”

No problem

Me: Thank you.

Waiter: No problem.

Better: You’re welcome.

“No problem” suggests that the person saying it might have been inconvenienced by doing what was expected of him, such as placing a diner’s food on the table.

Reach out/reached out

We reached out to him for comment.

Better: We asked him to comment. Or: We called him and asked him to comment. Or: We sent him a message asking for a comment.

“Reach out” sometimes properly refers to the act of reaching for a physical object, though “out” is usually redundant.

Share/shared

I shared a story with her.

Better: I told her a story.

To share is to allow someone to partake of or temporarily use something of one’s own, not to impart information to someone.

That (for who)

Josh Hamilton was the last player that hit four home runs in a game.

Better: Josh Hamilton was the last player to hit four home runs in a game.

Their (for “his” or “hers”), etc.

An employee forfeits their accrued vacation time if they are fired for cause.

Better: An employee who is fired for cause forfeits accrued vacation time.

Where the context calls for a singular pronoun, “he” and its variants are time-honored, gender-neutral choices. There is no need to add “or her” (or a variant), unless the context demands it. “Her” (or a variant) will be the obvious and correct choice in some cases.

Malapropisms and solecisms peeve me as well. Here are some corrected examples:

I will try to find it (not “try and find it”).

He took it for granted (not “granite”).

She comes here once in a while (not “once and a while”).

At “On Writing” you will also find my reasoned commentary about filler words (e.g., like), punctuation, the corruptions wrought by political correctness and the euphemisms which serve it, and the splitting of infinitives.

Not-So-Random Thoughts (XX)

An occasional survey of web material that’s related to subjects about which I’ve posted. Links to the other posts in this series may be found at “Favorite Posts,” just below the list of topics.

In “The Capitalist Paradox Meets the Interest-Group Paradox,” I quote from Frédéric Bastiat’s “What Is Seen and What Is Not Seen“:

[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.

This might also be called the law of unintended consequences. It explains why so much “liberal” legislation is passed: the benefits are focused on a particular group and obvious (if overestimated); the costs are borne by taxpayers in general, many of whom fail to see that the sum of “liberal” legislation is a huge tax bill.

Ross Douthat understands:

[A] new paper, just released through the National Bureau of Economic Research, that tries to look at the Affordable Care Act in full. Its authors find, as you would expect, a substantial increase in insurance coverage across the country. What they don’t find is a clear relationship between that expansion and, again, public health. The paper shows no change in unhealthy behaviors (in terms of obesity, drinking and smoking) under Obamacare, and no statistically significant improvement in self-reported health since the law went into effect….

[T]he health and mortality data [are] still important information for policy makers, because [they] indicate[] that subsidies for health insurance are not a uniquely death-defying and therefore sacrosanct form of social spending. Instead, they’re more like other forms of redistribution, with costs and benefits that have to be weighed against one another, and against other ways to design a safety net. Subsidies for employer-provided coverage crowd out wages, Medicaid coverage creates benefit cliffs and work disincentives…. [“Is Obamacare a Lifesaver?”, The New York Times, March 29, 2017]

So does Roy Spencer:

In a theoretical sense, we can always work to make the environment “cleaner”, that is, reduce human pollution. So, any attempts to reduce the EPA’s efforts will be viewed by some as just cozying up to big, polluting corporate interests. As I heard one EPA official state at a conference years ago, “We can’t stop making the environment ever cleaner”.

The question no one is asking, though, is “But at what cost?”

It was relatively inexpensive to design and install scrubbers on smokestacks at coal-fired power plants to greatly reduce sulfur emissions. The cost was easily absorbed, and electricity rates were not increased that much.

The same is not true of carbon dioxide emissions. Efforts to remove CO2 from combustion byproducts have been extremely difficult, expensive, and with little hope of large-scale success.

There is a saying: don’t let perfect be the enemy of good enough.

In the case of reducing CO2 emissions to fight global warming, I could discuss the science which says it’s not the huge problem it’s portrayed to be — how warming is only progressing at half the rate forecast by those computerized climate models which are guiding our energy policy; how there have been no obvious long-term changes in severe weather; and how nature actually enjoys the extra CO2, with satellites now showing a “global greening” phenomenon with its contribution to increases in agricultural yields.

But it’s the economics which should kill the Clean Power Plan and the alleged Social “Cost” of Carbon. Not the science.

There is no reasonable pathway by which we can meet more than about 20% of global energy demand with renewable energy…the rest must come mostly from fossil fuels. Yes, renewable energy sources are increasing each year, usually because rate payers or taxpayers are forced to subsidize them by the government or by public service commissions. But global energy demand is rising much faster than renewable energy sources can supply. So, for decades to come, we are stuck with fossil fuels as our main energy source.

The fact is, the more we impose high-priced energy on the masses, the more it will hurt the poor. And poverty is arguably the biggest threat to human health and welfare on the planet. [“Trump’s Rollback of EPA Overreach: What No One Is Talking About,” Roy Spencer, Ph.D. [blog], March 29, 2017]

*     *     *

I mentioned the Benedict Option in “Independence Day 2016: The Way Ahead,” quoting Bruce Frohnen in tacit agreement:

[Rod] Dreher has been writing a good deal, of late, about what he calls the Benedict Option, by which he means a tactical withdrawal by people of faith from the mainstream culture into religious communities where they will seek to nurture and strengthen the faithful for reemergence and reengagement at a later date….

The problem with this view is that it underestimates the hostility of the new, non-Christian society [e.g., this and this]….

Leaders of this [new, non-Christian] society will not leave Christians alone if we simply surrender the public square to them. And they will deny they are persecuting anyone for simply applying the law to revoke tax exemptions, force the hiring of nonbelievers, and even jail those who fail to abide by laws they consider eminently reasonable, fair, and just.

Exactly. John Horvat II makes the same point:

For [Dreher], the only response that still remains is to form intentional communities amid the neo-barbarians to “provide an unintentional political witness to secular culture,” which will overwhelm the barbarian by the “sheer humanity of Christian compassion, and the image of human dignity it honors.” He believes that setting up parallel structures inside society will serve to protect and preserve Christian communities under the new neo-barbarian dispensation. We are told we should work with the political establishment to “secure and expand the space within which we can be ourselves and our own institutions” inside an umbrella of religious liberty.

However, barbarians don’t like parallel structures; they don’t like structures at all. They don’t co-exist well with anyone. They don’t keep their agreements or respect religious liberty. They are not impressed by the holy lives of the monks whose monastery they are plundering. You can trust barbarians to always be barbarians. [“Is the Benedict Option the Answer to Neo-Barbarianism?” Crisis Magazine, March 29, 2017]

As I say in “The Authoritarianism of Modern Liberalism, and the Conservative Antidote,”

Modern liberalism attracts persons who wish to exert control over others. The stated reasons for exerting control amount to “because I know better” or “because it’s good for you (the person being controlled)” or “because ‘social justice’ demands it.”

Leftists will not countenance a political arrangement that allows anyone to escape the state’s grasp — unless, of course, the state is controlled by the “wrong” party, in which case leftists (or many of them) would like to exercise their own version of the Benedict Option. See “Polarization and De Facto Partition.”

*     *     *

Theodore Dalrymple understands the difference between terrorism and accidents:

Statistically speaking, I am much more at risk of being killed when I get into my car than when I walk in the streets of the capital cities that I visit. Yet this fact, no matter how often I repeat it, does not reassure me much; the truth is that one terrorist attack affects a society more deeply than a thousand road accidents….

Statistics tell me that I am still safe from it, as are all my fellow citizens, individually considered. But it is precisely the object of terrorism to create fear, dismay, and reaction out of all proportion to its volume and frequency, to change everyone’s way of thinking and behavior. Little by little, it is succeeding. [“How Serious Is the Terrorist Threat?” City Journal, March 26, 2017]

Which reminds me of several things I’ve written, beginning with this entry from “Not-So-Random Thoughts (VI)“:

Cato’s loony libertarians (on matters of defense) once again trot out Herr Doktor Professor John Mueller. He writes:

We have calculated that, for the 12-year period from 1999 through 2010 (which includes 9/11, of course), there was one chance in 22 million that an airplane flight would be hijacked or otherwise attacked by terrorists. (“Serial Innumeracy on Homeland Security,” Cato@Liberty, July 24, 2012)

Mueller’s “calculation” consists of a recitation of known terrorist attacks pre-Benghazi and speculation about the status of Al-Qaeda. Note to Mueller: It is the unknown unknowns that kill you. I refer Herr Doktor Professor to “Riots, Culture, and the Final Showdown” and “Mission Not Accomplished.”

See also my posts “Getting It All Wrong about the Risk of Terrorism” and “A Skewed Perspective on Terrorism.”

*     *     *

This is from my post, “A Reflection on the Greatest Generation“:

The Greatest tried to compensate for their own privations by giving their children what they, the parents, had never had in the way of material possessions and “fun”. And that is where the Greatest Generation failed its children — especially the Baby Boomers — in large degree. A large proportion of Boomers grew up believing that they should have whatever they want, when they want it, with no strings attached. Thus many of them divorced, drank, and used drugs almost wantonly….

The Greatest Generation — having grown up believing that FDR was a secular messiah, and having learned comradeship in World War II — also bequeathed us governmental self-indulgence in the form of the welfare-regulatory state. Meddling in others’ affairs seems to be a predilection of the Greatest Generation, a predilection that the Millennials may be shrugging off.

We owe the Greatest Generation a great debt for its service during World War II. We also owe the Greatest Generation a reprimand for the way it raised its children and kowtowed to government. Respect forbids me from delivering the reprimand, but I record it here, for the benefit of anyone who has unduly romanticized the Greatest Generation.

There’s more in “The Spoiled Children of Capitalism“:

This is from Tim [of Angle’s] “The Spoiled Children of Capitalism“:

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it….

I have long shared Tim’s assessment of the Boomer generation. Among the corroborating data are my sister and my wife’s sister and brother — Boomers all….

Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs.”…

Now comes this:

According to writer and venture capitalist Bruce Gibney, baby boomers are a “generation of sociopaths.”

In his new book, he argues that their “reckless self-indulgence” is in fact what set the example for millennials.

Gibney describes boomers as “acting without empathy, prudence, or respect for facts – acting, in other words, as sociopaths.”

And he’s not the first person to suggest this.

Back in 1976, journalist Tom Wolfe dubbed the young adults then coming of age the “Me Generation” in the New York Times, which is a term now widely used to describe millennials.

But the baby boomers grew up in a very different climate to today’s young adults.

When the generation born after World War Two were starting to make their way in the world, it was a time of economic prosperity.

“For the first half of the boomers particularly, they came of age in a time of fairly effortless prosperity, and they were conditioned to think that everything gets better each year without any real effort,” Gibney explained to The Huffington Post.

“So they really just assume that things are going to work out, no matter what. That’s unhelpful conditioning.

“You have 25 years where everything just seems to be getting better, so you tend not to try as hard, and you have much greater expectations about what society can do for you, and what it owes you.”…

Gibney puts forward the argument that boomers – specifically white, middle-class ones – tend to have genuine sociopathic traits.

He backs up his argument with mental health data which appears to show that this generation have more anti-social characteristics than others – lack of empathy, disregard for others, egotism and impulsivity, for example. [Rachel Hosie, “Baby Boomers Are a Generation of Sociopaths,” Independent, March 23, 2017]

That’s what I said.

The Internet-Media-Academic Complex vs. Real Life

I spend an inordinate share of my time at my PC. (Unlike smart-phone and tablet users, I prefer to be seated in a comfortable desk chair, viewing a full-size screen, and typing on a real keyboard.) When I’m not composing a blog post or playing spider solitaire, I’m reading items from several dozen RSS feeds.

My view of the world is shaped, for the worse, by what I read. If it’s not about leftist cant and scientific fraud, it’s about political warfare on many levels. But my view of the world is more sanguine when I reflect on real life as I experience it when I’m away from my PC.

When the subject isn’t politics, and the politics of the other person are hidden from view, I experience a world of politeness, competence (even unto excellence), and intelligence. Most of the people in that world are owners of small businesses, their employees, and the employees of larger businesses.

In almost every case, their attitude of friendliness is sincere — and I’ve been around the block enough times to spot insincerity. There’s an innate goodness in most people, regardless of their political views, that comes out when you’re interacting with them as a “real” human being.

The exception to the rule, in my experience, is the highly educated analyst or academic — regardless of political outlook — who thinks he is smarter than everyone else. And it shows in his abrupt, superior attitude toward others, especially if they are strangers whom he is unlikely to encounter again.

The moral of the story: If government were far less powerful, and if it were kept that way, the political noise level would be much reduced and the world would be a far more pleasant place.

Vera Lynn

Today is Dame Vera Lynn’s 100th birthday. If you don’t recognize the name, follow the link in the preceding sentence, and this one.

Thoughts for the Day

Excerpts of recent correspondence.

Robots, and their functional equivalents in specialized AI systems, can either replace people or make people more productive. I suspect that the latter has been true in the realm of medicine — so far, at least. But I have seen reportage of robotic units that are beginning to perform routine, low-level work in hospitals. So, as usual, the first people to be replaced will be those with rudimentary skills, not highly specialized training. Will it go on from there? Maybe, but the crystal ball is as cloudy as an old-time London fog.

In any event, I don’t believe that automation is inherently a job-killer. The real job-killer consists of government programs that subsidize non-work — early retirement under Social Security, food stamps and other forms of welfare, etc. Automation has been in progress for eons, and with a vengeance since the second industrial revolution. But, on balance, it hasn’t killed jobs. It just pushes people toward new and different jobs that fit the skills they have to offer. I expect nothing different in the future, barring government programs aimed at subsidizing the “victims” of technological displacement.

*      *      *

It’s civil war by other means (so far): David Wasserman, “Purple America Has All but Disappeared” (The New York Times, March 8, 2017).

*      *      *

I know that most of what I write (even the non-political stuff) has a combative edge, and that I’m therefore unlikely to persuade people who disagree with me. I do it my way for two reasons. First, I’m too old to change my ways, and I’m not going to try. Second, in a world that’s seemingly dominated by left-wing ideas, it’s just plain fun to attack them. If what I write happens to help someone else fight the war on leftism — or if it happens to make a young person re-think a mindless commitment to leftism — that’s a plus.

*     *     *

I am pessimistic about the likelihood of cultural renewal in America. The populace is too deeply saturated with left-wing propaganda, which is injected from kindergarten through graduate school, with constant reinforcement via the media and popular culture. There are broad swaths of people — especially in low-income brackets — whose lives revolve around mindless escape from the mundane via drugs, alcohol, promiscuous sex, etc. Broad swaths of the educated classes have abandoned erudition and contemplation and taken up gadgets and entertainment.

The only hope for conservatives is to build their own “bubbles,” like those of effete liberals, and live within them. Even that will prove difficult as long as government (especially the Supreme Court) persists in storming the ramparts in the name of “equality” and “self-creation.”

*     *     *

I correlated Austin’s average temperatures in February and August. Here are the correlation coefficients for the following periods:

1854-2016 = 0.001
1875-2016 = -0.007
1900-2016 = 0.178
1925-2016 = 0.161
1950-2016 = 0.191
1975-2016 = 0.126

Of these correlations, only the one for 1900-2016 is statistically significant at the 0.05 level (less than a 5-percent chance of a random relationship). The correlations for 1925-2016 and 1950-2016 are fairly robust, and almost significant at the 0.05 level. The relationship for 1975-2016 is statistically insignificant. I conclude that there’s a positive relationship between February and August temperatures, but a weak one. A warm winter doesn’t necessarily presage an extra-hot summer in Austin.
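
For readers who want to replicate the exercise, here is a minimal sketch of the calculation. The file name and column names are placeholders of my own; the underlying data source isn’t specified here, so treat the whole thing as an assumption about how such a series might be stored.

```python
# A minimal sketch, assuming a hypothetical CSV ("austin_monthly_means.csv")
# with columns "year", "feb_avg", and "aug_avg"; these are placeholder names,
# since the post does not identify the data source.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("austin_monthly_means.csv")

for start in (1854, 1875, 1900, 1925, 1950, 1975):
    span = df[(df["year"] >= start) & (df["year"] <= 2016)]
    r, p = pearsonr(span["feb_avg"], span["aug_avg"])
    # p < 0.05 corresponds to the "statistically significant at the 0.05 level"
    # criterion used in the text.
    print(f"{start}-2016: r = {r:.3f}, p = {p:.3f}")
```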

The Age of Noise

Aldous Huxley says this in The Perennial Philosophy:

The twentieth century is, among other things, the Age of Noise. Physical noise, mental noise and noise of desire — we hold history’s record for all of them. And no wonder; for all the resources of our almost miraculous technology have been thrown into the current assault against silence. That most popular and influential of all recent inventions, the radio, is nothing but a conduit through which pre-fabricated din can flow into our homes. And this din goes far deeper, of course, than the ear-drums. It penetrates the mind, filling it with a babel of distractions – news items, mutually irrelevant bits of information, blasts of corybantic or sentimental music, continually repeated doses of drama that bring no catharsis, but merely create a craving for daily or even hourly emotional enemas. And where, as in most countries, the broadcasting stations support themselves by selling time to advertisers, the noise is carried from the ears, through the realms of phantasy, knowledge and feeling to the ego’s central core of wish and desire.

Mr. Huxley would hate the twenty-first century. The noise is beyond deafening. And it’s everywhere: beeping cell phones; loud one-sided conversations into cell phones; talking automobiles; ear-shattering “music” blasting from nearby automobiles, stadium loudspeakers, computers, TVs, and (yes) radios; screeching MoTown (or whatever it’s now called) blasting away in grocery stores (at the request of employees, I suspect); movie soundtracks worthy of the Siege of Stalingrad; and on and on.

I’m glad that my hearing aids have a “mute” function. When engaged, it takes the sharp edges off the knives of sound that assail me whenever I venture into the outside world.

Sound has become a substitute for the absorption and processing of information, that is, for thought. The decades-long crescendo in the West’s sound track lends support to Richard Lynn’s hypothesis that intelligence is on the decline.

*     *     *

Related posts:
Flow
In Praise of Solitude
There’s Always Solitude
Intelligence, Personality, Politics, and Happiness

Retrospective Virtue-Signalling

I’ve become fond of “virtue-signalling” — the phrase, that is, not the hypocritical act itself, which is

the conspicuous expression of moral values by an individual done primarily to enhance [his] standing within a social group.

A minor act of virtue-signalling is the grammatically incorrect use of “their” (a plural pronoun), to avoid the traditional “his” (sexist!) or the also correct but awkward “his or her.” That’s why you see “[his]” in the quotation, which replaces the abominable “their.”

Anyway, virtue-signalling is almost exclusively a left-wing phenomenon. And it often takes more acidic forms than the prissy use of “their” to avoid an imaginary lapse into sexism. For example:

An aging, has-been singer shrieks her fantasy of blowing up the White House. Later, she walks it back, saying she was quoted “out of context,” though her words are perfectly clear….

A multimillionaire actress/talk show host not only pretends to believe that Republicans want to return her race to picking cotton, but now wonders aloud just how different we Trump voters are from the Taliban…..

…These awards shows now look more like Comintern speeches with competition to see which androgynous little wussie-pants can sound more macho encouraging “fighting” in the streets, “punching people in the face,” or “Resistance” to frenzied applause from the gathered trained seals. Remember, guys, in a real fight that’s not choreographed there are no stuntmen.

But perhaps the nastiest attacks of all have been on Trump’s family. There were the horrible Tweets about his blameless young son. There were the charming “rape Melania” banners, so odious I thought for sure they had to be photoshopped. They were not. There is an alleged comedienne I had never heard of named Chelsea Handler who announced apropos of nothing that she won’t have Melania on her talk show because “she can’t speak English well enough to be understood.”…

So this is the genius who won’t interview Melania, who speaks five languages. I hope our elegant First Lady can recover from the snub. None of these Mean Girls of any gender would dream of criticizing a Spanish speaker trying to speak English, or correct a black person who said “axe” for “ask” (something I believe they do just to be annoying…). But a beautiful immigrant woman whose English is not perfect deserves mocking. Because liberals are so nice. Virtuous, even. Not Taliban-y at all.

My strongest scorn is reserved for the hordes that have decided to change the names of streets and buildings, to tear down statues, and to remove paintings from public view because the persons thus named or depicted are heroes of the past who committed acts that leftists must deplore — or risk being targeted by other left-wing virtue-signalers.

Retrospective virtue-signalling has been going on for several years now. I’m not sure, but I think it blossomed into an epidemic in the aftermath of the despicable shooting of blacks at a prayer meeting in a Charleston church. A belated example is the recent 3-2 vote by the city council of Charlottesville, Virginia, to remove a statue of Robert E. Lee from a city park.

What can such actions accomplish other than raising the level of smugness of their perpetrators? If they have any effect on racism in the United States, it’s probably to raise its level, too. If I were a racist, I’d be outraged by the cumulative effect of such actions, of which there have been dozens or hundreds in recent years. I’d certainly be a more committed racist than I was before, just as a matter of psychological self-defense.

But such things don’t matter to virtue-signalers. It’s all a matter of reinforcing their feelings of superiority and flaunting their membership in the left-wing goody-two-shoes club — the club that proudly boasts of wanting to “blow up the White House” and “rape Melania.”

And the post-election riots prove that the club has some members who are more than prissy, armchair radicals. They’re the same kind of people who used to wear brown shirts and beat up Jews.

A Nation of Immigrants, a Nation of Enemies

I’m sick and tired of hearing that the United States is a nation of immigrants. So what if the United States is a nation of immigrants? The real issue is whether immigrants wish to become Americans in spirit, not in name only — loyal to the libertarian principles of the Constitution or cynical abusers of it.

I understand and sympathize with the urge to live among people with whom one shares a religion, a language, and customs. Tribalism is a deeply ingrained trait. It is not necessarily a precursor to aggression, contrary to the not-so-subtle message (aimed at white Americans) of the UN propaganda film that I was subjected to in high school. And the kind of tribalism found in many American locales, from the barrios of Los Angeles to the disappearing German communities of Texas to the Orthodox Jewish enclaves of New York City, is harmless compared with  Reconquista and Sharia.

Proponents of such creeds don’t want to become Americans whose allegiance is to the liberty promised by the Constitution. They are cynical abusers of that liberty, whose insidious rhetoric is evidence against free-speech absolutism.

But they are far from the only abusers of that liberty. It is unnecessary to import enemies when there is an ample supply of them among native-born Americans. Well, they are Americans in name because they were born in the United States and (in most cases) haven’t formally renounced their allegiance to the Constitution. But they are its enemies, no matter how cleverly they twist its meaning to support their anti-libertarian creed.

I am speaking of the left, of course. Lest we forget, the real threat to liberty in America is home-grown. The left’s recent hysterical hypocrisy leads me to renounce my naive vow to be a kinder, gentler critic of the left’s subversive words and deeds.

*     *     *

Related posts:
IQ, Political Correctness, and America’s Present Condition
Greed, Conscience, and Big Government
Tolerance
Privilege, Power, and Hypocrisy
Thinkers vs. Doers
Society, Polarization, and Dissent
Another Look at Political Labels
Individualism, Society, and Liberty
Social Justice vs. Liberty
My Platform
Polarization and De Facto Partition
How America Has Changed
The Left and “the People”
Why Conservatives Shouldn’t Compromise
Liberal Nostrums
Politics, Personality, and Hope for a New Era

A Non-Tribute to Fidel

The death of Fidel Castro, which came 90 years too late for the suffering people of Cuba, rates a musical celebration. Here are links to some snappy tunes of the 1920s and 1930s that have “Cuba” or “Cuban” in the title (RealPlayer required):

http://www.redhotjazz.com/songs/whiteman/cubanlovesong.ra

http://www.redhotjazz.com/Songs/hickman/cubanmoon.ra

http://www.redhotjazz.com/songs/lewis/illseeyouincuba.ra

http://www.redhotjazz.com/Songs/Louie/lao/Cubanpete.ra

Music for Black Friday

This will get you going (RealPlayer required): http://www.redhotjazz.com/songs/henderson/varietystomp.ra

H.L. Mencken’s Final Legacy

I used to think of H.L. Mencken as a supremely witty person. My intellectual infatuation began with his Chrestomathy, which I read with relish many years ago.

In recent decades my infatuation with Mencken’s acerbic wit dimmed and died, for the reason given by Fred Siegel in The Revolt Against the Masses: How Liberalism Has Undermined the Middle Class. There, Siegel rightly observes that Mencken “learned from [George Bernard] Shaw how to be narrow-minded in a witty, superior way.”

I was reminded of that passage by Peter Berger’s recent account of Mencken’s role in the marginalization of Evangelicals:

The Evangelical sense of marginalization can be conveniently dated—1925. Until then Evangelical Protestantism was at the core of American culture. Think of the role it played in the anti-slavery and temperance movements. Between 1910 and 1915 a series of four books was published under the title The Fundamentals: A Testimony to the Truth. The term “fundamentalism” derives from this title—today a pejorative term applied to all kinds of religious extremes. The aforementioned books were hardly extreme. They came out of the heart of mainline Protestantism, which today would be called Evangelical. Many of the authors were orthodox Presbyterians, then-centered at Princeton Theological Seminary, which in the 1920s split into an orthodox Calvinist and a “modernist” faculty. What happened in 1925 was a watershed in the history of American Evangelicalism—the so-called “monkey trial.”

Under the influence of a conservative Protestant/Evangelical lobby the state of Tennessee passed a law prohibiting the teaching of evolution in public schools. John Scopes, a school teacher in Dayton, Tennessee, was charged with having violated the law. The trial turned into a celebrity event. William Jennings Bryan, former presidential candidate and prominent Evangelical leader, volunteered to act for the prosecution, and the famous trial lawyer Clarence Darrow defended Scopes. The trial had virtually nothing to do with the offence in question (which was not in doubt). Bryan used it to defend his literal understanding of the Bible, Darrow to make Bryan ridiculous. In this he succeeded, reducing Bryan to petulant babbling. Both men were propagandists for two forms of “fundamentalism,” a primitive view of the Bible against a primitive view of science. Unfortunately for Bryan’s reputation, the brilliant satirist H.L. Mencken covered the trial for the Baltimore Sun. His account was widely reprinted and read. He was contemptuous not only of Bryan but of Christianity and of the local people (he called them “yokels”). The event had an enormous effect on American Evangelicals. It demoralized them, making them feel marginalized in a hostile environment. The result was an Evangelical subculture, turned inward and defensive in its relation to the outside society. Mark Noll sums this up in the title of one of his books, The Closing of the Evangelical Mind. [“Religion, Class, and the Evangelical Vote,” The American Interest, November 23, 2016]

I would have to read and consider Noll’s book before I sign on to Berger’s claim that it was Mencken’s account of the “monkey trial” which demoralized and marginalized Evangelicals. But it didn’t help, and it ushered in 90 years of Mencken-like portrayals of Evangelicals and, more generally, of the mid-to-low-income whites who populate much of what’s referred to sneeringly as flyover country. As Berger observes,

During the 2008 campaign Obama slipped out this description of people in economically deprived small towns: “They get bitter, they cling to guns or religion or antipathy to people who aren’t like them.” And during the just-concluded presidential campaign Clinton described Trump voters as a “basket of deplorables.”

Is it any surprise that Trump — who appealed strongly to the kinds of people disparaged by Mencken, Obama, and Clinton — carried these States?

  • Florida — won by Obama in 2008 and 2012
  • Pennsylvania — the first time for a GOP presidential candidate since 1988
  • Ohio — won by Obama in 2008 and 2012
  • Michigan — the first GOP presidential win since 1988
  • Wisconsin — last won by a GOP candidate in 1984
  • Iowa — won by the Democrat presidential candidate in every election (but one) since 1984.

And how did Trump do it? Mainly by running strongly in the areas outside big cities. It’s true that Clinton outpolled Trump nationally, but so what? It’s the electoral vote that matters, and that’s what the candidates strive to win. Trump won it on the strength of his appeal to the descendants of Mencken’s yokels: Obama’s gun-clingers and Clinton’s deplorables.

A digression about election statistics is in order:

Based on total popular votes cast, 2016 surpasses all previous elections by more than 5 million votes (they’re still being counted in some places). Trump now holds the record for the most votes cast for a GOP presidential candidate. Clinton, however, probably won’t match Obama’s 2012 total, and certainly won’t match his 2008 total (the size of which testifies to the gullibility of a large fraction of the electorate).

Did the big turnout for Gary Johnson (pseudo-libertarian) and the somewhat-better-than 2012 turnout for Jill Stein (socialist crank) take votes that “should have been” Clinton’s? Obviously not. Those who cast their ballots for Johnson and Stein were, by definition, voting against Clinton (and Trump).

But what if Johnson and Stein hadn’t been on the ballot and some of the votes that went to them had gone instead to Clinton and Trump? My analyses of several polls lead me to the conclusion that the presence of Johnson and Stein hurt Trump more than Clinton. Johnson voters would have defected to Trump more often than to Clinton. Stein voters would have defected to Clinton more often than to Trump. On balance, because there were three times as many Johnson voters as Stein voters, Trump (not Clinton) would have done better if the election had been a two-person race. Moreover, Trump improved slightly on recent GOP showings among blacks and Hispanics.

What about Clinton’s popular-vote “victory”? As of today (11/24/16) she’s running ahead of Trump by 2.1 million votes nationally, and by 3.8 million votes in California and 1.5 million votes in New York. That leaves Trump ahead of Clinton by 3.2 million votes in the other 48 States and D.C. I could go on about D.C. and the Northeast in general, but you get the idea. Clinton’s “appeal” (for want of a better word) was narrow; Trump’s was much broader (e.g., winning a higher percentage than Romney did of the two-party vote in 39 States). Arguably, it was broader than that of every Republican presidential candidate since Ronald Reagan won a second term in 1984.
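
A minimal sketch of that margin arithmetic, using the approximate figures quoted above (vote totals in millions, not official tallies):

```python
# Approximate popular-vote margins quoted above, in millions of votes.
clinton_lead_national = 2.1    # Clinton's nationwide lead over Trump
clinton_lead_california = 3.8  # Clinton's lead in California
clinton_lead_new_york = 1.5    # Clinton's lead in New York

# Netting out California and New York leaves the margin in the other
# 48 States and D.C.; a positive number is a Trump lead.
trump_lead_elsewhere = (clinton_lead_california + clinton_lead_new_york) - clinton_lead_national
print(f"Trump's lead outside CA and NY: {trump_lead_elsewhere:.1f} million votes")  # prints 3.2
```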

The election of 2016 probably rang down the final curtain on the New Deal alliance of white Southerners (long-since defected), union members (a dying breed), and other denizens of the mid-to-low-income brackets. The alliance was built on the illusory success of FDR’s New Deal, which prolonged the Great Depression by several years. But FDR, his henchmen, his sycophants in the media and academe, and those tens of millions who were gulled by him didn’t know that. And so the Democrat Party became the majority party for most of the final eight decades of the 20th century, and has enjoyed periods of resurgence in the 21st century.

The modern Democrat Party — the one that arose in the 1950s with Adlai Stevenson at its helm — long held the allegiance of the yokels, even as it was betraying them by buying the votes of blacks and Hispanics and trolling for the votes of marginal groups (queers, Muslims, and “liberal arts” majors) in order to wear the mantle of moral superiority. The yokels were taken for granted. Worse than that, they were openly disdained in Menckenian language.

Trump wisely avoided the Democrat-lite stance of recent GOP candidates — the two Bushes, McCain, and Romney (Dole was simply a ballot-filler) — and went after the modern descendants of the yokels. And in response to that unaccustomed attention, huge numbers of mid-to-low-income voters  — joined by those traditional Republicans who wisely refused to abandon Trump — produced a stunning electoral upset that encompassed most of the country.

As for Mencken, where he is remembered at all, it is mainly as a curmudgeonly quipster with views that wouldn’t pass muster among today’s smart set — though his flirtation with anti-Semitism might commend him to the alt-left.

Here, then, is H.L. Mencken’s lasting legacy: There has arisen a huge bloc of voters whose members are through with being ridiculed and ignored by the pseudo-sophisticates who lead and populate the Democrat Party. It is now up to Trump and the Republican Party to retain the allegiance of that bloc. And if they do not, a third party will arise, and — for the first time in American history — it will be a third party with long-lasting clout. Think of it as a more muscular incarnation of the Tea Party, which was its vanguard.

*     *     *

Related reading:
Mike Lee, “Conservatives Should Embrace Principled Populism,” National Review, November 24, 2016
Yuval Levin, “The New Republican Coalition,” National Review, November 17, 2016
Henry Olsen, “For Trump Voters There Is No Left or Right,” The Washington Post, November 18, 2016
Fred Reed, “Uniquely Talented: Only the Democrats Could Have Lost to Trump,” Fred on Everything, November 24, 2016 (Published after this post, and eerily similar, in keeping with the adage that great minds think alike.)

*     *      *

Related posts:
1963: The Year Zero
Society
How Democracy Works
“Cheerful” Thoughts
How Government Subverts Social Norms
Turning Points
The Twilight’s Last Gleaming?
Winners and Losers
“Fairness”
Pontius Pilate: Modern Politician
Should You Vote for a Third-Party Candidate?
My Platform
How America Has Changed
Civil War?

Words Fail Us

Regular readers of this blog know that I seldom use “us” and “we.” Those words are too often appropriated by writers who say such things as “we the people,” and who characterize as “society” the geopolitical entity known as the United States. There is no such thing as “we the people,” and the United States is about as far from being a “society” as Hillary Clinton is from being president (I hope).

There are nevertheless some things that are so close to being universal that it’s fair to refer to them as characteristics of “us” and “we.” The inadequacy of language is one of those things.

Why is that the case? Try to describe in words a person who is beautiful or handsome to you, and why. It’s hard to do, if not impossible. There’s something about the combination of that person’s features, coloring, expression, etc., that defies anything like a complete description. You may have an image of that person in your mind, and you may know that — to you — the person is beautiful or handsome. But you just can’t capture in words all of those attributes. Why? Because the person’s beauty or handsomeness is a whole thing. It’s everything taken together, including subtle things that nestle in your subconscious mind but don’t readily swim to the surface. One such thing could be the relative size of the person’s upper and lower lips in the context of that particular person’s face; whereas, the same lips on another face might convey plainness or ugliness.

Words are inadequate because they describe one thing at a time — the shape of a nose, the slant of a brow, the prominence of a cheekbone. And the sum of those words isn’t the same thing as your image of the beautiful or handsome person. In fact, the sum of those words may be meaningless to a third party, who can’t begin to translate your words into an image of the person you think of as beautiful or handsome.

Yes, there are (supposedly) general rules about beauty and handsomeness. One of them is the symmetry of a person’s features. But that leaves a lot of ground uncovered. And it focuses on one aspect of a person’s face, rather than all of its aspects, which are what you take into account when you judge a person beautiful or handsome.

And, of course, there are many disagreements about who is beautiful or handsome. It’s a matter of taste. Where does the taste come from? Who knows? I have a theory about why I prefer dark-haired women to women whose hair is blonde, red, or medium-to-light brown: My mother was dark-haired, and photographs of her show that she was beautiful (in my opinion) as a young woman. (Despite that, I never thought of her as beautiful because she was just Mom to me.) You can come up with your own theories — and I expect that no two of them will be the same.

What about facts? Isn’t it possible to put facts into words? Not really, and for much the same reason that it’s impossible to describe beauty, handsomeness, love, hate, or anything “subjective” or “emotional.” Facts, at bottom, are subjective, and sometimes even emotional.

Let’s take a “fact” at random: the color red. We can all agree as to whether something looks red, can’t we? Even putting aside people who are color-blind, the answer is: not necessarily. For one thing, red is defined as having a “predominant light wavelength of roughly 620–740 nanometers.” “Predominant” and “roughly” are weasel-words. Clearly, there’s no definite point on the visible spectrum where light changes from orange to red. If you think there is, just look at this chart and tell me where it happens. So red comes in shades, which various people describe variously: orange-red and reddish-orange, for example.

Not only that, but the visible spectrum

does not … contain all the colors that the human eyes and brain can distinguish. Unsaturated colors such as pink, or purple variations such as magenta, are absent, for example, because they can be made only by a mix of multiple wavelengths.

Thus we have magenta, fuchsia, blood-red, scarlet, crimson, vermillion, maroon, ruby, and even the many shades of pink — some are blends, some are represented by narrow segments of the light spectrum. Do all of those kinds of red have a clear definition, or are they defined by the beholder? Well, some may be easy to distinguish from others, but the distinctions between them remain arbitrary. Where does scarlet or magenta become vermillion?

In any event, how do you describe a color (whatever you call it) in words? Referring to its wavelength or composition in terms of other colors or its relation to other colors is no help. Wavelength really is meaningless unless you can show an image of the visible spectrum to someone who perceives colors exactly as you do, and point to red — or what you call red. In doing so, you will have pointed to a range of colors, not to red, because there is no red red and no definite boundary between orange and red (or yellow and orange, or green and yellow, etc.).

Further, you won’t have described red in words. And you can’t — without descending into tautologies — because red (as you visualize it) is what’s in your mind. It’s not an objective fact.

My point is that description isn’t the same as definition. You can define red (however vaguely) as a color which has a predominant light wavelength of roughly 620–740 nanometers. But you can’t describe it. Why? Because red is just a concept.

A concept isn’t a real thing that you can see, hear, taste, touch, smell, eat, drink from, drive, etc. How do you describe a concept? You define it in terms of other concepts.

Moving on from color, I’ll take gross domestic product (GDP) as another example. GDP is an estimate of the dollar value of the output of finished goods and services produced in the United States during a particular period of time. Wow, what a string of concepts. And every one of them must be defined, in turn. Some of them can be illustrated by referring to real things; a haircut is a kind of service, for example. But it’s impossible to describe GDP and its underlying concepts because they’re all abstractions, or representations of indescribable conglomerations of real things.

All right, you say, it’s impossible to describe concepts, but surely it’s possible to describe things. People do it all the time. See that ugly, dark-haired, tall guy standing over there? I’ve already dealt with ugly, indirectly, in my discussion of beauty or handsomeness. Ugliness, like beauty, is just a concept, the idea of which differs from person to person. What about tall? It’s a relative term, isn’t it? You can measure a person’s height, but whether or not you consider him tall depends on where and when you live and the range of heights you’re used to encountering. A person who seems tall to you may not seem tall to your taller brother. Dark-haired will evoke different pictures in different minds — ranging from jet-black to dark brown and even auburn.

But if you point to the guy you call ugly, dark-haired, tall guy, I may agree with you that he’s ugly, dark-haired, and tall. Or I may disagree with you, but gain some understanding of what you mean by ugly, dark-haired, and tall.

And therein lies the tale of how people are able to communicate with each other, despite their inability to describe concepts or to define them without going in endless circles and chains of definitions. First, human beings possess central nervous systems and sensory organs that are much alike, though within a wide range of variations (e.g., many people must wear glasses with an almost-infinite variety of corrections, hearing aids are programmed to an almost-infinite variety of settings, sensitivity to touch varies widely, reaction times vary widely). Nevertheless, most people seem to perceive the same color when light with a wavelength of, say, 700 nanometers strikes the retina. The same goes for sounds, tastes, smells, etc., as various external stimuli are detected by various receptors. Those perceptions then acquire agreed definitions through acculturation. For example, an object that reflects light with a wavelength of 700 nanometers becomes known as red; a sound with a certain frequency becomes known as middle C; a certain taste is characterized as bitter, sweet, or sour.

Objects acquire names in the same way: for example: a square piece of cloth that’s wrapped around a person’s head or neck becomes a bandana, and a longish, curved, yellow-skinned fruit with a soft interior becomes a banana. And so I can visualize a woman wearing a red bandana and eating a banana.

There is less agreement about “soft” concepts (e.g., beauty) because they’re based not just on “hard” facts (e.g., the wavelength of light), but on judgments that vary from person to person. A face that’s cute to one person may be beautiful to another person, but there’s no rigorous division between cute and beautiful. Both convey a sense of physical attractiveness that many persons will agree upon, but which won’t yield a consistent image. A very large percentage of Caucasian males (of a certain age) would agree that Ingrid Bergman and Hedy Lamarr were beautiful, but there’s nothing like a consensus about Katharine Hepburn (perhaps striking but not beautiful) or Jean Arthur (perhaps cute but not beautiful).

Other concepts, like GDP, acquire seemingly rigorous definitions, but they’re based on strings of seemingly rigorous definitions, the underpinnings of which may be as squishy as the flesh of a banana (e.g., the omission of housework and the effects of pollution from GDP). So if you’re familiar with the definitions of the definitions, you have a good grasp of the concepts. If you aren’t, you don’t. But if you have a good grasp of the numbers underlying the definitions of definitions, you know that the top-level concept is actually vague and hard to pin down. The numbers not only omit important things but are only estimates, and often are estimates of disparate things that are grouped because they’re judged to be “alike enough.”

Acculturation in the form of education is a way of getting people to grasp concepts that have widely agreed definitions. Mathematics, for example, is nothing but concepts, all the way down. And to venture beyond arithmetic is to venture into a world of ideas that’s held together by definitions that rest upon definitions and end in nothing real. Unless you’re one of those people who insists that mathematics is the “real” stuff of which the universe is made, which is nothing more than a leap of faith. (Math, by the way, is nothing but words in shorthand.)

And so, human beings are able to communicate and (usually) understand each other because of their physical and cultural similarities, which include education in various and sundry subjects. Those similarities also enable people of different cultures and languages to translate their concepts (and the words that define them) from one language to another.

Those similarities also enable people to “feel” what another person is feeling when he says that he’s happy, sad, drunk, or whatever. There’s the physical similarity — the physiological changes that usually occur when a person becomes what he thinks of as happy, etc. And there’s acculturation — the acquired knowledge that people feel happy (or whatever) for certain reasons (e.g., a marriage, the birth of a child) and display their happiness in certain ways (e.g., a broad smile, a “jump for joy”).

A good novelist, in my view, is one who knows how to use words that evoke vivid mental images of the thoughts, feelings, and actions of characters, and the settings in which the characters act out the plot of a novel. A novelist who can do that and also tell a good story — one with an engaging or suspenseful plot — is thereby a great novelist. I submit that a good or great novelist (an admittedly vague concept) is worth almost any number of psychologists and psychiatrists, whose vision of the human mind is too rigid to grasp the subtleties that give it life.

But good and great novelists are thin on the ground. That is to say, there are relatively few persons among us who are able to grasp and communicate effectively a broad range of the kinds of thoughts and feelings that lurk in the minds of human beings. And even those few have their blind spots. Most of them, it seems to me, are persons of the left, and are therefore unable to empathize with the thoughts and feelings of the working-class people who seethe with resentment at the fawning over and favoritism shown toward blacks, illegal immigrants, gender-confused persons, and other so-called victims. In fact, those few otherwise perceptive and articulate writers make it a point to write off the working-class people as racists, bigots, and ignoramuses.

There are exceptions, of course. A contemporary exception is Tom Wolfe. But his approach to class issues is top-down rather than bottom-up.

Which just underscores my point that we human beings find it hard to formulate and organize our own thoughts and feelings about the world around us and the other people in it. And we’re practically tongue-tied when it comes to expressing those thoughts and feelings to others. We just don’t know ourselves well enough to explain ourselves to others. And our feelings — such as our political preferences, which probably are based more on temperament than on facts — get in the way.

Love, to take a leading example, is a feeling that just is. The why and wherefore of it is beyond our ability to understand and explain. Some of the feelings attached to it can be expressed in prose, poetry, and song, but those are superficial expressions that don’t capture the depth of love and why it exists.

The world of science is of no real help. Even if feelings of love could be expressed in scientific terms — the action of hormone A on brain region X — that would be worse than useless. It would reduce love to chemistry, when we know that there’s more to it than that. Why, for example, is hormone A activated by the presence or thought of person M but not person N, even when they’re identical twins?

The world of science is of no real help about “getting to the bottom of things.” Science is an infinite regress. S is explained in terms of T, which is explained in terms of U, which is explained in terms of V, and on and on. For example, there was the “indivisible” atom, which turned out to consist of electrons, protons, and neutrons. But electrons have turned out to be more complicated than originally believed, and protons and neutrons have been found to be made of smaller particles with distinctive characteristics. So it’s reasonable to ask if all of the particles now considered elementary are really indivisible. Perhaps there are other more-elementary particles yet to be hypothesized and discovered. And even if all of the truly elementary particles are discovered, scientists will still be unable to explain what those particles really “are.”

Words fail us.

*      *      *

Related reading:
Modeling Is Not Science
Physics Envy
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
We, the Children of the Enlightenment
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Mysteries: Sacred and Profane
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
“Settled Science” and the Monty Hall Problem
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Taleb’s Ruinous Rhetoric

How America Has Changed

I believe that the morals and the mores of a populace change observably over time. That’s certainly true of Americans, even if it isn’t true of, say, many tribal peoples of distant lands. This post takes a look at how American morals and mores have changed, generally for the worse, in my lifetime.

I am an American of humble birth, with a lower-middle-class to upper-lower-class upbringing in the Upper Midwest. I’m a graduate of a huge, tax-funded university more known for its sports teams than its scholarly attainments. And I’m a person who was never fully enveloped by the bubble of elitism, even though I spent forty years living among and working with highly educated and affluent elites. (See my “About” page for more of the gory details.)

And what do I see when I look out at the America of today? It’s an America where so many collegians can’t bear to hear or read ideas unpalatable to their tender minds; where those same collegians require days of mourning to recover from the unexpected electoral victory of Donald J. Trump; where liberal elites generally view Trump’s victory as a sign that ignorant, uneducated, racist whites have conquered the country; and where many of those same liberals who had promised to leave the U.S.A. if Trump were elected are, unfortunately for the U.S.A., reneging on their promises.

What I see are a lot of people who should be transported back to the lower-middle-class and upper-lower-class environs of the Upper Midwest of the 1940s and 1950s, where they might just learn how to face the realities of life.

POLITICS

Politics wasn’t a preoccupation in the bad old days because relatively little was expected (or wanted) from government. There was Social Security, State unemployment benefits, and workers’ comp — all of which relied heavily on taxes and “contributions” — and that was about it. I guess there were some welfare payments for the truly indigent, but there weren’t extended unemployment benefits, State and federal subsidies to keep students in college and out of the work force, low-income tax credits, low-income housing subsidies, etc., etc., etc. But those are all loose change compared with the real budget-busters: Medicare, Medicaid, and their vast expansion under Obamacare.

And despite a much smaller government and a few recessions, the rate of economic growth was higher then than it is today.

Moral: Less government means less political strife — and greater prosperity, to boot.

RELIGION

Almost everyone belonged to one, but few people made a big deal of it. Now, it’s de rigueur to belong to the Church of Redistributionism, Alarmism & Pseudo-science (CRAP) — and a big deal if someone doesn’t belong. Religion hasn’t withered away; it’s just taken a new and more virulent form.

It used to be accepted that government wasn’t in the business of establishing or suppressing religion — and only a few woolly-haired progenitors of political correctness thought that a Christmas display on government property was an establishment of religion. Now, government is expected to force the doctrines of CRAP down everyone’s throats. That’s “progress” for you.

What’s worse is that the “progressives” who are doing the shoving don’t understand the resentment that it causes, some of which bubbled to the surface on November 8.

BULLYING (OR, THE RISK OF LIVING)

Bullying was common and accepted as a fact of life. The smart, skinny kid who wore glasses (that was me) could expect taunts and shoving from the bigger, dumber kids. And he might sometimes fight back, successfully or not, or he might devise avoidance tactics and thereby learn valuable lessons about getting through life despite its unpleasant aspects. But unless the bullying became downright persistent and turned into injurious violence, he didn’t run to Mama or the principal. And if he did, Mama or the principal would actually do something about the bullying and not cringe in fear of offending the bully or his parents because the bully was a “disadvantaged” (i.e., stupid) lout.

Bullying, in other words, was nothing new and nothing worth mounting a national campaign against. People dealt with it personally, locally, and usually successfully. And bully-ees (as I was occasionally) learned valuable lessons about (a) how to cope with the stuff life throws at you and (b) how to get along in life without having a government program to fall back on.

Life is a risk. People used to understand that. Too many of them no longer do. And worse, they expect others to carry the burden of risk for them. I’ve got enough problems of my own; I don’t need yours as well.

CLIQUES

People of similar backgrounds (religion, neighborhood, income) and tastes (sports, cars, music) tend to hang out together. True then, true now, true forever — though now (and perhaps forever) the biggest clique seems to be defined by adherence to CRAP (or lack thereof).

Aside from cliques consisting of bullies, cliques used to leave each other alone. (I’m talking about cliques, not gangs, which were less prevalent and less violent then than now.) But the CRAP clique won’t leave anyone alone, and uses government to bully non-members.

Irony: The very people who complain loudest about bullying are themselves bullies. But they don’t have the guts to do it personally. Instead, they use government — the biggest bully of all.

SEXISM

There was lots of it, but it was confined mainly to members of the male preference. (I’m kidding about “preference”; males were just males and didn’t think of themselves as having a preference, orientation, or birth-assignment. The same went for females.) And it was based on evolved norms about the roles and abilities of men and women — norms that were still evolving and would have evolved to something like those now prevalent, but with less acrimony, had the forces of forced change not evolved into CRAP.

Women probably comprised half the student body at Big-Ten U where I was a collegian. That was a big change from the quaint days of the 1920s (only thirty years earlier), when female students were still such a rarity (outside female-only colleges) that they were disparagingly called co-eds. Nationally, the male-female ratio hit 50-50 in the late 1970s and continues to shift in favor of women.

There’s plenty of evidence that women are different from men, in the brain and non-genital parts of the body, I mean. So disparities in emotional balance, abstract thinking, mechanical aptitude, size, running speed, and strength — and thus in career choices and accomplishments — will surprise and offend no one who isn’t an adherent of CRAP.

The biggest sexists of all are feminazis and the male eunuchs who worship at their feet. Together, they are turning the armed forces into day-care centers and police forces into enforcers of political correctness — and both into under-muscled remnants of institutions that were once respected and feared by wrong-doers.

RACISM

There was plenty of that, too, and there still is. The funny thing is that the adherents of CRAP expect there to be a lot less of it. Further, they expect to reduce its prevalence among whites by constantly reminding them that they’re racist trash. And what does that get you? More votes for Donald Trump, who — whatever his faults — doesn’t talk like that.

Racism, like sexism, would be a lot less prevalent if the CRAPers could leave well enough alone and let people figure out how to live in harmony despite their differences.

Living in harmony doesn’t mean being best buddies with the persons of every skin tone and sexual preference, as TV commercials and shows are wont to suggest. People are inherently tribal, and the biggest tribes of all are races, which really exist, all CRAP aside. Racial differences, like gender differences, underlie real differences in intelligence and, therefore, in proneness to violence. They also betoken deep-seated cultural differences that can’t be overlooked, unless you happen to have a weird preference for rap music.

It used to be that people understood such things because they saw life in the raw. But the CRAPers — who are the true exemplars of cosseted white privilege — haven’t a clue. In their worldview, where the mind is a blank slate and behavior is nothing more than the residue of acculturation, racism is an incomprehensible phenomenon, something that simply shouldn’t exist. Unless it’s the racism of blacks toward whites, of course.

COLLEGE EDUCATION

It was for the brightest — those who were most likely to use it to advance science, technology, the world of commerce, and so on. It wasn’t for everyone. In fact, when I went to college in the late 1950s and early 1960s, there were already too many dumb students there.

The push to get more and more dumb people into college is rationalized, in large part, by the correlation between income and level of education. But level of education used to be a sign of drive and intelligence, which are the very things that strongly determine one’s income. Now, level of education is too often a sign that an unqualified person has been pushed into college.

Pushing more and more people into college, which necessarily means taxing productive persons to subsidize the educations of dumber and dumber people, accomplishes several things, all of them bad:

  • Workers who could be doing something remunerative that doesn't demand high intelligence (e.g., plumbing) instead end up doing the kind of work they could have done without going to college (e.g., waiting on tables and flipping burgers).
  • In doing so, they drive down the wages of people who didn't go to college.
  • And the tax dollars wasted on subsidizing their useless college educations could have gone instead to investments in business creation and expansion that would have created more jobs and higher incomes for all.

PROTESTS

These began in earnest in the late 1950s. Their objectives in those days — usually the end of legal segregation and voter suppression — were worthy ones.

Then came the hairy, unkempt, undignified, and sometimes violent protests of the late 1960s. These set the tone for most of what followed. Nothing is too trivial to protest nowadays. To protest everything is to protest nothing.

What protesting usually accomplishes now is inconvenience to people who are simply trying to get from point A to point B, the diversion of police from real police work, the diversion of tax dollars to trash pickup, and filler for TV newscasts.

Oh, yes, it also fills protestors with a feeling of smug superiority. And if they’re of the right color (dark) or the right political persuasion (left), they’re allowed to wreak some havoc, which gives them a perverted sense of accomplishment. And radical-chic CRAPers love it.

Bring back the riot act.

As for those performers who can’t resist the urge to display their CRAP credentials, and who therefore insist on conveying their puerile (and usually hypocritical) views about social, racial, environmental, and other trendy kinds of “justice,” I’m with Laura Ingraham.

*     *     *

Related reading:
Especially 1963: The Year Zero (and the articles and posts linked to therein), and also
What Is the Point of Academic Freedom?
How to Deal with Left-Wing Academic Blather
Here We Go Again
It’s Not Anti-Intellectualism, Stupid
The Case Against Campus Speech Codes
Apropos Academic Freedom and Western Values
Academic Bias
Intellectuals and Capitalism
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
Superiority
Whiners
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Khizr Khan’s Muddled Logic
My Platform
Polarization and De Facto Partition (many more related posts are listed at the end of this one)

The Trump Tape

You know the one I mean, and you know what Trump says on it. So I won’t link to it or quote it. What I will do is ask (and try to answer) the crucial question: What happens now?

Specifically, is Trump a goner? Well, there’s evidence that he was already a goner. So what happens now is that a lot of people who were planning to vote for Trump, or who might have voted for him, will switch to Clinton, Johnson, Stein, or “other” — or they simply won’t bother to vote. As a result, there’ll be a lot fewer down-ballot votes for Republicans in other races. Perhaps not enough to give Democrats control of the House, but perhaps enough to give Democrats control of the Senate.

And therein lies the really bad news. If the Dems can muster 50 senators, they will control the Senate, because the VP will be a Democrat. And even if the election ends with, say, 52 Republicans in the Senate, it won't be hard for the Democrats to entice two RINOs to move across the aisle, leaving a 50-50 split for the Democratic VP to break.

You know what will happen to the Supreme Court with Hillary in the White House and her party in control of the Senate. That’s the really bad news.

Would it matter if Trump were to withdraw from the race? As I understand the States' laws about putting names on ballots, Trump's name would remain at the top of the GOP ticket. But the party could heavily advertise the idea that the electors from each State nominally won by Trump would instead vote for Pence. (The electors couldn't be forced to vote for Pence, but as party loyalists, I expect that most of them would.)

Would that stratagem prevent a lot of voters from switching their votes away from Trump or sitting it out? I doubt it. It's just too damn sophisticated and uncertain. And a lot of voters simply won't want to associate themselves in any way with Trump. It's a psychological thing. And it will weigh heavily, even in the secrecy of the voting booth.

Bottom line: Trump is toast. Hillary wins (unless there’s a bigger counter-scandal in the wings). Democrats have a good shot at taking control of the Senate. The Supreme Court may then continue to violate the Constitution and march Americans more rapidly down the road to serfdom.