Suicidal Despair and the “War on Whites”

This entry is prompted by a recent spate of posts and articles about the rising mortality rate among non-Hispanic whites without a college degree (hereinafter working-class whites, for convenience). Thomas Lifson characterizes the trend as “a spiritual crisis”, after saying this:

White males, in large numbers, are simply losing their will to live, and as a result, they are dying so prematurely and in such large numbers that a startling demographic gap has emerged. [“Stunning Evidence that the Left Has Won its War on White Males“, American Thinker, March 26, 2017]

Later in the piece, Lifson gets to the “war” on white males:

For at least four decades, white males have been under continuous assault as bearers of “white privilege” and beneficiaries of sexism. Special preferences and privileges have been granted to other groups, but that is the least of it.  More importantly, the very basis of the psychological self-worth of white males have been under attack.  White males are frequently instructed by authority figures in education and the media that they are responsible for most of the evils of the modern world, that the achievements of Euro-American civilization are a net loss for humanity, stained by exploitation, racism, unfairness, and every other collective evil the progressive mind can manufacture.

Some white males are relatively unscathed by the psychological warfare, but others are more vulnerable. Those who have educational, financial, or employment achievements that have rewarded their efforts may be able to keep going as productive members of society, their self-esteem resting on tangible fruits of their work and social position. But other white males, especially those who work with their hands and have been seeing job opportunities contract or disappear, have been losing the basis for a robust sense of self-worth as their job opportunities disappear.

We now have statistical evidence that political correctness kills.

We have no such thing. The recent trend isn’t yet statistically significant. But it is real, and government is the underlying cause.

To begin at the beginning, the source of the spate of articles about the rising mortality rate of working-class whites is Anne Case and Angus Deaton’s “Mortality and Morbidity in the 21st Century” (Brookings Institution, Brookings Papers on Economic Activity (conference edition), March 17, 2017). Three of the paper’s graphs set the scene. This one shows mortality trends in the United States:

The next figure indicates that the phenomenon isn’t unique to non-Hispanic whites in the age 50-54 bracket:

But the trend among American whites defies the trends in several other Western nations:

Whence the perverse trend? It seems due mainly to suicidal despair:

How do these recent trends stack up against the long view? I couldn’t find a long time series for drug, alcohol, and suicide mortality. But I did find a study by Feijun Luo et al. that traces suicide rates from just before the onset of the Great Depression to just before the onset of the Great Recession — “Impact of Business Cycles on US Suicide Rates, 1928–2007” (American Journal of Public Health, June 2011). Here are two key graphs from the report:

The graphs don’t reproduce well, so the following quotations will be of help:

The overall suicide rate fluctuated from 10.4 to 22.1 over the 1928–2007 period. It peaked in 1932, the last full year of the Great Depression, and bottomed in 2000. The overall suicide rate decreased from 18.0 in 1928 to 11.2 in 2007. However, most of the decline occurred before 1945; after that it fluctuated until the mid-1950s, and then it gradually moved up until the late 1970s. The overall suicide rate resumed its downward trend from the mid-1980s to 2000, followed by a trend reversal in the new millennium.

Figure 1a [top] shows that the overall suicide rate generally increased in recessions, especially in severe recessions that lasted longer than 1 year. The largest increase in the overall suicide rate occurred during the Great Depression (1929–1933), when it surged from 18.0 in 1928 to 22.1 (the all-time high) in 1932, the last full year of the Great Depression. [The Great Depression actually lasted until 1940: TEA.] This increase of 22.8% was the highest recorded for any 4-year interval during the study period. The overall suicide rate also rose during 3 other severe recessions: [the recession inside the Great Depression] (1937–1938), the oil crisis (1973–1975), and the double-dip recession (1980–1982). Not only did the overall suicide rate generally rise during recessions; it also mostly fell during expansions…. However, the overall suicide rate did not fall during the 1960s (i.e., 1961–1969), a notable phenomenon that will be explained by the different trends of age-specific suicide rates.

The age-specific suicide rates displayed more variations than did the overall suicide rate, and the trends of those age-specific suicide rates were largely different. As shown in Figure 1b [bottom], from 1928–2007, the suicide rates of the 2 elderly groups (65–74 years and 75 years and older) and the oldest middle-age group (55–64 years) experienced the most remarkable decline. The suicide rates of those groups declined in both pre- and postwar periods. The suicide rates of the other 2 middle-aged groups (45–54 years and 35–44 years) also declined from 1928–2007, which we attributed to the decrease during the war period more than offsetting the increase in the postwar period. In contrast with the declining suicide rates of the 2 elderly and 3 middle-age groups, the suicide rates of the 2 young groups (15–24 years and 25–34 years) increased or just marginally decreased from 1928–2007. The 2 young groups experienced a marked increase in suicide rates in the postwar period. The suicide rate of the youngest group (5–14 years) also increased from 1928–2007. However, because of its small magnitude, we do not include this increase in the subsequent discussion.

We noted that the suicide rate of the group aged 65–74 years, the highest of all age groups until 1936, declined the most from 1928 to 2007. That rate started at 41.2 in 1928 and dropped to 12.6 in 2007, peaking at 52.3 in 1932 and bottoming at 12.3 in 2004. By contrast, the suicide rate of the group aged 15–24 years increased from 6.7 in 1928 to 9.7 in 2007. That rate peaked at 13.7 in 1994 and bottomed at 3.8 in 1944, and it generally trended upward from the late 1950s to the mid-1990s. The suicide rate differential between the group aged 65–74 years and the group aged 15–24 years generally decreased until 1994, from 34.5 in 1928 to 1.6 in 1994.

All age groups experienced a substantial increase in their suicide rates during the Great Depression, and most groups (35–44 years, 45–54 years, 55–64 years, 65–74 years, and 75 years and older) set record-high suicide rates in 1932; but they reacted differently to many other recessions, including severe recessions such as the [1937-1938 recession] and the oil crisis. Their reactions were different during expansions as well, most notably in the 1960s, when the suicide rates of the 3 oldest groups (75 years and older, 65–74 years, and 55–64 years) declined moderately, and those of the 3 youngest groups (15–24 years, 25–34 years, and 35–44 years) rose noticeably….

[T]he overall suicide rate and the suicide rate of the group aged 45–54 years were associated with business cycles at the significance level of 1%; the suicide rates of the groups aged 25–34 years, 35–44 years, and 55–64 years were associated with business cycles at the significance level of 5%; and the suicide rates of the groups aged 15–24 years, 65–74 years, and 75 years and older were associated with business cycles at nonsignificant levels. To summarize, the overall suicide rate was significantly countercyclical; the suicide rates of the groups aged 25–34 years, 35–44 years, 45–54 years, and 55–64 years were significantly countercyclical; and the suicide rates of the groups aged 15–24 years, 65–74 years, and 75 years and older were not significantly countercyclical.

The following graph, obtained from the website of the American Foundation for Suicide Prevention, extends the age-related analysis to 2015:

And this graph, from the same source, shows that the rising suicide rate is concentrated among whites and American Indians:

Though this graph lumps together all age groups, the opposing trends for blacks and whites suggest strongly that working-class whites across age groups have become much more prone to suicidal despair in the past 20 years. Moreover, the despair has persisted through periods of economic decline and economic growth (slow as it has been).

Why? Case and Deaton opine:

[S]ome of the most convincing discussions of what has happened to working class whites emphasize a long-term process of decline, or of cumulative deprivation, rooted in the steady deterioration in job opportunities for people with low education…. This process … worsened over time, and caused, or at least was accompanied by, other changes in society that made life more difficult for less-educated people, not only in their employment opportunities, but in their marriages, and in the lives of and prospects for their children. Traditional structures of social and economic support slowly weakened; no longer was it possible for a man to follow his father and grandfather into a manufacturing job, or to join the union. Marriage was no longer the only way to form intimate partnerships, or to rear children. People moved away from the security of legacy religions or the churches of their parents and grandparents, towards churches that emphasized seeking an identity, or replaced membership with the search for connections…. These changes left people with less structure when they came to choose their careers, their religion, and the nature of their family lives. When such choices succeed, they are liberating; when they fail, the individual can only hold him or herself responsible….

As technical change and globalization reduced the quantity and quality of opportunity in the labor market for those with no more than a high school degree, a number of things happened that have been documented in an extensive literature. Real wages of those with only a high school degree declined, and the college premium increased….

Lower wages made men less marriageable, marriage rates declined, and there was a marked rise in cohabitation, then much less frowned upon than had been the case a generation before…. [B]eyond the cohort of 1940, men and women with less than a BA degree are less likely to have ever been married at any given age. Again, this is not occurring among those with a four-year degree. Unmarried cohabiting partnerships are less stable than marriages. Moreover, among those who do marry, those without a college degree are also much more likely to divorce than are those with a degree….

These accounts share much, though not all, with Murray’s … account [in Coming Apart] of decline among whites in his fictional “Fishtown.” Murray argues that traditional American virtues are being lost among working-class white Americans, especially the virtue of industriousness. The withdrawal of men from the labor force reflects this loss of industriousness; young men in particular prefer leisure—which is now more valuable because of video games … —though much of the withdrawal of young men is for education…. The loss of virtue is supported and financed by government payments, particularly disability payments….

In our account here, we emphasize the labor market, globalization and technical change as the fundamental forces, and put less focus on any loss of virtue, though we certainly accept that the latter could be a consequence of the former. Virtue is easier to maintain in a supportive environment. Yet there is surely general agreement on the roles played by changing beliefs and attitudes, particularly the acceptance of cohabitation, and of the rearing of children in unstable cohabiting unions.

These slow-acting and cumulative social forces seem to us to be plausible candidates to explain rising morbidity and mortality, particularly their role in suicide, and with the other deaths of despair, which share much with suicides. As we have emphasized elsewhere, … purely economic accounts of suicide have consistently failed to explain the phenomenon. If they work at all, they work through their effects on family, on spiritual fulfillment, and on how people perceive meaning and satisfaction in their lives in a way that goes beyond material success. At the same time, cumulative distress, and the failure of life to turn out as expected is consistent with people compensating through other risky behaviors such as abuse of alcohol, overeating, or drug use that predispose towards the outcomes we have been discussing….

What our data show is that the patterns of mortality and morbidity for white non-Hispanics without a college degree move together over lifetimes and birth cohorts, and that they move in tandem with other social dysfunctions, including the decline of marriage, social isolation, and detachment from the labor force…. Whether these factors (or factor) are “the cause” is more a matter of semantics than statistics. The factor could certainly represent some force that we have not identified, or we could try to make a case that declining real wages is more fundamental than other forces. Better, we can see globalization and automation as the underlying deep causes. Ultimately, we see our story as about the collapse of the white, high school educated, working class after its heyday in the early 1970s, and the pathologies that accompany that decline. [Op. cit., pp. 29-38]

The seemingly rigorous and well-reasoned analyses reported by Case-Deaton and Luo et al. are seriously flawed, for these reasons:

  • Case and Deaton’s focus on events since 1990 is analogous to a search for lost keys under a street lamp because that’s where the light is. As shown in the graphs taken from Luo et al., suicide rates have at various times risen (and dropped) as sharply as they have in recent years.
  • Luo et al. address a much longer time span but miss an important turning point, which came during World War II. Because of that, they resort to a strained, non-parametric analysis of the relationship between the suicide rate and business cycles.
  • It is misleading to focus on age groups, as opposed to birth-year cohorts. For example, persons in the 50-54 age group in 1990 were born between 1936 and 1940, but in 2010 persons in the 50-54 age group were born between 1956 and 1960. The groups, in other words, don’t represent the same cohort. The only meaningful suicide rate for a span of more than a few years is the rate for the entire population. (The cohort arithmetic is sketched just below this list.)
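Here is a minimal sketch of that cohort arithmetic, in Python (the function name and the bracket values are purely illustrative):

```python
def birth_cohort(survey_year: int, age_low: int, age_high: int) -> tuple[int, int]:
    """Return the (earliest, latest) birth years of people who fall in the
    age bracket [age_low, age_high] during survey_year."""
    return survey_year - age_high, survey_year - age_low

# The 50-54 bracket captures different cohorts in different survey years:
print(birth_cohort(1990, 50, 54))  # (1936, 1940)
print(birth_cohort(2010, 50, 54))  # (1956, 1960)
```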

I took a fresh look at the overall suicide rate and its relationship to the state of the economy. First, I extended the overall, age-adjusted suicide rate for 1928-2007 provided by Luo et al. in Supplementary Table B (purchase required) by splicing it with a series for 1999-2014 from the Centers for Disease Control and Prevention, National Center for Health Statistics, Data Brief 241. I then drew on the database at Measuring Worth to derive year-over-year changes in real GDP for 1928-2014. Here’s an overview of the two time series:

The suicide rate doesn’t drop below 15 until 1942. From 1943 through 2014 it vacillates in a narrow range, between 10.4 (2000) and 13.6 (1975). Despite the rise since 2000, the overall rate still hasn’t returned to the 1975 peak. And only in recent years has the overall rate returned to levels that were reached often between 1943 and 1994.

Moreover, the suicide rate from 1928 through 1942 is strongly correlated with changes in real GDP. But the rate from 1943 through 2014 is not:

Something happened during the war years to loosen the connection between the state of the economy and the suicide rate. That something was the end of the pervasive despair that the Great Depression inflicted on huge numbers of Americans. It’s as if America had a mood transplant, one which has lasted for more than 70 years. The recent uptick in the rate of suicide (and the accompanying rise in slow-motion suicide) is sad because it represents wasted lives. But it is within one standard deviation of the 1943-2014 average of 12.2 suicides per 100,000 persons:

(It seems to me that researchers ought to be asking why the rate was so low for about 20 years, beginning in the 1980s.)
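For concreteness, here is a minimal sketch (in Python) of the computations behind the two claims above — the before/after-1942 correlations and the standard-deviation check. The few rate values shown are ones quoted earlier in this post; the remaining numbers, including the GDP-growth figures, are dummy placeholders for the full spliced series, not the actual data:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))

# rate[year]: age-adjusted suicides per 100,000 (dummy stand-ins here);
# gdp_growth[year]: year-over-year change in real GDP (also dummies).
rate = {1928: 18.0, 1932: 22.1, 1942: 15.0, 1975: 13.6, 2000: 10.4, 2014: 13.0}
gdp_growth = {1928: 0.01, 1932: -0.13, 1942: 0.18, 1975: -0.002, 2000: 0.04, 2014: 0.025}

early = [y for y in rate if y <= 1942]  # Depression and war years
late = [y for y in rate if y > 1942]    # postwar era

r_early = pearson_r([rate[y] for y in early], [gdp_growth[y] for y in early])
r_late = pearson_r([rate[y] for y in late], [gdp_growth[y] for y in late])

# Is the recent uptick within one standard deviation of the postwar mean?
postwar = [rate[y] for y in late]
avg, sd = statistics.mean(postwar), statistics.stdev(postwar)
print(round(r_early, 2), round(r_late, 2), abs(rate[2014] - avg) <= sd)
```

Run on the full 1928-2014 series, the first correlation is strong and the second is weak, which is the point made above.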

Perhaps the recent uptick among working-class whites can be blamed, in part, on loss of “virtue”, technological change, and globalization — as Case and Deaton claim. But they fail to notice the bigger elephant in the room: the destructive role of government.

Technological change and globalization simply reinforce the disemployment effects of the long-term decline in the rate of economic growth. I’ve addressed the decline many times, most recently in “Presidents and Economic Growth“. The decline has three main causes, all attributable to government action, which I’ve assessed in “The Rahn Curve Revisited“: the rise in government spending as a fraction of GDP, the rise in the number of regulations on the books, and the (unsurprising) effect of those variables on private business investment. The only silver lining has been a decline in the rate of inflation, which is to be expected given the general slow-down of economic growth. Many jobs may have disappeared because of technological change and many jobs may have been “shipped overseas”, but there would be a lot more jobs if government had kept its hands off the economy and out of Americans’ wallets.

Moreover, the willingness of Americans — especially low-skill Americans — to seek employment has been eroded by various government programs: Aid to Families with Dependent Children (a boon to unwed mothers and a bane to family stability), food stamps, disability benefits, the expansion of Medicaid, subsidized health care for “children” under the age of 26, and various programs that encourage women to work outside the home, thus fostering male unemployment.

Economist Edward Glaeser puts it this way:

The rise of joblessness—especially among men—is the great American domestic crisis of the twenty-first century. It is a crisis of spirit more than of resources. The jobless are far more prone to self-destructive behavior than are the working poor. Proposed solutions that focus solely on providing material benefits are a false path. Well-meaning social policies—from longer unemployment insurance to more generous disability diagnoses to higher minimum wages—have only worsened the problem; the futility of joblessness won’t be solved with a welfare check….

The New Deal saw the rise of public programs that worked against employment. Wage controls under the National Recovery Act made it difficult for wages to fall enough to equilibrate the labor market. The Wagner Act strengthened the hand of unions, which kept pay up and employment down. Relief efforts for the unemployed, including federal make-work jobs, eased the pressure on the jobless to find private-sector work….

… In 2011, more than one in five prime-age men were out of work, a figure comparable with the Great Depression. But while employment came back after the Depression, it hasn’t today. The unemployment rate may be low, but many people have quit the labor force entirely and don’t show up in that number. As of December 2016, 15.2 percent of prime-age men were jobless—a figure worse than at any point between World War II and the Great Recession, except during the depths of the early 1980s recession….

Joblessness is disproportionately a condition of the poorly educated. While 72 percent of college graduates over age 25 have jobs, only 41 percent of high school dropouts are working. The employment-rate gap between the most and least educated groups has widened from about 6 percent in 1977 to almost 15 percent today….

Both Franklin Roosevelt and Lyndon Johnson aggressively advanced a stronger safety net for American workers, and other administrations largely supported these efforts. The New Deal gave us Social Security and unemployment insurance, which were expanded in the 1950s. National disability insurance debuted in 1956 and was made far more accessible to people with hard-to-diagnose conditions, like back pain, in 1984. The War on Poverty delivered Medicaid and food stamps. Richard Nixon gave us housing vouchers. During the Great Recession, the federal government temporarily doubled the maximum eligibility time for receiving unemployment insurance.

These various programs make joblessness more bearable, at least materially; they also reduce the incentives to find work. Consider disability insurance. Industrial work is hard, and plenty of workers experience back pain. Before 1984, however, that pain didn’t mean a disability check for American workers. After 1984, though, millions went on the disability rolls. And since disability payments vanish if the disabled person starts earning more than $1,170 per month, the disabled tend to stay disabled…. Disability insurance alone doesn’t entirely explain the rise of long-term joblessness—only one-third or so of jobless males get such benefits. But it has surely played a role.

Other social-welfare programs operate in a similar way. Unemployment insurance stops completely when someone gets a job, … [thus] the unemployed tend to find jobs just as their insurance payments run out. Food-stamp and housing-voucher payments drop 30 percent when a recipient’s income rises past a set threshold by just $1. Elementary economics tells us that paying people to be or stay jobless will increase joblessness….

The rise of joblessness among the young has been a particularly pernicious effect of the Great Recession. Job loss was extensive among 25–34-year-old men and 35–44-year-old men between 2007 and 2009. The 25–34-year-olds have substantially gone back to work, but the number of employed 35–44-year-olds, which dropped by 2 million at the start of the Great Recession, hasn’t recovered. The dislocated workers in this group seem to have left the labor force permanently.

Unfortunately, policymakers seem intent on making the joblessness crisis worse. The past decade or so has seen a resurgent progressive focus on inequality—and little concern among progressives about the downsides of discouraging work. Advocates of a $15 minimum hourly wage, for example, don’t seem to mind, or believe, that such policies deter firms from hiring less skilled workers. The University of California–San Diego’s Jeffrey Clemens examined states where higher federal minimum wages raised the effective state-level minimum wage during the last decade. He found that the higher minimum “reduced employment among individuals ages 16 to 30 with less than a high school education by 5.6 percentage points,” which accounted for “43 percent of the sustained, 13 percentage point decline in this skill group’s employment rate.”

The decision to prioritize equality over employment is particularly puzzling, given that social scientists have repeatedly found that unemployment is the greater evil…. One recent study estimated that unemployment leads to 45,000 suicides worldwide annually. Jobless husbands have a 50 percent higher divorce rate than employed husbands. The impact of lower income on suicide and divorce is much smaller. The negative effects of unemployment are magnified because it so often becomes a semipermanent state.

Time-use studies help us understand why the unemployed are so miserable. Jobless men don’t do a lot more socializing; they don’t spend much more time with their kids. They do spend an extra 100 minutes daily watching television, and they sleep more. The jobless also are more likely to use illegal drugs….

Joblessness and disability are also particularly associated with America’s deadly opioid epidemic…. The strongest correlate of those deaths is the share of the population on disability. That connection suggests a combination of the direct influence of being disabled, which generates a demand for painkillers; the availability of the drugs through the health-care system; and the psychological misery of having no economic future.

Increasing the benefits received by nonemployed persons may make their lives easier in a material sense but won’t help reattach them to the labor force. It won’t give them the sense of pride that comes from economic independence. It won’t give them the reassuring social interactions that come from workplace relationships. When societies sacrifice employment for a notion of income equality, they make the wrong choice. [“The War on Work — And How to End It“, City Journal, special issue: The Shape of Work to Come 2017]
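The benefit-cliff arithmetic in Glaeser’s account is easy to make concrete. A minimal sketch, assuming a hypothetical $1,000 monthly disability check and using the $1,170 earnings threshold cited in the quoted passage (actual program rules are more complicated):

```python
DISABILITY_BENEFIT = 1000.0  # hypothetical monthly disability check (an assumption)
EARNINGS_LIMIT = 1170.0      # monthly earnings ceiling cited in the quoted passage

def net_monthly_income(earnings: float) -> float:
    """Earnings plus the disability benefit, which vanishes entirely once
    earnings cross the limit -- the 'cliff' that penalizes extra work."""
    benefit = DISABILITY_BENEFIT if earnings <= EARNINGS_LIMIT else 0.0
    return earnings + benefit

print(net_monthly_income(1170.0))  # 2170.0
print(net_monthly_income(1171.0))  # 1171.0: one more dollar earned, $999 less kept
```

The same cliff logic applies, with different thresholds and phase-out rates, to the food-stamp and housing-voucher cases Glaeser mentions.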

In sum, the rising suicide rate — whatever its significance — is a direct and indirect result of government policies. “We’re from the government and we’re here to help” is black humor, at best. The left is waging a war on white males. But the real war — the war that kills — is hidden from view behind the benign facade of governmental “compassion”.

Having said all of that, I will end on a cautiously positive note. There still is upward mobility in America. Not all working-class people are destined for suicidal despair, only those at the margin who have pocketed the fool’s gold of government handouts.


Other related posts:
Bubbling Along
Economic Mobility Is Alive and Well in America
H.L. Mencken’s Final Legacy
The Problem with Political Correctness
“They Deserve to Die”?
Mencken’s Pearl of Wisdom
Class in America
Another Thought or Two about Class
The Midwest Is a State of Mind

The Secret of a Happy Marriage

Most people marry young. Even though the average age at first marriage is creeping up, it is still below 30 as far as I know. And it was closer to 20 when I wed several decades ago.

A person who is in his early 20s has a lot of life and learning ahead. His political views are likely to change. Mine changed from idealistic “liberalism” to informed conservatism, with a few stops in between. (For more, go to “About” and scroll down to “Beliefs”.) If one’s political views are heritable, as this piece suggests, what happened to me is that nature — my parents’ innate conservatism — finally overcame nurture — the attitudes and ideas that I absorbed as a collegian.

I married my wife only two years after completing my undergraduate degree, still a naive “liberal” with simplistic views about such things as race (not a problem), markets (suspect), and government (more is better). Fast-forward more than 50 years to the conservative me, still wed to the “liberal” lass who views Donald Trump as unalloyed evil, daily expresses the hope that he will be shot (though that may stop after the shooting of Steve Scalise), cannot understand why Texas Republicans care about who uses which bathroom, favors abortion (in principle, not practice), supports gun control (though we have guns in the house), has swallowed the global-warming hoax, and bases most of her other views on the slants of NBC Nightly News and the Austin American-Statesman.

But she hates to pay taxes.

That, plus love, unites us despite our differences.

“Science” vs. Science: The Case of Evolution, Race, and Intelligence

If you were to ask those people who marched for science whether they believe in evolution, they would answer with a resounding “yes”. Ask them whether they believe that all branches of the human race evolved identically and you will be met with hostility. The problem, for them, is that an admission of the obvious — differential evolution, resulting in broad racial differences — leads to a fact that they don’t want to admit: there are broad racial differences in intelligence, differences that must have evolutionary origins.

“Science” — the cherished totem of left-wing ideologues — isn’t the same thing as science. The totemized version consists of whatever set of facts and hypotheses suit the left’s agenda. In the case of “climate change”, for example, the observation that in the late 1900s temperatures rose for a period of about 25 years coincident with a reported rise in the level of atmospheric CO2 occasioned the hypothesis that the generation of CO2 by humans causes temperatures to rise. This is a reasonable hypothesis, given the long-understood, positive relationship between temperature and so-called greenhouse gases. But it comes nowhere close to confirming what leftists seem bent on believing and “proving” with hand-tweaked models, which is that if humans continue to emit CO2, and do so at a higher rate than in the past, temperatures will rise to the point that life on Earth will become difficult if not impossible to sustain. There is ample evidence to support the null hypothesis (that “climate change” isn’t catastrophic) and the alternative view (that recent warming is natural and caused mainly by things other than human activity).

Leftists want to believe in catastrophic anthropogenic global warming because it suits the left’s puritanical agenda, as did Paul Ehrlich’s discredited thesis that population growth would outstrip the availability of food and resources, leading to mass starvation and greater poverty. Population control therefore became a leftist mantra, and remains one despite the generally rising prosperity of the human race and the diminution of scarcity (except where leftist governments, like Venezuela’s, create misery).

Why are leftists so eager to believe in problems that portend catastrophic consequences which “must” be averted through draconian measures, such as enforced population control, taxes on soft drinks above a certain size, the prohibition of smoking not only in government buildings but in all buildings, and decreed reductions in CO2-emitting activities (which would, in fact, help to impoverish humans)? The common denominator of such measures is control. And yet, by the process of psychological projection, leftists are always screaming “fascist” at libertarians and conservatives who resist control.

Returning to evolution, why are leftists so eager to embrace it — or, rather, what they choose to believe about it? My answers are that (a) it’s “science” (though it’s only science when it’s spelled out in detail, uncertainties and all) and (b) it gives leftists (who usually are atheists) a stick with which to beat “creationists”.

But when it comes to race, leftists insist on denying what’s in front of their eyes: evolutionary disparities in such phenomena as skin color, hair texture, facial structure, running and jumping ability, cranial capacity, and intelligence.

Why? Because the urge to control others is of a piece with the superiority with which leftists believe they’re endowed because they are mainly white persons of European descent and above-average intelligence (just smart enough to be dangerous). Blacks and Hispanics who vote left do so mainly for the privileges it brings them. White leftists are their useful idiots.

Leftism, in other words, is a manifestation of “white privilege”, which white leftists feel compelled to overcome through paternalistic condescension toward blacks and other persons of color. (But not East Asians or the South Asians who have emigrated to the U.S., because the high intelligence of those groups is threatening to white leftists’ feelings of superiority.) What could be more condescending, and less scientific, than to deny what evolution has wrought in order to advance a political agenda?

Leftist race-denial, which has found its way into government policy, is akin to Stalin’s support of Lysenkoism, which its author cleverly aligned with Marxism. Lysenkoism

rejected Mendelian inheritance and the concept of the “gene”; it departed from Darwinian evolutionary theory by rejecting natural selection.

This brings me to Stephen Jay Gould, a leading neo-Lysenkoist and a fraudster of “science” who did much to deflect science from the question of race and intelligence:

[In The Mismeasure of Man] Gould took the work of a 19th century physical anthropologist named Samuel George Morton and made it ridiculous. In his telling, Morton was a fool and an unconscious racist — his project of measuring skull sizes of different ethnic groups conceived in racism and executed in same. Why, Morton clearly must have thought Caucasians had bigger brains than Africans, Indians, and Asians, and then subconsciously mismeasured the skulls to prove they were smarter.

The book then casts the entire project of measuring brain function — psychometrics — in the same light of primitivism.

Gould’s antiracist book was a hit with reviewers in the popular press, and many of its ideas about the morality and validity of testing intelligence became conventional wisdom, persisting today among the educated folks. If you’ve got some notion that IQ doesn’t measure anything but the ability to take IQ tests, that intelligence can’t be defined or may not be real at all, that multiple intelligences exist rather than a general intelligence, you can thank Gould….

Then, in 2011, a funny thing happened. Researchers at the University of Pennsylvania went and measured old Morton’s skulls, which turned out to be just the size he had recorded. Gould, according to one of the co-authors, was nothing but a “charlatan.”

The study itself couldn’t matter, though, could it? Well, recent work using MRI technology has established that descendants of East Asia have slightly more cranial capacity than descendants of Europe, who in turn have a little more than descendants of Africa. Another meta-analysis finds a mild correlation between brain size and IQ performance.

You see where this is going, especially if you already know about the racial disparities in IQ testing, and you’d probably like to hit the brakes before anybody says… what, exactly? It sounds like we’re perilously close to invoking science to argue for genetic racial superiority.

Am I serious? Is this a joke?…

… The reason the joke feels dangerous is that it incorporates a fact that is rarely mentioned in public life. In America, white people on average score higher than black people on IQ tests, by a margin of 12-15 points. And there’s one man who has been made to pay the price for that fact — the scholar Charles Murray.

Murray didn’t come up with a hypothesis of racial disparity in intelligence testing. He simply co-wrote a book, The Bell Curve, that publicized a fact well known within the field of psychometrics, a fact that makes the rest of us feel tremendously uncomfortable.

Nobody bears more responsibility for the misunderstanding of Murray’s work than Gould, who reviewed The Bell Curve savagely in the New Yorker. The IQ tests couldn’t be explained away — here he is acknowledging the IQ gap in 1995 — but the validity of IQ testing could be challenged. That was no trouble for the old Marxist.

Gould should have known that he was dead wrong about his central claim — that general intelligence, or g, as psychologists call it, was unreal. In fact, “Psychologists generally agree that the greatest success of their field has been in intelligence testing,” biologist Bernard D. Davis wrote in the Public Interest in 1983, in a long excoriation of Gould’s strange ideas.

Psychologists have found that performance on almost any test of cognition will have some correlation to other tests of cognition, even in areas that might seem distant from pure logic, such as recognizing musical notes. The more demanding tests have a higher correlation, or a high g load, as they term it.

IQ is very closely related to this measure, and turns out to be extraordinarily predictive not just for how well one does on tests, but on all sorts of real-life outcomes.

Since the publication of The Bell Curve, the data have demonstrated not just those points, but that intelligence is highly heritable (around 50 to 80 percent, Murray says), and that there’s little that can be done to permanently change the part that’s dependent on the environment….

The liberal explainer website Vox took a swing at Murray earlier this year, publishing a rambling 3,300-word hit job on Murray that made zero references to the scientific literature….

Vox might have gotten the last word, but a new outlet called Quillette published a first-rate rebuttal this week, which sent me down a three-day rabbit hole. I came across some of the most troubling facts I’ve ever encountered — IQ scores by country — and then came across some more reassuring ones from Thomas Sowell, suggesting that environment could be the main or exclusive factor after all.

The classic analogy from the environment-only crowd is of two handfuls of genetically identical seed corn, one planted in Iowa and the other in the Mojave Desert. One group flourishes; the other is stunted. While all of the variation within one group will be due to genetics, its flourishing relative to the other group will be strictly due to environment.

Nobody doubts that the United States is richer soil than Equatorial Guinea, but the analogy doesn’t prove the case. The idea that there exists a mean for human intelligence and that all racial subgroups would share it given identical environments remains a metaphysical proposition. We may want this to be true quite desperately, but it’s not something we know to be true.

For all the lines of attack, all the brutal slander thrown Murray’s way, his real crime is having an opinion on this one key issue that’s open to debate. Is there a genetic influence on the IQ testing gap? Murray has written that it’s “likely” genetics explains “some” of the difference. For this, he’s been crucified….

Murray said [in a recent interview] that the assumption “that everyone is equal above the neck” is written into social policy, employment policy, academic policy and more.

He’s right, of course, especially as ideas like “disparate impact” come to be taken as proof of discrimination. There’s no scientifically valid reason to expect different ethnic groups to have a particular representation in this area or that. That much is utterly clear.

The universities, however, are going to keep hollering about institutional racism. They are not going to accept Murray’s views, no matter what develops. [Jon Cassidy, “Mau Mau Redux: Charles Murray Comes in for Abuse, Again“, The American Spectator, June 9, 2017]
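The seed-corn analogy in the passage quoted above makes a precise statistical point: complete heritability within each group is compatible with a between-group gap that is entirely environmental. A minimal simulation of the analogy (all numbers are arbitrary illustrative assumptions):

```python
import random

random.seed(42)
N = 10_000

def plant(environment_boost: float) -> list[float]:
    """Grow N seeds drawn from one genetic distribution; each yield is a
    genetic draw plus a boost shared by the whole field."""
    return [random.gauss(100, 15) + environment_boost for _ in range(N)]

iowa = plant(30.0)    # rich soil
mojave = plant(0.0)   # poor soil

# Within each field, all variation comes from the genetic draw; the gap
# BETWEEN the fields is entirely environmental, by construction.
mean = lambda xs: sum(xs) / len(xs)
print(round(mean(iowa) - mean(mojave), 1))  # ~30.0
```

As the quoted author notes, the analogy shows only that an environmental explanation is possible, not that it is correct.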

And so it goes in the brave new world of alternative facts, most of which seem to come from the left. But the left, with its penchant for pseudo-intellectualism (“science” vs. science), calls it postmodernism:

Postmodernists … eschew any notion of objectivity, perceiving knowledge as a construct of power differentials rather than anything that could possibly be mutually agreed upon…. [S]cience therefore becomes an instrument of Western oppression; indeed, all discourse is a power struggle between oppressors and oppressed. In this scheme, there is no Western civilization to preserve—as the more powerful force in the world, it automatically takes on the role of oppressor and therefore any form of equity must consequently then involve the overthrow of Western “hegemony.” These folks form the current Far Left, including those who would be described as communists, socialists, anarchists, Antifa, as well as social justice warriors (SJWs). These are all very different groups, but they all share a postmodernist ethos. [Michael Aaron, “Evergreen State and the Battle for Modernity“, Quillette, June 8, 2017]


Other related reading (listed chronologically):

Molly Hensley-Clancy, “Asians With “Very Familiar Profiles”: How Princeton’s Admissions Officers Talk About Race“, BuzzFeed News, May 19, 2017

Warren Meyer, “Princeton Appears To Penalize Minority Candidates for Not Obsessing About Their Race“, Coyote Blog, May 24, 2017

B. Wineguard et al., “Getting Voxed: Charles Murray, Ideology, and the Science of IQ“, Quillette, June 2, 2017

James Thompson, “Genetics of Racial Differences in Intelligence: Updated“, The Unz Review: James Thompson Archive, June 5, 2017

Raymond Wolters, “We Are Living in a New Dark Age“, American Renaissance, June 5, 2017

F. Roger Devlin, “A Tactical Retreat for Race Denial“, American Renaissance, June 9, 2017

Scott Johnson, “Mugging Mr. Murray: Mr. Murray Speaks“, Power Line, June 9, 2017


Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Round Up the Usual Suspects
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race
Affirmative Action Comes Home to Roost
The IQ of Nations
Race and Social Engineering
Some Notes about Psychology and Intelligence

The Left and Evergreen State: Reaping What Was Sown

Tiana Lowe writes with misguided enthusiasm at National Review:

In the past fortnight, the Evergreen State College mob has incited violence against a professor, gotten said professor, Bret Weinstein, to flee campus in fear for his physical safety, inflicted $10,000 in property damage on campus, shut down classes, and forced graduation to be held off-campus as a result.

… Prior to going quiet after receiving mass-murder threats, Weinstein wrote an editorial in the Wall Street Journal warning: “The Campus Mob Came for Me—and You, Professor, Could Be Next.”…  [T]he New York Times has found a mob victim sympathetic enough in Weinstein, a liberal professor, to publicly lambaste the mobs at Evergreen, who counter every question, comment, and even a hand gesture by shouting, “RACIST.”

“It’s just the way discourse goes these days,” Evergreen president George Bridges told the Times’s Frank Bruni. Even the Seattle Times, which has previously let Bridges wax poetic on, “Why students need trigger warnings and safe places” in its editorial pages, condemned Evergreen as having “no safety, no learning, no future.”…

With the world witnessing Evergreen’s Mizzou-scale collapse in real time, perhaps the Left has finally woken up to its own tendency to eat its own. [“Evergreen State Faces Condemnation from [T]he Seattle Times and [T]he New York Times“, June 8, 2017]

Lowe links to a piece by Frank Bruni, an unsurprisingly left-wing columnist at The New York Times (“These Campus Inquisitions Must Stop“, June 3, 2017). Bruni opens with a morally relativistic, irrelevant, and sweeping statement:

Racism pervades our country. Students who have roiled college campuses from coast to coast have that exactly right.

Pervades? Perhaps Bruni is thinking of the attitude of blacks toward whites. Most American whites don’t have the time or inclination to be racist; they’re trying to get into universities and get hired and promoted despite the favoritism that’s showered on less-qualified blacks by their condescending, leftist “betters”. Yes, there is a hotbed of racism in the U.S., and it is located in the media, among the professoriate, and in the soul of every collegian of whatever color who sees life through the lens of “racism”.

Bruni, having shored up his left-wing credentials, actually says a few sensible things. After recounting the travails of Professor Weinstein, whose cause is laudable to leftists because Weinstein is a leftist, Bruni turns to

that awful moment … when one of the dozens of students encircling Nicholas Christakis, a professor [at Yale], shrieked at him: “You should not sleep at night! You are disgusting!”

He and his wife, Erika, were masters at one of Yale’s residential colleges, and she had circulated an email in which she raised questions about the university’s caution against any Halloween costumes that might be seen as examples of cultural appropriation or hurtful stereotyping.

“American universities were once a safe space not only for maturation but also for a certain regressive, or even transgressive, experience,” she wrote. “Increasingly, it seems, they have become places of censure and prohibition. And the censure and prohibition come from above, not from yourselves! Are we all O.K. with this transfer of power? Have we lost faith in young people’s capacity — in your capacity — to exercise self-censure?”

“Talk to each other,” she added. “Free speech and the ability to tolerate offense are the hallmarks of a free and open society.”

Agree or disagree with her, she was teeing up precisely the kind of contest of ideas that higher education should be devoted to. And she did so, if you read the whole of her email, in a considered, respectful fashion.

No matter: She was pushing back at something — the costume guideline — that was draped in the garb of racial sensitivity. And that made her, ipso facto, an enemy of illumination and agent of hate.

She and her husband were driven from their roles in the residential college, though he still teaches at Yale. He posted several sympathetic tweets last week about Weinstein’s vilification. In one he wrote that his wife “spent her whole career” working with “marginalized populations” and has a “deep, abiding humanity.”

“But still they came for her,” he added.

You would think that the Christakises, having been mugged by reality, would have changed their political stripes. Life is an IQ test, and they failed the mid-term.

Bruni continues:

Like plenty of adults across the political spectrum, they use slurs in lieu of arguments, looking for catharsis rather than constructive engagement. They ratchet up their language to a degree that weakens its currency for direr circumstances. And they undermine their goals — our goals — by pushing away good-hearted allies and handing ammunition to the very people who itch to dismiss them.

Right-wing media have had a field day with Evergreen, but not because they’ve faked a story. No, the story was given to them in ribbons and bows.

That’s the real problem. Bruni is afraid that Evergreen State will be used to discredit “progressivism”. But “progressivism” discredits itself, every day in every way. The riots at Evergreen State and other universities are merely the contemporary equivalent of Stalin’s purges and “show trials“.

Another piece linked to by Lowe is an unsigned editorial in The Seattle Times, “The Evergreen State College: No Safety, No Learning, No Future” (June 5, 2017). Here’s some of it:

The public state college near Olympia has become a national caricature of intolerant campus liberalism in both The New York Times and Fox News. At least one professor has been harangued and classes disrupted by shouting mobs of students accusing the famously progressive campus of “systemic racism.”

That coverage apparently has incited anonymous threats of mass murder, resulting in the campus being closed for three days. In the critical last week of school, students have been deprived of learning by extremes on the left and right.

Caricature? How can reality be a caricature? How did the “extreme” right get into the act? It’s news to me that there were and are rightists of any kind among the thugs who seized control of Evergreen.

More:

Since the corrosive 2016 presidential election, Americans increasingly comprise a nation with citizens sealed in ideological bubbles; college campuses are often the most hermetically sealed of bubbles. When Weinstein, the professor, asked a yelling mob of students if they wanted to hear his answer, they shouted “No!”

Left-wing craziness at universities long predates the 2016 election. This is another transparent (but failed) attempt to spread some of the blame rightward.

Leftists like Bruni and the editorial board of The Seattle Times can’t see the real problem because they’re part of it. They’re like the never-say-die apologists for socialism who protest that “real socialism” has never been tried. What they can’t face up to — despite the failure of the too-long-lived Soviet experiment — is that “real socialism” necessarily leads to suppression and violence. The Soviet Union, Communist China, Castro’s Cuba, and other socialist regimes are real socialism in action, not failed substitutes for it.

Bruni and his ilk, past and present, are responsible for the turmoil at Evergreen and other campuses. Bruni and his ilk — too many parents, most school teachers, most professors of the soft stuff, most pundits, too many politicians — have been spoon-feeding leftism to the young people of this country for more than a century. That is to say, they’ve been spoon-feeding generations of young people an intolerant ideology which prevails only through violence or the clear threat of it. The particulars of the ideology shift with the winds of leftist fashion, but its main catch-words are these:

  • liberty — to do whatever one feels like doing, and to suppress whatever one doesn’t like
  • equality — which others will be forced to pay for, à la socialism, and bow to, as in “some are more equal than others”
  • fraternity — but only with the like-minded of the moment.

Bruni and his ilk seem surprised by the virulence of their intellectual offspring, but they shouldn’t be. Dr. Frankenstein was a mere amateur by comparison with his 20th and 21st century successors, who must be blamed for loosing the monsters — students, faculty, administrators — who are destroying universities. Far worse than that, they and their elders are destroying the institutions of civil society.


Related posts:
Intellectuals and Capitalism
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
Superiority
Whiners
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
A Word of Warning to Leftists (and Everyone Else)
Another Thought or Two about Class
The Vast Left-Wing Conspiracy

Another Case of Cultural Appropriation

Maverick Philosopher makes an excellent case for cultural appropriation. I am here to make a limited case against it.

There is an eons-old tradition that marriage is a union of man and woman, which was shared by all religions and ethnicities until yesterday, on the time-scale of human existence. Then along came some homosexual “activists” and their enablers (mainly leftists, always in search of “victims”), to claim that homosexuals can marry.

This claim ignores the biological and deep social basis of marriage, which is the procreative pairing of male and female and the resulting formation of the basic social unit: the biologically bonded family.

Homosexual “marriage” is, by contrast, a wholly artificial conception. It is the ultimate act of cultural appropriation. Its artificiality is underscored by the fact that a homosexual “marriage” seems to consist of two “wives” or two “husbands”, in a rather risible bow to traditional usage. Why not “wusbands” or “hives”?


Related posts:
In Defense of Marriage
The Myth That Same-Sex “Marriage” Causes No Harm
Getting “Equal Protection” Right
Equal Protection in Principle and Practice

The Vast Left-Wing Conspiracy

The following list of enemies of liberty is in no particular order, and is not a mutually exclusive set.

Everyone who appeals to the Constitution of the United States but doesn’t understand its principal premise, which is the co-sovereignty of a central government of enumerated and strictly limited powers (notwithstanding the purely aspirational Preamble, the widely misinterpreted General Welfare, Necessary and Proper, and Interstate Commerce clauses) and the States, which are in fact the creators of the Constitution — not the mythical “we the People”

About half of the elected officials of the federal government

A sizable chunk of the remaining half (who choose to go along rather than be portrayed as “mean”)

Varying percentages of senior appointed officials of the federal government, but about half on average

Probably more than half of judges at all levels of government

Vast numbers of elected and appointed officials of State and local governments

The overwhelming majority of civil servants at all levels of government, with the possible (but diminishing) exception of public-safety officers

Executives of large corporations who foster a cozy relationship with government, as rent-seekers, and who eagerly and visibly endorse government’s social meddling, as virtue-signalers

Almost all of the professoriate in the “liberal” arts and humanities, social “sciences”, and “education” disciplines (more accurately, indoctrination centers)

Almost all administrators at colleges and universities

Most public-school teachers and administrators (who are excretions of the collegiate cabals listed immediately above)

Most “human resources” specialists, of whatever rank, wherever they are found

Almost everyone who is employed by any kind of entertainment or news medium, from stars to back-room technicians (the exceptions are notable because they are so few)

Almost everyone who is directly or indirectly involved in the creation, performance, or presentation of “art” (musical, visual, plastic, performing, etc.), with the exception of some practitioners of “country” music

Almost everyone who is a patron or aficionado of the aforementioned “arts”

Most American Jews, who are well represented in many of the other categories

The vast majority of members of the various groups favored and supported by government officials, in a long-standing symbiotic relationship, including (but not limited to) blacks, Hispanics, women, homosexuals (and other members of the gender-confused community), and the aforementioned “artists”

“Activists” of most stripes, who wish to remake the world in whatever utopian image enthralls them

An alarming fraction of the clergy of “mainline” religious denominations, who have somehow come to believe that Christ’s exhortations regarding private charity should be enforced by government

The spoiled children of capitalism who populate the campuses of most colleges and universities

Affluent Americans (the more affluent, the more left-leaning), whose unfounded guilt and alienation from reality have caused them to lose sight of the connection between self-reliance and dignity, and government’s powerfully destructive effect on both

A residual but still very large fraction of white working-class persons who hope that government will make their lives better or at least come through with bigger handouts

Every voter who shares those hopes

If Men Were Angels

Libertarians, God bless them, are always looking for simple solutions to complex problems. Here, for example, is David Bernstein, writing at The Volokh Conspiracy:

I doubt [that] any two libertarians agree on the exact boundaries of libertarianism, but how’s this for a working definition: “A libertarian is someone who generally opposes government interference with and regulation of civil society, even when the result of such government action would be to clamp down on things the individual in question personally dislikes, finds offensive, or morally disapproves of.”

Thus, for example, a libertarian who hates smoking opposes smoking bans in private restaurants, a libertarian who thinks homosexual sodomy is immoral nevertheless opposes sodomy laws, a libertarian who finds certain forms of “hate speech” offensive still opposes hate speech laws, a libertarian who believes in eating natural foods opposes bans or special taxes on processed foods, and a libertarian who thinks that all employers should pay a living wage nevertheless opposes living wage legislation. It doesn’t matter whether the libertarian holds these positions because he believes in natural rights, for utilitarian reasons, or because he thinks God wants us to live in a libertarian society. [“How’s This for a Working Definition of ‘Libertarian’?“, February 26, 2015]

This reminds me of the title of a poem by A.E. Housman: “Terence, This Is Stupid Stuff“. Why is it stupid stuff? Because it omits an essential ingredient of liberty, which is line-drawing.

By Bernstein’s logic, one must conclude that anything goes; for example, a libertarian who hates murder, rape, theft, and fraud must oppose laws against such things. Bernstein, like many a libertarian, propounds a moral code that is devoid of morality.

Bernstein might argue that morality is supplied by prevailing social norms. Which, until the bandwagon effect produced by the Supreme Court’s decision in Obergefell v. Hodges, would have meant the non-recognition of homosexual “marriage”. But libertarians were prominent in the chorus of voices clamoring for the Supreme Court to make a national law recognizing homosexual “marriage”, even though the marriage laws still on the books in most parts of the nation — laws that defined marriage as the union of male and female — arose from prevailing social norms. Libertarians have a slippery way of proclaiming laissez faire while striving to enforce their own moral views through law.

Libertarianism is an ideology rooted in John Stuart Mill’s empty harm principle (a.k.a. the non-aggression principle), about which I’ve written many times (e.g., here). Regarding ideology, I turn to Jean-François Revel:

As an a priori construction, formulated without regard to facts or ethics, ideology is distinct from science and philosophy on the one hand, and from religion and ethics on the other. Ideology is not science — which it pretends to be. Science accepts the results of the experiments it devises, whereas ideology systematically rejects empirical evidence. It is not moral philosophy — which it claims to have a monopoly on, while striving furiously to destroy the source and necessary conditions of morality: the free will of the individual. Ideology is not religion — to which it is often, and mistakenly, compared: for religion draws its meaning from faith in a transcendent reality, while ideology aims to perfect the world here below.

Ideology — that malignant invention of the human spirit’s dark side, an invention which has cost us dearly — has the singular property of causing zealots to project the structural features of their own mentality onto others. Ideologues cannot imagine that an objection to their abstract systems could come from any source other than a competing system.

All ideologies are aberrations. A sound and rational ideology cannot exist. Falsehood is intrinsic to ideology by virtue of cause, motivation and objective, which is to bring into being a fictional version of the human self — the “self,” at least, that has resolved no longer to accept reality as a source of information or a guide to action. [Last Exit to Utopia, pp. 52-53]

A key aspect of ideology — libertarian ideology included — is its studied dismissal of human nature. Arnold Kling notes, for example,

that humans in large societies have two natural desires that frustrate libertarians.

1. A desire for religion, defined as a set of rituals, norms, and affirmations that are shared by a group and which the group believes it is wrong not to share….

2. A desire for war. I think that it is in human nature to fantasize about battles against tribal enemies….

If these desires were to disappear, I believe that humans could live without a state. However, given these desires, the best approach for a peaceful large society is that which was undertaken in the U.S. when it was founded: freedom of religion guaranteed by the government, and a political system designed for peaceful succession and limitations on the power of any one political office….

I think that it is fine for libertarians to warn of the dangers of religion and to oppose war…. On the other hand, when libertarians assume away the desire for religion and war, their thinking becomes at best irrelevant and at worst nihilistic. [“Libertarians vs. Human Nature“, askblog, February 17, 2017]

In Madison’s words:

If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. [The Federalist No. 51, February 6, 1788]


Related posts:
On Liberty
Line-Drawing and Liberty
Pseudo-Libertarian Sophistry vs. True Libertarianism
Bounded Liberty: A Thought Experiment
More Pseudo-Libertarianism
True Libertarianism, One More Time
Human Nature, Liberty, and Rationalism
The Myth That Same-Sex “Marriage” Causes No Harm
Defending Liberty against (Pseudo) Libertarians
The Pseudo-Libertarian Temperament
Parsing Political Philosophy (II)
Libertarianism and the State
My View of Libertarianism
More About Social Norms and Liberty
The Authoritarianism of Modern Liberalism, and the Conservative Antidote
The Harm Principle Revisited: Mill Conflates Society and State
Liberty and Social Norms Re-examined

P.S.

Yesterday I posted about the hysterics who decry AGW but don’t act on their own to prevent its (non) occurrence. They remind me of wealthy advocates of big government who complain that their taxes are too low but don’t make voluntary donations to the U.S. Treasury.

In fact, I’m confident that most of the wealthy advocates of higher taxation are also rampant emitters of CO2. They’re foolishly consistent: hypocrites about taxation, hypocrites about AGW.

Oh, the Hysteria!

There’s little to add to the unfounded hysteria about Trump’s decision to pull out of the Paris climate agreement … but this:

If all of the hysterics truly believe that a failure to reduce CO2 emissions will result in catastrophic global warming, they have it within their power to reduce emissions drastically. They can start by getting rid of their cars in favor of bikes and horses, moving to smaller homes, doing without air conditioning, keeping their homes at 50 degrees in the winter, bathing and washing clothes in cold water, growing and raising their own foodstuffs (to eliminate transportation-based emissions), reading by candlelight, and throwing out all of their electrical appliances — even including their smart phones, which rely on electrically powered systems.

Given the number of hysterics out there, I’m sure that the (non) CO2 problem would be solved in no time. If their grandparents, great-grandparents, and all who came before them could live a CO2-minimal life, why can’t a few billion true-blue saviors of the world do the same?

A Personality Test: Which Antagonist Do You Prefer?

1. Archangel Michael vs. Lucifer (good vs. evil)

2. David vs. Goliath (underdog vs. bully)

3. Alexander Hamilton vs. Aaron Burr (a slippery politician vs. a slippery politician-cum-traitor)

4. Richard Nixon vs. Alger Hiss (a slippery politician vs. a traitorous Soviet spy)

5. Sam Ervin vs. Richard Nixon (an upholder of the Constitution vs. a slippery politician)

6. Kenneth Starr vs. Bill Clinton (a straight arrow vs. a slippery politician)

7. Elmer Fudd vs. Bugs Bunny (a straight arrow with a speech impediment vs. a rascally rabbit)

8. Jerry vs. Tom (a clever mouse vs. a dumb but determined cat)

9. Tweety Bird vs. Sylvester the Cat (a devious bird vs. a predatory cat)

10. Road Runner vs. Wile E. Coyote (a devious bird vs. a stupid canine)

11. Rocky & Bullwinkle vs. Boris & Natasha (fun-loving good guys vs. funny bad guys)

12. Dudley Do-Right vs. Snidely Whiplash (a straight arrow vs. a stereotypical villain)

Summarize and explain your choices in the comments. Suggestions for other pairings are welcome.

The Midwest Is a State of Mind

I am a son of the Middle Border,* now known as the Midwest. I left the Midwest, in spirit, almost 60 years ago, when I matriculated at a decidedly cosmopolitan State university. It was in my home State, but not much of my home State.

Where is the Midwest? According to Wikipedia, the U.S. Census Bureau defines the Midwest as comprising the 12 States shaded in red:

They are, from north to south and west to east, North Dakota, South Dakota, Nebraska, Kansas, Minnesota, Iowa, Missouri, Wisconsin, Illinois, Michigan, Indiana, and Ohio.

In my experience, the Midwest really begins on the west slope of the Appalachians and includes much of New York State and Pennsylvania. I have lived and traveled in that region, and found it, culturally, to be much like the part of “official” Midwest where I was born and raised.

I am now almost 60 years removed from the Midwest (except for a three-year sojourn in the western part of New York State, near the Pennsylvania border). Therefore, I can’t vouch for the currency of a description that appears in Michael Dirda’s review of Jon K. Lauck’s From Warm Center to Ragged Edge: The Erosion of Midwestern Literary and Historical Regionalism, 1920-1965 (Iowa and the Midwest Experience). Dirda writes:

[Lauck] surveys “the erosion of Midwestern literary and historical regionalism” between 1920 and 1965. This may sound dull as ditch water to those who believe that the “flyover” states are inhabited largely by clodhoppers, fundamentalist zealots and loudmouthed Babbitts. In fact, Lauck’s aim is to examine “how the Midwest as a region faded from our collective imagination” and “became an object of derision.” In particular, the heartland’s traditional values of hard work, personal dignity and loyalty, the centrality it grants to family, community and church, and even the Jeffersonian ideal of a democracy based on farms and small land-holdings — all these came to be deemed insufferably provincial by the metropolitan sophisticates of the Eastern Seaboard and the lotus-eaters of the West Coast.

That was the Midwest of my childhood and adolescence. I suspect that the Midwest of today is considerably different. American family life is generally less stable than it was 60 years ago; Americans generally are less church-going than they were 60 years ago; and social organizations are less robust than they were 60 years ago. The Midwest cannot have escaped two generations of social and cultural upheaval fomented by the explosion of mass communications, the debasement of mass culture, the rise of the drugs-and-rock culture, the erasure of social norms by government edicts, and the creation of a culture of dependency on government.

I nevertheless believe that there is a strong, residual longing for and adherence to the Midwestern culture of 60 years ago — though it’s not really unique to the Midwest. It’s a culture that persists throughout America, in rural areas, villages, towns, small cities, and even exurbs of large cities.

The results of last year’s presidential election bear me out. Hillary Clinton represented the “sophisticates” of the Eastern Seaboard and the lotus-eaters of the West Coast. She represented the supposed superiority of technocracy over the voluntary institutions of civil society. She represented a kind of smug pluralism and internationalism that smirks at traditional values and portrays as clodhoppers and fundamentalist zealots those who hold such values. Donald Trump, on the other hand (and despite his big-city roots and great wealth), came across as a man of the people who hold such values.

What about Clinton’s popular-vote “victory”? Nationally, she garnered 2.9 million more votes than Trump. But the manner of Clinton’s “victory” underscores the nation’s cultural divide and the persistence of a Midwestern state of mind. Clinton’s total margin of victory in California, New York, and the District of Columbia was 6.3 million votes. That left Trump ahead of Clinton by 3.4 million votes (6.3 million minus her 2.9 million national margin) in the other 48 States, and even farther ahead in non-metropolitan areas. Clinton’s “appeal” (for want of a better word) was narrow; Trump’s was much broader (e.g., winning a higher percentage than Romney did of the two-party vote in 39 States). Arguably, it was broader than that of every Republican presidential candidate since Ronald Reagan won a second term in 1984.

The Midwestern state of mind, however much it has weakened in the last 60 years, remains geographically dominant. In the following map, counties won by Clinton are shaded in blue; counties won by Trump are shaded in red:


Source: Wikipedia article about the 2016 presidential election.


* This is an allusion to Hamlin Garland‘s autobiography, A Son of the Middle Border. Garland, a native of Wisconsin, was himself a son of the Middle Border.


Related posts:
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
Superiority
Whiners
A Dose of Reality
God-Like Minds
An Addendum to (Asymmetrical) Ideological Warfare
Khizr Khan’s Muddled Logic
A Lesson in Election-Rigging
My Platform (which reflects a Midwestern state of mind)
Polarization and De-facto Partition
H.L. Mencken’s Final Legacy
The Shy Republican Supporters
Roundup (see “Civil War II”)
Retrospective Virtue-Signalling
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
You Can’t Go Home Again
Class in America
A Word of Warning to Leftists (and Everyone Else)
Another Thought or Two about Class

Equality

From the keyboard of Maverick Philosopher (“Is There a Defensible Sense in Which Human Beings Are Equal?“):

Empirical inequality cannot be denied:  by the various empirical measures there is plenty of inequality among individuals and groups. (Trivial example: men on average are taller than women. Height is an example of an empirically measurable attribute.) So if human beings are taken solely in their empirical and material natures, or if human beings are nothing more than material beings, then talk of the equality of all human beings is either false or trivial. (That all human beings are equal in that they all have been born at or near the surface of the earth is empirically true, but trivially true.)….

Given the plain fact of empirical inequality, is there any defensible sense in which human beings could be said to be equal and in possession of equal rights?…

[A] person [in the descriptive sense] is a conscious and thus sentient individual, capable of self-consciousness, possessing feeling and will and memory and the capacity to reason and plan and seek the truth about itself and everything else…. A person in the normative sense is a rights-possessor which, in virtue of having rights, induces in other persons various duties. For example, my right to life induces in you the duty to refrain from taking my life, and your duty derives from my right. In this sense rights and duties are correlative….

My claim, then, is that we are all equal as persons in the descriptive sense, and therefore all equal in the normative sense.  That is, if any one of us is a rights-possessor in virtue of being a descriptive person, then every one of us is a rights-possessor in virtue of being a descriptive person.  And all of this regardless of sex, race, age, and any other empirical feature. We are equal as persons even if my will is stronger than yours and my intellect more penetrating. We are equal as persons even if you are more compassionate than me.

The point, then, is that equality is grounded in personhood, not in animal constitution….

The above definition of ‘person’ allows for persons that are not human beings and human beings (genetic humans) that are not persons, as well as persons that are human beings….  Examples of humans that are not persons, on my definition of ‘person,’ would be anencephalic human neonates. They would not be persons because of their lack of capacity to develop language and reasoning skills. (For more on the anencephalic business, see Potentiality and the Substance View of Persons, the comments to which were good.)  But these anencephalic individuals are nonetheless genetically human as the offspring of human parents.

To repeat, our equality is grounded in our shared personhood despite our considerable empirical differences. Personhood cannot be understood in natural-scientific terms.

I will try to reduce this to a syllogism:

1. A person is a human being who is a conscious and thus sentient individual, capable of self-consciousness, possessing feeling and will and memory and the capacity to reason and plan and seek the truth about itself and everything else. (A human being who lacks the potential for becoming all of those things is not a person.)

2. All persons are equal, in the sense that they all possess or exhibit personhood, as defined in 1.

3. Given that all persons are equal, if any one of them is a rights-possessor, all of them possess the same rights by virtue of their inherent equality.

Observations:

I am bothered by the distinction made in point 1 between human persons and human non-persons. This opens the door to the kinds of distinctions that are used to justify abortion and involuntary euthanasia.

Point 2 merely says that a person is a person, as defined in point 1. This is a trivial definition of equality. Are Hitler, Stalin, and Mao “equal” to Francis of Assisi, John Paul II, and Mother Teresa? By asking such a question am I proposing the kind of arbitrary distinction that I object to in point 1? (Arbitrary because it emerges from an a priori analysis rather than experience.) The answer is no, as discussed below.

The rights in point 3 seem to be free-floating Platonic entities, independent of the existence and socialization of human beings. But rights are not like that. Nor are they a unitary, all-or-nothing set bestowed on every person. Rights are complex and socially constructed*, and they arise from distinctions of the kind that I make between a Hitler and a Mother Teresa. There are persons who are so despicable that they should have no rights; unlike unwitting fetuses and helpless old people, they should be erased from the face of the earth for the good of humankind.

Social intercourse is capable of generating innumerable gradations of rights, from a positive right to be cared for in one’s old age to a negative right to be allowed to die in peace without the intervention of “life saving” measures. In between are such rights as the right to resume living among free human beings, working at gainful employment, enjoying normal social pleasures, and so on, after having been imprisoned for committing socially defined harms.

Equality, then, is the enjoyment of the same socially bestowed rights as others who are similarly situated (e.g., not incarcerated, eligible for care).

Bonus observation:

One way of defining liberty is to say that it is the scope of action that is allowed by socially agreed upon rights. Negative rights define what one may not do to others; positive rights define what others must do for the beneficiaries of such rights.


* In the best case, the state would enforce socially constructed negative rights (e.g., the right not to be murdered), and would not be a tool for the fabrication and enforcement of so-called positive rights. Such rights do arise from social intercourse, but when the state enforces them it imposes burdens on persons who are not party to the creation of such rights (e.g., the duty of care for others may vary considerably from culture to culture, even within a nation-state). State and society are synonymous only in small, cohesive, and kinship groups.


Related posts:
Negative Rights
Rights, Liberty, the Golden Rule, and the Legitimate State
“Natural Rights” and Consequentialism
More about Consequentialism
Line-Drawing and Liberty
What Are “Natural Rights”?
The Golden Rule and the State
Bounded Liberty: A Thought Experiment
Evolution, Human Nature, and “Natural Rights”
The Meaning of Liberty
Positive Liberty vs. Liberty
On Self-Ownership and Desert
The Golden Rule as Beneficial Learning
Facets of Liberty
Burkean Libertarianism
Rights: Source, Applicability, How Held
Human Nature, Liberty, and Rationalism
Merit Goods, Positive Rights, and Cosmic Justice
More about Merit Goods
Society and the State
Liberty, Negative Rights, and Bleeding Hearts
Liberty and Society
Genetic Kinship and Society
Liberty as a Social Construct: Moral Relativism?
Defining Liberty
The Social Animal and the “Social Contract”
The Futile Search for “Natural Rights”
Getting Liberty Wrong
The Harmful Myth of Inherent Equality
The Principles of Actionable Harm
More About Social Norms and Liberty
The Harm Principle Revisited: Mill Conflates Society and State
Liberty and Social Norms Re-examined
Natural Law, Natural Rights, and the Real World
Natural Law and Natural Rights Revisited

Lincoln Was Wrong

Michael Stokes Paulsen and his son Luke opine:

[A]t the heart of the Civil War, the crisis that triggered it, and the changes that it brought were enormous constitutional issues. Indeed, it is no exaggeration to say that the Civil War was fought over the meaning of the Constitution, and over who would have the ultimate power to decide that meaning. The Civil War decided—on the battlefields rather than in the courts—the most important constitutional questions in our nation’s history: the nature of the Union under the Constitution, the status and future of slavery, the powers of the national government versus the states, the supremacy of the Constitution, and the wartime powers of the president as commander in chief. It was the Civil War, not any subsequent judicial decision, that “overruled” the Supreme Court’s atrocious decision in Dred Scott v. Sandford creating a national constitutional right to own slaves….

The United States is the nation it is today because of Lincoln’s unwavering commitment to the Constitution as governing a single, permanent nation and forbidding secession. Lincoln’s vision of Union is so thoroughly accepted today that we forget how hotly disputed it was for the first seventy years of our nation’s history. The result was hardly inevitable. Lincoln’s vision and resolve saved the nation. Lincoln’s nationalist views have shaped every issue of federalism and sovereignty for the past one hundred fifty years. Compared with the constitutional issues over which the Civil War was fought, today’s disputes over federal-versus-state power are minor-league ball played out on a field framed by Lincoln’s prevailing constitutional vision of the United States as one nation, indivisible.

On the president’s constitutional duty: Lincoln understood his oath to impose an absolute personal moral and legal duty not to cave in to wrong, destructive views of the Constitution. He fought on the campaign trail for his understanding of Union and of the authority of the national government to limit the spread of slavery. Once in office, he understood his oath to impose on him an irreducible moral and legal duty of faithful execution of the laws, throughout the Union. It was a duty he could not abandon for any reason. [“The Great Interpreter”, University of St. Thomas (Minnesota) Research Paper No. 15-09, April 17, 2017]

Whence Lincoln’s view of the Union? This is from the Paulsens’ book, The Constitution: An Introduction:

Lincoln was firmly persuaded that secession was unconstitutional. Immediately upon taking office as President, in his First Inaugural Address, Lincoln— a careful constitutional lawyer— laid out in public his argument as to why secession was unconstitutional: The Constitution was the supreme law of the land, governing all the states. The Constitution did not provide that states could withdraw from the Union, and to infer such a right was contrary to the letter and spirit of the document. The Constitution’s Preamble announced the objective of forming a “more perfect Union” of the states than had existed under the Articles of Confederation, which themselves had said that the Union would be “perpetual.” Moreover, the Constitution created a true national government, not a mere “compact,” league, or confederacy— in fact, it explicitly forbade states from entering into alliances, confederacies, or treaties outside of national authority. The people of the United States, taken as a whole, were sovereign, not the states.

It followed from these views, Lincoln argued, that “no State upon its own mere motion can lawfully get out of the Union; that resolves and ordinances to that effect are legally void, and that acts of violence within any State or States against the authority of the United States are insurrectionary or revolutionary, according to circumstances.” Purported secession was simply an illegal— unconstitutional— rebellion against the Union.

Lincoln’s position, which the Paulsens seem to applaud, is flawed at its root. The Constitution did not incorporate the Articles of Confederation; it supplanted them. The “perpetual Union” of the Articles vanished into thin air upon the adoption of the Constitution. Moreover, the “more perfect Union” of the Constitution’s preamble is merely aspirational, as are the desiderata that follow it:

establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity.

“More perfect”, if it means anything, means that the Constitution created a central government where there was none before. The Constitution is silent about perpetuity. It is silent about secession. Therefore, one must turn elsewhere, not to the Civil War, to find (or reject) a legal basis for secession.

The Civil War “decided” the issue of secession in the same way that World War I “decided” the future of war. It was the “war to end all wars”, was it not? Therefore, tens of millions of deaths to the contrary notwithstanding, there have been no wars since the Armistice of 1918. By the same logic, the thief who steals your car or the vandal who defaces your home or the scam artist who takes your life savings has “decided” that you don’t own a car, or that your home should be ugly, or that your savings are really his. Thus does might make right, as the Paulsens would have it.

There is in fact a perfectly obvious and straightforward case for unilateral secession, which I have made elsewhere, including “A Resolution of Secession”. You should read all of it if you are a rabid secessionist — or a rabid anti-secessionist. Here are some key passages:

The Constitution is a contract — a compact in the language of the Framers. The parties to the compact are not only the States but also the central government….

Lest there be any question about the status of the Constitution as a compact, we turn to James Madison, who is often called the Father of the Constitution. Madison, in a letter to Daniel Webster dated March 15, 1833, addresses

the question whether the Constitution of the U.S. was formed by the people or by the States, now under a theoretic discussion by animated partizans.

Madison continues:

It is fortunate when disputed theories, can be decided by undisputed facts. And here the undisputed fact is, that the Constitution was made by the people, but as imbodied into the several states, who were parties to it and therefore made by the States in their highest authoritative capacity….

[I]n The Federalist No. 39, which informed the debates in the various States about ratification….

Madison leaves no doubt about the continued sovereignty of each State and its people. The remaining question is this: On what grounds, if any, may a State withdraw from the compact into which it entered voluntarily?

There is a judicial myth — articulated by a majority of the United States Supreme Court in Texas v. White (1869) — that States may not withdraw from the compact because the union of States is perpetual….

The Court’s reasoning is born of mysticism, not legality. Similar reasoning might have been used — and was used — to assert that the Colonies were inseparable from Great Britain. And yet, some of the people of the Colonies put an end to the union of the Colonies and Great Britain, on the moral principle that the Colonies were not obliged to remain in an abusive relationship. That moral principle is all the more compelling in the case of the union known as the United States, which — mysticism aside — is nothing more than the creature of the States, as authorized by the people thereof.

In fact, the Constitution supplanted the Articles of Confederation and Perpetual Union, by the will of only nine of the thirteen States….

[I]n a letter to Alexander Rives dated January 1, 1833, Madison says that

[a] rightful secession requires the consent of the others [other States], or an abuse of the compact, absolving the seceding party from the obligations imposed by it.

An abuse of the compact most assuredly necessitates withdrawal from it, on the principle of the preservation of liberty, especially if that abuse has been persistent and shows no signs of abating. The abuse, in this instance, has been and is being committed by the central government.

The central government is both a creature of the Constitution and a de facto party to it, as co-sovereign with the States and supreme in its realm of enumerated and limited powers. One of those powers enables the Supreme Court of the United States to decide “cases and controversies” arising under the Constitution, which alone makes the central government a responsible party. More generally, the high officials of the central government acknowledge the central government’s role as a party to the compact — and the limited powers vested in them — when they take oaths of office requiring them to uphold the Constitution.

Many of those high officials have nevertheless committed myriad abuses of the central government’s enumerated and limited powers. The abuses are far too numerous to list in their entirety. The following examples amply justify the withdrawal of the State of _______________ from the compact….

We, therefore, the representatives of the people of _______________ do solemnly publish and declare that this State ought to be free and independent; that it is absolved from all allegiance to the government of the United States; that all political connection between it and government of the United States is and ought to be totally dissolved; and that as a free and independent State it has full power to levy war, conclude peace, contract alliances, establish commerce, and to do all other acts and things which independent States may of right do. And for the support of this Declaration, with a firm reliance on the protection of divine Providence, we mutually pledge to each other our lives, our fortunes and our sacred honor.


Related posts:
Secession
Secession Redux
A New Cold War or Secession?
The Real Constitution and Civil Disobedience
A Declaration of Independence
First Principles
The Constitution: Original Meaning, Corruption, and Restoration
The Southern Secession Reconsidered
A Declaration of Civil Disobedience
Our Perfect, Perfect Constitution
Reclaiming Liberty throughout the Land
Secession, Anyone?
Secession for All Seasons
A New Constitution for a New Republic
Restoring Constitutional Government: The Way Ahead
Secession Made Easy
More about “Secession Made Easy”
How Libertarians Ought to Think about the Constitution
The States and the Constitution
Judicial Supremacy: Judicial Tyranny
The Answer to Judicial Supremacy
Turning Points
A Resolution of Secession
Polarization and De-facto Partition

Quantum Mechanics and Free Will

Physicist Adam Frank, in “Minding Matter” (Aeon, March 13, 2017), visits subjects that I have approached from several angles in various posts. Frank addresses the manifestation of brain activity — more properly, the activity of the central nervous system (CNS) — which is known as consciousness. But there’s a lot more to CNS activity than that. What it all adds up to is generally called “mind”, which has conscious components (things we are aware of, including being aware of being aware) and subconscious components (things that go on in the background that we might or might not become aware of).

In the traditional (non-mystical) view, each person’s mind is separate from the minds of other persons. Mind (or the concepts, perceptions, feelings, memories, etc. that comprise it) therefore defines self. I am my self (i.e., not you) because my mind is a manifestation of my body’s CNS, which isn’t physically linked to yours.

With those definitional matters in hand, Frank’s essay can be summarized and interpreted as follows:

According to materialists, mind is nothing more than a manifestation of CNS activity.

The underlying physical properties of the CNS are unknown because the nature of matter is unknown.

Matter, whatever it is, doesn’t behave in billiard-ball fashion, where cause and effect are tightly linked.

Instead, according to quantum mechanics, matter has probabilistic properties that supposedly rule out strict cause-and-effect relationships. The act of measuring matter resolves the uncertainty, but in an unpredictable way.

Mind is therefore a mysterious manifestation of quantum-mechanical processes. One’s state of mind is affected by how one “samples” those processes, that is, by one’s deliberate, conscious attempt to use one’s CNS in formulating the mind’s output (e.g., thoughts and interpretations of the world around us).

Because of the ability of mind to affect mind (“mind over matter”), it is more than merely a passive manifestation of the physical state of one’s CNS. It is, rather, a meta-state — a physical state that is created by “mental” processes that are themselves physical.

In sum, mind really isn’t immaterial. It’s just a manifestation of poorly understood material processes that can be influenced by the possessor of a mind. It’s the ultimate self-referential system, a system that can monitor and change itself to some degree.

None of this means that human beings lack free will. In fact, the complexity of mind argues for free will. This is from a 12-year-old post of mine:

Suppose I think that I might want to eat some ice cream. I go to the freezer compartment and pull out an unopened half-gallon of vanilla ice cream and an unopened half-gallon of chocolate ice cream. I can’t decide between vanilla, chocolate, some of each, or none. I ask a friend to decide for me by using his random-number generator, according to rules of his creation. He chooses the following rules:

  • If the random number begins in an odd digit and ends in an odd digit, I will eat vanilla.
  • If the random number begins in an even digit and ends in an even digit, I will eat chocolate.
  • If the random number begins in an odd digit and ends in an even digit, I will eat some of each flavor.
  • If the random number begins in an even digit and ends in an odd digit, I will not eat ice cream.

Suppose that the number generated by my friend begins in an even digit and ends in an even digit: the choice is chocolate. I act accordingly.

I didn’t inevitably choose chocolate because of events that led to the present state of my body’s chemistry, which might otherwise have dictated my choice. That is, I broke any link between my past and my choice about a future action. I call that free will.

I suspect that our brains are constructed in such a way as to produce the same kind of result in many situations, though certainly not in all situations. That is, we have within us the equivalent of an impartial friend and an (informed) decision-making routine, which together enable us to exercise something we can call free will.
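The decision procedure in that old post is mechanical enough to be written out as a short program. Here is a minimal sketch in Python; the four parity rules and the flavor outcomes come straight from the quotation, while the function name, the use of Python’s random module, and the rest of the scaffolding are merely my illustrative assumptions:

    import random

    def choose_flavor(number: int) -> str:
        # Apply the four parity rules from the quoted post.
        digits = str(number)
        first_odd = int(digits[0]) % 2 == 1
        last_odd = int(digits[-1]) % 2 == 1
        if first_odd and last_odd:
            return "vanilla"
        if not first_odd and not last_odd:
            return "chocolate"
        if first_odd and not last_odd:
            return "some of each"
        return "no ice cream"  # even first digit, odd last digit

    # The "impartial friend" supplies the random number.
    number = random.randint(10, 99999)
    print(number, "->", choose_flavor(number))

However the digits fall, the outcome is driven by the generator rather than by the prior state of the chooser’s body chemistry, which is the point of the metaphor.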

This rudimentary metaphor is consistent with the quantum nature of the material that underlies mind. But I don’t believe that free will depends on quantum mechanics. I believe that there is a part of mind — a part with a physical location — which makes independent judgments and arrives at decisions based on those judgments.

To extend the ice-cream metaphor, I would say that my brain’s executive function, having become aware of my craving for ice cream, taps my knowledge (memory) of snacks on hand, or directs the part of my brain that controls my movements to look in the cupboard and freezer. My executive function, having determined that my craving isn’t so urgent that I will drive to a grocery store, then compiles the available options and chooses the one that seems best suited to the satisfaction of my craving at that moment. It may be ice cream, or it may be something else. If it is ice cream, it will consult my “taste preferences” and choose between the flavors then available to me.

Given the ways in which people are seen to behave, it seems obvious that the executive function, like consciousness, is on a “different circuit” from other functions (memory, motor control, autonomic responses, etc.), just as the software programs that drive my computer’s operations are functionally separate from the data stored on the hard drive and in memory. The software programs would still be on my computer even if I erased all the data on my hard drive and in memory. So, too, would my executive function (and consciousness) remain even if I lost all memory of everything that happened to me before I awoke this morning.

Given this separateness, there should be no question that a person has free will. That is why I can sometimes resist a craving for ice cream. That is why most people are often willing and able to overcome urges, from eating candy to smoking a cigarette to punching a jerk.

Conditioning, which leads to addiction, makes it hard to resist urges — sometimes nigh unto impossible. But the ability of human beings to overcome conditioning, even severe addictions, argues for the separateness of the executive function from other functions. In short, it argues for free will.


Related posts:
Free Will: A Proof by Example?
Free Will, Crime, and Punishment
Mind, Cosmos, and Consciousness
“Feelings, Nothing More than Feelings”
Hayek’s Anticipatory Account of Consciousness
Is Consciousness an Illusion?

Punctuation within Quotation Marks: British vs. American Style

I’ve added this to my page “On Writing“:

I have reverted to the British style of punctuating in-line quotations, which I followed 40 years ago when I published a weekly newspaper. The British style is to enclose within quotation marks only (a) punctuation that appears in the quoted text or (b) punctuation that belongs to the title of a work (e.g., a blog post) that is customarily set within quotation marks.

I have reverted because of the confusion and unsightliness caused by the American style. It calls for the placement of periods and commas within quotation marks, even if the periods and commas don’t occur in the quoted material or title. Also, if there is a question mark at the end of quoted material, it replaces the comma or period that might otherwise be placed there.

If I had continued to follow American style, I would have ended a sentence in a recent post with this:

… “A New (Cold) Civil War or Secession?” “The Culture War,” “Polarization and De-facto Partition,” and “Civil War?”

What a hodge-podge. There’s no comma between the first two entries, and the sentence ends with an inappropriate question mark. With two titles ending in question marks, there was no way for me to avoid a series in which a comma is lacking. I could have avoided the sentence-ending question mark by recasting the list, but the items are listed chronologically, which is how they should be read.

I solved these problems easily by reverting to the British style:

… “A New (Cold) Civil War or Secession?”, “The Culture War“, “Polarization and De-facto Partition“, and “Civil War?“.

This not only eliminates the hodge-podge, but is also more logical and accurate. All items are separated by commas, commas aren’t displaced by question marks, and the declarative sentence ends with a period instead of a question mark.

Roundup: Civil War, Solitude, Transgenderism, Academic Enemies, and Immigration

Civil War II

Are Americans really in the midst of Civil War II or a Cold Civil War? It has seemed that way for many years. I have written about it in “A New (Cold) Civil War or Secession?”, “The Culture War“, “Polarization and De-facto Partition“, and “Civil War?“.* Andrew Sullivan, whom I quit following several years ago for reasons that are evident in the following quotation (my irrepressible comments are in boldface and bracketed), has some provocative things to say about the situation:

Certain truths about human beings have never changed. We are tribal creatures in our very DNA; we have an instinctive preference for our own over others, for “in-groups” over “out-groups”; for hunter-gatherers, recognizing strangers as threats was a matter of life and death. We also invent myths and stories to give meaning to our common lives. Among those myths is the nation — stretching from the past into the future, providing meaning to our common lives in a way nothing else can. Strip those narratives away, or transform them too quickly, and humans will become disoriented. Most of us respond to radical changes in our lives, especially changes we haven’t chosen, with more fear than hope. We can numb the pain with legal cannabis or opioids, but it is pain nonetheless.

If we ignore these deeper facts about ourselves, we run the risk of fatal errors. It’s vital to remember that multicultural, multiracial, post-national societies are extremely new for the human species [but they are not “societies”], and keeping them viable and stable is a massive challenge. Globally, social trust is highest in the homogeneous Nordic countries, and in America, Pew has found it higher in rural areas than cities. The political scientist Robert Putnam has found that “people living in ethnically diverse settings appear to ‘hunker down,’ that is, to pull in like a turtle.” Not very encouraging about human nature — but something we can’t wish away, either. In fact, the American elite’s dismissal of these truths, its reduction of all resistance to cultural and demographic change as crude “racism” or “xenophobia,” only deepens the sense of siege many other Americans feel….

… Within the space of 50 years, America has gone from segregation to dizzying multiculturalism; … from homosexuality as a sin [or dangerous aberration] to homophobia as a taboo; from Christianity being the common culture to a secularism no society has ever sustained before ours [but mainly within the confines of the internet-media-academic complex, except where they have successfully enlisted government in the task of destroying social norms]….

And how can you seriously regard our political system and culture as worse than ever before in history? How self-centered do you have to be to dismiss the unprecedented freedom for women, racial minorities, and homosexuals? [How self-centered do you have to be to dismiss the fact that much of that “unprecedented freedom” has been bought at the expense of freedom of speech, freedom of association, property rights, and advancement based on merit — things that are at the very heart of liberty?]….

If the neo-reactionaries were entirely right, the collapse of our society would surely have happened long before now [Strawman alert: How does Sullivan know when “society” would have collapsed?]. But somehow, an historically unprecedented mix of races and cultures hasn’t led to civil war in the United States. [Not a shooting war, but a kind of civil war nevertheless.] … America has assimilated so many before, its culture churning into new forms, without crashing into incoherence. [Strawman alert 2: “America”, not being a “society”, doesn’t have a “culture”. But some “cultures” (e.g., welfare-dependency, “hate whitey”, drugs, political correctness) are ascendant, for those with eyes to see.] [“The Reactionary Temptation“, New York, April 30, 2017]

All in all, I would say that Mr. Sullivan protests too much. He protests so much that he confirms my view that America is smack in the middle of a Cold Civil War. (Despite that, and the fatuousness of Mr. Sullivan’s commentary, I am grateful to him for a clear explanation of the political philosophy of Leo Strauss,** the theme of which had heretofore been obscure to me.)

For other, more realistic views of the current state of affairs, see the following (listed in chronological order):

David French, “A Blue State ‘Secession’ Model I Can Get Behind” (National Review, March 19, 2017)

Daniel Greenfield, “The Civil War Is Here” (Frontpage Magazine, March 27, 2017)

Daniel Greenfield, “Winning the Civil War of Two Americas” (Frontpage Magazine, April 4, 2017)

Rick Moran, “War Between U.S. Government and Sanctuary Cities Heating Up” (American Thinker, April 10, 2017)

Angelo M. Codevilla, “The Cold Civil War” (Claremont Review of Books, April 25, 2017)


Solitude for the Masses

Paul Kingsnorth reviews Michael Harris’s Solitude in “The End of Solitude: In a Hyperconnected World, Are We Losing the Art of Being Alone?” (New Statesman, April 26, 2017):

Harris has an intuition that being alone with ourselves, paying attention to inner silence and being able to experience outer silence, is an essential part of being human….

What happens when that calm separateness is destroyed by the internet of everything, by big-city living, by the relentless compulsion to be with others, in touch, all the time? Plenty of people know the answer already, or would do if they were paying attention to the question. Nearly half of all Americans, Harris tells us, now sleep with their smartphones on their bedside table, and 80 per cent are on their phone within 15 minutes of waking up. Three-quarters of adults use social networking sites regularly. But this is peanuts compared to the galloping development of the so-called Internet of Things. Within the next few years, anything from 30 to 50 billion objects, from cars to shirts to bottles of shampoo, will be connected to the net. The internet will be all around you, whether you want it or not, and you will be caught in its mesh like a fly. It’s not called the web for nothing….

What is the problem here? Why does this bother me, and why does it bother Harris? The answer is that all of these things intrude upon, and threaten to destroy, something ancient and hard to define, which is also the source of much of our creativity and the essence of our humanity. “Solitude,” Harris writes, “is a resource.” He likens it to an ecological niche, within which grow new ideas, an understanding of the self and therefore an understanding of others.

The book is full of examples of the genius that springs from silent and solitary moments. Beethoven, Dostoevsky, Kafka, Einstein, Newton – all developed their ideas and approach by withdrawing from the crowd….

Yet it is not only geniuses who have a problem: ordinary minds like yours and mine are threatened by the hypersocial nature of always-on urbanity….

So, what is to be done about all this? That’s the multibillion-dollar question, but it is one the book cannot answer. Harris spends many pages putting together a case for the importance of solitude and examining the forces that splinter it today….

Under the circumstances – and these are our circumstances – the only honest conclusion to draw is that the problem, which is caused primarily by the technological direction of our society, is going to get worse. There is no credible scenario in which we can continue in the same direction and not see the problem of solitude, or lack of it, continue to deepen….

… Short of a collapse so severe that the electricity goes off permanently, there is no escape from what the tech corporations and their tame hive mind have planned for us. The circle is closed, and the net is being hauled in. May as well play another round of Candy Crush while we wait to be dragged up on to the deck.

Well, the answer doesn’t lie in the kind of defeatism exemplified by Harris (whose book is evidently full of diagnosis and empty of remedy) or Kingsnorth. It’s up to each person to decide whether or not to enlarge his scope of solitude or be defeated by the advance of technology and the breakdown of truly human connections.

But it’s not an all-or-nothing choice. Compromise is obviously necessary when it comes to making a living these days. That still leaves a lot of room for solitude, whose practice and benefits I have addressed in “Flow“, “In Praise of Solitude“, “There’s Always Solitude“, and “The Glory of the Human Mind“.


More about the Transgender Fad

Is the transgender fad fading away, or is it just that I’m spending more time in solitude? Anyway, I was reminded of the fad by “Most Children Who Identify As Transgender Are Faking It, Says ‘Gender Clinic’ Psychiatrist” (The College Fix, April 17, 2017). It’s a brief post and the title tells the tale. So I’ll turn to my own post on the subject, “The Transgender Fad and Its Consequences“. Following a preamble and some long quotations from authoritative analysis of transgenderism, I continue with this:

Harm will come not only to those who fall prey to the transgender delusion, but also to those who oppose its inevitable manifestations:

  • mandatory sex mingling in bathrooms, locker rooms, and dorm rooms — an invitation to predators and a further weakening of the norms of propriety that help to instill respect toward other persons
  • quotas for hiring self-described transgender persons, and for admitting them to universities, and for putting them in the ranks of police and armed forces, etc.
  • government-imposed penalties for saying “hateful and discriminatory” things about gender, the purpose of which will be to stifle dissent about the preceding matters
  • government-imposed penalties for attempts to exercise freedom of association, which is an unenumerated right under the Constitution that, properly understood, includes the right to refuse business from anyone at any time and for any reason (including but far from limited to refusing to serve drug-addled drag queens whose presence will repel other customers)….

How did America get from the pre-Kinsey view of sex as a private matter, kept that way by long-standing social norms, to the let-it-all-hang-out (literally) mentality being pushed by elites in the media, academy, and government?

I attribute much of it to the capitalist paradox. Capitalism — a misnomer for an economic system that relies mainly on free markets and private-property rights — encourages innovation, entrepreneurship, and economic growth. One result is that a “capitalist” economy eventually produces enough output to support large numbers of persons who don’t understand that living off the system and regulating it heavily will bring it down….

The social paradox is analogous to the capitalist paradox. Social relations are enriched and made more productive by the toleration of some new behaviors. But to ensure that a new behavior is enriching and productive, it must be tested in the acid of use.* Shortcuts — activism cloaked in academese, punditry, and political posturing — lead to the breakdown of the processes by which behaviors become accepted because they are enriching and productive.

In sum, the capitalist paradox breeds the very people who are responsible for the social paradox: those who are rich enough to be insulated from the vicissitudes of daily life, where living among and conversing with similar folk reinforces a distorted view of the real world.

It is the cosseted beneficiaries of capitalism who lead the way in forcing Americans to accept as “natural” and “of right” behavior that in saner times was rarely engaged in and even more rarely flaunted. That restraint wasn’t just a matter of prudery. It was a matter of two things: respect for others, and the preservation of norms that foster restraint.

How quaint. Avoiding offense to others, and teaching one’s children that normal behavior helps them to gain the acceptance and trust of others. Underlying those understood motivations was a deeper one: Children are susceptible creatures, easily gulled and led astray — led into making mistakes that will haunt them all their lives. There was, in those days, an understanding that “one thing leads to another.”…

… If the Kennedy Court of Social Upheaval continues to hold sway, its next “logical” steps will be to declare the illegality of sexual identifiers and the prima facie qualification of any person for any job regardless of “its” mental and physical fitness for the job….

… [T]he parents of yesteryear didn’t have to worry about the transgender fad, but they did have to worry about drinking, drug-taking, and sex. Not everyone who “experimented” with those things went on to live a life of dissolution, shame, and regret. But many did. And so, too, will the many young children, adolescents, and young adults who succumb to the fad of transgenderism….

When did it all begin to go wrong? See “1963: The Year Zero.”

Thank you for working your way through this very long quotation from my own blog. But it just has to be said again and again: Transgenderism is a fad, a destructive fad, and a fad that is being used by the enemies of liberty to destroy what little of it is left in America.


The Academic Enemies of Liberty

Kurt Schlichter quite rightly says that “Academia Is Our Enemy So We Should Help It Commit Suicide“:

If Animal House were to be rebooted today, Bluto – who would probably be updated into a differently-abled trans being of heft – might ask, “See if you can guess what am I now?” before expelling a whole mass of pus-like root vegetable on the WASPrivileged villains and announcing, “I’m a university – get it?”

At least popping a zit gets rid of the infection and promotes healing. But today, the higher education racket festers on the rear end of our culture, a painful, useless carbuncle of intellectual fraud, moral bankruptcy, and pernicious liberal fascism that impoverishes the young while it subsidizes a bunch of old pinkos who can’t hack it at Real World U….

If traditional colleges performed some meaningful function that only they could perform, then there might be a rationale for them in the 21st Century. But there’s not. What do four-year colleges do today?

Well, they cater to weenies who feel “unsafe” that Mike Pence is speaking to their graduates. Seventy-some years ago, young people that age were feeling unsafe because the Wehrmacht was trying to kill them on Omaha Beach….

And in their quest to ensure their students’ perpetual unemployment, colleges are now teaching that punctuality is a social construct. Somewhere, a Starbucks manager is going to hear from Kaden the Barista that, “I like, totally couldn’t get here for my shift on time because, like intersectionality of my experience as a person of Scandinavianism and stuff. I feel unsafe because of your racist vikingaphobia and tardiness-shaming.”

Academia is pricing itself out of reach even as the antics of its inhabitants annoy and provoke those of us whose taxes already pick up a big chunk of the bill even without the “free college” okie-doke….

The quarter million dollar academic vacation model is economically unsustainable and poisonous to our culture. The world of Animal House was a lot more fun when it didn’t mean preemptive bankruptcy for its graduates and the fostering of a tyrannical training ground for future libfascists. It’s time to get all Bluto on the obsolete boil that is academia; time to give it a squeeze. [Townhall, April 13, 2017]

Cue my post, “Subsidizing the Enemies of Liberty“:

If there is a professional class that is almost solidly aligned against liberty it is the teachers and administrators who control the ideas that are pumped into the minds of students from kindergarten through graduate school. How are they aligned against liberty? Most of them are leftists, which means that they are statists who are dedicated to the suppression of liberty in favor of current left-wing orthodoxies. These almost always include the coddling of criminals, unrequited love for America’s enemies, redistribution of income and jobs toward less-productive (and non-productive) persons, restrictions on speech, and the destruction of civil society’s bulwarks: religion, marriage, and family.

In any event, spending on education in the United States amounted to $1.1 trillion in 2010, about 8 percent of GDP.  Most of that $1.1 trillion — $900 billion, in fact — was spent on public elementary and secondary schools and public colleges and universities. In other words, your tax dollars support the leftists who teach your children and grandchildren to bow at the altar of the state, to placate the enemies of liberty at home and abroad, and to tear down the traditions that have bound people in mutual trust and respect….

And what do tax-paying Americans get for their money? A strong left-wing bias, which is inculcated at universities and spreads throughout public schools (and a lot of private schools). This has been going on, in earnest, since the end of World War II. And, yet, the populace is roughly divided between hard-headed conservatives and squishy-minded “liberals.” The persistence of the divide speaks well for the dominance of nature over nurture. But it does not change the fact that American taxpayers have been subsidizing the enemies of liberty who dominate the so-called education system in this country.

See also “Academic Bias“, “Politics, Sophistry, and the Academy“, “Academic Ignorance“, and John C. Goodman’s “Brownshirts, Subsidized with Your Tax Dollars” (Townhall, May 20, 2017).


The High Cost of Untrammeled Immigration

The third entry in “Not-So-Random Thoughts (XVIII)” is about illegal immigration. It opens with this:

Ten years ago, I posted “An Immigration Roundup”, a collection of 13 posts dated March 29 through September 22, 2006. The bottom line: to encourage and allow rampant illegal immigration borders on social and economic suicide. I remain a hardliner because of the higher crime rate among Hispanics (“Immigration and Crime“), and because of Steven Camarota’s “So What Is the Fiscal and Economic Impact of Immigration?“ [National Review, September 22, 2016].

I suggest that you go to Camarota’s article, which I quote at length, to see the evidence that he has compiled. For more facts — as opposed to leftish magical thinking about immigration — see also “Welfare: Who’s on It, Who’s Not” (Truth Is Justice, April 16, 2017), which draws on

a report called “Welfare Use by Immigrant and Native Households.” The report’s principal finding is that fully 51 percent of immigrant households receive some form of welfare, compared to an already worrisomely high 30 percent of American native households. The study is based on the most accurate data available, the Census Bureau’s Survey of Income and Program Participation (SIPP). It also reports stark racial differences in the use of welfare programs.

I’ll throw in some excerpts:

Needless to say, the percentage of immigrants using some form of welfare varies enormously according to the part of the world from which they come. Rates are highest for households from Central America and Mexico (73 percent), the Caribbean (51 percent), and Africa (48 percent). Those from East Asia (32 percent), Europe (26 percent), and South Asia (17 percent) have the lowest rates….

A majority of native black and Hispanic households are on some form of means-tested welfare, compared to just 23 percent of native white households….

A striking 82 percent of black households with children receive welfare — double the white rate. Hispanic families are not far behind blacks….

Among natives, blacks receive cash handouts at more than three times the white rate; Hispanics at more than twice the white rate. Rates for black and Hispanic immigrants are relatively lower due to often-ignored restrictions on immigrant use of these programs….

Among all households, native blacks and Hispanics receive food handouts at three times the white rate; for Hispanic immigrants, the figure is four times the white rate. Among households with children, nearly all immigrant Hispanics — 86 percent — get food aid. Native blacks and Hispanics aren’t far behind, with rates of 75 and 72 percent, respectively.

The takeaway: Tax-paying citizens already heavily subsidize native-born blacks and Hispanics. Adding welfare-dependent immigrants — especially from south of the border — adds injury to injury.

As long as the welfare state exists, immigration should be tightly controlled so that the United States admits only those persons (with their families) who have verifiable offers of employment from employers in the United States. Further, an immigrant’s income should be high enough to ensure that (a) he is unlikely to become dependent on any welfare program (federal, State, or local) and (b) he is likely to pay at least as much in taxes as he is likely to absorb in the way of schooling for his children, Social Security and Medicare benefits, etc.

(See also: Bob le Flambeur, “Against Open Borders”, Rightly Considered, February 8, 2017.)


* Sharp-eyed readers will notice that with this post I am adopting a “new” way of using quotation marks. The American convention is to enclose commas and periods within quotation marks, even where the commas and periods are not part of the quoted text or other material that belongs inside quotation marks (e.g., the title of a post). Writing the title of my post “Academic Bias” followeded by a comma as “Academic Bias,” for example, leaves the reader to guess whether the comma belongs to the title. The American convention thus creates ambiguity and awkwardness that are avoided by the British convention, which is to enclose inside quotation marks only that punctuation which is part of the quoted text or other material.

** This is from the article by Sullivan cited in the first section of this post:

[Leo] Strauss’s idiosyncratic genius defies easy characterization, but you could argue, as Mark Lilla did in his recent book The Shipwrecked Mind, that he was a reactionary in one specific sense: A Jewish refugee from Nazi Germany, Strauss viewed modernity as collapsing into nihilism and relativism and barbarism all around him. His response was to go back to the distant past — to the works of Plato, Aristotle, and Maimonides, among others — to see where the West went wrong, and how we could avoid the horrific crimes of the 20th century in the future.

One answer was America, where Strauss eventually found his home at the University of Chicago. Some of his disciples — in particular, the late professor Harry Jaffa — saw the American Declaration of Independence, with its assertion of the self-evident truth of the equality of human beings, as a civilizational high point in human self-understanding and political achievement. They believed it revived the ancient Greek and Roman conception of natural law. Yes, they saw the paradox of a testament to human freedom having been built on its opposite — slavery — but once the post–Civil War constitutional amendments were ratified, they believed that the American constitutional order was effectively set forever, and that the limited government that existed in the late-19th and early-20th centuries required no fundamental change.

Microsoft Edge: A Review

My version of Windows 10 was subjected recently to the Creators Update, whatever that is. Its marvelous effects are entirely invisible to me, which is good. But at the end of the update, I was urged to use a new version of the Microsoft Edge browser. Well, it was more than an urging. The thing popped onto my screen at the end of the update, as if I had ordered it. But I hadn’t, so I closed it.

Curiosity got the better of me, so I did a bit of research and found that the new Edge is supposed to be faster than other browsers. I tried it, and it does seem faster. But I quickly abandoned it and removed it from my taskbar.

What went wrong? Unlike Firefox, which I’m still able to customize to match my browsing preferences, Edge is simple-minded (like the software engineers who designed it):

Extensions are almost non-existent. Of the several Firefox extensions that I use, Edge offers only Adblock Plus.

Some functions that are handled by Firefox extensions (e.g., find in page) must be accessed in Edge by going to a drop-down menu. But the functions must be reactivated every time Edge is re-opened. There’s no session-to-session memory of chosen functions.

One of the great Firefox extensions is Classic Theme Restorer, which gives me a menu bar. A menu bar is a hell of a lot easier to use than clicking on Edge’s single drop-down menu and searching through it in vain for the function that I want to perform. Edge’s idea of functionality is to require the user to memorize a long list of keyboard shortcuts.

Classic Theme Restorer also allows me to position tabs just above the image area for web pages, which is where they belong. (Try it, you’ll like it.)

Thanks to Classic Theme Restorer, I am also able to have icons for the following useful functions in my tool bar and menu bar: change zoom, open new window, open last closed tab, show history, open the download list, subscribe to a site’s RSS feed, and adjust Adblock Plus settings. Some of those are extension-based functions that Edge doesn’t offer. Some others are available to masochists who like to memorize and use keyboard shortcuts instead of simply clicking on icons.

Zoom, which Firefox offers in 10-percent increments, is available only in 25-percent increments with Edge. As a result, the type on most Edge pages is either uncomfortably small or uncomfortably large. How wonderful is that?

I could dredge up more examples if I wanted to waste more of my time, but I’ll close with this observation: Edge plays badly with WordPress. For example:

It’s not possible to copy text from a web page and paste it into WordPress’s visual editor by using the standard right-click operations. How does one paste in Edge? Using a keyboard shortcut, of course.

Worse than that, pasting web-page material into WordPress’s visual editor creates a mess; all kinds of extraneous coding appears, and a lot of punctuation (e.g., dashes, quotation marks, apostrophes) is displayed as garbage. It’s possible to paste copied text into the HTML editor, but that results in the loss of embedded links.
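I can only guess at the mechanism, but the garbage punctuation has the look of a classic encoding mix-up: curly quotes, apostrophes, and dashes are multi-byte characters in UTF-8, and if the pasted text gets decoded under the wrong encoding somewhere along the way, each byte turns into junk separately. Here is a minimal Python sketch of that failure mode; pinning it on Edge’s clipboard hand-off is my assumption, not a verified diagnosis:

```python
# Sketch of the suspected failure mode (assumption: UTF-8 bytes being
# re-decoded as Windows-1252 somewhere in the copy/paste hand-off).
text = "“Edge” — Microsoft’s browser"   # typographic punctuation
utf8_bytes = text.encode("utf-8")       # each curly mark is 3 bytes in UTF-8
mangled = utf8_bytes.decode("cp1252", errors="replace")
print(mangled)  # the curly marks decay into multi-character debris (â€œ, â€™, ...)
```

Whatever the true cause, plain ASCII characters survive such a round trip intact, which matches the symptom: the words come through fine, and only the typographic punctuation is wrecked.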

To top it off, Edge just can’t keep up with WordPress’s background operations; the visual editor often stalls or goes blank.

Edge is aptly named. It’s at the trailing edge of browser technology. Microsoft strikes (out), again.