Political Economy & Civil Society

Racism on Parade

There has been much ado about an article by lawprofs Amy Wax (University of Pennsylvania) and Larry Alexander (University of San Diego), “Paying the Price for the Breakdown of the Country’s Bourgeois Culture” (the Philadelphia Inquirer, August 9, 2017). Wax and Alexander say this:

Too few Americans are qualified for the jobs available. Male working-age labor-force participation is at Depression-era lows. Opioid abuse is widespread. Homicidal violence plagues inner cities. Almost half of all children are born out of wedlock, and even more are raised by single mothers. Many college students lack basic skills, and high school students rank below those from two dozen other countries.

The causes of these phenomena are multiple and complex, but implicated in these and other maladies is the breakdown of the country’s bourgeois culture.

That culture laid out the script we all were supposed to follow: Get married before you have children and strive to stay married for their sake. Get the education you need for gainful employment, work hard, and avoid idleness. Go the extra mile for your employer or client. Be a patriot, ready to serve the country. Be neighborly, civic-minded, and charitable. Avoid coarse language in public. Be respectful of authority. Eschew substance abuse and crime.

These basic cultural precepts reigned from the late 1940s to the mid-1960s. They could be followed by people of all backgrounds and abilities, especially when backed up by almost universal endorsement. Adherence was a major contributor to the productivity, educational gains, and social coherence of that period.

Did everyone abide by those precepts? Of course not. There are always rebels — and hypocrites, those who publicly endorse the norms but transgress them. But as the saying goes, hypocrisy is the homage vice pays to virtue. Even the deviants rarely disavowed or openly disparaged the prevailing expectations….

… The loss of bourgeois habits seriously impeded the progress of disadvantaged groups. That trend also accelerated the destructive consequences of the growing welfare state, which, by taking over financial support of families, reduced the need for two parents. A strong pro-marriage norm might have blunted this effect. Instead, the number of single parents grew astronomically, producing children more prone to academic failure, addiction, idleness, crime, and poverty.

This cultural script began to break down in the late 1960s. A combination of factors — prosperity, the Pill, the expansion of higher education, and the doubts surrounding the Vietnam War — encouraged an antiauthoritarian, adolescent, wish-fulfillment ideal — sex, drugs, and rock-and-roll — that was unworthy of, and unworkable for, a mature, prosperous adult society….

And those adults with influence over the culture, for a variety of reasons, abandoned their role as advocates for respectability, civility, and adult values. As a consequence, the counterculture made great headway, particularly among the chattering classes — academics, writers, artists, actors, and journalists — who relished liberation from conventional constraints and turned condemning America and reviewing its crimes into a class marker of virtue and sophistication.

All cultures are not equal. Or at least they are not equal in preparing people to be productive in an advanced economy. The culture of the Plains Indians was designed for nomadic hunters, but is not suited to a First World, 21st-century environment. Nor are the single-parent, antisocial habits, prevalent among some working-class whites; the anti-“acting white” rap culture of inner-city blacks; the anti-assimilation ideas gaining ground among some Hispanic immigrants. These cultural orientations are not only incompatible with what an advanced free-market economy and a viable democracy require, they are also destructive of a sense of solidarity and reciprocity among Americans. If the bourgeois cultural script — which the upper-middle class still largely observes but now hesitates to preach — cannot be widely reinstated, things are likely to get worse for us all….

… Among those who currently follow the old precepts, regardless of their level of education or affluence, the homicide rate is tiny, opioid addiction is rare, and poverty rates are low. Those who live by the simple rules that most people used to accept may not end up rich or hold elite jobs, but their lives will go far better than they do now. All schools and neighborhoods would be much safer and more pleasant. More students from all walks of life would be educated for constructive employment and democratic participation.

But restoring the hegemony of the bourgeois culture will require the arbiters of culture — the academics, media, and Hollywood — to relinquish multicultural grievance polemics and the preening pretense of defending the downtrodden. Instead of bashing the bourgeois culture, they should return to the 1950s posture of celebrating it.

There’s a nit-picky but not fundamentally damaging commentary here, which follows a positive commentary by Jonathan Haidt, whom I presume to be a neutral party given his political centrism and rigorous approach to the psychology of politics.

As for me, I am skeptical about the restoration of the hegemony of bourgeois culture. It’s my view that when constructive social norms (e.g., work rather than welfare, marriage before children) have been breached on a large scale (as in Charles Murray’s “Fishtown”), they can’t be put back together again. Not on a large scale among persons now living, at least.

It’s true that many aspiring escapees from “Fishtown” (and its equivalents among blacks and Hispanics) will emulate the social norms of the middle and upper-middle classes. Those who are steadfast in their emulation are more likely to escape their respective white, tan, and black “ghettos” than those who don’t try or give up.

But “ghettos” will persist for as long as government provides “freebies” to people for not working, for not marrying, and for having children out of wedlock. And I see no end to the “freebies” because (a) there are a lot of votes in the “ghettos” and (b) there are too many members of the middle and upper-middle classes — mainly but not exclusively “progressives” — who would rather give a man a fish every day than teach him how to fish.

That said, the heated controversy about the Wax-Alexander piece stems from its perceived racism — perceived by the usual, hyper-sensitive suspects. How dare Wax and Alexander drag blacks and Hispanics into their discussion by referring to

  • homicidal violence that plagues inner cities
  • the fact that almost half of all children are born out of wedlock, and even more are raised by single mothers
  • the anti-“acting white” rap culture of inner-city blacks
  • the anti-assimilation ideas gaining ground among some Hispanic immigrants

And how dare they assert (quite reasonably) that not all cultures are equal.

So the condemnation began. The thrust of it, of course, is that Wax and Alexander are “racist”.

For her sins, Wax was the target of an open letter of condemnation signed by 33 of her law school colleagues at UPenn. And for his sins, Alexander was singled out for criticism by the dean of USD’s law school.

Turnabout is fair play — or it will be as long as there are vestiges of free speech on college campuses. Tom Smith, a lawprof at USD who blogs at The Right Coast, is mightily miffed about his dean’s response to the Wax-Alexander piece. Smith and seven other USD lawprofs signed a letter which reads, in part:

Yesterday, Stephen Ferruolo, dean of the University of San Diego School of Law, sent to the entire law school community a lengthy email message entitled “Our Commitment to Diversity and Inclusion.” The message began by thanking those who have “expressed their concerns” about an op-ed written by our colleague Larry Alexander and University of Pennsylvania law professor Amy Wax and published last month in the Philadelphia Inquirer…. While acknowledging that Professor Alexander has a right to his views, the dean then declared, “I personally do not agree with those views, nor do I believe that they are representative of the views of our law school community.”…

The dean did not describe the contents of the Alexander-Wax op-ed, and he offered no specifics about what he disagreed with. In the context of the overall message, readers of the dean’s statement will inevitably infer that, at least in the dean’s view, Professor Alexander’s op-ed was in some sense supportive of exclusion or “racial discrimination or cultural subordination.” In effect, the dean adopted the extraordinary measure of singling out a colleague, by name, for a kind of public shaming through unsupported insinuation.

As colleagues of Professor Alexander, we write in response for two principal reasons.

First, the law school community and the interested public should know that Professor Alexander is an honorable, honest man who is not in any way racist…. Just last May, Dean Ferruolo along with the deans of the Yale Law School and the University of Illinois Law School praised Professor Alexander effusively at a conference convened at Yale Law School specifically to discuss and commemorate Professor Alexander’s scholarly contributions in a variety of fields. Considering this distinguished career and unparalleled contribution to the law school, we believe it is unconscionable for a law school dean to subject Professor Alexander to this sort of public shaming.

Second, we are concerned about the harmful effects of the dean’s message for the law school community. A law school and a university should be places where the free exchange of ideas is encouraged, not inhibited…. We have been grateful to study, teach, and write at USD, where in our experience civility and a commitment to freedom of discussion have prevailed. But this commitment is seriously undermined if faculty or students come to perceive that their expression of views disfavored by some may cause them to be singled out for public disapproval by university officials.

We understand that there are limits to the freedom of expression. Anyone, including colleagues and deans, should of course feel free to challenge on the merits the views expressed by other members of the community. As noted, Dean Ferruolo’s email made no attempt to do this. In addition, a member of the university who is shown to promote racist or bigoted views or practices may deserve public censure. However, we challenge the dean or other critics to identify anything in Professor Alexander’s op-ed that expresses or endorses bigotry or “racial discrimination or cultural subordination.”…

Smith continues, in his inimitable style:

I signed onto the letter and I’m grateful to find my name in such distinguished company. More emails and no doubt facebook posts, tweets, blog posts and so forth will no doubt issue in response to these letters. I am breaching my usual dirty bird principle (from the adage, “it’s a dirty bird who fouls his (or her!) own nest”) because this controversy sounds so directly on matters I blog about, sometimes humorously and usually carefully…. [A] man or woman should be entitled to express him or herself in the public prints without having a Dean rain down a ton of politically correct nonsense on his head, for heaven’s sake…. And also, I just have to say, what Larry is calling for (get up in the morning, go to your job, don’t take drugs, don’t have kids out of wedlock, etc., etc.) is rather in line with traditional Catholic teaching, is it not? So if someone says something that is “loudly dogma[tic]”, to coin a phrase, in a newspaper, or at least is consistent with that dogma, he runs the risk of being shamed by the administration of a nominally Catholic law school? That just ain’t rat. Larry of course is not Catholic, he’s a secular Jew, but he’s advocating things that are absolutely in line with what a good or even just sort of good Catholic person would do or practice.

I must say, I feel just a teensy bit neglected myself here. Have I not said things at least as politically incorrect as Larry? What am I, chopped liver? Or whatever the WASP equivalent of chopped liver is? Bologna and mayonnaise perhaps? Celery with peanut butter? Alas, we are but a small blog. But no matter. All in all, this is just a hellova way to thank Larry, who is nearing the end of his career and has given all of it to a small law school when, at least by professional lights, he should have been at a top ten school. And I don’t see how the situation can really be put right at this point. But who knows, perhaps somehow it will be. Meanwhile, the weather finally is beautiful again here today, for what that’s worth.

As for the “racist” label that has been so freely flung at Wax and Alexander, I’ll tell you what’s racist. It’s people like Dean Steve (which is as much of an honorific as he deserves) who assert that it’s racist to advise anyone (of any race, creed, color, national origin, sexual orientation, or whatever other identifying characteristics seem to matter these days) to get a job, stick to it, work hard at it, and take responsibility for yourself.

There are lots of blacks — undoubtedly a majority of them (and many of whom I worked with) — who don’t think such attitudes are racist. But Dean Steve and his ilk seem to believe that such attitudes are racist. Which means that Dean Steve and his ilk are racists, because they believe that all blacks either (a) don’t work hard, etc., and/or (b) are affronted by the idea that hard work, etc., are virtues. How racist can you get?


Related posts:
The Euphemism Conquers All
Superiority
Non-Judgmentalism as Leftist Condescension
Retrospective Virtue-Signalling
Leftist Condescension
Leftism As Crypto-Fascism: The Google Paradigm

Libertarianism, Conservatism, and Political Correctness

Why do conservatives and libertarians generally eschew political correctness? Because we take individual persons as they come, and evaluate each of them on his merits.

That is to say, we reject stereotyping, and political correctness is just another form of stereotyping. Instead of insisting on something foolish like “all blacks are criminals”, political correctness leans the other way and insists that it is wrong to believe or say anything negative of blacks — or of any other group that has been condescendingly identified as “victims” by leftists.

Group differences matter mainly to the extent that they affect the likely success or (more likely) failure of government interventions aimed at defeating human nature. They also matter to the extent that human beings — including members of all racial and ethnic groups — tend to prefer like to unlike (e.g., the preference of “liberal” white yuppies to live in enclaves of “liberal” white yuppies). But such matters have nothing to do with the conservative-libertarian disposition to treat individuals, when encountered as individuals, with the respect (or disrespect) due to them — as individuals.

In that regard, the conservative disposition is especially instructive. A conservative will not rush to judgment (pro or con) based on superficial characteristics, but will judge a person by what he actually says and does in situations that test character and ability. For example, I distinguish between leftists of my acquaintance who are at bottom kind but politically naive, and those whose political views reflect their inner nastiness.

Leftists, in their usual mindless way, take the opposite view and presume that the superficial characteristics that define a group count for more than the character and ability of each member of the group. Political correctness is of a piece with the intellectual laziness that characterizes leftism.


Related posts:
Academic Bias
Intellectuals and Capitalism
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Superiority
Whiners
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Retrospective Virtue-Signalling
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
The Vast Left-Wing Conspiracy
The Left and Evergreen State: Reaping What Was Sown
Leftism As Crypto-Fascism: The Google Paradigm
Leftism (page) and related bibliography

The Danger of Marginal Thinking

The “marginal revolution” in economics, which occurred in the latter part of the 19th century, introduced marginalism,

a theory of economics that attempts to explain the discrepancy in the value of goods and services by reference to their secondary, or marginal, utility. The reason why the price of diamonds is higher than that of water, for example, owes to the greater additional satisfaction of the diamonds over the water. Thus, while the water has greater total utility, the diamond has greater marginal utility.

Although the central concept of marginalism is that of marginal utility, marginalists, following the lead of Alfred Marshall, drew upon the idea of marginal physical productivity in explanation of cost. The neoclassical tradition that emerged from British marginalism abandoned the concept of utility and gave marginal rates of substitution a more fundamental role in analysis. Marginalism is an integral part of mainstream economic theory.

But pure marginalism can be the road to ruin for a business if the average cost of a unit of output is greater than average revenue, that is, the price for which a unit is sold.
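The arithmetic behind that warning can be made concrete. The sketch below uses hypothetical numbers (price, marginal cost, and fixed cost are assumptions, not figures from any source): every additional unit looks profitable “at the margin”, yet the firm loses money overall because average cost exceeds the selling price.

```python
# Hypothetical illustration: marginal profit on each unit is positive,
# but fixed costs push average cost above price, so the firm loses money.

PRICE = 10.0         # average revenue: price received per unit sold
MARGINAL_COST = 6.0  # variable cost of producing one more unit
FIXED_COST = 500.0   # overhead that pure marginal analysis ignores

def total_cost(quantity: int) -> float:
    """Fixed cost plus variable cost for a given output level."""
    return FIXED_COST + MARGINAL_COST * quantity

def average_cost(quantity: int) -> float:
    """Cost per unit of output."""
    return total_cost(quantity) / quantity

def profit(quantity: int) -> float:
    """Revenue minus total cost."""
    return PRICE * quantity - total_cost(quantity)

q = 100
print(PRICE - MARGINAL_COST)  # marginal profit per unit: 4.0 (looks good)
print(average_cost(q))        # 11.0 -- exceeds the price of 10.0
print(profit(q))              # -100.0 -- the firm is losing money
```

At 100 units, each sale contributes 4.0 above its marginal cost, so marginal reasoning says “sell more”; but average cost is 11.0 against a price of 10.0, for a net loss of 100.0. (With these assumed numbers the firm would need 125 units just to break even.)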

Marginalism is the road to ruin in law and politics. If a governmental act can be shown to have a positive effect “at the margin”, its broader consequences are usually ignored. This kind of marginalism is responsible for the slippery slope and the ratchet effect: the enactment and perpetuation of one economically and socially destructive government program after another. Obamacare, same-sex “marriage”, and rampant transgenderism are the most notorious examples of recent years. Among the many examples of earlier years are the Pure Food and Drug Act, the Supreme Court’s holding in Wickard v. Filburn, the Social Security Act and its judicial vindication, the Civil Rights Act of 1964, and the various enactments related to “equal employment opportunity”, including the Americans with Disabilities Act.

Frédéric Bastiat wrote about it more than 160 years ago, in “What Is Seen and What Is Not Seen”:

[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.

The unseen effects — the theft of Americans’ liberty and prosperity — had been foreseen by some (e.g., Tocqueville and Hayek). But their wise words have been overwhelmed by power-lust, ignorance, and greed. Greed manifests itself in the interest-group paradox:

The interest-group paradox is a paradox of mass action….

Pork-barrel legislation exemplifies the interest-group paradox in action, though the paradox encompasses much more than pork-barrel legislation. There are myriad government programs that — like pork-barrel projects — are intended to favor particular classes of individuals. Here is a minute sample:

  • Social Security, Medicare, and Medicaid, for the benefit of the elderly (including the indigent elderly)
  • Tax credits and deductions, for the benefit of low-income families, charitable and other non-profit institutions, and home buyers (with mortgages)
  • Progressive income-tax rates, for the benefit of persons in the mid-to-low income brackets
  • Subsidies for various kinds of “essential” or “distressed” industries, such as agriculture and automobile manufacturing
  • Import quotas, tariffs, and other restrictions on trade, for the benefit of particular industries and/or labor unions
  • Pro-union laws (in many States), for the benefit of unions and unionized workers
  • Non-smoking ordinances, for the benefit of bar and restaurant employees and non-smoking patrons.

What do each of these examples have in common? Answer: Each comes with costs. There are direct costs (e.g., higher taxes for some persons, higher prices for imported goods), which the intended beneficiaries and their proponents hope to impose on non-beneficiaries. Just as importantly, there are indirect costs of various kinds (e.g., disincentives to work and save, disincentives to make investments that spur economic growth)….

You may believe that a particular program is worth what it costs — given that you probably have little idea of its direct costs and no idea of its indirect costs. The problem is millions of your fellow Americans believe the same thing about each of their favorite programs. Because there are thousands of government programs (federal, State, and local), each intended to help a particular class of citizens at the expense of others, the net result is that almost no one in this fair land enjoys a “free lunch.” Even the relatively few persons who might seem to have obtained a “free lunch” — homeless persons taking advantage of a government-provided shelter — often are victims of the “free lunch” syndrome. Some homeless persons may be homeless because they have lost their jobs and can’t afford to own or rent housing. But they may have lost their jobs because of pro-union laws, minimum-wage laws, or progressive tax rates (which caused “the rich” to create fewer jobs through business start-ups and expansions).

The paradox that arises from the “free lunch” syndrome is… like the paradox of panic, in that there is a crowd of interest groups rushing toward a goal — a “pot of gold” — and (figuratively) crushing each other in the attempt to snatch the pot of gold before another group is able to grasp it. The gold that any group happens to snatch is a kind of fool’s gold: It passes from one fool to another in a game of beggar-thy-neighbor, and as it passes much of it falls into the maw of bureaucracy.

As far as I know, only one agency of the federal government has been abolished in my lifetime, while dozens have been created and expanded willy-nilly at the behest of politicians, bureaucrats, and cronies. The one that was abolished — the Interstate Commerce Commission — still had “residual functions” that were transferred elsewhere. That’s the way it works in Washington, and in State capitals.

So one obvious danger of marginal thinking is that the nose of the camel under the edge of the tent is invariably followed by its neck, its humps, its tail, another camel’s nose, etc., etc., etc.

There’s a less obvious danger, which is typified by the penchant of faux-libertarians for dismissing objections to this and that “harmless” act. Economist Mark Perry, for example, regurgitates Milton Friedman’s 30-year-old plea for the decriminalization of drugs. Just because some behavior is “private” doesn’t mean that it’s harmless to others. Murder behind a closed door is still murder.

In the case of drugs, I turn to Theodore Dalrymple:

[I]t is not true that problems with drugs arise only when or because they are prohibited.

The relationship between crime and drug prohibition is also much more complex than the legalizers would have us believe. It is certainly true that gangs quickly form that try to control drug distribution in certain areas, and that conflict between the aspirant gangs leads to violence…. But here I would point out two things: first that the violence of such criminal gangs was largely confined to the subculture from which they emerged, so that other people were not much endangered by it; and second that, in my dealings with such people, I did not form the impression that, were it not for the illegality of drugs, they would otherwise be pursuing perfectly respectable careers. If my impression is correct, then the illegality of drugs might protect the rest of society from their criminality: the illegal drug trade being the occasion, but not the cause, of their violence.

What about Prohibition, is the natural reply? It is true that the homicide rate in the United States fell dramatically in the wake of repeal. By the 1960s, however, when alcohol was not banned, it had climbed higher than during Prohibition…. Moreover, what is less often appreciated, the homicide rate in the United States rose faster in the thirteen years before than in the thirteen years during Prohibition. (In other respects, Prohibition was not as much of a failure as is often suggested: alcohol-related problems such as liver disease declined during it considerably. But no consequences by themselves can justify a policy, otherwise the amputation of thieves’ hands would be universal.) Al Capone was not a fine upstanding citizen before Prohibition turned him into a gangster. [“Ditching Drug Prohibition: A Dissent”, Library of Law and Liberty, July 23, 2015, and the second in a series; see also “The Simple Truth about J.S. Mill’s Simple Truth”, op. cit., July 20, 2015; “Myths and Realities of Drug Addiction, Consumption, and Crime”, op. cit., July 31, 2015; and “Closing Argument on the Drug Issue”, op. cit., August 4, 2015]

This reminds me of my post, “Prohibition, Abortion, and ‘Progressivism’”, in which I wrote about the Ken Burns series, Prohibition. Here’s some of it:

Although eugenics is not mentioned in Prohibition, it looms in the background. For eugenics — like prohibition of alcohol and, later, the near-prohibition of smoking — is symptomatic of the “progressive” mentality. That mentality is paternalistic, through and through. And “progressive” paternalism finds its way into the daily lives of Americans through the regulation of products and services — for our own good, of course. If you can think of a product or service that you use (or would like to use) that is not shaped by paternalistic regulation or taxes levied with regulatory intent, you must live in a cave.

However, the passing acknowledgement of “progressivism” as a force for the prohibition of alcohol is outweighed by the attention given to the role of “evangelicals” in the enactment of prohibition. I take this as a subtle swipe at the anti-abortion stance of fundamentalist Protestants and adherents of the “traditional” strands of Catholicism and Judaism. Here is the “logic” of this implied attack on pro-lifers: Governmental interference in a personal choice is wrong with respect to the consumption of alcohol and similarly wrong with respect to abortion.

By that “logic,” it is wrong for government to interfere in or prosecute robbery, assault, rape, murder and other overtly harmful acts, which — after all — are merely the consequences of personal choices made by their perpetrators. Not even a “progressive” would claim that robbery, assault, etc., should go unpunished, though he would quail at effective punishment.

“Liberals” of both kinds (“progressive” fascists and faux-libertarian) just don’t know when to smack camels on the nose. Civilization depends on deep-seated and vigorously enforced social norms. They reflect eons of trial and error, and can’t be undone peremptorily without unraveling the social fabric — the observance of mores and morals that enable a people to coexist peacefully and beneficially because they are bound by mutual trust, mutual respect, and mutual forbearance.

A key function of those norms is to inculcate self-restraint. For it is the practice of self-restraint that underlies peaceful, beneficial coexistence: What goes around comes around.


Related pages and posts:
Leftism
Social Norms and Liberty
*****
On Liberty
In Defense of Marriage
Myopic Moaning about the War on Drugs
Facets of Liberty
Burkean Libertarianism
The Myth That Same-Sex “Marriage” Causes No Harm
Lock ‘Em Up
Liberty and Society
The Eclipse of “Old America”
Genetic Kinship and Society
The Fallacy of Human Progress
Defining Liberty
The Pseudo-Libertarian Temperament
Getting Liberty Wrong
“Liberalism” and Personal Responsibility
Crime Revisited
A Cop-Free World?
The Beginning of the End of Liberty in America
Marriage: Privatize It and Revitalize It
More About Social Norms and Liberty
Amen to That
The Opposition and Crime
“And the Truth Shall Set You Free”
Double Amen
Economically Liberal, Socially Conservative
The Transgender Fad and Its Consequences
The Harm Principle Revisited: Mill Conflates Society and State
Liberty and Social Norms Re-examined
Natural Law, Natural Rights, and the Real World
Natural Law and Natural Rights Revisited
If Men Were Angels
Death of a Nation
Self-Made Victims

Another Angle on Alienation

In an earlier post about alienation I said that

the life of the hunter-gatherer, however fraught, is less rationalized than the kind of life that’s represented by intensive agriculture, let alone modern manufacturing and office work.

The hunter-gatherer isn’t “a cog in a machine”, he is the machine. He is the shareholder, the manager, the worker, and the consumer, all in one. His work with others is truly cooperative. It is like the execution of a game-winning touchdown by a football team, and unlike the passing of a product from stage to stage in an assembly line, or the passing of a virtual piece of paper from computer to computer.

What really matters in life — perhaps as much as love and friendship — is the sense of accomplishment that derives from producing something of value to others, something that they willingly pay for.

In decades of post-collegiate work, nothing gave me more satisfaction than the weekly publication of the Pennysaver that — in the late 1970s — I owned, operated, and poured my labor (and a large share of my savings) into for three years. “Publish or perish” was far truer of me than it is of the academics who exclaim it.

I bought the Pennysaver to escape the “rat race” of the D.C.-area government-contractor milieu: big-city anonymity, commuting, high taxes, and — most of all — the disconnect between work and accomplishment. In fact, I doubted that the work done by me and thousands of others like me accomplished anything but the appropriation of taxpayers’ money.

During the Pennysaver years I concentrated intensely on making a living. But more than that, I was producing something of real value — a publication supported by willing advertisers and eagerly awaited by local residents, who found it in their mailboxes every Wednesday.

I gave up the Pennysaver to return to the “rat race” of the D.C. area, so that I could earn enough to retire comfortably. (Life is full of choices; that was mine.) I often took pride in some of what I accomplished in the ensuing 18 years. But it wasn’t the same sense of accomplishment that I experienced as a business owner. It was just the satisfaction of doing a job well, even if the job wasn’t worth doing.

I worked hard in those final 18 years — from 60 to 70 hours a week until the end was nigh. But I was no longer the captain of my own ship, though I usually worked directly for the CEO. There were three of them in those years. The first one was deposed (deservedly) in a coup, brought about in part by internal opposition to his Queegish management. The second one was a careerist of high professional and ethical standards who steered the organization back to its roots as an empirical, objective, and apolitical operations research outfit.

Then along came the third one, and a new kind of alienation descended on me: I couldn’t even derive a sense of satisfaction from doing a useless job well because he corrupted the organization. Not in a criminal way, but — almost as bad — in a political way. He was prone to magical thinking (e.g., there should be a greater percentage of black Ph.D.s on the staff but standards shouldn’t be lowered), and he pushed the organization away from empirical research into “policy analysis” (a.k.a., advocacy bullshit) with a partisan edge. It was all in keeping with his proud self-identification as a “Carter Democrat”.

The stress of working for such a man became almost debilitating. So I arranged for early retirement on favorable terms before the stress became absolutely unbearable. My foreboding was borne out when, in the years after my retirement, the organization took an overtly political turn (e.g., backing for some of Obama’s domestic programs, “global warming” as a national-security issue).

Alienation comes in many forms. And it isn’t restricted to workers who are just “cogs in a machine”. Alienation is a sense of uselessness that can descend on anyone in any job at any income level.

Death of a Nation

More than 50 years ago I heard a white woman say of blacks, “They’re not Americans.” I was appalled by that statement, for it contradicted what I had been taught to believe about America, namely, this:

“America is not just a country,” said the rock singer Bono, in Pennsylvania in 2004: “It’s an idea.”

That’s the opening of John O’Sullivan’s essay, “A People, Not Just an Idea” (National Review, November 19, 2015).

Bono is a decent, thoughtful, and public-spirited man. I didn’t choose his quotation to suggest that this view of America is a kind of pop opinion. It just happened that in my Google search his name came ahead of many others, from George Will to Irving Kristol to almost every recent presidential candidate, all of whom had described America either as an idea or as a “proposition nation,” to distinguish it from dynastic realms or “blood and soil” ethnicities. This philosophical definition of America is now the conventional wisdom of Left and Right, at least among people who write and talk of such things.

Indeed, we have heard variations on Bono’s formulation so many times that we probably fail to notice how paradoxical it is. But listen to how it sounds when reversed: “America is not just an idea; it is a nation.” Surely that version has much more of the ring of common sense. For a nation is plainly something larger, more complex, and richer than an idea. A nation may include ideas. It may have evolved under the influence of a particular set of ideas. But because it encompasses so many other things — notably the laws, institutions, language of the nation; the loyalties, stories, and songs of the people; and above all Lincoln’s “mystic chords of memory” — the nation becomes more than an idea with every election, every battle, every hero, every heroic tale, every historical moment that millions share.

That is not to deny that the United States was founded on some very explicit political ideas, notably liberty and equality, which Jefferson helpfully wrote down in the Declaration of Independence. To be founded on an idea, however, is not the same thing as to be an idea. A political idea is not a destination or a conclusion but the starting point of an evolution — and, in the case of the U.S., not really a starting point, either. The ideas in the Declaration on which the U.S. was founded were not original to this country but drawn from the Anglo-Scottish tradition of Whiggish liberalism. Not only were these ideas circulating well before the Revolution, but when the revolutionaries won, they succeeded not to a legal and political wasteland but to the institutions, traditions, and practices of colonial America — which they then reformed rather than abolished….

As John Jay pointed out, Americans were fortunate in having the same religion (Protestantism), the same language, and the same institutions from the first. Given the spread of newspapers, railways, and democratic debate, that broad common culture would intensify the sense of a common American identity over time. It was a cultural identity more than an ethnic one, and one heavily qualified by regional loyalties… And the American identity might have become an ethnic one in time if it had not been for successive waves of immigration that brought other ethnicities into the nation.

That early American identity was robust enough to absorb these new arrivals and to transform them into Americans. But it wasn’t an easy or an uncomplicated matter. America’s emerging cultural identity was inevitably stretched by the arrivals of millions of people from different cultures. The U.S. government, private industry, and charitable organizations all set out to “Americanize” them. It was a great historical achievement and helped to create a new America that was nonetheless the old America in all essential respects….

By World War II, … all but the most recent migrants had become culturally American. So when German commandos were wandering behind American lines in U.S. uniforms during the Battle of the Bulge, the G.I.s testing their identity asked not about … the First Amendment but questions designed to expose their knowledge (or ignorance) of American life and popular culture….

Quite a lot flows from this history. Anyone can learn philosophical Americanism in a civics class; for a deeper knowledge and commitment, living in America is a far surer recipe…. Americans are a distinct and recognizable people with their own history, culture, customs, loyalties, and other qualities that are wider and more various than the most virtuous summary of liberal values….

… If Americans are a distinct people, with their own history, traditions, institutions, and common culture, then they can reasonably claim that immigrants should adapt to them and to their society rather than the reverse. For most of the republic’s history, that is what happened. And in current circumstances, it would imply that Muslim immigrants should adapt to American liberty as Catholic immigrants once did.

If America is an idea, however, then Americans are not a particular people but simply individuals or several different peoples living under a liberal constitution.

For a long time the “particular people” were not just Protestants but white Protestants of European descent. As O’Sullivan points out, Catholics (of European descent) eventually joined the ranks of “particular people”. But there are others — mostly blacks and Hispanics — who never did and never will join those ranks. Whatever the law may say about equality, access to housing, access to public accommodations, and so on, membership in the ranks of “particular people” is up to those who are already members.

The woman who claimed that blacks weren’t Americans was a member. She was a dyed-in-the-wool Southerner, but her attitude was not untypical of many white Americans — Northern and Southern, past and present. Like it or not, the attitude remains prevalent in the country. (Don’t believe polls that purport to demonstrate racial comity; there’s a well-known aversion to giving a “wrong” answer to a pollster.)

The revealed preference of most whites (a preference shared by most blacks) is for racial segregation. Aggregate statistics hide the real story, which is the gentrification of some parts of inner cities (i.e., the creation of white enclaves) and “white flight” from suburbs to which inner-city blacks are moving. (See this article, for instance.)

The taste for segregation shows up in statistics about public-school enrollment. (See this article, for instance.) White parents (and affluent blacks) are more often keeping their children out of local public schools with large “minority” enrollments by choosing one of the alternatives legally available to them (e.g., home schooling). (Presidents with school-age children — including Barack Obama — have done the same thing to avoid sending their children to the public schools of the District of Columbia, whose students are predominantly black and Hispanic.)

I have focused on voluntary racial segregation because it underscores the fact — not lost on the white, Southern woman of my acquaintance — that the United States was once built upon the “blood and soil” ethnicity of whites whose origins lay in Europe. Blacks can never be part of that nation. Neither can Hispanics, who now outnumber blacks in America. Blacks and Hispanics belong to the “proposition” nation.

They have been joined by the large numbers of Americans who no longer claim allegiance to the “blood and soil” nation, regardless of their race or ethnicity — leftists, in other words. Since the 1960s leftists have played an ever-larger, often dominant, role in the governance of America. They have rejected the “history, culture, customs, [and] loyalties” which once bound most Americans. In fact they are working daily — through the academy, the media, and the courts — to transform America fundamentally by erasing the “history, culture, customs, [and] loyalties” of Americans from the people’s consciousness and the nation’s laws.

Pat Buchanan, who is usually too strident for my taste, hits it on the head:

In Federalist No. 2, John Jay writes of them as “one united people . . . descended from the same ancestors, speaking the same language, professing the same religion, attached to the same principles of government, very similar in their manners and customs . . .”

If such are the elements of nationhood and peoplehood, can we still speak of Americans as one nation and one people?

We no longer have the same ancestors. They are of every color and from every country. We do not speak one language, but rather English, Spanish and a host of others. We long ago ceased to profess the same religion. We are Evangelical Christians, mainstream Protestants, Catholics, Jews, Mormons, Muslims, Hindus and Buddhists, agnostics and atheists.

Federalist No. 2 celebrated our unity. Today’s elites proclaim that our diversity is our strength. But is this true or a tenet of trendy ideology?

After the attempted massacre of Republican Congressmen at that ball field in Alexandria, Fareed Zakaria wrote: “The political polarization that is ripping this country apart” is about “identity . . . gender, race, ethnicity, sexual orientation (and) social class.” He might have added — religion, morality, culture and history.

Zakaria seems to be tracing the disintegration of our society to that very diversity that its elites proclaim to be its greatest attribute: “If the core issues are about identity, culture and religion … then compromise seems immoral. American politics is becoming more like Middle Eastern politics, where there is no middle ground between being Sunni or Shiite.”

Among the issues on which we Americans are at war with one another — abortion, homosexuality, same-sex marriage, white cops, black crime, Confederate monuments, LGBT rights, affirmative action.

America is no longer a nation whose inhabitants are bound mainly by “blood and soil”. Worse than that, it was — until the election of 2016 — fast becoming a nation governed by the proposition that liberty is only what leftists say it is: the liberty not to contradict the left’s positions on climate, race, intelligence, economics, religion, marriage, the right to life, and government’s intrusive role in all of those things and more. The resistance to Donald Trump is fierce and unforgiving because his ascendancy threatens what leftists have worked so hard to achieve in the last 50 years: the de-Americanization of America.

Is all of this just the grumbling of white men of European descent? I think not. Measures of national unity are hard to come by. Opinion polls, aside from their relatively brief history (compared with the age of the Union), are notoriously unreliable. Presidential elections are more meaningful because (some degree of chicanery aside) they reflect voters’ feelings about the state of the Union. Regardless of the party affiliation of the winning candidate, a strong showing usually reflects broad satisfaction with the nation’s direction; a weak showing usually reflects the opposite.

Popular votes were first recorded in the election of 1824. Here is a graphical history of the winning candidate’s percentages of the vote in each election from 1824 through 2016 (with the exclusion of 1864, when the South wasn’t in the Union):


Derived from this table in this article at Wikipedia.

Election-to-election variations reflect the personal popularity of some candidates, the strength of third-party movements, and various other transitory factors. The 5-election average smooths those effects and reveals what is (to me) an obvious story: national disunity in the years before and after the Civil War; growing unity during the first half of the 20th century, peaking during the Great Depression and World War II; modest post-war decline followed by stability through the 1980s; and rapid decline since then because of the left’s growing power and the rapid rise of the Hispanic population.
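The 5-election smoothing used here is simply a centered moving average of the winning candidate’s popular-vote percentage. A minimal sketch of the calculation (the vote shares below are hypothetical placeholders, not the actual figures from the Wikipedia table):

```python
# Centered 5-election moving average of the winning candidate's share of
# the popular vote. The vote shares are hypothetical placeholders, not
# the actual figures from the Wikipedia table.

def five_election_average(shares):
    """Return the centered 5-point moving average at each election;
    None where a full five-election window isn't available."""
    averages = []
    for i in range(len(shares)):
        if i < 2 or i > len(shares) - 3:
            averages.append(None)  # window would run off either end
        else:
            averages.append(sum(shares[i - 2:i + 3]) / 5)
    return averages

shares = [41.4, 56.0, 54.2, 52.9, 49.5, 47.3]  # hypothetical percentages
print(five_election_average(shares))
```

Because the window is centered, the smoothed series necessarily loses the first two and last two elections, which is why such an average lags in revealing very recent shifts.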

The graph underscores what I already knew: The America in which I was born and raised — the America of the 1940s and 1950s — has been beaten down. It is more likely to die than it is to revive. And even if it revives to some degree, it will never be the same.


Related posts:
Academic Bias
Intellectuals and Capitalism
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
Liberty and Society
The Eclipse of “Old America”
Genetic Kinship and Society
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
Superiority
Whiners
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Retrospective Virtue-Signalling
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
Class in America
A Word of Warning to Leftists (and Everyone Else)
Another Thought or Two about Class
The Vast Left-Wing Conspiracy
The Left and Evergreen State: Reaping What Was Sown

Suicidal Despair and the “War on Whites”

This entry is prompted by a recent spate of posts and articles about the rising mortality rate among non-Hispanic whites without a college degree (hereinafter working-class whites, for convenience). Thomas Lifson characterizes the trend as “a spiritual crisis”, after saying this:

White males, in large numbers, are simply losing their will to live, and as a result, they are dying so prematurely and in such large numbers that a startling demographic gap has emerged. [“Stunning Evidence that the Left Has Won its War on White Males”, American Thinker, March 26, 2017]

Later in the piece, Lifson gets to the “war” on white males:

For at least four decades, white males have been under continuous assault as bearers of “white privilege” and beneficiaries of sexism. Special preferences and privileges have been granted to other groups, but that is the least of it.  More importantly, the very basis of the psychological self-worth of white males have been under attack.  White males are frequently instructed by authority figures in education and the media that they are responsible for most of the evils of the modern world, that the achievements of Euro-American civilization are a net loss for humanity, stained by exploitation, racism, unfairness, and every other collective evil the progressive mind can manufacture.

Some white males are relatively unscathed by the psychological warfare, but others are more vulnerable. Those who have educational, financial, or employment achievements that have rewarded their efforts may be able to keep going as productive members of society, their self-esteem resting on tangible fruits of their work and social position. But other white males, especially those who work with their hands and have been seeing job opportunities contract or disappear, have been losing the basis for a robust sense of self-worth as their job opportunities disappear.

We now have statistical evidence that political correctness kills.

We have no such thing. The recent trend isn’t yet significant. But it is real, and government is the underlying cause.

To begin at the beginning, the source of the spate of articles about the rising mortality rate of working-class whites is Anne Case and Angus Deaton’s “Mortality and Morbidity in the 21st Century” (Brookings Institution, Brookings Papers on Economic Activity (conference edition), March 17, 2017). Three of the paper’s graphs set the scene. This one shows mortality trends in the United States:

The next figure indicates that the phenomenon isn’t unique to non-Hispanic whites in the age 50-54 bracket:

But the trend among American whites defies the trends in several other Western nations:

Whence the perverse trend? It seems due mainly to suicidal despair:

How do these recent trends stack up against the long view? I couldn’t find a long time series for drug, alcohol, and suicide mortality. But I did find a study by Feijin Luo et al. that traces suicide rates from just before the onset of the Great Depression to just before the onset of the Great Recession — “Impact of Business Cycles on US Suicide Rates, 1928–2007” (American Journal of Public Health, June 2011). Here are two key graphs from the report:

The graphs don’t reproduce well, so the following quotations will be of help:

The overall suicide rate fluctuated from 10.4 to 22.1 over the 1928–2007 period. It peaked in 1932, the last full year of the Great Depression, and bottomed in 2000. The overall suicide rate decreased from 18.0 in 1928 to 11.2 in 2007. However, most of the decline occurred before 1945; after that it fluctuated until the mid-1950s, and then it gradually moved up until the late 1970s. The overall suicide rate resumed its downward trend from the mid-1980s to 2000, followed by a trend reversal in the new millennium.

Figure 1a [top] shows that the overall suicide rate generally increased in recessions, especially in severe recessions that lasted longer than 1 year. The largest increase in the overall suicide rate occurred during the Great Depression (1929–1933), when it surged from 18.0 in 1928 to 22.1 (the all-time high) in 1932, the last full year of the Great Depression. [The Great Depression actually lasted until 1940: TEA.] This increase of 22.8% was the highest recorded for any 4-year interval during the study period. The overall suicide rate also rose during 3 other severe recessions: [the recession inside the Great Depression] (1937–1938), the oil crisis (1973–1975), and the double-dip recession (1980–1982). Not only did the overall suicide rate generally rise during recessions; it also mostly fell during expansions…. However, the overall suicide rate did not fall during the 1960s (i.e., 1961–1969), a notable phenomenon that will be explained by the different trends of age-specific suicide rates.

The age-specific suicide rates displayed more variations than did the overall suicide rate, and the trends of those age-specific suicide rates were largely different. As shown in Figure 1b [bottom], from 1928–2007, the suicide rates of the 2 elderly groups (65–74 years and 75 years and older) and the oldest middle-age group (55–64 years) experienced the most remarkable decline. The suicide rates of those groups declined in both pre- and postwar periods. The suicide rates of the other 2 middle-aged groups (45–54 years and 35–44 years) also declined from 1928–2007, which we attributed to the decrease during the war period more than offsetting the increase in the postwar period. In contrast with the declining suicide rates of the 2 elderly and 3 middle-age groups, the suicide rates of the 2 young groups (15–24 years and 25–34 years) increased or just marginally decreased from 1928–2007. The 2 young groups experienced a marked increase in suicide rates in the postwar period. The suicide rate of the youngest group (5–14 years) also increased from 1928–2007. However, because of its small magnitude, we do not include this increase in the subsequent discussion.

We noted that the suicide rate of the group aged 65–74 years, the highest of all age groups until 1936, declined the most from 1928 to 2007. That rate started at 41.2 in 1928 and dropped to 12.6 in 2007, peaking at 52.3 in 1932 and bottoming at 12.3 in 2004. By contrast, the suicide rate of the group aged 15–24 years increased from 6.7 in 1928 to 9.7 in 2007. That rate peaked at 13.7 in 1994 and bottomed at 3.8 in 1944, and it generally trended upward from the late 1950s to the mid-1990s. The suicide rate differential between the group aged 65–74 years and the group aged 15–24 years generally decreased until 1994, from 34.5 in 1928 to 1.6 in 1994.

All age groups experienced a substantial increase in their suicide rates during the Great Depression, and most groups (35–44 years, 45–54 years, 55–64 years, 65–74 years, and 75 years and older) set record-high suicide rates in 1932; but they reacted differently to many other recessions, including severe recessions such as the [1937-1938 recession] and the oil crisis. Their reactions were different during expansions as well, most notably in the 1960s, when the suicide rates of the 3 oldest groups (75 years and older, 65–74 years, and 55–64 years) declined moderately, and those of the 3 youngest groups (15–24 years, 25–34 years, and 35–44 years) rose noticeably….

[T]he overall suicide rate and the suicide rate of the group aged 45–54 years were associated with business cycles at the significance level of 1%; the suicide rates of the groups aged 25–34 years, 35–44 years, and 55–64 years were associated with business cycles at the significance level of 5%; and the suicide rates of the groups aged 15–24 years, 65–74 years, and 75 years and older were associated with business cycles at nonsignificant levels. To summarize, the overall suicide rate was significantly countercyclical; the suicide rates of the groups aged 25–34 years, 35–44 years, 45–54 years, and 55–64 years were significantly countercyclical; and the suicide rates of the groups aged 15–24 years, 65–74 years, and 75 years and older were not significantly countercyclical.

The following graph, obtained from the website of the American Foundation for Suicide Prevention, extends the age-related analysis to 2015:

And this graph, from the same source, shows that the rising suicide rate is concentrated among whites and American Indians:

Though this graph encompasses deaths from all causes, the opposing trends for blacks and whites suggest strongly that working-class whites in all age groups have become much more prone to suicidal despair in the past 20 years. Moreover, the despair has persisted through periods of economic decline and economic growth (slow as it has been).

Why? Case and Deaton opine:

[S]ome of the most convincing discussions of what has happened to working class whites emphasize a long-term process of decline, or of cumulative deprivation, rooted in the steady deterioration in job opportunities for people with low education…. This process … worsened over time, and caused, or at least was accompanied by, other changes in society that made life more difficult for less-educated people, not only in their employment opportunities, but in their marriages, and in the lives of and prospects for their children. Traditional structures of social and economic support slowly weakened; no longer was it possible for a man to follow his father and grandfather into a manufacturing job, or to join the union. Marriage was no longer the only way to form intimate partnerships, or to rear children. People moved away from the security of legacy religions or the churches of their parents and grandparents, towards churches that emphasized seeking an identity, or replaced membership with the search for connections…. These changes left people with less structure when they came to choose their careers, their religion, and the nature of their family lives. When such choices succeed, they are liberating; when they fail, the individual can only hold him or herself responsible….

As technical change and globalization reduced the quantity and quality of opportunity in the labor market for those with no more than a high school degree, a number of things happened that have been documented in an extensive literature. Real wages of those with only a high school degree declined, and the college premium increased….

Lower wages made men less marriageable, marriage rates declined, and there was a marked rise in cohabitation, then much less frowned upon than had been the case a generation before…. [B]eyond the cohort of 1940, men and women with less than a BA degree are less likely to have ever been married at any given age. Again, this is not occurring among those with a four-year degree. Unmarried cohabiting partnerships are less stable than marriages. Moreover, among those who do marry, those without a college degree are also much more likely to divorce than are those with a degree….

These accounts share much, though not all, with Murray’s … account [in Coming Apart] of decline among whites in his fictional “Fishtown.” Murray argues that traditional American virtues are being lost among working-class white Americans, especially the virtue of industriousness. The withdrawal of men from the labor force reflects this loss of industriousness; young men in particular prefer leisure—which is now more valuable because of video games … —though much of the withdrawal of young men is for education…. The loss of virtue is supported and financed by government payments, particularly disability payments….

In our account here, we emphasize the labor market, globalization and technical change as the fundamental forces, and put less focus on any loss of virtue, though we certainly accept that the latter could be a consequence of the former. Virtue is easier to maintain in a supportive environment. Yet there is surely general agreement on the roles played by changing beliefs and attitudes, particularly the acceptance of cohabitation, and of the rearing of children in unstable cohabiting unions.

These slow-acting and cumulative social forces seem to us to be plausible candidates to explain rising morbidity and mortality, particularly their role in suicide, and with the other deaths of despair, which share much with suicides. As we have emphasized elsewhere, … purely economic accounts of suicide have consistently failed to explain the phenomenon. If they work at all, they work through their effects on family, on spiritual fulfillment, and on how people perceive meaning and satisfaction in their lives in a way that goes beyond material success. At the same time, cumulative distress, and the failure of life to turn out as expected is consistent with people compensating through other risky behaviors such as abuse of alcohol, overeating, or drug use that predispose towards the outcomes we have been discussing….

What our data show is that the patterns of mortality and morbidity for white non-Hispanics without a college degree move together over lifetimes and birth cohorts, and that they move in tandem with other social dysfunctions, including the decline of marriage, social isolation, and detachment from the labor force…. Whether these factors (or factor) are “the cause” is more a matter of semantics than statistics. The factor could certainly represent some force that we have not identified, or we could try to make a case that declining real wages is more fundamental than other forces. Better, we can see globalization and automation as the underlying deep causes. Ultimately, we see our story as about the collapse of the white, high school educated, working class after its heyday in the early 1970s, and the pathologies that accompany that decline. [Op. cit., pp. 29-38]

The seemingly rigorous and well-reasoned analyses reported by Case-Deaton and Luo et al. are seriously flawed, for these reasons:

  • Case and Deaton’s focus on events since 1990 is analogous to a search for lost keys under a street lamp because that’s where the light is. As shown in the graphs taken from Luo et al., suicide rates have at various times risen (and dropped) as sharply as they have in recent years.
  • Luo et al. address a much longer time span but miss an important turning point, which came during World War II. Because of that, they resort to a strained, non-parametric analysis of the relationship between the suicide rate and business cycles.
  • It is misleading to focus on age groups, as opposed to birth-year cohorts. For example, persons in the 50-54 age group in 1990 were born between 1936 and 1940, but in 2010 persons in the 50-54 age group were born between 1956 and 1960. The groups, in other words, don’t represent the same cohort. The only meaningful suicide rate for a span of more than a few years is the rate for the entire population.

I took a fresh look at the overall suicide rate and its relationship to the state of the economy. First, I extended the overall, age-adjusted suicide rate for 1928-2007 provided by Luo et al. in Supplementary Table B (purchase required) by splicing it with a series for 1999-2014 from Centers for Disease Control and Prevention, National Center for Health Statistics, Data Brief 241. I then drew on the database at Measuring Worth to derive year-over-year changes in real GDP for 1928-2014. Here’s an overview of the two time series:

The suicide rate doesn’t drop below 15 until 1942. From 1943 through 2014 it vacillates in a narrow range between 10.4 (2000) and 13.6 (1975). Despite the rise since 2000, the overall rate still hasn’t returned to the 1975 peak. And only in recent years has the overall rate reached figures that were reached often between 1943 and 1994.

Moreover, the suicide rate from 1928 through 1942 is strongly correlated with changes in real GDP. But the rate from 1943 through 2014 is not:

Something happened during the war years to loosen the connection between the state of the economy and the suicide rate. That something was the end of the pervasive despair that the Great Depression inflicted on huge numbers of Americans. It’s as if America had a mood transplant, one which has lasted for more than 70 years. The recent uptick in the rate of suicide (and the accompanying rise in slow-motion suicide) is sad because it represents wasted lives. But it is within one standard deviation of the 1943-2014 average of 12.2 suicides per 100,000 persons:

(It seems to me that researchers ought to be asking why the rate was so low for about 20 years, beginning in the 1980s.)
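The mechanics of the analysis just described — splicing the two suicide-rate series, computing year-over-year changes in real GDP, correlating the two, and applying the one-standard-deviation test — can be sketched in Python. All values and helper names here are illustrative, not the actual Luo et al., CDC, or Measuring Worth data:

```python
# Sketch of the steps described above. All data values are illustrative
# placeholders, not the actual Luo et al. / CDC / Measuring Worth figures.
from statistics import mean, pstdev

def splice(old_series, new_series):
    """Merge two {year: rate} dicts, preferring the newer series where
    the two overlap (e.g., 1999-2007)."""
    merged = dict(old_series)
    merged.update(new_series)
    return [merged[year] for year in sorted(merged)]

def yoy_change(levels):
    """Year-over-year proportional changes in a level series (e.g., real GDP)."""
    return [b / a - 1 for a, b in zip(levels, levels[1:])]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def within_one_sd(rates, value):
    """Is `value` within one (population) standard deviation of the mean?"""
    return abs(value - mean(rates)) <= pstdev(rates)
```

Splitting the spliced rate series at 1942/1943 and running `correlation` on each half against the matching slice of `yoy_change` output reproduces the kind of before-and-after comparison described in the text; `within_one_sd` is the test applied to the recent uptick.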

Perhaps the recent uptick among working-class whites can be blamed, in part, on loss of “virtue”, technological change, and globalization — as Case and Deaton claim. But they fail to notice the bigger elephant in the room: the destructive role of government.

Technological change and globalization simply reinforce the disemployment effects of the long-term decline in the rate of economic growth. I’ve addressed the decline many times, most recently in “Presidents and Economic Growth“. The decline has three main causes, all attributable to government action, which I’ve assessed in “The Rahn Curve Revisited“: the rise in government spending as a fraction of GDP, the rise in the number of regulations on the books, and the (unsurprising) effect of those variables on private business investment. The only silver lining has been a decline in the rate of inflation, which is unsurprising in view of the general slow-down of economic growth. Many jobs may have disappeared because of technological change and many jobs may have been “shipped overseas”, but there would be a lot more jobs if government had kept its hands off the economy and out of Americans’ wallets.

Moreover, the willingness of Americans — especially low-skill Americans — to seek employment has been eroded by various government programs: aid to families with dependent children (a boon to unwed mothers and a bane to family stability), food stamps, disability benefits, the expansion of Medicaid, subsidized health-care for “children” under the age of 26, and various programs that encourage women to work outside the home, thus fostering male unemployment.

Economist Edward Glaeser puts it this way:

The rise of joblessness—especially among men—is the great American domestic crisis of the twenty-first century. It is a crisis of spirit more than of resources. The jobless are far more prone to self-destructive behavior than are the working poor. Proposed solutions that focus solely on providing material benefits are a false path. Well-meaning social policies—from longer unemployment insurance to more generous disability diagnoses to higher minimum wages—have only worsened the problem; the futility of joblessness won’t be solved with a welfare check….

The New Deal saw the rise of public programs that worked against employment. Wage controls under the National Recovery Act made it difficult for wages to fall enough to equilibrate the labor market. The Wagner Act strengthened the hand of unions, which kept pay up and employment down. Relief efforts for the unemployed, including federal make-work jobs, eased the pressure on the jobless to find private-sector work….

… In 2011, more than one in five prime-age men were out of work, a figure comparable with the Great Depression. But while employment came back after the Depression, it hasn’t today. The unemployment rate may be low, but many people have quit the labor force entirely and don’t show up in that number. As of December 2016, 15.2 percent of prime-age men were jobless—a figure worse than at any point between World War II and the Great Recession, except during the depths of the early 1980s recession….

Joblessness is disproportionately a condition of the poorly educated. While 72 percent of college graduates over age 25 have jobs, only 41 percent of high school dropouts are working. The employment-rate gap between the most and least educated groups has widened from about 6 percent in 1977 to almost 15 percent today….

Both Franklin Roosevelt and Lyndon Johnson aggressively advanced a stronger safety net for American workers, and other administrations largely supported these efforts. The New Deal gave us Social Security and unemployment insurance, which were expanded in the 1950s. National disability insurance debuted in 1956 and was made far more accessible to people with hard-to-diagnose conditions, like back pain, in 1984. The War on Poverty delivered Medicaid and food stamps. Richard Nixon gave us housing vouchers. During the Great Recession, the federal government temporarily doubled the maximum eligibility time for receiving unemployment insurance.

These various programs make joblessness more bearable, at least materially; they also reduce the incentives to find work. Consider disability insurance. Industrial work is hard, and plenty of workers experience back pain. Before 1984, however, that pain didn’t mean a disability check for American workers. After 1984, though, millions went on the disability rolls. And since disability payments vanish if the disabled person starts earning more than $1,170 per month, the disabled tend to stay disabled…. Disability insurance alone doesn’t entirely explain the rise of long-term joblessness—only one-third or so of jobless males get such benefits. But it has surely played a role.

Other social-welfare programs operate in a similar way. Unemployment insurance stops completely when someone gets a job, … [thus] the unemployed tend to find jobs just as their insurance payments run out. Food-stamp and housing-voucher payments drop 30 percent when a recipient’s income rises past a set threshold by just $1. Elementary economics tells us that paying people to be or stay jobless will increase joblessness….

The rise of joblessness among the young has been a particularly pernicious effect of the Great Recession. Job loss was extensive among 25–34-year-old men and 35–44-year-old men between 2007 and 2009. The 25–34-year-olds have substantially gone back to work, but the number of employed 35–44-year-olds, which dropped by 2 million at the start of the Great Recession, hasn’t recovered. The dislocated workers in this group seem to have left the labor force permanently.

Unfortunately, policymakers seem intent on making the joblessness crisis worse. The past decade or so has seen a resurgent progressive focus on inequality—and little concern among progressives about the downsides of discouraging work. Advocates of a $15 minimum hourly wage, for example, don’t seem to mind, or believe, that such policies deter firms from hiring less skilled workers. The University of California–San Diego’s Jeffrey Clemens examined states where higher federal minimum wages raised the effective state-level minimum wage during the last decade. He found that the higher minimum “reduced employment among individuals ages 16 to 30 with less than a high school education by 5.6 percentage points,” which accounted for “43 percent of the sustained, 13 percentage point decline in this skill group’s employment rate.”

The decision to prioritize equality over employment is particularly puzzling, given that social scientists have repeatedly found that unemployment is the greater evil…. One recent study estimated that unemployment leads to 45,000 suicides worldwide annually. Jobless husbands have a 50 percent higher divorce rate than employed husbands. The impact of lower income on suicide and divorce is much smaller. The negative effects of unemployment are magnified because it so often becomes a semipermanent state.

Time-use studies help us understand why the unemployed are so miserable. Jobless men don’t do a lot more socializing; they don’t spend much more time with their kids. They do spend an extra 100 minutes daily watching television, and they sleep more. The jobless also are more likely to use illegal drugs….

Joblessness and disability are also particularly associated with America’s deadly opioid epidemic…. The strongest correlate of those deaths is the share of the population on disability. That connection suggests a combination of the direct influence of being disabled, which generates a demand for painkillers; the availability of the drugs through the health-care system; and the psychological misery of having no economic future.

Increasing the benefits received by nonemployed persons may make their lives easier in a material sense but won’t help reattach them to the labor force. It won’t give them the sense of pride that comes from economic independence. It won’t give them the reassuring social interactions that come from workplace relationships. When societies sacrifice employment for a notion of income equality, they make the wrong choice. [“The War on Work — And How to End It,” City Journal, special issue: The Shape of Work to Come, 2017]

In sum, the rising suicide rate — whatever its significance — is a direct and indirect result of government policies. “We’re from the government and we’re here to help” is black humor, at best. The left is waging a war on white males. But the real war — the war that kills — is hidden from view behind the benign facade of governmental “compassion”.

Having said all of that, I will end on a cautiously positive note. There still is upward mobility in America. Not all working-class people are destined for suicidal despair, only those at the margin who have pocketed the fool’s gold of government handouts.


Other related posts:
Bubbling Along
Economic Mobility Is Alive and Well in America
H.L. Mencken’s Final Legacy
The Problem with Political Correctness
“They Deserve to Die”?
Mencken’s Pearl of Wisdom
Class in America
Another Thought or Two about Class
The Midwest Is a State of Mind

The Midwest Is a State of Mind

I am a son of the Middle Border,* now known as the Midwest. I left the Midwest, in spirit, almost 60 years ago, when I matriculated at a decidedly cosmopolitan State university. It was in my home State, but not much of my home State.

Where is the Midwest? According to Wikipedia, the U.S. Census Bureau defines the Midwest as comprising the 12 States shaded in red:

They are, from north to south and west to east, North Dakota, South Dakota, Nebraska, Kansas, Minnesota, Iowa, Missouri, Wisconsin, Illinois, Michigan, Indiana, and Ohio.

In my experience, the Midwest really begins on the west slope of the Appalachians and includes much of New York State and Pennsylvania. I have lived and traveled in that region, and found it, culturally, to be much like the part of “official” Midwest where I was born and raised.

I am now almost 60 years removed from the Midwest (except for a three-year sojourn in the western part of New York State, near the Pennsylvania border). Therefore, I can’t vouch for the currency of a description that appears in Michael Dirda’s review of Jon K. Lauck’s From Warm Center to Ragged Edge: The Erosion of Midwestern Literary and Historical Regionalism, 1920-1965 (Iowa and the Midwest Experience). Dirda writes:

[Lauck] surveys “the erosion of Midwestern literary and historical regionalism” between 1920 and 1965. This may sound dull as ditch water to those who believe that the “flyover” states are inhabited largely by clodhoppers, fundamentalist zealots and loudmouthed Babbitts. In fact, Lauck’s aim is to examine “how the Midwest as a region faded from our collective imagination” and “became an object of derision.” In particular, the heartland’s traditional values of hard work, personal dignity and loyalty, the centrality it grants to family, community and church, and even the Jeffersonian ideal of a democracy based on farms and small land-holdings — all these came to be deemed insufferably provincial by the metropolitan sophisticates of the Eastern Seaboard and the lotus-eaters of the West Coast.

That was the Midwest of my childhood and adolescence. I suspect that the Midwest of today is considerably different. American family life is generally less stable than it was 60 years ago; Americans generally are less church-going than they were 60 years ago; and social organizations are less robust than they were 60 years ago. The Midwest cannot have escaped two generations of social and cultural upheaval fomented by the explosion of mass communications, the debasement of mass culture, the rise of the drugs-and-rock culture, the erasure of social norms by government edicts, and the creation of a culture of dependency on government.

I nevertheless believe that there is a strong, residual longing for and adherence to the Midwestern culture of 60 years ago — though it’s not really unique to the Midwest. It’s a culture that persists throughout America, in rural areas, villages, towns, small cities, and even exurbs of large cities.

The results of last year’s presidential election bear me out. Hillary Clinton represented the “sophisticates” of the Eastern Seaboard and the lotus-eaters of the West Coast. She represented the supposed superiority of technocracy over the voluntary institutions of civil society. She represented a kind of smug pluralism and internationalism that smirks at traditional values and portrays as clodhoppers and fundamentalist zealots those who hold such values. Donald Trump, on the other hand (and despite his big-city roots and great wealth), came across as a man of the people who hold such values.

What about Clinton’s popular-vote “victory”? Nationally, she garnered 2.9 million more votes than Trump. But the manner of Clinton’s “victory” underscores the nation’s cultural divide and the persistence of a Midwestern state of mind. Clinton’s total margin of victory in California, New York, and the District of Columbia was 6.3 million votes. That left Trump ahead of Clinton by 3.4 million votes in the other 48 States, and even farther ahead in non-metropolitan areas. Clinton’s “appeal” (for want of a better word) was narrow; Trump’s was much broader (e.g., winning a higher percentage than Romney did of the two-party vote in 39 States). Arguably, it was broader than that of every Republican presidential candidate since Ronald Reagan won a second term in 1984.
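The vote-margin arithmetic in the paragraph above can be checked directly. A minimal sketch, using only the rounded totals already given in the text (the figures are this post’s own, not independently re-sourced):

```python
# Check the popular-vote arithmetic cited above. Totals are in millions of
# votes, rounded to one decimal place as in the text.
clinton_national_margin = 2.9   # Clinton's nationwide popular-vote margin
clinton_ca_ny_dc_margin = 6.3   # her combined margin in CA, NY, and DC

# Subtracting her margin in those three jurisdictions from her national
# margin leaves Trump's margin across the other 48 states.
trump_margin_elsewhere = round(
    clinton_ca_ny_dc_margin - clinton_national_margin, 1
)
print(trump_margin_elsewhere)  # 3.4 (million votes)
```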

The Midwestern state of mind, however much it has weakened in the last 60 years, remains geographically dominant. In the following map, counties won by Clinton are shaded in blue; counties won by Trump are shaded in red:


Source: Wikipedia article about the 2016 presidential election.


* This is an allusion to Hamlin Garland‘s novel, A Son of the Middle Border. Garland, a native of Wisconsin, was himself a son of the Middle Border.


Related posts:
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
Superiority
Whiners
A Dose of Reality
God-Like Minds
An Addendum to (Asymmetrical) Ideological Warfare
Khizr Khan’s Muddled Logic
A Lesson in Election-Rigging
My Platform (which reflects a Midwestern state of mind)
Polarization and De-facto Partition
H.L. Mencken’s Final Legacy
The Shy Republican Supporters
Roundup (see “Civil War II”)
Retrospective Virtue-Signalling
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
You Can’t Go Home Again
Class in America
A Word of Warning to Leftists (and Everyone Else)
Another Thought or Two about Class

Another Thought or Two about Class

My recent post, “Class in America,” offers a straightforward taxonomy of the socioeconomic pecking order in the United States. The post doesn’t address the dynamics of movement between classes, so I want to say something about them here. I also want to address the inevitability of class-like distinctions, despite the avowed (but hypocritical) goal of leftists to erase such distinctions.

With respect to dynamics, I begin with these observations from “Class in America”:

Class in America isn’t a simple thing. It has something to do with one’s inheritance, which is not only (or mainly) wealth. It has mainly to do with one’s intelligence (which is largely of genetic origin) and behavior (which also has a genetic component). Class also has a lot to do with what one does with one’s genetic inheritance, however rich or sparse it is. Class still depends a lot on acquired skills, drive, and actual achievements — even dubious ones like opining, acting, and playing games — and the income and wealth generated by them.

Class distinctions depend on the objective facts (whether observable or not) about genetic inheritance and one’s use (or not) thereof. Class distinctions also depend on broadly shared views about the relative prestige of various combinations of wealth, income (which isn’t the same as wealth), power, influence, and achievement. Those broadly shared views shift over time.

For example, my taxonomy includes three “suspect” classes whose denizens are athletes and entertainers. There were relatively few highly paid entertainers and almost no highly paid athletes in the late 1800s, when some members of today’s old-wealth aristocracy (e.g., Rockefeller and Ford) had yet to rise to that pinnacle. Even those few athletes and entertainers, unless they had acquired a patina of “culture,” would have been considered beyond the pale of class distinctions — oddities to be applauded (or not) and rewarded for the exercise of their talents, but not to be emulated by socially striving youngsters.

How the world has changed. Now that sports and entertainment have become much more visible and higher-paying than they were in the Gilded Age, there are far more Americans who accord high status to the practitioners in those fields. This is not only a matter of income, but also a matter of taste. If the American Dream of the late 19th century was dominated by visions of rising to the New-Wealth Aristocracy, the American Dream of the early 21st century gives a place of prominence to visions of becoming the next LeBron James or Lady Gaga.

I should qualify the preceding analysis by noting that it applies mainly to whites of European descent and those blacks who are American-born or more than a generation removed from foreign shores. I believe that the old American Dream still prevails among Americans of Asian descent and blacks who are less than two generations removed from Africa or the Caribbean. The Dream prevails to a lesser extent among Latinos — who have enjoyed great success in baseball — but probably more than it does among the aforementioned whites and blacks. As a result, the next generations of upper classes (aside from the Old-Wealth Aristocracy) will become increasingly Asian and Latino in complexion.

Yes, there are millions of white and black Americans (of non-recent vintage) who still share The Dream, though millions more have abandoned it. Their places will be taken by Americans of Asian descent, Latinos, and African-Americans of recent vintage. (I should add that, in any competition based on intellectual merit, Asians generally have the advantage of above-average-to-high intelligence.)

Which brings me to my brief and unduly dismissive rant about the predominantly white and

growing mob of whiny, left-wing fascists[.] For now, they’re sprinkled among the various classes depicted in the table, even classes at or near the top. In their vision of a “classless” society, they would all be at the top, of course, flogging conservatives, plutocrats, malefactors of great wealth, and straight, white (non-Muslim, non-Hispanic), heterosexual males — other than those of their whiny, fascist ilk.

The whiny left is not only predominantly white but also predominantly college-educated, and therefore probably of above-average intelligence. Though there is a great deal of practiced glibness at work among the left-wingers who dominate the professoriate and punditocracy, the generally high intelligence of the whiny class can’t be denied. But the indisputable fact of its class-ness testifies to an inconvenient truth: It is natural for people to align themselves in classes.

Class distinctions are status distinctions. But they can also connote the solidarity of an in-group that is united by a worldview of some kind. The worldview is usually of a religious character, where “religious” means a cult-like devotion to certain beliefs that are taken on faith. Contemporary leftists signal their solidarity — and class superiority — in several ways:

They proclaim themselves on the side of science, though most of them aren’t scientists and wouldn’t know real science if it bit them in the proverbial hindquarters.

There are certain kinds of “scientific” dangers and catastrophes that attract leftists because they provide a pretext for shaping people’s lives in puritanical ways: catastrophic anthropogenic global warming; extreme environmentalism, which stretches to the regulation of mud puddles; second-hand smoking as a health hazard; the “evident” threat posed by the mere depiction or mention of guns; “overpopulation” (despite two centuries of it); obesity (a result, God forbid, of market forces that result in the greater nourishment of poor people); many claims about the ill effects of alcohol, salt, butter, fats, etc., that have been debunked; any number of regulated risks that people would otherwise treat as IQ tests thrown up by life and opportunities to weed out the gene pool; and on and on.

They are in constant search of victims to free from oppression, whether it is the legal oppression of the Jim Crow South or simply the “oppression” of hurt feelings inflicted on the left itself by those who dare to hold different views. (The left isn’t always wrong about the victims it claims to behold, but it has been right only when its tender sensibilities have been confirmed by something like popular consensus.)

Their victim-olatry holds no place, however, for the white working class, whose degree of “white privilege” is approximately zero. To earn one’s daily bread by sweating seems to be honorable only for those whose skin isn’t white or whose religion isn’t Christian.

They are astute practitioners of moral relativism. The inferior status of women in Islam is evidently of little or no account to them. Many of them were even heard to say, in the wake of 9/11, that “we had it coming,” though they were not among the “we.” And “we had it coming” for what, the audacity of protecting access to a vital resource (oil) that helps to drive an economy whose riches subsidize their juvenile worldview? It didn’t occur to those terrorists manqué that it was Osama bin Laden who had it coming. (And he finally “got” it, but Obama — one of their own beneath his smooth veneer — was too sensitive to the feelings of our Muslim enemies to show the proof that justice was done. This was also done to spite Americans who, rightly, wanted more than a staged photo of Obama and his stooges watching the kill operation unfold.)

To their way of thinking, justice — criminal and “social” — consists of outcomes that favor certain groups. For example, it is prima facie wrong that blacks are disproportionately convicted of criminal offenses, especially violent crimes, because … well, just because. It is right (“socially just”) that blacks and other “protected” groups get jobs, promotions, and university admissions for which they are less qualified than whites and Asians because slavery ended more than 150 years ago and blacks still haven’t recovered from it. (It is, of course, futile and “racist” to mention that blacks are generally less intelligent than whites and Asians.)

Their economic principles (e.g., “helping” the poor through minimum wage and “living wage” laws, buying local because … whatever, promoting the use of bicycles to reduce traffic congestion, favoring strict zoning laws while bemoaning a lack of “affordable” housing) are anti-scientific but virtuous. With leftists, the appearance of virtuousness always trumps science.

All of this mindless posturing has only two purposes, as far as I can tell. The first is to make leftists feel good about themselves, which is important because most of them are white and therefore beneficiaries of “white privilege.” (They are on a monumental guilt-trip, in other words.) The second, as I have said, is to signal their membership in a special class that is bound by attitudes rather than wealth, income, tastes, and other signals that have deep roots in social evolution.

I now therefore conclude that the harsh, outspoken, virulent, violence-prone left is a new class unto itself, though some of its members may retain the outward appearance of belonging to other classes.


Related posts:
Academic Bias
Intellectuals and Capitalism
The Cocoon Age
Inside-Outside
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
Tolerance on the Left
The Eclipse of “Old America”
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
Superiority
Whiners
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Leftist Condescension
Beating Religion with the Wrong End of the Stick
Psychological Insights into Leftism
Nature, Nurture, and Leniency
Red-Diaper Babies and Enemies Within
A Word of Warning to Leftists (and Everyone Else)

Class in America

I often refer to class — or socioeconomic status (SES) — as do many other writers. SES is said to be a function of “a person’s work experience and of an individual’s or family’s economic and social position in relation to others, based on income, education, and occupation.” Wealth counts, too. As do race and ethnicity, to be candid.

Attempts to quantify SES are pseudo-scientific, so I won’t play that game. It’s obvious that class distinctions are subtle and idiosyncratic. Class is in the eye of the beholder.

I am a beholder, and what I behold is parsed in the table below. There I have sorted Americans into broad, fuzzy, and overlapping classes, in roughly descending order of prestige. The indented entries pertain to a certain “type” of person who doesn’t fit neatly into the usual taxonomy of class. What is the type? You’ll see as you read the table.


What about retirees? If their financial status or behavioral traits don’t change much after retirement, they generally stay in the class to which they belonged at retirement.

Where are the “rednecks”? Most of them are probably in the bottom six rungs, but so are huge numbers of other Americans who (mostly) escape opprobrium for being there. Many “rednecks” have risen to higher classes, especially but not exclusively the indented ones.

What about the growing mob of whiny, left-wing fascists? For now, they’re sprinkled among the various classes depicted in the table, even classes at or near the top. In their vision of a “classless” society, they would all be at the top, of course, flogging conservatives, plutocrats, malefactors of great wealth, and straight, white (non-Muslim, non-Hispanic), heterosexual males — other than those of their whiny, fascist ilk.

Here’s what I make of all this. Class in America isn’t a simple thing. It has something to do with one’s inheritance, which is not only (or mainly) wealth. It has mainly to do with one’s intelligence (which is largely of genetic origin) and behavior (which also has a genetic component). Class also has a lot to do with what one does with one’s genetic inheritance, however rich or sparse it is. Class still depends a lot on acquired skills, drive, and actual achievements — even dubious ones like opining, acting, and playing games — and the income and wealth generated by them. Some would call that quintessentially American.

My family background, on both sides, is blue-collar. I wound up on the senior manager-highly educated rung. That’s quintessentially American.

There’s a lot here to quibble with. Have at it.

If comments are closed by the time you read this post, you may send them to me by e-mail at the Germanic nickname for Friedrich followed by the last name of the great Austrian economist and Nobel laureate whose first name is Friedrich followed by the 3rd and 4th digits of his birth year followed by the usual typographic symbol followed by the domain and extension for Google’s e-mail service — all run together.


Related posts:
Are You in the Bubble?
Race and Reason: The Achievement Gap — Causes and Implications
Not-So-Random Thoughts (X) (last item)
Privilege, Power, and Hypocrisy
Thinkers vs. Doers
Bubbling Along
Intelligence, Assortative Mating, and Social Engineering
“They Deserve to Die”?
More about Intelligence

Not-So-Random Thoughts (XX)

An occasional survey of web material that’s related to subjects about which I’ve posted. Links to the other posts in this series may be found at “Favorite Posts,” just below the list of topics.

In “The Capitalist Paradox Meets the Interest-Group Paradox,” I quote from Frédéric Bastiat’s “What Is Seen and What Is Not Seen“:

[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.

This might also be called the law of unintended consequences. It explains why so much “liberal” legislation is passed: the benefits are focused on a particular group and obvious (if overestimated); the costs are borne by taxpayers in general, many of whom fail to see that the sum of “liberal” legislation is a huge tax bill.

Ross Douthat understands:

[A] new paper, just released through the National Bureau of Economic Research, that tries to look at the Affordable Care Act in full. Its authors find, as you would expect, a substantial increase in insurance coverage across the country. What they don’t find is a clear relationship between that expansion and, again, public health. The paper shows no change in unhealthy behaviors (in terms of obesity, drinking and smoking) under Obamacare, and no statistically significant improvement in self-reported health since the law went into effect….

[T]he health and mortality data [are] still important information for policy makers, because [they] indicate[] that subsidies for health insurance are not a uniquely death-defying and therefore sacrosanct form of social spending. Instead, they’re more like other forms of redistribution, with costs and benefits that have to be weighed against one another, and against other ways to design a safety net. Subsidies for employer-provided coverage crowd out wages, Medicaid coverage creates benefit cliffs and work disincentives…. [“Is Obamacare a Lifesaver?,” The New York Times, March 29, 2017]

So does Roy Spencer:

In a theoretical sense, we can always work to make the environment “cleaner”, that is, reduce human pollution. So, any attempts to reduce the EPA’s efforts will be viewed by some as just cozying up to big, polluting corporate interests. As I heard one EPA official state at a conference years ago, “We can’t stop making the environment ever cleaner”.

The question no one is asking, though, is “But at what cost?”

It was relatively inexpensive to design and install scrubbers on smokestacks at coal-fired power plants to greatly reduce sulfur emissions. The cost was easily absorbed, and electricity rates were not increased that much.

The same is not true of carbon dioxide emissions. Efforts to remove CO2 from combustion byproducts have been extremely difficult, expensive, and with little hope of large-scale success.

There is a saying: don’t let perfect be the enemy of good enough.

In the case of reducing CO2 emissions to fight global warming, I could discuss the science which says it’s not the huge problem it’s portrayed to be — how warming is only progressing at half the rate forecast by those computerized climate models which are guiding our energy policy; how there have been no obvious long-term changes in severe weather; and how nature actually enjoys the extra CO2, with satellites now showing a “global greening” phenomenon with its contribution to increases in agricultural yields.

But it’s the economics which should kill the Clean Power Plan and the alleged Social “Cost” of Carbon. Not the science.

There is no reasonable pathway by which we can meet more than about 20% of global energy demand with renewable energy…the rest must come mostly from fossil fuels. Yes, renewable energy sources are increasing each year, usually because rate payers or taxpayers are forced to subsidize them by the government or by public service commissions. But global energy demand is rising much faster than renewable energy sources can supply. So, for decades to come, we are stuck with fossil fuels as our main energy source.

The fact is, the more we impose high-priced energy on the masses, the more it will hurt the poor. And poverty is arguably the biggest threat to human health and welfare on the planet. [“Trump’s Rollback of EPA Overreach: What No One Is Talking About,” Roy Spencer, Ph.D. [blog], March 29, 2017]

*     *     *

I mentioned the Benedict Option in “Independence Day 2016: The Way Ahead,” quoting Bruce Frohnen in tacit agreement:

[Rod] Dreher has been writing a good deal, of late, about what he calls the Benedict Option, by which he means a tactical withdrawal by people of faith from the mainstream culture into religious communities where they will seek to nurture and strengthen the faithful for reemergence and reengagement at a later date….

The problem with this view is that it underestimates the hostility of the new, non-Christian society [e.g., this and this]….

Leaders of this [new, non-Christian] society will not leave Christians alone if we simply surrender the public square to them. And they will deny they are persecuting anyone for simply applying the law to revoke tax exemptions, force the hiring of nonbelievers, and even jail those who fail to abide by laws they consider eminently reasonable, fair, and just.

Exactly. John Horvat II makes the same point:

For [Dreher], the only response that still remains is to form intentional communities amid the neo-barbarians to “provide an unintentional political witness to secular culture,” which will overwhelm the barbarian by the “sheer humanity of Christian compassion, and the image of human dignity it honors.” He believes that setting up parallel structures inside society will serve to protect and preserve Christian communities under the new neo-barbarian dispensation. We are told we should work with the political establishment to “secure and expand the space within which we can be ourselves and our own institutions” inside an umbrella of religious liberty.

However, barbarians don’t like parallel structures; they don’t like structures at all. They don’t co-exist well with anyone. They don’t keep their agreements or respect religious liberty. They are not impressed by the holy lives of the monks whose monastery they are plundering. You can trust barbarians to always be barbarians. [“Is the Benedict Option the Answer to Neo-Barbarianism?” Crisis Magazine, March 29, 2017]

As I say in “The Authoritarianism of Modern Liberalism, and the Conservative Antidote,”

Modern liberalism attracts persons who wish to exert control over others. The stated reasons for exerting control amount to “because I know better” or “because it’s good for you (the person being controlled)” or “because ‘social justice’ demands it.”

Leftists will not countenance a political arrangement that allows anyone to escape the state’s grasp — unless, of course, the state is controlled by the “wrong” party, in which case leftists (or many of them) would like to exercise their own version of the Benedict Option. See “Polarization and De Facto Partition.”

*     *     *

Theodore Dalrymple understands the difference between terrorism and accidents:

Statistically speaking, I am much more at risk of being killed when I get into my car than when I walk in the streets of the capital cities that I visit. Yet this fact, no matter how often I repeat it, does not reassure me much; the truth is that one terrorist attack affects a society more deeply than a thousand road accidents….

Statistics tell me that I am still safe from it, as are all my fellow citizens, individually considered. But it is precisely the object of terrorism to create fear, dismay, and reaction out of all proportion to its volume and frequency, to change everyone’s way of thinking and behavior. Little by little, it is succeeding. [“How Serious Is the Terrorist Threat?” City Journal, March 26, 2017]

Which reminds me of several things I’ve written, beginning with this entry from “Not-So-Random Thoughts (VI)“:

Cato’s loony libertarians (on matters of defense) once again trot out Herr Doktor Professor John Mueller. He writes:

We have calculated that, for the 12-year period from 1999 through 2010 (which includes 9/11, of course), there was one chance in 22 million that an airplane flight would be hijacked or otherwise attacked by terrorists. (“Serial Innumeracy on Homeland Security,” Cato@Liberty, July 24, 2012)

Mueller’s “calculation” consists of a recitation of known terrorist attacks pre-Benghazi and speculation about the status of Al-Qaeda. Note to Mueller: It is the unknown unknowns that kill you. I refer Herr Doktor Professor to “Riots, Culture, and the Final Showdown” and “Mission Not Accomplished.”

See also my posts “Getting It All Wrong about the Risk of Terrorism” and “A Skewed Perspective on Terrorism.”

*     *     *

This is from my post, “A Reflection on the Greatest Generation“:

The Greatest tried to compensate for their own privations by giving their children what they, the parents, had never had in the way of material possessions and “fun”. And that is where the Greatest Generation failed its children — especially the Baby Boomers — in large degree. A large proportion of Boomers grew up believing that they should have whatever they want, when they want it, with no strings attached. Thus many of them divorced, drank, and used drugs almost wantonly….

The Greatest Generation — having grown up believing that FDR was a secular messiah, and having learned comradeship in World War II — also bequeathed us governmental self-indulgence in the form of the welfare-regulatory state. Meddling in others’ affairs seems to be a predilection of the Greatest Generation, a predilection that the Millennials may be shrugging off.

We owe the Greatest Generation a great debt for its service during World War II. We also owe the Greatest Generation a reprimand for the way it raised its children and kowtowed to government. Respect forbids me from delivering the reprimand, but I record it here, for the benefit of anyone who has unduly romanticized the Greatest Generation.

There’s more in “The Spoiled Children of Capitalism“:

This is from Tim [of Angle’s] “The Spoiled Children of Capitalism“:

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it….

I have long shared Tim’s assessment of the Boomer generation. Among the corroborating data are my sister and my wife’s sister and brother — Boomers all….

Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs.”…

Now comes this:

According to writer and venture capitalist Bruce Gibney, baby boomers are a “generation of sociopaths.”

In his new book, he argues that their “reckless self-indulgence” is in fact what set the example for millennials.

Gibney describes boomers as “acting without empathy, prudence, or respect for facts – acting, in other words, as sociopaths.”

And he’s not the first person to suggest this.

Back in 1976, journalist Tom Wolfe dubbed the young adults then coming of age the “Me Generation” in the New York Times, which is a term now widely used to describe millennials.

But the baby boomers grew up in a very different climate to today’s young adults.

When the generation born after World War Two were starting to make their way in the world, it was a time of economic prosperity.

“For the first half of the boomers particularly, they came of age in a time of fairly effortless prosperity, and they were conditioned to think that everything gets better each year without any real effort,” Gibney explained to The Huffington Post.

“So they really just assume that things are going to work out, no matter what. That’s unhelpful conditioning.

“You have 25 years where everything just seems to be getting better, so you tend not to try as hard, and you have much greater expectations about what society can do for you, and what it owes you.”…

Gibney puts forward the argument that boomers – specifically white, middle-class ones – tend to have genuine sociopathic traits.

He backs up his argument with mental health data which appears to show that this generation have more anti-social characteristics than others – lack of empathy, disregard for others, egotism and impulsivity, for example. [Rachel Hosie, “Baby Boomers Are a Generation of Sociopaths,” Independent, March 23, 2017]

That’s what I said.

The Internet-Media-Academic Complex vs. Real Life

I spend an inordinate share of my time at my PC. (Unlike smart-phone and tablet users, I prefer to be seated in a comfortable desk chair, viewing a full-size screen, and typing on a real keyboard.) When I’m not composing a blog post or playing spider solitaire, I’m reading items from several dozen RSS feeds.

My view of the world is shaped, for the worse, by what I read. If it’s not about leftist cant and scientific fraud, it’s about political warfare on many levels. But my view of the world is more sanguine when I reflect on real life as I experience it when I’m away from my PC.

When the subject isn’t politics, and the politics of the other person are hidden from view, I experience a world of politeness, competence (even unto excellence), and intelligence. Most of the people in that world are owners of small businesses, their employees, and the employees of larger businesses.

In almost every case, their attitude of friendliness is sincere — and I’ve been around the block enough times to spot insincerity. There’s an innate goodness in most people, regardless of their political views, that comes out when you’re interacting with them as a “real” human being.

The exception to the rule, in my experience, is the highly educated analyst or academic — regardless of political outlook — who thinks he is smarter than everyone else. And it shows in his abrupt, superior attitude toward others, especially if they are strangers whom he is unlikely to encounter again.

The moral of the story: If government were far less powerful, and if it were kept that way, the political noise level would be much reduced and the world would be a far more pleasant place.

Thoughts for the Day

Excerpts of recent correspondence.

Robots, and their functional equivalents in specialized AI systems, can either replace people or make people more productive. I suspect that the latter has been true in the realm of medicine — so far, at least. But I have seen reportage of robotic units that are beginning to perform routine, low-level work in hospitals. So, as usual, the first people to be replaced will be those with rudimentary skills, not highly specialized training. Will it go on from there? Maybe, but the crystal ball is as cloudy as an old-time London fog.

In any event, I don’t believe that automation is inherently a job-killer. The real job-killer consists of government programs that subsidize non-work — early retirement under Social Security, food stamps and other forms of welfare, etc. Automation has been in progress for eons, and with a vengeance since the second industrial revolution. But, on balance, it hasn’t killed jobs. It just pushes people toward new and different jobs that fit the skills they have to offer. I expect nothing different in the future, barring government programs aimed at subsidizing the “victims” of technological displacement.

*      *      *

It’s civil war by other means (so far): David Wasserman, “Purple America Has All but Disappeared” (The New York Times, March 8, 2017).

*      *      *

I know that most of what I write (even the non-political stuff) has a combative edge, and that I’m therefore unlikely to persuade people who disagree with me. I do it my way for two reasons. First, I’m too old to change my ways, and I’m not going to try. Second, in a world that’s seemingly dominated by left-wing ideas, it’s just plain fun to attack them. If what I write happens to help someone else fight the war on leftism — or if it happens to make a young person re-think a mindless commitment to leftism — that’s a plus.

*     *     *

I am pessimistic about the likelihood of cultural renewal in America. The populace is too deeply saturated with left-wing propaganda, which is injected from kindergarten through graduate school, with constant reinforcement via the media and popular culture. There are broad swaths of people — especially in low-income brackets — whose lives revolve around mindless escape from the mundane via drugs, alcohol, promiscuous sex, etc. Broad swaths of the educated classes have abandoned erudition and contemplation and taken up gadgets and entertainment.

The only hope for conservatives is to build their own “bubbles,” like those of effete liberals, and live within them. Even that will prove difficult as long as government (especially the Supreme Court) persists in storming the ramparts in the name of “equality” and “self-creation.”

*     *     *

I correlated Austin’s average temperatures in February and August. Here are the correlation coefficients for following periods:

1854-2016 = 0.001
1875-2016 = -0.007
1900-2016 = 0.178
1925-2016 = 0.161
1950-2016 = 0.191
1975-2016 = 0.126

Of these correlations, only the one for 1900-2016 is statistically significant at the 0.05 level (less than a 5-percent chance that the relationship is random). The correlations for 1925-2016 and 1950-2016 fall just short of significance at that level. The correlation for 1975-2016 is statistically insignificant. I conclude that there’s a positive relationship between February and August temperatures, but a weak one. A warm winter doesn’t necessarily presage an extra-hot summer in Austin.
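For readers who want to check this sort of thing themselves, here’s a minimal sketch (in Python, using the standard textbook formulas rather than my actual Austin temperature data, which isn’t reproduced here) of how the correlation coefficient and its significance test are computed. The t-statistic for testing whether a correlation differs from zero is r√(n−2)/√(1−r²), with n−2 degrees of freedom; the 1900-2016 period spans 117 years.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_stat(r, n):
    """t statistic for testing H0: true correlation = 0 (n - 2 d.f.)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# For r = 0.178 over the 117 years from 1900 through 2016:
print(round(t_stat(0.178, 117), 2))  # ≈ 1.94
```

A t of about 1.94 clears the one-tailed 5-percent critical value (roughly 1.66 at 115 degrees of freedom) for the directional hypothesis that a warm February goes with a warm August, which is the relevant test here.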

“They Deserve to Die”?

In “Prosperity Isn’t Everything” I quoted Megan McArdle’s observations about how things have gotten better and worse for Americans. Here’s some of what she wrote:

By the standards of today, my grandparents were living in wrenching poverty. Some of this, of course, involves technologies that didn’t exist—as a young couple in the 1930s my grandparents had less access to health care than the most  neglected homeless person in modern America, simply because most of the treatments we now have had not yet been invented. That is not the whole story, however. Many of the things we now have already existed; my grandparents simply couldn’t afford them.  With some exceptions, such as microwave ovens and computers, most of the modern miracles that transformed 20th century domestic life already existed in some form by 1939. But they were out of the financial reach of most people….

[Not] everything has gotten better in every way, all the time. There are areas in which things have gotten broadly worse….

  • … Substance abuse, and the police response to it, has devastated both urban and rural communities.
  • Divorce broke up millions of families, and while the college educated class seems to have found a new equilibrium of stable and happy later marriages, marriage is collapsing among the majority who do not have a college degree, leaving millions of children in unstable family situations where fathers are often absent from the home, and their attention and financial resources are divided between multiple children with multiple women.
  • Communities are much less cohesive than they used to be, and while the educated elite may have found substitutes online, the rest of the country is “bowling alone” more and more often—which is not merely lonely, but also means they have fewer social supports when they find themselves in trouble.
  • A weekly wage packet may buy more than it did sixty years ago, but the stability of manufacturing jobs is increasingly being replaced by contingent and unreliable shift work that is made doubly and triply difficult by the instability of the families that tend to do these jobs. The inability to plan your life or work in turn makes it hard to form a family, and stressful to keep one together….

Charles Murray writes candidly but not unsympathetically about the plight of low-income white Americans in Coming Apart: The State of White America, 1960-2010:

Drawing on five decades of statistics and research, Coming Apart demonstrates that a new upper class and a new lower class have diverged so far in core behaviors and values that they barely recognize their underlying American kinship—divergence that has nothing to do with income inequality and that has grown during good economic times and bad.

The top and bottom of white America increasingly live in different cultures, Murray argues, with the powerful upper class living in enclaves surrounded by their own kind, ignorant about life in mainstream America, and the lower class suffering from erosions of family and community life that strike at the heart of the pursuit of happiness.

Along comes Kevin D. Williamson of the National Review to pour scorn upon low-income whites. Williamson’s article, which appeared in the print edition of March 28, 2016, was originally titled “The Father-Fuhrer,” a reference to Donald Trump. The online version is called “Chaos in the Family, Chaos in the State: The White Working Class’s Dysfunction.” Written before Trump had clinched the GOP nomination, the piece is a transparent attempt to discredit Trump by discrediting a key source of his support: low-income whites in chronically depressed regions of the country.

Here’s a key passage:

The truth about these dysfunctional, downscale communities is that they deserve to die. Economically, they are negative assets. Morally, they are indefensible. Forget all your cheap theatrical Bruce Springsteen crap. Forget your sanctimony about struggling Rust Belt factory towns and your conspiracy theories about the wily Orientals stealing our jobs…. The white American underclass is in thrall to a vicious, selfish culture whose main products are misery and used heroin needles. Donald Trump’s speeches make them feel good. So does OxyContin. What they need isn’t analgesics, literal or political. They need real opportunity, which means that they need real change, which means that they need U-Haul.

Disgusting.

Scott Greer, writing in The Daily Caller (“National Review Writer: Working-Class Communities ‘Deserve To Die’,” March 12, 2016), seems to share my disgust. He closes with this:

While Williamson blames the people living in run-down white communities for their own woes, he does not apply the same principle to run-down minority communities. In his book and articles on the failures of Detroit, for instance, the National Review writer blames “progressivism” and unions for ruining the predominately African-American city.

Spot on. As I say in “Prosperity Isn’t Everything,”

Let’s begin with social norms, which are the basis of social ties. If you and I observe the same social norms, we’re likely to feel bound in some way, even if we’re not friends or relatives. This, of course, is tribalism, which is verboten among those who view all of mankind as brothers, sisters, and whatevers under the skin — all mankind except smarty-pants Americans of East Asian descent, Israeli Jews and American Jews who support Israel, Southerners (remember the Civil War!), and everyone else who is a straight, non-Hispanic white male of European descent. To such people, the only legitimate tribe is the tribe of anti-tribalism.

You may by now understand that I blame leftists for the breakdown of social norms and social ties. But how can that be if, as McArdle says, “the college educated class seems to have found a new equilibrium of stable and happy later marriages”? The college-educated class resides mostly on the left, and affluent leftists do seem to have avoided the rot.

Yes, but they caused it. You could think of it as a non-suicidal act of terror. But it would be kinder and more accurate to call it an act of involuntary manslaughter.  Leftists meant to make the changes that caused the rot; they just didn’t foresee or intend the rot. Nor is it obvious that they care about it, except as an excuse to “solve” social problems from on high by throwing money and behavioral prescriptions at them — which is why there’s social rot in the first place.

The good intentions embedded in governmental acts and decrees have stealthily expanded and centralized government’s power, and in the process have sundered civil society….

The undoing of traditional mores began in earnest in the 1960s, with a frontal assault on traditional morality and the misguided expansion of the regulatory-welfare state. The unraveling continues to this day. Traditional morality is notable in its neglect; social cohesion is almost non-existent, except where the bonds of religion and ethnicity remain strong. The social fabric that once bound vast swaths of America has rotted — and is almost certainly beyond repair.

The social fabric has frayed precisely because government has pushed social institutions aside and made dependents of hundreds of millions of Americans. As Ronald Reagan said in his first inaugural address, “In this present crisis, government is not the solution to our problem, government is the problem.”

Now for an ironic twist. Were the central government less profligate and intrusive, Americans would become much more prosperous.

Clearly, Kevin Williamson wants to distance himself from people who don’t share his elevated norms. In that respect, he’s no different from a sneering, leftist-voting yuppie. If he were truly conservative, he’d have compassion for the people about whom he writes.

But Williamson has shown himself to be a faux conservative: all economic efficiency and no heart.

Unorthodox Economics: 4. A Parable of Political Economy

This is the fourth entry in what I hope will become a book-length series of posts. That result, if it comes to pass, will amount to an unorthodox economics textbook. Here are the chapters that have been posted to date:

1. What Is Economics?
2. Pitfalls
3. What Is Scientific about Economics?
4. A Parable of Political Economy

Imagine a simple society in which Jack and Jill own neighboring farms that are equally endowed in natural resources, tools, and equipment. Jack makes bread and Jill makes butter. Jack also could make butter and Jill also could make bread, but both of them have learned that they are better off if they specialize. Thus:

  • Jack can make 1 loaf of bread or 0.5 pound of butter a day. (The rate of transformation is linear; e.g. Jack could make 0.5 loaf of bread and 0.25 pound of butter daily.)
  • Jill can make 1 loaf of bread or 1 pound of butter a day. (Again, the rate of transformation is linear; Jill could make 0.5 loaf of bread and 0.5 pound of butter daily.)
  • If both Jack and Jill make bread and butter their total daily output might be 1 loaf and 0.75 pounds.
  • Alternatively, if Jack specializes in bread and Jill specializes in butter their total daily output could be 1 loaf and 1 pound.

Jill is more intelligent than Jack, and thus more innovative. That’s why she is able to reap as much wheat and make as much bread as Jack, even though he’s stronger. That’s also why she’s able to produce twice as much butter as Jack.

Jill has an absolute advantage over Jack, in that she can make as much bread as he can, and more butter than he can. But Jack has a comparative advantage in the production of bread; if he specializes in bread and Jill specializes in butter, he and Jill will be better off than if they both produce bread and butter for themselves.

Jack and Jill negotiate the exchange rate between bread and butter. Each ends up with 0.5 loaf of bread; but Jill gets 0.6 pound of butter to Jack’s 0.4 pound. Jill ends up with more butter than Jack because her greater productivity puts her in a superior bargaining position. In sum, she earns more because she produces more.
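The parable’s arithmetic can be verified in a few lines of code. This is just a sketch of the numbers given above (the production rates and the even time split are the ones stated in the bullet list):

```python
# Jack can make 1 loaf/day or 0.5 lb butter/day; Jill, 1 loaf/day or 1 lb/day.
# With a linear rate of transformation, a fraction f of the day spent on bread
# yields f * bread_rate loaves and (1 - f) * butter_rate pounds of butter.

def daily_output(bread_rate, butter_rate, f_bread):
    return (f_bread * bread_rate, (1 - f_bread) * butter_rate)

# Case 1: each splits the day evenly between bread and butter.
jack = daily_output(1.0, 0.5, 0.5)   # (0.5 loaf, 0.25 lb)
jill = daily_output(1.0, 1.0, 0.5)   # (0.5 loaf, 0.5 lb)
no_specialization = (jack[0] + jill[0], jack[1] + jill[1])   # (1.0, 0.75)

# Case 2: Jack specializes in bread, Jill in butter.
jack = daily_output(1.0, 0.5, 1.0)   # (1.0 loaf, 0 lb)
jill = daily_output(1.0, 1.0, 0.0)   # (0 loaves, 1.0 lb)
specialization = (jack[0] + jill[0], jack[1] + jill[1])      # (1.0, 1.0)
```

Specialization yields the same bread output but a quarter-pound more butter per day, which is the surplus that Jack and Jill divide through trade.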

Jack and Jill have another neighbor, June, who makes clothing. Jack and Jill are more productive when they’re properly clothed during the colder months of the year. So they’re willing to trade some of their output to June, in return for heavy clothing.

Jerry, another neighbor, is a laborer who used to work for Jack and Jill, but has been unemployed for a long time because of Jill’s technological innovations. Jerry barely subsists on the fruit and game that he’s able to find and catch. Jack and Jill would hire Jerry, but he insists on a wage that they can’t afford to pay unless they spend less to maintain their equipment, which would eventually result in a lower rate of output.

Along comes Juan, a wanderer from another region, who has nothing to offer but his labor. Juan is willing to work for a lower wage than Jerry, but has to be fed and clothed so that he becomes strong enough to deliver the requisite amount of labor to be worthy of hire.

Jack, Jill, and June meet to discuss Jerry and Juan. They are worried about Jerry because he’s a neighbor whom they’ve known for a long time. They also empathize with Juan’s plight, though they’re not attached to him because he’s a stranger and doesn’t speak their language well.

Jake — the gunslinger hired by Jack, Jill, and June to protect them from marauders — invites himself to the meeting and brings Jerry with him. Jake likes to offset his stern image by feigning compassion. He tells Jack and Jill that they have a duty to pay Jerry the wage that he demands. He also requires Jack and Jill to feed and clothe Juan until he’s ready to work, and then they must hire him and pay him the same wage as Jerry. Jack and Jill demur because they can’t afford to do what Jake demands and make enough bread and butter to sustain their families and put something aside for retirement. June, who reacts with great sympathy to every misfortune around her — perceived and real — sides with Jake. Jerry argues that he should be helped, but Juan shouldn’t be helped because he’s just a stranger with a strange accent who’s looking for a handout.

Jake the gunslinger, disregarding Jerry’s reservation about Juan, announces that Jack and Jill must abide by his decision, inasmuch as there are 3 votes for it and only 2 votes against it — and he has the gun.

What happens next? Several things:

Jack and Jill quite properly accuse Jake of breach of contract. He has assumed a power that wasn’t given to him by Jack, Jill, and June when they hired him. Jake merely laughs at them.

Jack, Jill, and June (though she doesn’t understand it) have lost control of their businesses. They can no longer produce their goods efficiently. This means less output, that is, less to trade with each other. Less output also means that they won’t be able to invest as much as before in the improvement and expansion of their operations.

June is happy, for the moment, because Jake sided with her. But she will be unhappy when Jake abuses his authority in a way that she disapproves, and when she finally understands what Jake has done to her business.

Jack and Jill have good reason to resent Juan and Jerry for using Jake to coerce them, and June for siding with Jerry and Juan. There is now a rift that will hinder cooperation for mutual benefit (e.g., willingness to help each other in times of illness).

Juan and Jerry have become dependent on Jake, thus undermining their ability to develop marketable skills and good work habits. Their dependency will keep them mired in near-poverty.

In a sane world, Jack and Jill would get rid of Jake, and the others would applaud them for doing it.

*     *     *

Related posts:
The Sentinel: A Tragic Parable of Economic Reality
Liberty, General Welfare, and the State
Monopoly and the General Welfare
Gains from Trade
Trade
A Conversation with Uncle Sam

A Nation of Immigrants, a Nation of Enemies

I’m sick and tired of hearing that the United States is a nation of immigrants. So what if the United States is a nation of immigrants? The real issue is whether immigrants wish to become Americans in spirit, not in name only — loyal to the libertarian principles of the Constitution or cynical abusers of it.

I understand and sympathize with the urge to live among people with whom one shares a religion, a language, and customs. Tribalism is a deeply ingrained trait. It is not necessarily a precursor to aggression, contrary to the not-so-subtle message (aimed at white Americans) of the UN propaganda film that I was subjected to in high school. And the kind of tribalism found in many American locales, from the barrios of Los Angeles to the disappearing German communities of Texas to the Orthodox Jewish enclaves of New York City, is harmless compared with  Reconquista and Sharia.

Proponents of such creeds don’t want to become Americans whose allegiance is to the liberty promised by the Constitution. They are cynical abusers of that liberty, whose insidious rhetoric is evidence against free-speech absolutism.

But they are far from the only abusers of that liberty. It is unnecessary to import enemies when there is an ample supply of them among native-born Americans. Well, they are Americans in name because they were born in the United States and (in most cases) haven’t formally renounced their allegiance to the Constitution. But they are its enemies, no matter how cleverly they twist its meaning to support their anti-libertarian creed.

I am speaking of the left, of course. Lest we forget, the real threat to liberty in America is home-grown. The left’s recent hysterical hypocrisy leads me to renounce my naive vow to be a kinder, gentler critic of the left’s subversive words and deeds.

*     *     *

Related posts:
IQ, Political Correctness, and America’s Present Condition
Greed, Conscience, and Big Government
Tolerance
Privilege, Power, and Hypocrisy
Thinkers vs. Doers
Society, Polarization, and Dissent
Another Look at Political Labels
Individualism, Society, and Liberty
Social Justice vs. Liberty
My Platform
Polarization and De-facto Partition
How America Has Changed
The Left and “the People”
Why Conservatives Shouldn’t Compromise
Liberal Nostrums
Politics, Personality, and Hope for a New Era

Prosperity Isn’t Everything

There is no denying that per-capita income rises with specialization and trade; for example:

  • A is a farmer with land that’s good for growing fruit trees; B is a farmer with land that’s good for raising cattle.
  • The total output of both apples and butter will be greater if A specializes in growing apples and B specializes in making butter than if both A and B grew apples and made butter.
  • A and B can then trade apples for butter so that each of them is better off than he would have been in the absence of specialization and trade.

Sometimes A and B live in different cities, different States, and different countries. If the raison d’etre of specialization and trade is the maximization of income, it would be foolish to exclude international trade while allowing inter-State and inter-city trade. (Note that the preceding sentence begins with if.)

The combination of specialization, trade, invention, innovation, and entrepreneurship has wrought much good. Here’s Megan McArdle’s testimony:

By the standards of today, my grandparents were living in wrenching poverty. Some of this, of course, involves technologies that didn’t exist—as a young couple in the 1930s my grandparents had less access to health care than the most  neglected homeless person in modern America, simply because most of the treatments we now have had not yet been invented. That is not the whole story, however. Many of the things we now have already existed; my grandparents simply couldn’t afford them.  With some exceptions, such as microwave ovens and computers, most of the modern miracles that transformed 20th century domestic life already existed in some form by 1939. But they were out of the financial reach of most people.

If America today discovered a young couple where the husband had to drop out of high school to help his father clean tons of unsold, rotted produce out of their farm’s silos, and now worked a low-wage, low-skilled job, was living in a single room with no central heating and a single bathroom to share for two families, who had no refrigerator and scrubbed their clothes by hand in a washtub, who had serious conversations in low voices over whether they should replace or mend torn clothes, who had to share a single elderly vehicle or make the eight-mile walk to town  … that family would be the subject of a three-part Pulitzer prizewinning series on Poverty in America.

But in their time and place, my grandparents were a boring bourgeois couple, struggling to make ends meet as everyone did, but never missing a meal or a Sunday at church. They were excited about the indoor plumbing and electricity which had just been installed on his parents’ farm, and they were not too young to marvel at their amazing good fortune in owning an automobile. In some sense they were incredibly deprived, but there are millions of people in America today who are incomparably better off materially, and yet whose lives strike us (and them) as somehow objectively more difficult.

Much of that is true of my parents, who were of the same generation as McArdle’s grandparents. More of it is true of my maternal grandmother, who was born in 1880, wed in 1903, bore and raised ten children, and was widowed at the age of 60. I remember well the years before she reached the age of 70; until then she cooked on a wood-fired range, pumped water from a well in her backyard, and went to the outhouse for calls of nature. And yet, the following things, and much more, came to pass in her lifetime: alternating-current electricity, a telephone in most homes (though my grandmother lacked one until she was in her 70s), automobiles (though she never learned to drive), airplanes (she first flew at the age of 93), movies, radio, movies with sound, television (she never owned one), radar, penicillin, vaccinations against various debilitating diseases, electric typewriters, and early transistorized computers.

Because my dominant memories of my grandmother and her way of life in a small village are boyhood memories, it’s tempting to characterize them as nostalgic and somewhat romanticized. But I know that she was more or less typical of the residents of her village. Though she was far from rich, she wasn’t poor by the standards of the village. She certainly didn’t feel impoverished or resentful about her lack of material goods.

Today, however, relatively poor people in America have far, far more in the way of material goods than my grandmother ever dreamt of owning, yet they are anxious and even miserable. Why? Here’s McArdle’s view:

[Not] everything has gotten better in every way, all the time. There are areas in which things have gotten broadly worse….

  • … Substance abuse, and the police response to it, has devastated both urban and rural communities.
  • Divorce broke up millions of families, and while the college educated class seems to have found a new equilibrium of stable and happy later marriages, marriage is collapsing among the majority who do not have a college degree, leaving millions of children in unstable family situations where fathers are often absent from the home, and their attention and financial resources are divided between multiple children with multiple women.
  • Communities are much less cohesive than they used to be, and while the educated elite may have found substitutes online, the rest of the country is “bowling alone” more and more often—which is not merely lonely, but also means they have fewer social supports when they find themselves in trouble.
  • A weekly wage packet may buy more than it did sixty years ago, but the stability of manufacturing jobs is increasingly being replaced by contingent and unreliable shift work that is made doubly and triply difficult by the instability of the families that tend to do these jobs. The inability to plan your life or work in turn makes it hard to form a family, and stressful to keep one together….
  • Widespread credit has democratized large purchases like furniture and cars. It has also enabled many people, particularly financially marginal people, to get into serious trouble.  Debt magnifies your life experience: when things are going relatively well, it gives you more options, but when things are going badly, it can turn a setback into a catastrophe—as many, many families found out in 2008….

This list illustrates why public policy seems to be struggling to come up with a plan of attack against our current insecurities. The welfare state is relatively good at giving people money: you collect the taxes, write a check, and now people have money. The welfare state has proven very bad at giving people stable jobs and stable families, a vibrant community life, promising career tracks, or a cure for their drug addiction. No wonder so many hopes now seem to be pinned on early childhood education, far in excess of the evidence to support them: it is the only thing we have not already tried and failed at.

But I think this list illustrates the poverty of trying to measure living standards by staring at median wages. Many of the changes of the last century show up in that statistic, but others, like the time no longer spent plucking chickens, or the joys of banishing lye from the pantry, appear nowhere.  Nor do the changes in job and family structure that have made the lives of people who are indisputably vastly materially richer than my young grandparents were, nonetheless feel much more precarious.

Where did it all go wrong? And I do believe that it went wrong. I say that as a man who has lived more than his three-score and ten years, remains in good health, lives comfortably, has a loving wife of 52 years, has two fine children and twelve joyous grandchildren, and is by nature an optimistic achiever who isn’t easily thrown off course by a setback.

It didn’t go wrong because of globalization, though globalization may have hastened the rot. It didn’t go wrong because of prosperity per se, though the rot was abetted by the fevered pursuit of prosperity. It went wrong because of the fraying of the social ties that bound much of America for so long — even with the Civil War and its decades-long residue of bitterness.

Why did those ties fray? And why are they now weaker than they have been since the eve of the Civil War?

Let’s begin with social norms, which are the basis of social ties. If you and I observe the same social norms, we’re likely to feel bound in some way, even if we’re not friends or relatives. This, of course, is tribalism, which is verboten among those who view all of mankind as brothers, sisters, and whatevers under the skin — all mankind except smarty-pants Americans of East Asian descent, Israeli Jews and American Jews who support Israel, Southerners (remember the Civil War!), and everyone else who is a straight, non-Hispanic white male of European descent. To such people, the only legitimate tribe is the tribe of anti-tribalism.

You may by now understand that I blame leftists for the breakdown of social norms and social ties. But how can that be if, as McArdle says, “the college educated class seems to have found a new equilibrium of stable and happy later marriages”? The college-educated class resides mostly on the left, and affluent leftists do seem to have avoided the rot.

Yes, but they caused it. You could think of it as a non-suicidal act of terror. But it would be kinder and more accurate to call it an act of involuntary manslaughter.  Leftists meant to make the changes that caused the rot; they just didn’t foresee or intend the rot. Nor is it obvious that they care about it, except as an excuse to “solve” social problems from on high by throwing money and behavioral prescriptions at them — which is why there’s social rot in the first place.

The good intentions embedded in governmental acts and decrees have stealthily expanded and centralized government’s power, and in the process have sundered civil society. Walter Williams puts it this way in “Culture and Social Pathology” (creators.com, June 16, 2015):

A civilized society’s first line of defense is not the law, police and courts but customs, traditions, rules of etiquette and moral values. These behavioral norms — mostly transmitted by example, word of mouth and religious teachings — represent a body of wisdom distilled over the ages through experience and trial and error. They include important thou-shalt-nots, such as thou shalt not murder, thou shalt not steal and thou shalt not cheat. They also include all those courtesies that have traditionally been associated with ladylike and gentlemanly conduct.

The failure to fully transmit these values and traditions to subsequent generations represents one of the failings of what journalist Tom Brokaw called “The Greatest Generation.” People in this so-called great generation, who lived during the trauma of the Great Depression and fought World War II, not only failed to transmit the moral values of their parents but also are responsible for government programs that will deliver economic chaos….

For nearly three-quarters of a century, the nation’s liberals have waged war on traditional values, customs and morality. Our youths have been counseled that there are no moral absolutes. Instead, what’s moral or immoral is a matter of personal opinion. During the 1960s, the education establishment began to challenge and undermine lessons children learned from their parents and Sunday school with fads such as “values clarification.” So-called sex education classes are simply indoctrination that undermines family and church strictures against premarital sex. Lessons of abstinence were considered passe and replaced with lessons about condoms, birth control pills and abortions. Further undermining of parental authority came with legal and extralegal measures to assist teenage abortions with neither parental knowledge nor parental consent….

If it were only the economic decline threatening our future, there might be hope. It’s the moral decline that spells our doom.

The undoing of traditional mores began in earnest in the 1960s, with a frontal assault on traditional morality and the misguided expansion of the regulatory-welfare state. The unraveling continues to this day. Traditional morality is now notable mainly in its neglect; social cohesion is almost non-existent, except where the bonds of religion and ethnicity remain strong. The social fabric that once bound vast swaths of America has rotted — and is almost certainly beyond repair.

The social fabric has frayed precisely because government has pushed social institutions aside and made dependents of hundreds of millions of Americans. As Ronald Reagan said in his first inaugural address, “In this present crisis, government is not the solution to our problem, government is the problem.”

Now for an ironic twist. Were the central government less profligate and intrusive, Americans would become much more prosperous.

*     *     *

Related posts:
Social Norms and Liberty
Whiners — Left and Libertarian
The Adolescent Rebellion Syndrome
“Intellectuals and Society”: A Review
Government vs. Community
The Left’s Agenda
The Left and Its Delusions
The Destruction of Society in the Name of “Society”
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Society and the State
Are You in the Bubble?
The Culture War
Ruminations on the Left in America
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Democracy, Human Nature, and the Future of America
1963: The Year Zero
Society
How Democracy Works
“Cheerful” Thoughts
How Government Subverts Social Norms
Turning Points
The Twilight’s Last Gleaming?
How America Has Changed

Why Conservatives Shouldn’t Compromise

It’s tempting, sometimes, to compromise with the left’s agenda, which is top-down regulation of social and economic relations. The agenda has a huge constituency, after all. Think of the tens of millions of persons who would be harmed in the short run, if not for a long time, if a leftist scheme were undone.

Consider Obamacare, for example. A key provision of Obamacare — the camel’s nose, head, and shoulders in the tent of universal health care (a.k.a., socialized medicine) — is the vast expansion of eligibility for Medicaid. In the 30-some States that have opted to participate in the expanded program, persons with incomes up to 133 percent of the poverty line are eligible, including adults without dependent children.

It would seem that only a Simon Legree or Ebenezer Scrooge would deny Medicaid coverage to those millions who have obtained it by way of Obamacare. Or it would until the following considerations come to mind:

  • The poverty line is a misleading metric. It’s a relative measure of income, not an absolute one. Most “poor” persons in today’s America are anything but poor in relation to the truly poor of the world, and they live far above a subsistence level. The poverty line is nothing but an arbitrary standard that justifies income redistribution.
  • Other persons, with their own problems, are paying for the government’s generous “gift” to the semi-poor. But who is really in a position to say that the problems of Medicaid recipients are more deserving of subsidization than the problems facing those who defray the subsidy?
  • If expanded Medicaid coverage were withdrawn, those now covered would be no worse off than they had been before taxpayers were forced to subsidize them.
  • Being relatively poor used to be a good reason for a person to work his way up the ladder of success. Perhaps not far up the ladder, but in an upward direction. It meant learning skills — on the job, if necessary — and using those skills to move on to more-demanding and higher-paying jobs. Redistributive measures — Medicaid subsidies, food stamps, extended unemployment benefits, etc. — blunt the incentive to better oneself and, instead, reinforce dependency on government.

I will underscore the last point. The lack of something, if it’s truly important to a person, is an incentive for that person to find a way to afford the something. That’s what my parents’ generation did, even in the depths of the Great Depression, without going on the dole. There’s no reason why later generations can’t do it; it’s merely assumed that they can’t. But lots of people do it. I did it; my children did it; my grandchildren are doing it.

Republicans used to say such things openly and with conviction, before they became afraid of seeming “mean.” Principled conservatives should still be thinking and saying such things. When conservatives compromise their principles because they don’t want to seem “mean,” they are complicit in the country’s march down the road to serfdom — dependency on and obeisance to the central government.

Every advance in the direction of serfdom becomes harder and harder to reverse. The abolition of Social Security, Medicare, and Medicaid is now unthinkable, even though those programs have caused hundreds of millions of Americans to become addicted to government handouts.

And how does government pay for those handouts? In part, it taxes many of the people who receive them. It also pays generous salaries and benefits to the army of drones who administer them. It’s a Ponzi scheme enforced at gunpoint.

The best time — usually the only time — to kill a government program is before it starts. That’s why conservatives shouldn’t compromise.

The Problem with Political Correctness

UPDATED BELOW 12/09/16

Why do conservatives and (some) libertarians cringe and react negatively to political correctness? I mean by political correctness “language, policies, or measures that are intended to avoid offense or disadvantage to particular groups in society.” Further, critics of p.c. use the term “as a pejorative, implying that these policies are excessive,” not to mention the language and measures of p.c.-ness.

There are several reasons to reject p.c.-ness:

  1. It is often condescending toward the identity groups it is meant to protect and advance.
  2. It is meant to hide the truth about common characteristics of such groups.
  3. It implies that those persons who don’t join in p.c.-ness are racist bigots with minds that are closed to reality (which is exactly what 1 and 2 say about proponents of p.c.-ness).
  4. The policies and measures that flow from p.c.-ness usually go beyond “avoiding disadvantage to particular groups” to confer advantage on particular groups.
  5. Such policies and measures are therefore anti-libertarian, and often are costly and ineffective (even counterproductive).
  6. Such policies and measures tend to penalize persons who have had nothing to do with any real disadvantages that may have befallen various identity groups.

I can’t speak for conservatives as a group — though they should be a “protected group” (I write sarcastically). But I can tell you that my rejection of p.c.-ness is based on all six of those reasons. And the sum of the six is a devastating attack on social comity (or what’s left of it), even-handed treatment of all persons under the law, freedom of speech, freedom of association, property rights, and the economic well-being of the nation. Other than that, there’s nothing wrong with p.c.-ness.

Whatever merit there is in p.c.-ness, it is canceled by the bad odor that surrounds it. P.c.-ness is a variant of crying wolf: The more often it’s invoked, the less believable it becomes. There’s a corollary: The more people who require p.c. treatment, the fewer people who are left to be blamed for the conditions that p.c.-ness is meant to remedy. Or, if almost everyone is a “victim,” almost no one is a “victim.”

Unless you believe, of course, that straight, white males of European descent are to blame for every bad thing that has befallen every other identity group. Or unless you believe that it’s simply “unfair” for straight, white males of European descent to have been so dominant for so long in so many fields of endeavor.

Was it “unfair” of Newton and Einstein to have been the greatest of physicists? Was it “unfair” of Abraham Lincoln to have been the president who conquered the South and thereby put an end to slavery? Is it “unfair” that there seems to be something in the genetic makeup of East Asians that gives them higher IQs on average than whites, who have higher IQs on average than blacks? (Why aren’t whites complaining about the “unfairness” of the distribution of IQs?) Is it “unfair” that (in the United States, at least) whites, who are on average smarter than blacks, earn more than blacks on average? If that is “unfair,” why is it “fair” that the NBA is dominated by black athletes whose IQs are lower than the IQs of white physicists but who earn many, many times as much as white physicists do?

The problem with “fairness,” which is at the heart of p.c.-ness, is that it is a reality-free concept. It doesn’t take account of the facts of life, such as those alluded to in the preceding paragraph. It assumes that differences in outcomes (e.g., relative earnings, literary fame, scientific achievements, political advancement) are due mainly to one’s membership (or lack thereof) in an identity group. P.c.-ness leaves no room for reality. It leaves no room for individual responsibility. It seeks special treatment for groups of people, regardless of the mental, physical, or moral capacity of each member of a group. (It’s just a variant of white supremacy.)

Which brings me to the deeper reason why conservatives and (some) libertarians instinctively cringe and react negatively to political correctness. Conservatives and libertarians are big on personal responsibility. It’s at the center of libertarianism. It plays an important role in conservatism, where personal responsibility includes not only responsibility for one’s self and for one’s role in society (properly understood), but also responsibility for the observance and continuance of time-tested social norms.

Political correctness casts personal responsibility aside and replaces it with identity politics. That’s the deeper reason why conservatives and (some) libertarians cringe and react negatively to it.

UPDATE 12/09/16

Travis Scott focuses on one (of many) counterproductive effects of political correctness in “The Science Says Putting Women into Combat Endangers National Security” (The Federalist, December 9, 2016). The title of the article speaks for itself. I will quote two passages. The first is about the apprehension of an intruder who climbed over a fence at the White House:

In 2014, a veteran named Omar Gonzalez jumped a fence and rushed the White House. He had a weapon, and made it all the way across the green lawn and into the White House. He was first confronted in the White House by a lone guard, whom he overpowered with ease. He ran through the White House and was not apprehended until he got to the East Room.

Many of the news reports failed to mention that the guard Gonzalez overpowered was a female member of the Secret Service, and that the people who apprehended Gonzalez were males. While the president’s life may have been put in jeopardy by putting a female guard between him and a knife-wielding wild man (a guard the Secret Service had deemed physically fit enough to defend the president), other issues were addressed instead, such as “added layers” of security to the lawn of the White House.

That’s just a single illustration of the folly of the politically correct position which says that women can do everything that men can do. (Most men — conservative ones, at least — wouldn’t think of claiming that they can do everything that women can do.) More generally, with respect to gender integration of combat forces, Scott writes about the Marine Corps study:

Coinciding with all previous research and scientific findings, in military training also women fail at incredibly higher rates at physically demanding tasks. In 2015, the Marine Corps concluded a yearlong study of how de-sexing units would affect combat readiness. They found: “all-male units were faster, more lethal and able to evacuate casualties in less time… All-male squads, the study found, performed better than mixed gender units across the board. The males were more accurate hitting targets, faster at climbing over obstacles, better at avoiding injuries.” Similar studies within our military, and even from other countries, reinforce these findings.

Irrationally, government officials in the Obama administration have opted to ignore all available scientific data to forward their own politically correct agenda. This suggests they didn’t care what the science said to begin with. It means they are willing to degrade the quality of the military’s effectiveness to artificially advance women who can’t compete by the same standards, and by doing this they are knowingly putting our soldiers at greater risk for injury and death. For this, their actions are condemnable before God and all the men of their country.

While some nitpick the all-male versus mixed-sex units study, no one has suggested studying how effective all-male units would be against all-female units. Not only are there simply not enough women capable and willing to fill such roles, but nobody thinks all-female units could be as effective as all-male units. It should stand to reason that because we know women are weaker than men on a biological level, it should be obvious that integrating women into all-male units would tactically weaken those units. When you take these plain truths and put them together, the Marine Corps findings aren’t all that surprising.

Sgt. Maj. Justin D. Lehew, who was a part of this Marine Corps study, lashed back at critics who claimed “better women could have been picked,” and that the evaluators’ mindsets were “biased” against women from the start:

We selected our best women for this test unit, selected our most mature female leaders as well. The men (me included) were the most progressive and open minded that you could get… The best women in The GCEITF as a group in regard to infantry operations were equal or below in most all cases to the lowest 5 percent of men as a group in this test study. They are slower on all accounts in almost every technical and tactical aspect and physically weaker in every aspect across the range of military operations… Listen up folks. Your senior leadership of this country does not want to see America overwhelmingly succeed on the battlefield, it wants to ensure that everyone has an opportunity to pursue whatever they want regardless of the outcome on national security…There is nothing gender biased about this, it is what it is. You will never see a female Quarterback in the NFL, there will never be a female center on any NHL team and you will never see a female batting in the number 4 spot for the New York Yankees. It is what it is.

What it comes down to is this: Conservatives are realists. Politically correct “liberals” are fantasists.

Civil War?

I follow American Thinker because the articles and posts there are usually provocative. A lot of it is wild-eyed speculation by right-wingers. But even the most wild-eyed stuff sometimes has a tangential relationship to a plausible idea.

This is from Robert Arvay’s “Will the Left Actually Incite a Civil War?” (November 21, 2016):

It is … not entirely impossible for me to peer into the minds of the anti-Trump protesters, since their dread has actually materialized – as a Clinton defeat at the polls.  So far, their angst has been manifested mostly in tears, whining, and cowering – but there is a violent element among them.  Their fears are enormous, some imaginary, some real, but in either case, those fears will motivate them.  The imaginary fears include the predicted assembly of illegal immigrants into concentration camps.  The real fears include loss of political power and all its perquisites, including the dictatorial ability to force bakers to serve cakes at same-sex ceremonies, an ability that portends much worse to come.

Be assured that every failure of liberal policies (such as the implosion of the Obama health care system) will now be blamed on Republicans, and particularly on the man they despise most, Donald Trump.  The Democrat ministry of propaganda (formerly the mainstream news media) will headline every unfortunate instance of a child suffering from disease, and loudly proclaim that the child would be in perfect health had not Trump cruelly withheld the funds to save that child.  Such diatribes cannot help but incite violent emotions.

Calls for assassination will be made, as in fact they already have been, including by educators.  God help us should something tragic result.

From my side of the front lines, I still view the republic as at risk.  From their side, many may now feel they have nothing to lose.  Had Clinton won, I would very likely feel the same.

I don’t know how any of the things that Arvay mentions would incite a civil war. It’s true (I hope) that Trump will clamp down on political correctness, and that a Supreme Court with the addition of a Trump nominee would reverse the anti-free speech laws that have sprung up in some States. But would violence ensue? I doubt it.

Yes, the MSM will continue to be the Democrat ministry of propaganda — nothing new there — and will double down on its portrayal of Republicans as heartless and cruel — nothing new there, either.

If Trump were assassinated by a leftist, or a cabal of leftists, would that lead to civil war? It might lead to anti-leftist violence by the kind of people who are drawn to Richard B. Spencer. But a violent response, if any, would most likely come from black militants, who are leftists only in the sense that they are loyal to the Democrat Party and its patronizing policies toward blacks. The resulting conflict would shed a lot of blood, but it could be mopped up quickly by police forces and National Guard units empowered to do so by the governors of States where violence erupts. And under a President Pence, they probably would feel empowered to do so, not constrained by the specter of civil-rights investigations by the Department of Justice. I would expect Pence to do everything in his power (and perhaps more) to support local and State authorities in their efforts to quell violence. He would have nothing to gain and much to lose if it weren’t quelled. Failure to do so would undermine his authority as the newly fledged president.

What’s much more likely than a civil war is a growing secessionist movement on the left. As I argue in “Polarization and De-Facto Partition,” such a movement could be exploited to advance the cause of liberty:

Given the increasing polarization of the country — political and geographic — something like a negotiated partition seems like the only way to make the left and the right happier.

And then it occurred to me that a kind of partition could be achieved by constitutional means; that is, by revising the Constitution to return to its original plan of true federalism. The central government would, once again, be responsible for the defense of liberty and free trade. Each State would, within the framework of liberty, make its own decisions about the extent to which it intervenes in the economic and social affairs of its citizens.

How might that come to pass?

There are today in this land millions — probably tens of millions — of depressed leftists who foresee at least four years of GOP rule dedicated to the diminution of the regulatory-welfare state….

The shoe is now on the other foot. A lot of leftists will want out (see this for example), just as Northern abolitionists wanted separation from the South in the 1830s and 1840s. Let’s give them a way out while the giving is good, that is, while the GOP controls the federal government. The way out for the left is also the way out for conservatives.

Congress, namely, its Republican majorities, can call an Article V convention of the States….

The convention would be controlled by Republicans, who control a majority of State legislatures. The Republican majority should make it clear from the outset that the sole purpose of the convention is to devolve power to the States. For example, if a State government wants to establish its own version of Social Security to supplement what remains of it after future benefits have been scaled back to match projected future revenues, that State government wouldn’t be prevented from doing so. And it could design that program — and any others — as it wishes, free from interference by the central government.

For more (much more) read the whole thing, and then read my version of a revised Constitution: “A Constitution for the 21st Century.”


Economically Liberal, Socially Conservative

A provocative piece by Samuel Gregg, “Markets, Catholicism, and Libertarianism” (Public Discourse, October 24, 2016) reminds me of an idea for a post that flitted through my aging brain a while back. Gregg writes:

In a recent American Prospect article, John Gehring maintains that Catholics like myself who regard markets as the most optimal set of economic conditions are effectively promoting libertarian philosophy. Gehring’s concerns about libertarianism and what he calls “free market orthodoxy” have been echoed in other places.

The generic argument seems to be the following. Promoting market approaches to economic life involves buying into libertarian ideology. . . .

What [Gehring and other] critics seem to miss is that a favorable assessment of markets and market economics need not be premised on acceptance of libertarianism in any of its many forms. . . .

Libertarianism’s great strength lies in economics. Prominent twentieth-century libertarian economists, such as Ludwig von Mises and Friedrich von Hayek, made major contributions to the critique of socialist economics. . . .

Philosophically speaking, Mises associated himself, especially in Human Action (1949), with Epicureanism and utilitarianism. Hayek’s views were more complicated. While his Law, Legislation and Liberty (1973/1976/1979) rejected Benthamite utilitarianism, Hayek embraced a type of indirect-rule utilitarianism in works such as The Constitution of Liberty (1960). He also articulated progress-for-the-sake-of-progress arguments and social evolutionist positions heavily shaped by David Hume’s writings.

Such philosophical views are characteristic of many self-described libertarians. . . .

None of the above-noted contributions to economics by Mises and Hayek are, however, dependent upon any of their libertarian philosophical commitments.

That’s exactly right. The great insight of libertarian economics is that people acting freely and cooperatively through markets will do the best job of producing goods and services that match consumers’ wants. Yes, there’s lack of information, asymmetrical information, buyer’s remorse, and (supposed) externalities (which do find their way into prices). But the modern “solution” to such problems is one-size-fits-all regulation, which simply locks in the preferences of regulators and market incumbents, and freezes out (or makes very expensive) the real solutions that are found through innovation, entrepreneurship, and competition.

Social conservatism is like the market liberalism of libertarian economics. Behavior is channeled in cooperative, mutually beneficial, and voluntary ways by the institutions of civil society: family, church, club, community, and — yes — commerce. It is channeled by social norms that have evolved from eons of voluntary social intercourse. Those norms are the bedrock and “glue” of civilization. Government is needed only as the arbiter of last resort, acting on behalf of civil society as the neutral enforcer of social norms of the highest order: prohibitions of murder, rape, theft, fraud, and not much else. Civil society, if left alone, would deal adequately with lesser transgressions through inculcation and disapprobation (up to and including ostracism). When government imposes norms that haven’t arisen from eons of trial-and-error it undermines civil society and vitiates the civilizing influence of social norms.

The common denominator of market liberalism and social conservatism is that both are based on real-world behavior. Trial and error yields information that free actors are able to exploit for their betterment and (intended or not) the betterment of others.

Related posts:
Pseudo-Libertarian Sophistry vs. True Libertarianism
More Pseudo-Libertarianism
More about Conservative Governance
Burkean Libertarianism
True Libertarianism, One More Time
Why Conservatism Works
Liberty and Society
Liberty as a Social Construct: Moral Relativism?
Defending Liberty against (Pseudo) Libertarians
Parsing Political Philosophy (II)
Modern Liberalism as Wishful Thinking
Romanticizing the State
Governmental Perversity
Libertarianism and the State
“Liberalism” and Personal Responsibility
My View of Libertarianism
More About Social Norms and Liberty
The Authoritarianism of Modern Liberalism, and the Conservative Antidote
Another Look at Political Labels
Individualism, Society, and Liberty
Social Justice vs. Liberty