Is This How It Ends?

With a whimper.

I recently read Rebekah Koffler’s Putin’s Playbook: Russia’s Secret Plan to Destroy America. Koffler is an American citizen of Russian birth. She came to the U.S. about 30 years ago and worked for the Defense Intelligence Agency (DIA) from 2008 to 2016. Her career there — which involved the analysis of intelligence at the highest levels of classification — ended when she lost her clearance through machinations by higher-ups who disliked her hard-line views on the threat posed by Russia.

Koffler’s book is a riveting and sobering read; for example:

Putin’s new military doctrine is aggressive. It is even more dangerous than the one which the Soviet Union followed during the Cold War…. It is more dangerous because of the special role reserved for nuclear weapons. Unlike during the Cold War, when the Soviets were preparing for a “bolt-out-of-the-blue-sky” nuclear strike from the United States, with the eventual symmetrical goal of Washington’s decapitation and total annihilation, today’s doctrine is more grounded in “reality”— Russian reality, that is. Putin’s doctrine is focused on Russia’s preparedness to fight a limited war — including with nuclear weapons — with the narrow objective of “defending” what Moscow views as its strategic perimeter. In other words, the nuclear option is not a theoretical doctrine….

That’s the aspect of Russia’s military doctrine which seems to have deterred the U.S. and NATO from intervening directly in the Ukraine war. But that isn’t the really scary part of Russia’s military doctrine. This is:

Russian doctrine envisions degrading or disrupting the U.S. forces’ “kill chain” by targeting the C4ISR (command, control, communications, computers, intelligence, surveillance, and reconnaissance) and space systems on which America’s forces critically depend for their defense intelligence and warfighting operations…. Just like our smartphones, U.S. PGMs, or “smart weapons”, are guided to a large extent by GPS satellites, unlike the previous generation’s, which are now called “unguided” or “dumb” bombs. To impede or thwart U.S. military operations, Russia has developed formidable counter-space (anti-satellite) and cyber capabilities to create what the Pentagon calls an anti-access/area-denial (A2AD) environment. Russia will use A2AD-type capabilities to deny, or at minimum impede, U.S. forces’ access to the conflict zone, so it can “seize strategic initiative” during the initial period of war, as the doctrine dictates, interdict U.S. forces’ reinforcement, and fight the conflict with the balance of forces favoring Russia. Russia believes that its new doctrine, with weapons to match it, enables Moscow to inflict “unacceptable damage” on the U.S. and/or the Allied military, economy, and population and end the conflict on terms favorable or at minimum acceptable to the Kremlin….

How would it end? Here’s Koffler’s take:

I am not in a position to write about the scenarios based on actual wargames that I participated in [because of their classification]. All I can say is that my experience is similar to that of RAND Corporation analyst David Ochmanek, who has participated in RAND wargames sponsored by the Pentagon, and former deputy secretary of defense (DEPSECDEF) Robert Work. “In our games, when we fight Russia and China, blue [the U.S. military] gets its ass handed to it,” Ochmanek disclosed to the publication Breaking Defense. Former DEPSECDEF Work echoed Ochmanek’s commentary: “The simulated enemy forces tend to shut down [U.S.] networks so effectively that nothing works.” Worst of all, both former DEPSECDEF and the RAND analyst said, “The [United States] doesn’t just take body blows, it takes a hard hit in the head as well.… Its communications satellites, wireless networks, and other command-and-control systems suffer such heavy hacking and jamming that they are suppressed, if not shattered.” And then, according to Work, when “the red force really destroys our command and control, we stop the exercise, … instead of figuring out how to keep fighting when your command post gives you nothing but blank screens and radio static.” This is exactly what the Russian doctrine envisions and counts on — breaking the U.S. forces’ will to fight by taking away their technological advantages and crutches.


Related posts:

Pay Any Price? (07/13/22)

The Meaning of the War in Ukraine (07/26/22)

The Way Ahead (09/15/22)

Mutual Deterrence and the War in Ukraine (09/27/22)

War with China? (11/19/22)

The Death of a Nation

A nation I remember fondly.

“America is not just a country,” said the rock singer Bono, in Pennsylvania in 2004: “It’s an idea.”

That’s the opening of John O’Sullivan’s essay, “A People, Not Just an Idea” (National Review, November 19, 2015):

I didn’t choose [Bono’s] quotation to suggest that this view of America is a kind of pop opinion. It just happened that in my Google search his name came ahead of many others, from George Will to Irving Kristol to almost every recent presidential candidate, all of whom had described America either as an idea or as a “proposition nation,” to distinguish it from dynastic realms or “blood and soil” ethnicities. This philosophical definition of America is now the conventional wisdom of Left and Right, at least among people who write and talk of such things.

Indeed, we have heard variations on Bono’s formulation so many times that we probably fail to notice how paradoxical it is. But listen to how it sounds when reversed: “America is not just an idea; it is a nation.” Surely that version has much more of the ring of common sense. For a nation is plainly something larger, more complex, and richer than an idea. A nation may include ideas. It may have evolved under the influence of a particular set of ideas. But because it encompasses so many other things — notably the laws, institutions, language of the nation; the loyalties, stories, and songs of the people; and above all Lincoln’s “mystic chords of memory” — the nation becomes more than an idea with every election, every battle, every hero, every heroic tale, every historical moment that millions share.

That is not to deny that the United States was founded on some very explicit political ideas, notably liberty and equality, which Jefferson helpfully wrote down in the Declaration of Independence. To be founded on an idea, however, is not the same thing as to be an idea. A political idea is not a destination or a conclusion but the starting point of an evolution — and, in the case of the U.S., not really a starting point, either. The ideas in the Declaration on which the U.S. was founded were not original to this country but drawn from the Anglo-Scottish tradition of Whiggish liberalism. Not only were these ideas circulating well before the Revolution, but when the revolutionaries won, they succeeded not to a legal and political wasteland but to the institutions, traditions, and practices of colonial America — which they then reformed rather than abolished….

As John Jay pointed out, Americans were fortunate in having the same religion (Protestantism), the same language, and the same institutions from the first. Given the spread of newspapers, railways, and democratic debate, that broad common culture would intensify the sense of a common American identity over time. It was a cultural identity more than an ethnic one, and one heavily qualified by regional loyalties… And the American identity might have become an ethnic one in time if it had not been for successive waves of immigration that brought other ethnicities into the nation.

That early American identity was robust enough to absorb these new arrivals and to transform them into Americans. But it wasn’t an easy or an uncomplicated matter. America’s emerging cultural identity was inevitably stretched by the arrivals of millions of people from different cultures. The U.S. government, private industry, and charitable organizations all set out to “Americanize” them. It was a great historical achievement and helped to create a new America that was nonetheless the old America in all essential respects….

By World War II, … all but the most recent migrants had become culturally American. So when German commandos were wandering behind American lines in U.S. uniforms during the Battle of the Bulge, the G.I.s testing their identity asked not about … the First Amendment but questions designed to expose their knowledge (or ignorance) of American life and popular culture….

Quite a lot flows from this history. Anyone can learn philosophical Americanism in a civics class; for a deeper knowledge and commitment, living in America is a far surer recipe…. Americans are a distinct and recognizable people with their own history, culture, customs, loyalties, and other qualities that are wider and more various than the most virtuous summary of liberal values….

… If Americans are a distinct people, with their own history, traditions, institutions, and common culture, then they can reasonably claim that immigrants should adapt to them and to their society rather than the reverse. For most of the republic’s history, that is what happened. And in current circumstances, it would imply that Muslim immigrants should adapt to American liberty as Catholic immigrants once did.

If America is an idea, however, then Americans are not a particular people but simply individuals or several different peoples living under a liberal constitution.

For a long time the “particular people” were not just Protestants but white Protestants of European descent. As O’Sullivan points out, Catholics (of European descent) eventually joined the ranks of “particular people”.

The United States was built upon the “blood and soil” allegiance of whites whose origins lay in Europe. That allegiance was diluted by blacks, most of whom were alienated from the nation by slavery, Jim Crow, lingering racial prejudice (a two-way street), and the leftist bigotry of low expectations. That allegiance has been further diluted by Hispanics, who (in the first generation, at least) are marked by differences of color and culture. Blacks and Hispanics belong to the “proposition” nation, not the “blood and soil” nation.

Blacks and Hispanics have been joined by the large numbers of Americans who no longer claim allegiance to the “blood and soil” nation, regardless of their race or ethnicity — leftists, in other words. Since the 1960s, leftists have played an ever-larger, often dominant, role in the governance of America. They have rejected the “history, culture, customs, [and] loyalties” which once bound most Americans. In fact they are working daily — through government, the academy, public schools, the media, Big Tech, and corporate America — to transform America fundamentally by erasing the “history, culture, customs, [and] loyalties” of Americans from the nation’s laws and the people’s consciousness.

Pat Buchanan hits it on the head:

In Federalist No. 2, John Jay writes of them as “one united people . . . descended from the same ancestors, speaking the same language, professing the same religion, attached to the same principles of government, very similar in their manners and customs . . .”

If such are the elements of nationhood and peoplehood, can we still speak of Americans as one nation and one people?

We no longer have the same ancestors. They are of every color and from every country. We do not speak one language, but rather English, Spanish and a host of others. We long ago ceased to profess the same religion. We are Evangelical Christians, mainstream Protestants, Catholics, Jews, Mormons, Muslims, Hindus and Buddhists, agnostics and atheists.

Federalist No. 2 celebrated our unity. Today’s elites proclaim that our diversity is our strength. But is this true or a tenet of trendy ideology?

After the attempted massacre of Republican Congressmen at that ball field in Alexandria, Fareed Zakaria wrote: “The political polarization that is ripping this country apart” is about “identity . . . gender, race, ethnicity, sexual orientation (and) social class.” He might have added — religion, morality, culture and history.

Zakaria seems to be tracing the disintegration of our society to that very diversity that its elites proclaim to be its greatest attribute: “If the core issues are about identity, culture and religion … then compromise seems immoral. American politics is becoming more like Middle Eastern politics, where there is no middle ground between being Sunni or Shiite.”

Among the issues on which we Americans are at war with one another — abortion, homosexuality, same-sex marriage, white cops, black crime, Confederate monuments, LGBT rights, affirmative action.

America is no longer a nation whose inhabitants are bound mainly by “blood and soil”. Worse than that, it is fast becoming a nation governed by the proposition that liberty is only what leftists say it is: the liberty not to contradict the left’s positions on climate, race, intelligence, economics, religion, marriage, the right to life, and government’s intrusive role in all of those things and more.

The resistance to Donald Trump was fierce and unforgiving because his ascendancy threatened what leftists have worked so hard to achieve in the last 60 years: the de-Americanization of America. Expect more of the same if Trump (or DeSantis) becomes the GOP nominee in 2024.

The America in which I was born and raised — the America of the 1940s and 1950s — has been beaten down. It is far more likely to die than it is to revive. And even if it revives to some degree, it will never be the same.

I am speaking of America on the whole. Vast parts of it remain more or less true to the old “blood and soil” nation. It will take a national divorce to keep those regions from being frog-marched into serfdom.

The Relative Depth of Recessions Since World War II

You may be surprised.

I define a recession as a span of at least two consecutive quarters in which real (inflation-adjusted) GDP has dropped below its recent peak and remains below that peak.

The relative depth of a recession — the economic pain that it causes — can be gauged roughly by the maximum drop in real GDP, expressed as a percentage of the pre-recession peak. An alternative measure, and perhaps a better one, is the cumulative percentage drop from the pre-recession peak over the course of the recession.
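Both measures are easy to compute from a quarterly series of real GDP. Here is a minimal sketch in Python of how recessions (as defined above) can be identified and scored by both measures; the figures in the example are made up for illustration, not actual GDP data:

    def recession_depths(gdp):
        """Yield (first_qtr, last_qtr, max_drop_pct, cum_drop_pct) for each
        span of at least two consecutive quarters below the running peak."""
        peak = gdp[0]
        start = None
        max_drop = cum_drop = 0.0
        for i, g in enumerate(gdp + [float("inf")]):  # sentinel closes a final span
            if g < peak:
                if start is None:
                    start = i
                shortfall = (peak - g) / peak * 100  # percent below the peak
                max_drop = max(max_drop, shortfall)
                cum_drop += shortfall
            else:
                if start is not None and i - start >= 2:
                    yield (start, i - 1, max_drop, cum_drop)
                start, max_drop, cum_drop = None, 0.0, 0.0
                peak = max(peak, g)

    quarters = [100.0, 101.0, 99.0, 97.5, 98.0, 102.0, 103.0]  # hypothetical data
    print(list(recession_depths(quarters)))
    # Prints one recession spanning quarters 2-4: a maximum drop of about 3.47
    # percent and a cumulative drop of about 8.42 percentage points.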

The Great Recession (2008-10) and the Pandemic Recession (2020) stand out by both measures as being more painful than any of the preceding post-war recessions. However, the back-to-back recessions of 1980-83, if combined, would surpass the cumulative percentage drop incurred during the Pandemic Recession.

The recessions, as painful as they have been, pale in comparison to America’s continuing mega-depression, which is largely the product of governmental interventions in the economy.

Names Aren't What They Used to Be

Though some are coming back.

The Social Security Administration publishes a list of the names most commonly given to newborns. Here are last year’s top 20:

And here are the top 20 in my maternal grandmother’s birth year:

Most of the names in the second list are the solid names that were common even unto my generation. The first list includes a lot of names that have been dredged up from the 1700s and from romance novels.

You can follow the link to see how popular a particular name was in any year since 1900. Tyler, for example, has ranked as high as #5 among boy names (1993, 1994), and as high as #238 among girl names (1993).

Thinking of Tyler led me to wonder which presidents’ last names have been given to famous, infamous, semi-famous, and unknown persons as first names. Here’s what I came up with:

  • Washington Irving, author of “The Legend of Sleepy Hollow” — which engendered the terrible movie starring Johnny Depp and Christina Ricci

  • Jefferson Davis, leader of “The Lost Cause”

  • Madison Kuhn, obscure historian — but not a girl

  • Monroe McKay, judge

  • Van Buren Unknown — un Québécois, go figure

  • Jackson Pollock, artist dribbler painter

  • Harrison Ford, car dealer? film actor

  • Tyler Mathisen, CNBC host

  • Taylor Booth, computer scientist

  • Fillmore Unknown — 5, count ‘em, 5 (all boys)

  • Pierce Brosnan, ex-007

  • Lincoln Chafee, RINO

  • Johnson Unknown, but many times among the top 1,000 boy names (quelle surprise)

  • Grant Sharp, retired Rear Admiral, United States Navy (named for his great-grandmother’s sister’s husband, Ulysses S. Grant)

  • Hayes Milam, security guard at the think-tank at which I worked for many years

  • Arthur Godfrey, entertainer/radio-TV host remembered mainly for playing the ukulele, buzzing the control tower at Leesburg, Virginia, airport, and firing singer Julius La Rosa on the air

  • Cleveland Amory, cat lover and writer

  • McKinley Unknown — a semi-popular name for boys and girls in recent decades

  • Roosevelt Grier, immovable object defensive lineman

  • Taft Unknown — semi-popular back in the day when W.H. Taft was “big”

  • Wilson Pickett, R&B and soul singer

  • Harding Unknown — semi-popular when Warren Gamaliel was the big enchilada

  • Coolidge Unknown — ditto

  • Truman Capote, American poof writer

  • Kennedy McMann, American actress — fairly popular name for boys and girls since 1960; #54 among girl names in 2014

  • Nixon Unknown — oddly enough, in the top 1000 boy names 2011-2021

  • Ford Madox Ford, English aesthete writer

  • Carter Stanley, Ralph’s very late brother

  • Reagan Dunn, member of the King County, Washington, council

  • Clinton Eastwood, still life film actor — but you probably don’t think of him as Clinton

By my reckoning, these last names haven’t been used as first names:

  • Adams — not to be confused with Adam; John wasn’t the first “man”

  • Polk — might be mistaken for an invitation

  • Buchanan — pronounce it properly: “buck-an-un”

  • Eisenhower — no parent should do this

  • Bush — don’t go there

  • Obama — why would anyone do that to a child?

  • Trump — double ditto

  • Biden — quadruple ditto

Any takers?

True Libertarianism and Its Enemies

“O brave new world that has such people in’t.” — Shakespeare, The Tempest

This post extends “The Libertarian-Conservative Divide”, which I dashed off when something reminded me of the essential difference between the two political philosophies.

There is true libertarianism and there is pseudo-libertarianism. The former is really a kind of conservatism, which is why I call it Burkean libertarianism. The latter — which is the kind of “libertarianism” much in evidence on the internet — rests on the Nirvana fallacy and posits dangerously false ideals.

True libertarianism requires the general observance of socially evolved norms because those norms evidence and sustain the mutual trust, respect, and forbearance that — taken together — foster willing, peaceful coexistence. That, in turn, fosters beneficially cooperative behavior. (If there is a better description of liberty, I have yet to read it.)

Given the general observance of socially evolved norms, government’s role is the minimal one of protecting the populace from domestic and foreign predators, both actively and through the swift and certain enactment of retribution as a deterrent to future abuses. In this respect, true libertarianism resembles a kind of “libertarianism” known as minarchism, except that minarchists emphasize minimal government and do not acknowledge the need for socially evolved norms.

The core of pseudo-libertarianism is the impossible dream of living without restraints, either those imposed by government or the social norms that must be observed in order to live in peaceful coexistence with other human beings. In that respect, pseudo-libertarians are aligned with leftists, who — inter alia — wish to “cancel” norms that they find offensive.

Human beings, ineluctably tribal creatures that they are, find the basis for mutual trust, respect, and forbearance in a common culture (mores) that evolved by being tested in the acid of use.* This is not to be found in the discord of clashing cultures that pseudo-libertarians promote, sometimes (paradoxically) through state action. The left, of course, resorts reflexively to state action (and private, state-condoned and encouraged action) in its zeal to shatter the common culture and to force its social and economic preferences upon the populace.

Pseudo-libertarians and leftists do not understand (or care) that the long evolution of rules of conduct by human beings who must coexist might just be superior to the rules that they arbitrarily impose (or would if they could). Pseudo-libertarians and leftists obviously believe in the possibility of separating the warp and woof of the social fabric — the common culture — without causing its disintegration.

When the common culture disintegrates there is an open field for government to dictate the terms on which a people coexist. This necessarily alienates large segments of the populace from one another.

What is worse is that cultural disintegration results in a rising tide of social acrimony and violence. The result is political polarization, something like an epidemic of mass murder, and the coarsening of behavior generally.

Cultural disintegration is the key to what has been happening to America for a long time and in earnest since the early 1960s. Cultural disintegration is a mild term to apply to outrages like these:

  • authorization, by the U.S. Supreme Court, to kill unborn children

  • abandonment of restraint in the use of profanity, violence, and sexuality in the various modes of “entertainment”, with substantial encouragement by the Court

  • wide acceptance and inculcation of the patent myth (patent to anyone who has eyes and ears and half a brain) that the plight of blacks today is entirely the fault of America’s “racist” past

  • vast expansion of the welfare state, to the detriment of self-reliance (and economic vitality)

  • encouragement of murder, mayhem, and lesser crimes by “justice reform” and similar movements

  • encouragement of illegal immigration, which brings crime and disruption to the communities affected directly by it

  • suppression — through censorship, loss of jobs, boycotts, etc. — of persons and businesses who openly resist such outrages.

Thus have long-standing norms been widely rejected and reversed by a cabal whose leaders and members are drawn from politics, the political bureaucracy, the academy, the legal profession, public “education”, corporate management (prominently but not exclusively Big Tech), and the media (including “entertainment”). The result is a deep chasm between Americans who still hew to traditional norms, or would like to, and the “elites” who have rejected many of those norms.

There is no liberty for adherents of traditional norms when they cannot live as they would choose to live and speak as they would choose to speak. The adherents of traditional norms have become strangers in a strange land. They must tread carefully to avoid ostracism, legal and financial sanctions, and verbal and physical assault. Not that the left-aligned media give any attention to the parlous condition of social conservatives, who are (as Russians would say) “the main enemy” of the emerging dystopia.


* I owe “tested in the acid of use” to Philip M. Morse and George E. Kimball’s Methods of Operations Research, at page 10:

Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science.

Moral Courage, Moral Standards, and Political Polarization

Who knows what evil lurks in the hearts of men?

Moral courage is speaking or acting to protest or prevent a wrong, despite strong opposition or the threat of sanction or violence. (Hereafter, I will simply use “act” or “acting” to refer to an outward display of moral courage, whether it is verbal or physical.)

An act of opposition to authority isn’t necessarily an act of moral courage, though it may sometimes be one.

To act with moral courage requires deliberate and self-critical thought about the condition that provokes the urge to speak and act. Specifically, one must ask whether there is a wrong to be protested or simply a condition that displeases one for other reasons (e.g., bruised ego, esthetic offense, dislike of an otherwise moral outcome).

Except in rare circumstances (e.g., an intervention to prevent a beating or shooting), impulsive acts are not acts of moral courage. They are usually acts of petulance and ego-stroking. A person who joins a group in such an act because it’s the “thing to do” is a moral coward. (A current example is going along with a group that protests supposed wrong-doing by committing futile and destructive acts of vandalism.)

Standing up for the “rights” of a particular group is an act of moral courage only to the extent that the group is being deprived of rights that it could enjoy without trampling on the rights of others. The “right” of a self-proclaimed transgender female (i.e., a biological male) to invade the privacy of biological females is a “right” only in the view of those persons for whom traditional social norms are merely litter to be tossed in the nearest trash bin. A self-proclaimed transgender female (or any other person who identifies as LGBTQ+) has all of the rights enjoyed by every other American, but not the privilege of violating long-standing social norms of the kind that, in their observance, foster mutual trust and respect.

What about a transsexual person born male who has undergone extensive surgery, hormone therapy, and various other medical and psychological procedures so as to mimic female-ness convincingly? There is a saying that is sometimes true: What you don’t know can’t hurt you. There is no offense against privacy unless the offense is felt by the person whose privacy is at stake. (Offenses against privacy that result in actual harm, such as identity theft, are of a different kind than the subject of this paragraph.)

Returning to the main theme of this post, it is necessary to ask whether there are agreed moral standards that can be applied against putative acts of moral courage. There’s the rub. The polarization of political views reflects vast disagreement about morality. Although political views are nominally about what government should and should not do, they are really about what people should and should not do. Every governmental edict either discourages or encourages a private act.

Take the Dobbs decision by the U.S. Supreme Court, for example. That decision didn’t “outlaw” abortion in the United States, as hysterical propagandists and ignoramuses are wont to say, but it did reverse earlier decisions which held that abortion (when it met certain criteria) was a constitutional right throughout the United States. Dobbs merely transferred the question of abortion back to the individual States, each of which was thus empowered (as it was before Roe v. Wade) to determine the legal status of abortion in its jurisdiction.

You will now have grasped the absurdity of the situation. How can the morality of an act be determined by whether the act is committed inside or outside a particular State? It can’t be, which is why many persons (notably, leading Democrats) believe that abortion is moral and seek to make it legal throughout the nation. By the same token, many persons believe that abortion is immoral and seek to make it illegal throughout the nation.

Political polarization — and the culture war that it reflects — is about the clash of moral codes. Political polarization therefore reflects a shattering of what was once something close to a national consensus about morality. That near-consensus had been fraying since the Progressive Era of the late 1800s and early 1900s, but it began to unravel in the 1960s. What it came down to, before the fraying and unraveling, was wide observance of Judeo-Christian morality, especially as enunciated in the final six of the Ten Commandments, and a code of personal responsibility relatively uncorrupted by the welfare state and reinforced by a system of justice that is swifter and harsher than today’s.

The clash of moral codes means that moral courage has become not just a matter of acting against wrong-doing, but also a matter of acting against an array of powerful proponents of wrong-doing who believe that it is right-doing; for example:

  • teaching white children that they are racists

  • demanding reparations from persons who are blameless for whatever wrongs supposedly warrant reparations

  • encouraging and manipulating children into undergoing transgender therapies that they will come to regret

  • using force and misusing “science” to make people’s lives miserable (e.g., forbidding the use of efficient and relatively inexpensive fossil fuels, dictating useless — and economically and socially destructive — lockdowns and school closings during the pandemic)

  • killing unborn children

  • “canceling” and smearing anyone who dares utter such things instead of debating them with facts and logic (a sure sign of wrong-doing).

But there is a difference between speaking out against the proponents of wrong-doing from the safety of a blog, and speaking out against them when they are in a position to ruin one’s reputation, destroy one’s career, and shred one’s ability to obtain credit. Unlike some performers, executives, professors, and scientists, I am not exercising moral courage by taking the stances that I do in this blog.

I am truly thankful for the morally courageous persons who risk their fortunes and honor to oppose the rampant wrong-doing of leftists.* The tragedy of our time is that the left has gained enough power in this country to be a threat to anyone.

Of all the beasts which the Lord God had made, there was none that could match the serpent in cunning. — Genesis 3:1


* A good example is Rebekah Koffler, author of Putin’s Playbook. Read it and weep for her and for this formerly great nation.

The Great Resignation in Perspective

There’s nothing new under the sun.

Consider this graph:

The index of the real unemployment rate (explained here) serves to highlight the Great Recession (2008-2011) and the Pandemic Recession (2020-2021). The indices of quit rates and discharge rates (derived from data available here) complement the index of the real unemployment rate and illustrate the following observations.

Discharges (firings and layoffs), as a fraction of the level of employment, peaked during the Great Recession and declined until the Pandemic Recession of 2020. The downtrend continued after the end of the latter recession. Nothing unexpected there.

Quits, as a fraction of the number of persons employed, began to rise near the bottom of the Great Recession in 2009. The quit rate dropped sharply during the Pandemic Recession, then resumed its rise. That rise can be interpreted as a continuation of the rise that began in 2009. What’s interesting is that the quit rate peaked in November 2021 and has generally declined somewhat since then. A straight line plotted through the index from its bottom would come close to the final point on the graph, which represents September 2022.
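That trend check is easy to replicate. Here is a minimal sketch in Python, with a fabricated series standing in for the quit-rate index (the real figures come from the BLS data linked above): fit a least-squares line to the index from its trough onward, then compare the line’s value for the final month with the observed value.

    import numpy as np

    # Fabricated monthly quit-rate index, trough (mid-2009) through Sept. 2022.
    rng = np.random.default_rng(0)
    months = np.arange(160)
    quit_index = 60 + 0.35 * months + rng.normal(0, 2.0, months.size)

    slope, intercept = np.polyfit(months, quit_index, 1)  # least-squares line
    fitted_final = slope * months[-1] + intercept
    print(f"trend predicts {fitted_final:.1f}; observed value is {quit_index[-1]:.1f}")
    # If the two are close, the post-pandemic surge in quits looks like a
    # continuation of the trend that began in 2009, not a new phenomenon.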

It is therefore my view that the surge in quits that occurred after the Pandemic Recession is a continuation of the rise that began in 2009. The deeper question, then, is why have quit rates been rising for the past 13 years? I doubt that there’s a good answer to that question.

An analyst at the Bureau of Labor Statistics addressed the question and came up with this:

The historical data examined in the article suggest that recent quit rates, while certainly high for the 21st century, are not the highest historically [emphasis added]. Nonetheless, the pace of resignations seems to have risen more quickly than one would have expected from labor market tightening alone. [But the rate has slackened since this was written.] Future research should assess alternative explanations for this development, taking into account pandemic-related factors such as increased stimulus payments, health concerns, childcare issues, and changing attitudes toward work. Examining which demographic groups have seen their quit rates rise most quickly might provide clues here.

The footnote leads to this paper, which concludes with this:

Evidence from both recent worker surveys and historical data on quits shows that the “Great Resignation” is not as unusual as one might think. Waves of quits have been common during fast recoveries in the postwar period. In line with this historical evidence, the recent wave reflects the rapid rebound in labor demand for young and less-educated workers, largely driven by the retail, leisure and hospitality, and accommodation and food services sectors.

In sum, quitting a job doesn’t mean leaving the labor force. More likely it means jumping to a better job or doing something that doesn’t count as a job (see below).

The more interesting question is why the real rate of unemployment remains significantly higher than it was in 2000. It is my view that “marginal workers” — young persons without special skills or training — quit returning to the labor market out of a habit that was born during the Great Recession, and which was reinforced during the Pandemic Recession. The habit was fed by:

  • Obamacare, which enabled persons under the age of 26 to obtain health insurance through their parents’ policies,

  • extended unemployment benefits,

  • “free money” (stimulus checks),

  • enjoyment of the leisure afforded by the preceding, and

  • the expansion of the “gig” economy, wherein work is a sometime thing and it is done not as an employee but as an “independent contractor”.

It’s not all the fault of government, but most of it is.

Stuff ("Liberal" Yuppie) White People Like

Another one from the archives.

The short-lived blog, Stuff White People Like (2008-2010), was fun while it lasted (if taken in small doses). I may be the last person to have found it. But, unlike white-“liberal”-yuppie persons, being au courant isn’t “where I’m at” (to use an expression that’s probably no longer au courant).

There are 136 entries. Here are my suggested additions:

  • Foreign-language films — especially if incomprehensible, even with subtitles, about angst and suffering, and without an ending (the French way).

  • Dressing casually — especially at fine restaurants. It’s a fetish — like wearing shorts regardless of the temperature.

  • Public schools — for other people’s children.

  • Public universities — très gauche, even if you attended one.

  • Cheese — as in “I found this wonderful little cheese store.”

  • Handymen — as in “I found this wonderful little handyman.” Who’s probably not white. But “little” isn’t racist, is it?

  • Charity auctions — for buying ugly stuff and feeling good about it.

  • Celebrities — good if they’re adopting half of China or full of socialist/greenie crap that they don’t practice.

  • Europe — such a civilized place, as long as you ignore economic stagnation, unemployment on the dole, rioting Muslims, and the tendency to turn to the U.S. when in danger.

  • Britain — ditto, with smashing accents.

  • Social Security — good for the “little” people.

  • Medicare — ditto, but avoid doctors who accept Medicaid patients.

  • Drug companies — hate ’em. Where are my tranqs, anyway?

  • Brand-name products of a superior kind — très important, as long as they signify good taste, in an understated way, of course.

  • Urban compression — the opposite of “urban sprawl”. Also known as cities: dirty, smelly, crowded, crime-ridden, architecturally chaotic places that, for some reason, “deserve to be saved”. But why, and at whose expense?

War with China?

Trying to read the tea leaves.

I am pessimistic about America’s future because of the threats posed by China and Russia. Russia seems to have bitten off more than it can chew in Ukraine, but that doesn’t make Russia a weak adversary given its ability (and willingness) to engage in no-holds-barred cyber war and nuclear blackmail. What Russia needs is a staunch and powerful ally (or two or three).

Which brings me to China. In “Are We on the Road to Another Pearl Harbor?” (The American Spectator, November 19, 2022), Francis P. Sempa reviewed

The Road to Pearl Harbor: Great Power War in Asia and the Pacific [Naval Institute Press] which is part history and part a warning that history may be about to repeat itself with another great power war in Asia and the Pacific. The book, which is edited by John Maurer of the Naval War College and Erik Goldstein of Boston University, brings together historians and strategists who provide provocative insights into the origins, evolution, and outcome of World War II in Asia and the Pacific and aspects of that war that, in the editors’ view, “illuminate the dangers that currently confront American leaders.”

History, of course, never exactly repeats itself, but it remains the greatest teacher of human behavior….

the jewel in this collection of fine essays is Toshi Yoshihara’s examination of China’s views of a future war in the western Pacific. Yoshihara, who taught strategy at the Naval War College and who currently works at the Center for Strategic and Budgetary Assessments, is perhaps the nation’s foremost expert on the Chinese navy. He has mined open sources for the views of Chinese military strategists, including Chinese naval officers, and warns that the People’s Liberation Army’s (PLA) views on future warfare in the western Pacific are consistent with a Pearl Harbor–like first-strike attack on U.S. naval ships and facilities in the region, especially our “logistical infrastructure” at the Yokosuka naval base in Japan, the home of the U.S. Seventh Fleet.

The PLA, Yoshihara writes, “is predisposed to delivering a decisive first blow against U.S. forward-deployed forces in the western Pacific, particularly those in Japan.” Chinese doctrinal writings, he notes, emphasize surprise attacks and offensive campaigns at the outset of war. Chinese strategists call this part of a “counterintervention strategy” designed to strike American targets that pose the greatest threat to China’s important coastal hubs along the “Beijing-Tianjin, Shanghai-Nanjing, and Guangzhou-Shenzhen corridors.” And the PLA has steadily acquired the precision-strike arsenals (long-range ballistic missiles, cruise missiles), especially the DF-21C, a conventionally armed intermediate-range ballistic missile that, Yoshihara notes, is capable of hitting any target on the entire Japanese archipelago, and the DF-26 missile which can deliver both conventional and nuclear warheads up to 4000 kilometers.

Yoshihara explains that attacks on U.S. naval forces based in Japan would only be “one element of a larger campaign” of strikes that would likely include our airbases at Iwakuni, Yokota, and Misawa in Japan, and Kadena on Okinawa, as well as cyber warfare attacks designed to interrupt our military communications. But like Japan’s attack at Pearl Harbor, even a successful initial Chinese attack on U.S. forces and facilities “could very well prove strategically counterproductive, if not disastrous, for Beijing,” Yoshihara suggests, by drawing in Japan, the world’s third-largest economy with a powerful navy, on the side of the United States, thereby awakening two “sleeping giants” that together could spell doom for the PLA and the communist regime.

For more than a decade, America’s political leaders have talked about a strategic “pivot” to Asia without providing the necessary forces in the region to match the rhetoric. Meanwhile, the current administration in Washington promotes “engagement” with China even as war clouds gather in the western Pacific. Just as in the 1930s, the road to another Pearl Harbor could be paved with good intentions.

On the same day, James Woudhuysen wrote about “The Coming Conflict with China” (spiked):

[W]e can only hope the growing rivalry between the US and China doesn’t boil over anytime soon. But while talks are preferable to conflict, they don’t preclude it….

… America has imposed wide-ranging sanctions on China and, historically, sanctions have formed a significant prelude to conflict. The ban on semiconductor technology certainly seems like a provocative escalation from the Biden administration….

Washington’s determination to continue selling arms to Taipei rankles Beijing. As did this year’s Western naval manoeuvres near Taiwan, and the establishment last year of the Aukus nuclear-powered submarine alliance between Australia, the UK and the US.

Still, there are points at which American and Chinese interests align. Economic worries mean that both share an interest in the Russian war against Ukraine not getting out of hand….

The Kremlin’s humiliating withdrawal from Kherson may have made Xi think twice again about Taiwan. As Biden remarked in Bali, a Chinese invasion does not look imminent; a slow, attritional tightening of Beijing’s screws on the island looks the more likely prospect. But the loss of Taiwan to nationalists in 1949 still weighs heavily on the Chinese psyche. If Xi does not take the island back sometime over the five-year term he has just embarked upon, he will have some explaining to do.

Of course, war on Taiwan would likely turn out disastrously for Beijing, because of the integration of China’s trade, supply chains and foreign direct investment with the West – all of which would be threatened by an invasion. By the same token, this integration is also crucial for the West. This is why many Western leaders are preaching a more cautious path when it comes to relations with China….

As ever, China’s conduct abroad, from Britain to Latin America to Africa and even the Arctic, seems to grow heavier, more cack-handed and more lurid with every passing year. Conversely, another debacle for American imperialism, like the one it suffered last year in Afghanistan, could well embolden Xi to get a whole lot tougher with Taiwan….

… As much as we hope cooler heads will prevail, war is not a rational exercise. Arms build-ups, nautical intrigues, air drills and Chinese infringements on Taiwanese airspace have an inexorable and lethal logic of their own.

China isn’t building a formidable naval force and acquiring bases around the South China Sea (and elsewhere) for the sake of doing nothing. Perhaps, in the spirit of Donald Trump, China means to make itself independent of trade with the West. Having done so, it could then seize control of the South China Sea and the countries that ring it. Western leaders could attribute China’s fait accompli to geopolitical destiny. (Shades of Hong Kong and the rest of the British Empire.)

This would avert a war — for the time being — but it would also allow China to build its military forces and extend its influence to other parts of the globe. Bit by bit, Western influence and economic dealings with the rest of the world would be subsumed by China. And in league with Russia, Iran, and North Korea, it could use military and economic blackmail to contain the West — and even to dictate its internal political arrangements.

As I said here:

If there isn’t a de facto surrender by the West — marked by significant concessions on trade, sanctions, and the scope of military operations and influence — there will be a World War III.

But I fully expect concessions by weak-kneed Western leaders. The concessions will be sugar-coated for domestic consumption and packaged in the form of measures (rationing, lock-downs) to fight the crises du jour, be they a pandemic, inflation, a depression, or the ever-popular threat of incineration by a temperature rise of a degree or two.

There is, however, a way to avert subjugation and to remain engaged with the rest of the West, if not the wider world outside of the new axis of evil:

Prevent … a concerted economic-military attack on U.S. interests — by possessing more than enough means to end it quickly. Which translates into deterring it in the first place (but ending it quickly if deterrence fails.)

This is neo-isolationism in the sense that it eschews military adventures that aren’t worth the price paid by Americans. But it is not isolationism of the old-fashioned kind. Forces would be deployed forward (in space, on land, and in the oceans) to shorten reaction times and remind our adversaries that we are there, big stick in hand. Americans and American businesses would continue to be engaged with the world, in travel and trade, with the exception that America would become (once again) energy-independent.

But the time to do what it takes — to arm America to the teeth and replace its leftist “leadership” — is running short. And the Chinese (and their potential allies) know it.

The “useful idiots” in the West — the “progressives” who are preoccupied with the transformation of the old order into universal “wokeness” — either don’t understand the situation or don’t care. They probably believe, foolishly, that they will be wined and dined in Beijing and Moscow when the new world order is in place. Good luck with that! Daniel Greenfield sums it up:

A proper Marxist regime has little use for militant minorities, feminism, gay rights, police defunding, transgender bathrooms, pipeline protests, abortion, or any of the other issues the radicals have been using to waste our time. If you doubt that, go look at how many of any of the above you can find in China, Cuba, or North Korea….

After a brief permissive period, the Soviet Union criminalized homosexuality and insisted on traditional marriages and roles for women. Those feminists who resisted were soon shown their place with one of the more notorious free love figures being forcibly married off by Lenin…. [“Obama and the Broken Nation He Made Come Of Age”, Front Page Magazine, June 25, 2021]


Related posts:

World War II in Retrospect

Turning Points in America’s History

How to View Defense Spending

The Way Ahead

A Grand Strategy for the United States

An Addendum to “Grand Strategy”: Neo-Isolationism


The Libertarian-Conservative Divide

It’s deep and irreconcilable.

Libertarians and conservatives are alike in their devotion to liberty. But the resemblance ends there.

Libertarianism is a “thin” ideology that has no special place for human relationships other than the benefits (mainly economic) that derive from voluntary transactions.

Conservatism is a “thick” ideology — or, rather, a disposition — that sees human relationships and the benefits derived from them (love, devotion, honor, trust, respect, mutually beneficial endeavors) as an indivisible whole.

Conservatives believe that liberty exists where there is a modus vivendi — mutual trust, respect, and forbearance — that fosters (among other things) voluntary, cooperative arrangements that are mutually beneficial.

Libertarians see liberty as an abstraction, devoid of human bonds. Conservatives see human bonds as the source of liberty and the good that it yields. (Conservatives, in most cases, do this instinctively because they value human bonds and experience the good that flows from them.)

Libertarians claim that their version of liberty doesn’t preclude the conservative version of liberty. But it does, because it allows — nay, encourages — the destruction of liberty’s foundation: true community.

Libertarians believe that liberty can exist even while social bonds are sundered in the name of economic efficiency (“globalism”), cultural diversity (“wokeness”), and personal freedom (abortion, gender fluidity, polyamory), which destroy the sense of community upon which liberty depends. Libertarians, in other words, give intellectual cover to the left’s destructive agenda.

Conservatives know that when social bonds are sundered, the norms upon which liberty depends are frayed and destroyed. The creation and enforcement of norms then becomes the province of a government that exists for its own sake and for the sake of particular interest groups, not for the general welfare. Thus the “warre of every man against his neighbor”.

When it comes to the left-right divide, libertarians are on the left, their protestations the contrary notwithstanding.

The Cocoon Age

“Safetyism” run rampant.

I borrowed “safetyism” from Theodore Dalrymple, who writes about the plethora of warning signs on the stairs at the stations along London’s Underground (subway system):

[T]he danger of death from walking up or down [the] stairs was, at the most, 1 in 1,400,000,000 each time….

But the impression given by the warnings was of acute danger and the need to protect the public from itself. They implied also that official badgering of the public was the way to protect it, that an ever-solicitous administration was in loco parentis, as it were….

[T]he signage is a call to the first duty of the citizen: be anxious.  Only if you are truly anxious do you need the protection of our bureaucratic shepherds.

Meanwhile, over here, “Ingenius Americanus” devised and perfected the greatest get-rich-quick scheme since the chain letter and multi-level marketing: When something bad happens to you, sue. Sue someone, anyone, even if your misfortune is plain old bad luck or the result of your own stupidity.

There’s nothing new about this game of buck-passing-buck-grabbing. It became a bigtime sport in the 1970s with the advent of product-liability suits. And it hasn’t let up.

There were some doozies in the ’70s. Item: A small machine-tool company was sued by a workman who lost three fingers while using (or misusing) its product, even though the machine had been rebuilt at least once and had changed hands four times. Item: Another machinery manufacturer lost a case involving its 33-year-old table saw, although the owner of the machine admitted he knew it was dangerous and wouldn’t have used the guard if it had had one.

Then there was the judgment in favor of a woman who burned herself with hot coffee (what did she expect it was?) dispensed by a fast-food chain. Around the same time there was the award of millions (millions!) of dollars to the purchaser of a new BMW who discovered that its paint job was not pristine.

Why not buy an unrestored 1930 Ford and try to keep up with traffic on the Washington Beltway? (If you haven’t tried it, don’t — unless you live for masochistic thrills.) Then, when you have suffered a severe dislocation of your cervical vertebrae (at best) because the 30-ton rig behind you had no place to go but up your tail pipe, sue the Ford Motor Company because its product wasn’t up to mission impossible.

Of more recent vintage are the frivolous law suits against petroleum refiners and others, including governments, seeking damages for the continued rise of atmospheric CO2. The rise, which has almost nothing to do with emissions of CO2, most of which have nothing to do with human activity, continues unabated despite efforts to control CO2 emissions and the natural experiment — engendered by COVID-19 — which caused a drastic reduction in the activities that supposedly result in the accumulation of atmospheric CO2. Given that “climate change” as we know it is a mere blip in Earth’s natural history, it would make more sense to sue Santa Claus for failing to honor your Christmas wish-list.

And so the absurdities roll on in this new age — this “Cocoon Age” — when almost everyone seems to crave cradle-to-grave protection from the merest hint of inconvenience. Even criminals who have been injured while perpetrating crimes have had the gall to sue police departments and victims.

If such nonsense continues its cancerous spread through our legal system, a court somewhere in this land will someday entertain a suit against the Almighty. The complaint? He permitted the serpent to tempt our first ancestors, thus depriving us of the benefits of life in Eden.

Worse, through careful judge-shopping, there will be a summary judgment for the plaintiff because the Defendant did not deign to respond. And that will be the end of that nonsense.

I Hate to Hear Twenty-Somethings Speak

Worse than fingernails on a blackboard, though they don’t know what that is.

My wife and I had a favorite Thai restaurant when we lived in Austin, Texas. It wasn’t the best Thai restaurant in our experience. We’ve dined at much better ones in Washington, D.C., and Yorktown, Virginia. The best one was in Arlington, Virginia.

At any rate, our favorite Thai restaurant in Austin was very good and accordingly popular. And because Thai food is relatively inexpensive, it drew a lot of twenty- (and-thirty-) somethings.

Thus the air was filled (as usual) with “like”, “like”, “like”, “like”, and more “like”, ad nauseam. It made me want to stand up and shout “Shut up, I can’t take it any more.”

One evening, the fellow at the next table not only used “like” in every sentence, but also had a raspy, penetrating vocal fry, which is another irritating speech pattern of twenty-somethings. He was seated so that he was facing in my direction. As a result, I had to turn down my hearing aids to soften the croak that ended his every sentence.

His date (a person of the opposite sex, which is noteworthy in Austin) merely giggled at everything he said. It must have been a getting-to-know-you date. The relationship couldn’t have lasted if she was at all fussy about “like” or if he was put off by giggling.

Harumph!

The Myth of Social Welfare: Part II

A more rigorous analysis of the myth, with commentary about the social and political consequences of the quest for “social justice”.

After I posted “The Myth of Social Welfare” (now with “Part I” added), I remembered other material that I had posted on the subject at my old blogs. This post is a consolidated and edited version of the other material. It comes at “social welfare” and its close kin, “social justice”, from several angles. Some repetition is therefore necessary. But I believe that you will profit by reading the whole thing.


There is among many (most?) Americans (even some who claim to be conservative) approval of some degree of income and wealth redistribution by force (i.e., through government action). Redistribution can occur by direct transfers of money (e.g., welfare payments), programs that have the same effect by providing benefits that are subsidized by higher-income taxpayers (e.g., Social Security), and privileges that are accorded to classes of (presumably) “disadvantaged” persons (e.g., affirmative action, preferential treatment in college admissions).

The various justifications for redistribution include “fairness” or “social justice”. Particular justifications — vague as they may be — are encompassed in an even vaguer one: improving “social welfare”, that is, the general well-being of the populace.

There is also an economic justification, based on the fallacy that consumption spending drives GDP. The idea is that if income and wealth are transferred from persons with a low propensity to consume (i.e., “the rich”) to persons with a high propensity to consume (i.e., “the poor”), GDP will rise.

I will begin by disposing of that bit of economic hogwash before addressing the other justifications. I will end with some general observations about the quest for “social justice” and the evolution of that quest into a kind of civil war.

REDISTRIBUTION AS A SPUR TO ECONOMIC GROWTH?

GDP (Gross Domestic Product) is an estimate of the aggregate monetary value of the goods and services produced in the United States during a specified period of time (usually a year). Some of that output takes the form of goods and services that are “consumed” (used) immediately upon purchase or soon thereafter. A sudden drop in the rate of consumption (C) would cause GDP to drop because C is a major component of GDP.

But given a rather stable distribution of income and wealth (in the short run), C will not drop suddenly unless something else causes a sudden drop in GDP (a financial shock, for example). The cart (C) is pulled by the horse (GDP), not the other way around. (As for the Keynesian multiplier, which justifies increases in government spending as a remedy for GDP shocks, see “The Keynesian Multiplier: Fiction vs. Fact”.)

The rate of economic growth, and therefore the material well-being of Americans, depends largely on investments in new capital that enable more and better things to be produced by a given number of workers. Such investments are made possible by non-consumption — saving — the rate of which tends to rise with one’s income. Therefore, in terms of GDP, it is counterproductive to take income (and wealth) from high-income persons and give it to low-income persons. The latter will consume most or all of what they receive instead of using it to finance capital investments. Result: A lower rate of economic growth and therefore fewer and less-well-paying jobs for low-income persons.
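A back-of-the-envelope illustration makes the point. The growth rates in this sketch are assumptions chosen only to show how a small, redistribution-induced drop in saving and investment compounds over time; they are not estimates of the actual effect:

    def income_after(years, growth_rate, start=100.0):
        """Index of real income after compounding `growth_rate` for `years` years."""
        return start * (1.0 + growth_rate) ** years

    higher_saving = income_after(30, 0.030)  # assumed growth with more investment
    lower_saving = income_after(30, 0.025)   # assumed growth after redistribution
    print(f"after 30 years: {higher_saving:.0f} vs. {lower_saving:.0f}")
    # after 30 years: 243 vs. 210, a gap that widens with every year thereafter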

Redistribution is therefore harmful to the long-term prospects of those persons who are most vulnerable to the vagaries of economic growth.

IS “SOCIAL WELFARE” THE ANSWER?

The Bottom-Up Approach

I learned some years ago, and to my surprise, that David Friedman (son of Milton and Rose Friedman) subscribes to the mistaken notion that the utility (well-being) gained from additional income diminishes as income increases; for example:

Consider a program such as social security which collects money and pays out money. Dollars collected from the richer taxpayer probably cost him less utility than dollars collected from the poorer taxpayer cost him. But dollars paid to the richer taxpayers also provide less utility than dollars paid to the poorer.

Friedman’s mistake is a common one. It is a misapplication of the concept of diminishing marginal utility (DMU): the entirely sensible notion that the enjoyment of a particular good or service declines, at some point, with the quantity consumed during a given period of time.* For example, a second helping of chocolate dessert might be more enjoyable than a first helping, but a third helping might not be as enjoyable as the second one — and so on.

Do not assume (even implicitly, as Friedman does) that as one’s income rises one continues to consume the same goods and services, just at a higher rate. In fact, having more income enables a person to consume goods and services of greater variety and higher quality. Given that, it is possible to increase one’s enjoyment by shifting from a “third helping” of a cheap product to a “first helping” of an expensive one, and to keep on doing so as one’s income rises.

Look around your home and think of all of the things you could improve with more income — the food you eat, your apparel, TV, PC, phone, tablet, auto, furnishings, cookware, tableware, etc., etc., etc. And think beyond that. For example, if repeated trips to a Florida beach become boring, graduate to the Caribbean, and then to the Riviera. Think about equivalent upgrades in other leisure pursuits: reading material, home-entertainment products, restaurants, clubs, sporting events — the list may not be endless, but it is very long. Then there are durable things to acquire and even amass: flashy cars, boats, yachts, houses, horses — another long list of possibilities.

On top of all that, there’s the accumulation of wealth. It’s obviously the objective of the likes of Jeff Bezos, Warren Buffett, partners in Wall Street investment banks, high-paid lawyers, etc. It’s also an objective that’s shared by almost everyone and for many reasons (e.g., a more comfortable and secure retirement, leaving money to children and grandchildren). I daresay that one’s sense of well-being — though it can’t be measured — actually rises faster than one’s wealth.

Amassing more wealth also allows one to engage in philanthropy on a grander scale than giving a $5 bill to a panhandler. The philanthropist’s sense of well-being is being served by making others happier. And the wealthier they are, the happier they can make others — and themselves.

Is there a point at which one opts for leisure (or other non-work activities) over income? Yes, for most persons, but income and wealth can continue to accumulate even after a person quits working — and even after he quits actively managing his wealth.
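For readers who like to see the intuition spelled out, here is a toy calculation of the “upgrade” argument. The quality tiers, budgets, and square-root enjoyment function are my inventions, chosen only to show that diminishing returns within a single good need not mean diminishing returns to income.

```python
import math

def enjoyment(spending, quality):
    """Diminishing returns to quantity within a single quality tier."""
    return quality * math.sqrt(spending)

# Hypothetical quality tiers that successively larger budgets unlock:
tiers = [("Florida beach", 1.0), ("Caribbean", 2.0), ("Riviera", 4.0)]

for budget in (100, 400, 1600):
    name, quality = tiers[min(len(tiers) - 1, budget // 400)]
    print(f"budget {budget}: {name}, enjoyment {enjoyment(budget, quality):.0f}")
# Within any one tier, sqrt() means a diminishing payoff to extra spending.
# But because a bigger budget unlocks a better tier, enjoyment in this toy
# rises in proportion to income -- no diminishing marginal utility of income.
```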

So much for diminishing marginal utility.

Here’s the most that can be said without assuming knowledge that is impossible to acquire: Taking money from a person who is in a high-income (or wealth) bracket will not harm that person as much (financially) as would taking the same amount of money from a person who is in a low-income (or wealth) bracket. Why? Because the high-income person would still be able to afford everything that a low-income person can afford (and a lot more, besides), but the low-income person might become ill or die from lack of nutrition, adequate clothing, or shelter.

But when government gets into the act and forces redistribution, it damages the engine of economic growth, which is what really enables low-income persons to better their lot permanently. And by damaging the engine of economic growth, government also makes it harder for persons to make enough to engage in voluntary charity on the scale that would alleviate hunger, inadequate clothing, and lack of shelter.

Am I serious? Yes. Read “America’s Mega-Depression”.

What about the short run? Why not just use governmental power to help those most in need? Governments don’t work that way. The creation of a program for any purpose essentially guarantees perpetual life for that program regardless of its effectiveness. Social Security, to take a leading example, not only offers much larger old-age benefits to many more persons than originally envisaged, but it has been expanded (through Medicare, Medicaid, and Obamacare) to encompass a vast array of medical (and non-medical) expenses for persons of all ages — retired or not. Social Security and its offshoots are a large part of the problem outlined in “America’s Mega-Depression”.

__________
* It is a misconception that demand curves slope downward and to the right because of DMU. They do not, as explained by Bruce R. Beattie and Jeffrey T. LaFrance in “The Law of Demand versus Diminishing Marginal Utility”, which is available here as a PowerPoint presentation.

The Top-Down Approach

The foregoing discussion should obviate the need for what follows, but I will nevertheless plunge ahead.

Some fans of redistribution will argue that there must be — conceptually, at least — a social welfare function (SWF) that will rise if income is redistributed from high-earners to low-earners, and if wealth is redistributed from the wealthier to the less wealthy. To put it simply, it must be the case that “society” will be better off if “the rich” are forced to make “the poor” better off.

This erroneous view rests on three errors.

There is the fallacy of misplaced concreteness, which is found in the notion of utility. Have you ever been able to measure your own state of happiness? I mean measure it, not just say that you’re feeling happier today than you were when your pet dog died. It’s an impossible task, isn’t it? If you can’t measure your own happiness, how can you (or anyone) presume to measure and aggregate the happiness of millions or billions of individual human beings? It can’t be done.

Next is the error of arrogance. Given the impossibility of measuring one person’s happiness, and the consequent impossibility of measuring and comparing the happiness of many persons, it is pure arrogance to insist that “society” would be better off if X amount of income or wealth were transferred from Group A to Group B.

Think of it this way: A tax levied on Group A for the benefit of Group B doesn’t make Group A better off. It may make some smug members of Group A feel superior to other members of Group A, but it doesn’t make all members of Group A better off. In fact, most members of Group A are likely to feel worse off. It takes an arrogant so-and-so to insist that “society” is somehow better off even though a lot of persons (i.e., members of “society”) have been made worse off.

Would the arrogant so-and-so agree that “society” had been made better off if I were to gain a great deal of satisfaction by punching him in the nose? I don’t think so, but that’s the import of his redistributive arrogance. He could claim that my increase in happiness doesn’t cancel his decrease in happiness, and he would be right. But that’s as far as he could go; he couldn’t claim to measure and compare my gain and his loss.

Which leads me to an error that I will call lack of introspection. If you’re like most mere mortals (as I am), your income during your early working years barely covered your bills. If you’re more than a few years into your working career, subsequent pay raises probably made you feel better about your financial state — not just a bit better but a whole lot better. Those raises enabled you to enjoy newer, better things (as discussed above). And if your real income has risen by a factor of two or three or more — and if you haven’t messed up your personal life (which is another matter) — you’re probably incalculably happier than when you were just able to pay your bills. And you’re especially happy if you put aside a good chunk of money for your retirement, the anticipation and enjoyment of which adds a degree of utility (such a prosaic word) that was probably beyond imagining when you were in your twenties, thirties, and forties.

In sum, the idea that one’s marginal utility (an unmeasurable abstraction) diminishes with one’s income or wealth is nothing more than an assumption that simply doesn’t square with my experience. And I’m sure that my experience is far from unique, though I’m not arrogant enough to believe that it’s universal.

A Closer Look at the Social Welfare Function

The validity of the SWF depends on these assumptions:

  • It is possible to make interpersonal utility comparisons (IUCs), that is, to determine whether and when it hurts X less than it benefits Y when the state takes a dollar from X and gives it to Y.

  • Having done that, the proponents of redistribution are able to conclude that the Xs should be forced to give certain amounts of their income to the Ys.

  • Making the Xs worse off doesn’t, in the longer run, also make the Ys worse off than they would have been absent redistribution. (This critical assumption is flat wrong, as discussed above.)

All of this is arrant — and arrogant — moonshine. Yes, one may safely assume that Y will be made happier if you give him more money or the things that money can buy. So what? Almost everyone is happier with more money or the things it can buy. (I except the exceptional: monks and the like.) And those who don’t want the money or the things it can buy can make themselves happier by giving it away.

What one cannot know and can never measure is how much happier more money makes Y and how much less happy less money makes X. Some proponents of IUCs point to the possibility of measuring brain activity, as if such measurement could or should be made — and made in “real time” — and as if such measurements could somehow be quantified. We know that brains differ in systematic ways (as between men and women, for instance), and we know a lot about the ways in which they are different, but we do not know (and cannot know) precisely how much happier or less happy a person is made — or would be made — by a change in his income or wealth. Happiness is a feeling. It varies from person to person, and for a particular person it varies from moment to moment and day to day, even for a given stimulus. (For more about the impossibility of making IUCs, see these posts by Glen Whitman. For more about measuring happiness, see these posts by Arnold Kling.)

In any event, even if individual utilities (states of happiness or well-being) could be measured, X’s and Y’s utilities are not interchangeable. Taking income from X makes X less happy. Giving some of X’s income to Y may make Y happier (in the short run), but it does not make X happier. It is the height of arrogance for anyone — “liberal”, fascist, communist, or whatever — to assert that making X less happy is worth it if it makes Y happier.
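The arbitrariness is easy to demonstrate. Here is a toy version of the SWF argument in code; the income figures and utility functions are stipulations of mine, which is exactly the point — the “finding” is baked into the stipulations.

```python
import math

def swf(incomes, utility):
    """A naive social welfare function: add up everyone's stipulated utility."""
    return sum(utility(y) for y in incomes)

before = [100_000, 20_000]   # X's and Y's incomes before redistribution
after  = [90_000, 30_000]    # take $10,000 from X, give it to Y

log_u = math.log               # stipulates diminishing marginal utility
linear_u = lambda y: float(y)  # stipulates constant marginal utility

print(swf(after, log_u) > swf(before, log_u))        # True: transfer "raises welfare"
print(swf(after, linear_u) > swf(before, linear_u))  # False: transfer is a wash
# Same transfer, opposite verdicts. The verdict is an artifact of the assumed
# utility function -- which is precisely the thing that cannot be measured.
```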

WHAT ABOUT “SOCIAL JUSTICE”?

This is from Anthony de Jasay’s essay, “Risk, Value, and Externality”:

Stripped of rhetoric, an act of social justice (a) deliberately increases the relative share . . . of the worse-off in total income, and (b) in achieving (a) it redresses part or all of an injustice. . . . This implies that some people being worse off than others is an injustice and that it must be redressed. However, redress can only be effected at the expense of the better-off; but it is not evident that they have committed the injustice in the first place. Consequently, nor is it clear why the better-off should be under an obligation to redress it….

There is the view, acknowledged by de Jasay, that the better-off are better off merely because of luck. Here is his answer to that assertion:

Nature never stops throwing good luck at some and bad luck at others, no sooner are [social] injustices redressed than some people are again better off than others. An economy of voluntary exchanges is inherently inegalitarian…. Striving for social justice, then, turns out to be a ceaseless combat against luck, a striving for the unattainable, sterilized economy that has built-in mechanisms … for offsetting the misdeeds of Nature.

Further — for reasons that I explain in “War, Slavery, and Reparations” — the cost of delivering “social justice” is borne by persons who derived no benefit from “injustices” that are the works of Nature, the consequences of governmental misfeasance and malfeasance, or the deeds of powerful persons who are beyond the reach of justice.

The best that government can do is ensure a level playing field. Every attempt to tilt the playing field in favor of some “victims” simply creates new, innocent ones. Perhaps the most egregious attempt to tilt the playing field has been affirmative action, which is simply an indirect form of redistribution that harms many innocents. All I need say about affirmative action I have said here, here, here, here, and here.

A guaranteed income is not the way to level the playing field. It creates dependency on the state. Dependency on the state creates voters who support its continuance and expansion. The result: more victims of injustice, more discord, and more waste and fraud. These are commodities that the state already produces in abundance (e.g., “crony capitalism”, the gutting of neighborhoods in the name of improvement, race-consciousness, the futile and anti-scientific effort to fight “climate change” by replacing effective and efficient sources of energy with inefficient and ineffective ones).

The expansion of state power should not be encouraged by anyone who values equal justice under the law and prosperity for all.

POWER-LUST AND A NEW KIND OF CIVIL WAR

Redistribution is now just a subset of the never-ending quest for “social justice”, which has morphed from an economic imperative with racial overtones to a war on anyone who is “privileged” (excepting affluent “elites” of the left). The enemies of “social justice” are legion but easily defined: every straight, white male of European descent who is law-abiding, not a beneficiary of “social justice” privileges, and an opponent of such privileges.

The quest for “social justice” is therefore a main component of a new kind of class warfare, of which “wokeness” is a salient feature. “Social justice warriors” of high and low rank seek to:

  • exact economic tribute from their foes;

  • acquire and solidify special privileges that they supposedly merit because they are members of “victim groups”;

  • “purify” Earth, which has been “victimized” by capitalism and its artifacts (e.g., the efficient generation of energy by fossil fuels); and

  • suppress dissent from this agenda through ostracism, financial blackmail, and outright censorship.

Given the vast economic and social destruction that this new kind of civil war has exacted and will continue to exact, it is necessary to ask who benefits. The short answer: everyone who believes that he can and will accrue power, prestige, or greater prosperity as a result of the war. There is also the not-inconsiderable satisfaction of laying waste to one’s opponents.

Is there anyone on the side of “social justice” or “wokeness” who wants to do good? There undoubtedly is, but those of us on the other side cannot afford to credit good intentions when the consequences are so dire. This is war, not a debate conducted by the rules of the Oxford Union.

In a sane world where government had not undone the good that arises from liberty — peaceful, willing coexistence and its concomitant: beneficially cooperative behavior — the quest for “social justice” would be a risible eccentricity.

Robert Nozick puts it this way in Anarchy, State, and Utopia:

We are not in the position of children who have been given portions of pie by someone who now makes last-minute adjustments to rectify careless cutting. There is no central distribution, no person or group entitled to control all the resources, jointly deciding how they are to be doled out. What each person gets, he gets from others who give to him in exchange for something, or as a gift. In a free society, diverse persons control different resources, and new holdings arise out of the voluntary exchanges and actions of persons. [Quoted by Gregory Mankiw in “Fair Taxes? Depends on What You Mean by Fair,” The New York Times, July 15, 2007.]

The Framers of the Constitution bequeathed us a system of government that would have brought us as close as is humanly possible to the realization of liberty. Decades of searching for “social justice” have squandered that bequest, leaving Americans on the path to subjugation by their government and, eventually, by foreign powers.

The Myth of Social Welfare: Part I

Another impossible thing to believe before breakfast.

The subtitle of this post alludes to a line in Through the Looking-Glass, where the White Queen declares that “sometimes I’ve believed as many as six impossible things before breakfast.” Social welfare is (at least) the seventh impossible thing.

The notion of social welfare arises from John Stuart Mill’s utilitarianism, which is best captured in the phrase “the greatest good for the greatest number” or, more precisely, “the greatest amount of happiness altogether”.

From this facile philosophy (not Mill’s only one) grew the ludicrous idea that it might be possible to determine, by some mystical process, that “society” would be better off if the government were to do such and such.

The pseudo-scientific version of utilitarianism is cost-benefit analysis. Governments often subject proposed projects and regulations (e.g., new highway construction, automobile safety requirements) to cost-benefit analysis. The theory of cost-benefit analysis is simple: If the expected benefits from a government project or regulation are greater than its expected costs, the project or regulation is economically justified.
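The arithmetic of such an analysis is trivially simple. In sketch form — with hypothetical dollar figures of my own devising — it amounts to this:

```python
# A bare-bones version of the cost-benefit tally described above. All of the
# dollar figures are hypothetical. Note what the final sum quietly assumes:
# that a dollar of benefit to one person cancels a dollar of cost to another.

project_effects = [
    ("commuters: value of time saved",        +120_000_000),
    ("taxpayers: construction cost",          -100_000_000),
    ("property owners: uncompensated losses",  -15_000_000),
]

net = sum(amount for _, amount in project_effects)
print(f"Net 'benefit': ${net:,}")  # positive, so the project is "justified"
```

The code is the easy part; the buried premise in that single `sum()` is the whole controversy, as the next paragraph explains.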

Here is the problem with cost-benefit analysis — which is the problem with utilitarianism: One person’s benefit cannot be compared with another person’s cost. Suppose, for example, the City of Los Angeles were to conduct a cost-benefit analysis that “proved” the wisdom of constructing yet another freeway through the city in order to reduce the commuting time of workers who drive into the city from the suburbs. In order to construct the freeway, the city must exercise its power of eminent domain and take residential and commercial property, paying “just compensation”, of course. But “just compensation” for a forced taking cannot be “just” — not when property is being wrenched from often-unwilling “sellers” at prices they would not accept voluntarily. Not when those “sellers” (or their lessees) must face the additional financial and psychic costs of relocating their homes and businesses, of losing (in some cases) decades-old connections with friends, neighbors, customers, and suppliers.

How can a supposedly rational economist, politician, pundit, or “liberal” imagine that the benefits accruing to some persons (commuters, welfare recipients, etc.) somehow cancel the losses of other persons (taxpayers, property owners, etc.)? To take a homely example, consider A who derives pleasure from causing great pain to B (a non-masochist) by punching him in the nose. A’s pleasure cannot cancel B’s pain. And I am willing to prove it by punching a “liberal” in the nose.

Yet, that is how utilitarianism works, if not explicitly then implicitly. It is the spirit of utilitarianism (not to mention power-lust, arrogance, and ignorance) that drives politicians and bureaucrats throughout the land to impose their will upon us — to our lasting detriment.


Related posts:

The Myth of Social Welfare: Part II

War, Slavery, and Reparations

War, Slavery, and Reparations

A tangled ethical web.

The centuries-old practice of forcing a nation that lost a war to pay reparations to the victor seems to have ended after World War II, with the exception of payments by Iraq to Kuwait in the aftermath of the Gulf War of 1990-91.

Was there ever an ethical case for the payment of reparations? Not really. War reparations are a form of victor’s justice. What about the citizens on the losing side who suffered great losses at the hands of the winning side? What about members of the winning side whose conduct of the war was atrocious at times (e.g., the USSR in World War II)?

Which brings me to reparations for slavery in the United States. Who pays? The descendants of Africans who sold other Africans to slave-traders? The descendants of the slave-traders who live in countries other than the United States? All Americans who pay federal income taxes, regardless of any benefits their distant ancestors might have derived from slavery? Only non-black Americans, even if their ancestors did not benefit from slavery and immigrated to this country long after slavery was abolished?

For the reasons implied in my questions — and other reasons that you can readily devise — it would be impossible to determine what living persons and estates benefited from slavery, and by how much. It would also be impossible to itemize the damage, given the tortuous path of personal circumstances and the (often counterproductive) “pro-black” government programs enacted since the ratification of the Thirteenth Amendment in 1865.

If there’s an ethical problem with reparations for slavery, what about reparations for the victims of Jim Crow laws in the South, which persisted for a century after slavery? Jim Crow is nearer in time, but the problems of identifying “winners” and “losers” and of tallying gains and losses remain, despite the shorter passage of time. And why should the descendants of Southerners who benefited from Jim Crow bear the whole burden of reparations, when practices that weren’t “official” were nevertheless condoned and encouraged in other parts of the country?

Ethically, reparations for slavery (or Jim Crow) would be on a par with war reparations: victor’s justice. When did American blacks become “victors” rather than “victims”? When, as individuals, they began to be excused — and even rewarded — for their personal failings and shortcomings (e.g., misbehaving in class, being insufficiently intelligent to merit admission to a college, rioting) because of the color of their skin.

But what about whites who enjoyed the same special treatment in the past, and even in the present (though less overtly)? Well, it was and is wrong. And I believe in the adage that “two wrongs don’t make a right”.

Penalizing an innocent white living today for the sins committed by a dead white 70 or 170 years ago isn’t justice. (It may be vengeance, but vengeance is justice only when it is visited upon a known wrong-doer.) Moreover, as I have explained, righting a past wrong on the basis of skin color (or any other general characteristic, such as country of citizenship) is an ethical impossibility.


Related post: The Myth of Social Welfare

Presidential Trivia

With some side commentary.

KEY DATES

BIRTHPLACE AND RELIGION

AGE AT DEATH

FREQUENCY OF BIRTH YEARS

The year which saw the births of the most presidents is 1946: Clinton, G.W. Bush, and Trump. There was a 24-year span between the inauguration of Clinton (the second-youngest elected president) and Trump (then the oldest person elected president).

RECURRENCE OF FIRST NAME

Eight different first names appear more than once in the list of presidents. Here are the names (listed in order of first appearance), with the middle and last names of the presidents to which the names are attached:


Stephen counts as a multiple entry because, officially, Cleveland is the 22nd and 24th president. (Note that I carefully opened this section with the statement that “Eight different first names appear more than once in the list of presidents.”)

The unique first names (unique to a president, that is) are Martin, Zachary, Millard, Abraham, Ulysses (born Hiram), Rutherford, Chester, Benjamin, Theodore, Warren, Harry, Dwight (born David), Lyndon, Richard, Gerald (born Leslie), Ronald, Barack, Donald, and Joseph.

RECURRENCE OF FIRST LETTER OF LAST NAME

Counting Cleveland only once, and assigning V to Van Buren and M to McKinley (American-style), here’s how many times each letter of the alphabet occurs as the first letter of a president’s last name:
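The tally is mechanical; a sketch of the computation (with the list of last names abbreviated, not complete) looks like this:

```python
from collections import Counter

# Abbreviated list of presidential last names -- extend it to all of the
# presidents (counting Cleveland once) to reproduce the full tally.
last_names = ["Washington", "Adams", "Jefferson", "Madison", "Monroe",
              "Van Buren", "McKinley", "Cleveland", "Biden"]

first_letters = Counter(name[0] for name in last_names)  # "Van Buren" -> "V"
for letter, count in sorted(first_letters.items()):
    print(letter, count)
```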

You will note that several letters are as yet unused: D, I, Q, S, U, X, Y, and Z.

DEATHS DURING THE ADMINISTRATIONS OF SITTING PRESIDENTS

The chart below depicts the death years of presidents. The years are plotted in a saw-tooth pattern, from left to right — row 1, row 2, row 3, row 4, row 5, row 1, row 2, etc. The vertical green and white bands delineate presidential administrations. Washington’s is the first green band, followed by a white band for John Adams, and so on.

Many administrations didn’t experience any presidential deaths. Those administrations with more than one presidential death are as follows:

  • John Quincy Adams — Thomas Jefferson and John Adams

  • Andrew Jackson — James Monroe and James Madison

  • Abraham Lincoln — John Tyler, Martin Van Buren, and Abraham Lincoln (I consider the death of a sitting president to have occurred during his administration.)

  • Ulysses S. Grant — Franklin Pierce, Millard Fillmore, and Andrew Johnson

  • Grover Cleveland (first administration) — Ulysses S. Grant and Chester Alan Arthur

  • William McKinley — Benjamin Harrison and William McKinley

  • Herbert C. Hoover — William Howard Taft and Calvin Coolidge

  • Richard M. Nixon — Dwight D. Eisenhower, Harry S Truman, and Lyndon B. Johnson

  • George W. Bush — Ronald W. Reagan and Gerald R. Ford.

LIVING EX-PRESIDENTS AT THE START OF EACH ADMINISTRATION

Lincoln, Clinton, G.W. Bush, Trump, and Biden are tied for the most living ex-presidents (5 each):

HEIGHTS

No president has yet equaled or surpassed Lincoln’s 6’4″. Next are LBJ at 6’3-1/2”; Trump at 6’3″; Jefferson at 6’2-1/2″; Washington, FDR, G.H.W. Bush, and Clinton at 6’2″. Rounding out the 6′ and over list are Jackson, Reagan, and Obama at 6’1″; and Monroe, Buchanan, Garfield, Harding, Kennedy, and Biden at 6′.

ELECTORAL FACTS ABOUT “MODERN” PRESIDENTS

The modern presidency began with the adored “activist”, Teddy Roosevelt. From TR to the present, there have been only four (of twenty-one) presidents who first competed in a general election as a candidate for the presidency: Taft, Hoover, Eisenhower, and Trump. Trump was alone in having had no previous governmental service before becoming president.

ELECTORAL RESULTS

The results of general elections from the birth of the Republican Party in 1856 to the election of 2020:

Note the unusual era from 1952 through 1988, when Republican presidential candidates outpolled their congressional counterparts.

The table below compares the GOP candidates’ shares of the two-party vote, by State, in the presidential elections of 2012, 2016, and 2020. The changes from 2012 to 2016 that resulted in the election of Trump are highlighted in red. In sum, Trump won by flipping Florida, Michigan, Ohio, Pennsylvania, and Wisconsin. The official (but still disputed) changes from 2016 to 2020 that resulted in the election of Biden are highlighted in blue. In sum, Biden won by reversing Trump’s wins in Michigan, Pennsylvania, and Wisconsin, and by flipping Arizona and Georgia.

Did the GOP Under-Perform in House Races?

Surprisingly, the answer is no.

The Red Ripple-Trickle-Fizzle may not have been as bad (for the GOP) as it seems. Expectations were inflated, which led to the deflation of partisans (like me) who wanted the Dems to get it good and hard.

As it turns out, the GOP may have done about as well as could be expected, based on the history of general elections since World War II. The following analysis draws on the official history of biennial elections through 2020 (here), an estimate of the GOP’s share of the two-party vote for House seats in 2022 (51.6 percent), and an informed guesstimate of the number of seats the GOP will hold when all of the votes are counted (221).

In the following graph, the point representing the 2022 election is circled. So, yes, winning 50.8 percent of House seats (i.e., 221) is less than expected when the GOP wins 51.6 percent of the two-party vote in a general election:

Why did the GOP under-perform in 2022 (and other years)? I derived a regression equation that explains (robustly) the differences between the estimated values (the straight line in the graph above) and the actual values (the data points). The equation has two explanatory variables:

  • the party of the incumbent president at the time of the election

  • whether or not the GOP holds a minority of House seats at the time of the election.

Relative to the estimates based solely on percentage of two-party vote (the regression equation in the graph above), the GOP does less well than expected when (a) there’s a Democrat in the White House and (b) the GOP is the minority party in the House. Both conditions prevailed this year.
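For the statistically inclined, here is a sketch of that kind of adjustment — not my actual equation or data (the rows below are placeholders), but the mechanics: regress the residual from the vote-share fit on the two dummy variables.

```python
import numpy as np

# Placeholder rows: (residual from the vote-share fit, Democrat president?,
# GOP in House minority?). Substitute the actual postwar election data.
rows = [
    (-1.2, 1, 1),
    (+0.8, 0, 0),
    (-0.9, 1, 1),
    (+0.4, 0, 1),
    (-0.3, 1, 0),
    (+1.0, 0, 0),
]

y = np.array([r[0] for r in rows])               # residual seat share
X = np.array([[1.0, r[1], r[2]] for r in rows])  # intercept + two dummies

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, b_dem_president, b_gop_minority = coef
print(f"Democrat-president effect: {b_dem_president:+.2f} points of seat share")
print(f"GOP-minority effect:       {b_gop_minority:+.2f} points of seat share")
# With the real election data, both effects cut against the GOP, per the
# pattern described in the text above.
```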

Here’s a graph of the record for every general election since World War II:

My method of adjusting the raw relationship between vote share and share of seats yields an estimate for 2022 that is very close to reality: actual = 50.8 percent; estimate = 50.4 percent.

Were other factors in play? Of course; to name some of them:

  • reapportionment of House seats and redistricting after the 2020 census, which probably helped Republicans

  • the perception of Biden as a senile and dangerous leftist, which should have energized Republicans

  • the Dobbs decision on abortion, which did energize Democrats and took some energy away from Republicans

  • Trump’s toxic, egoistic visibility, which also energized Democrats and may not have done much for Republicans

  • some loony pro-Trump GOP candidates, nominated with the help of Democrat funding and crossover votes in the GOP primaries (Looniness hurts Republicans more than Democrats — witness John Fetterman and the “Squad” — because it’s not expected of Republicans.)

How did those factors (and others) combine to affect the percentage of votes garnered by GOP House candidates? I have no idea and neither does anyone else. I will say that if the GOP’s percentage of the vote was lower than it could have been, it was probably because Trump is still on the political stage. He did a lot of good as president, and I defended him staunchly in my blog. But it’s time for him to remove himself from the public sphere, for the sake of his party and the country.

A final note: Given the leftward drift of the country since World War II, it’s almost miraculous that the GOP emerged from minority status in the 1990s and remains a force to be reckoned with in Congress. Granted, the GOP has also moved somewhat to the left, but it still espouses conservative values, even if it doesn’t always uphold them.

The Hardening of Political Affiliations in America

After its leftward lurch under the influence of Teddy Roosevelt, the Republican Party returned to “normalcy” in 1920 with the election of Warren G. Harding and his running mate-cum-successor, Calvin Coolidge. By “normalcy” I mean that Harding and subsequent GOP nominees have paid lip service, and sometimes actual service, to the project of limited, constitutional government. In any event, GOP presidential candidates, whatever their platforms and programs, have been consistently to the right of their Democrat opponents.

Given that, the division of the popular vote between the two major parties gives an approximation of the left-right divide in America:


The wide swings that prevailed through the 1980s have given way to much narrower ones. In fact, the outcomes of the presidential elections of 2008-2020 suggest that there is now a permanent and possibly growing tilt toward the left. Some States and regions will remain reliably on the right, for a long while, at least. But — barring the resurgence of a charismatic Republican or a national catastrophe that can be ascribed solely to Democrat policies — it’s beginning to look like there’s a permanent Democrat (leftist) majority in the nation as a whole. Party-switchers won’t disappear, but their number (relative to the rising number of voters) seems to have shrunk considerably.

The hardening of ideological positions strikes me as another reason to sue for a national divorce. “United we stand, divided we fall” has become a hollow slogan.

A formal division is preferable to the pretense of unity. The latter weakens the nation, emboldens its enemies, and enables the domestic enemies of liberty to trample on their foes. With a divorce, at least half of America would be able to mount a credible deterrent to economic and military blackmail, while also restoring liberty to part of the land.

What to Believe about Inflation?

It depends on which trend you follow.

The release today of “good” news about inflation sparked a huge stock-market rally. The S&P 500 is up by more than 4 percent as I write this.

How good was the news? The year-over-year change in the CPI-U for the 12 months ending in October 2022 was “only” 7.75 percent, down from 8.20 percent for the 12 months ending in September 2022. Both rates are below the recent peak of 9.06 percent for the 12 months ending in June 2022.

On the other hand, as an economist might say, there’s bad news in the month-to-month inflation figures. After the peak in June, prices dropped slightly in July and August: -0.14 percent and -0.42 percent, annualized. But prices rose again in September and October: 2.61 percent and 4.98 percent, annualized.
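For readers who want to check the arithmetic, here is how the two kinds of figures are computed from the CPI index itself. The index values below are approximations of the published BLS numbers, close enough to reproduce the percentages quoted above.

```python
def yoy_pct(cpi_now, cpi_year_ago):
    """Year-over-year inflation, in percent."""
    return (cpi_now / cpi_year_ago - 1) * 100

def annualized_mom_pct(cpi_now, cpi_prev_month):
    """Month-to-month change compounded to an annual rate, in percent."""
    return ((cpi_now / cpi_prev_month) ** 12 - 1) * 100

# Approximate CPI-U index levels (not seasonally adjusted):
cpi_oct_2021, cpi_sep_2022, cpi_oct_2022 = 276.6, 296.8, 298.0

print(f"Year-over-year, Oct. 2022: {yoy_pct(cpi_oct_2022, cpi_oct_2021):.2f}%")
print(f"Annualized month-to-month: {annualized_mom_pct(cpi_oct_2022, cpi_sep_2022):.2f}%")
```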

The October jump is — or should be — of concern to the Fed. Today’s stock-market exuberance may turn out to be irrational.

Philosophical Musings: Part VI

Beliefs, herds, and oppression.

This post, which is a lightly edited version of one that I wrote 17 months ago, reprises a theme of several of my recent posts: the dire outlook for America given its political direction. I am grateful for your indulgence, and I will reward it by retiring the theme for a while.


To come to the point of this series: Human beings can and will believe anything. And much of what they believe — even “science” — is either mistaken or beyond proof. Belief, at bottom, is a matter of faith; it is a matter of what one chooses to believe.

Why do we choose what to believe? We choose to believe those things that make us feel good about ourselves in one way or another. Here are four (not mutually exclusive) ways in which our beliefs serve that purpose:

  • Logical or epistemic consistency, which can be intellectually satisfying even if the logic is fatally flawed or the knowledge is cherry-picked to fit a worldview.

  • The (usually false) reassurance that a belief has been proclaimed “true” by an authority — “science”, religious leaders, political leaders, etc.

  • No skin in the game: The holding of views (for reasons listed above) that are inconsequential to the holder of the views but which (when put into action) are harmful to others (e.g., a rich person whose life and property are secured by private means but who calls for defunding the police).

  • Groupthink: Going along to get along, also known as “taking sides”.

On the last point, I defer to Michael Huemer:

There’s … a study that finds that political beliefs are heritable. (Alford et al, “Are Political Orientations Genetically Transmitted?”) They get a heritability estimate of 0.53 for political orientation (p. 162), much larger than the influence of either shared environment or unshared environment. That’s kind of weird, isn’t it — who knew that you could genetically transmit political beliefs? But of course, you don’t directly transmit beliefs; you genetically transmit personality traits, and people pick their political beliefs based on their personality traits.

But as Huemer notes,

the primary choice people make is not so much which propositions they want to be wedded to, but which group of people they want to affiliate with. Maybe there’s only a very tenuous link between some personality trait and some particular political position, but it’s enough to make that position slightly more prevalent, initially, among people with that trait. But once those people decide that they belong to “the same side” in society, there’s psychological pressure for individual members of the tribe to conform their beliefs to the majority of their tribe, and to oppose the beliefs of “the other side”.

So, e.g., you decide that fetuses don’t have rights because the fetus-rights position is associated with the other tribe, and you don’t want to be disloyal to your own side by embracing one of the other side’s positions. Of course, you never say this to yourself; you just automatically find all of your side’s arguments “more plausible”.

And, as we have seen, belonging to a “side” and signaling one’s allegiance to that “side” seems to have become the paramount desideratum among huge numbers of Americans. “Liberals”, who not long ago were ardent upholders of freedom of speech, are now its leading opponents. And many “liberals” – executives and employees of Big Tech companies, for example – demonstrate their opposition daily by suppressing the expression of ideas that they don’t like and denying the means of expression to persons whose views they oppose. They can conjure sophisticated excuses for their hypocrisy, but those excuses are obvious and shallow covers for their evident unwillingness to countenance “heretical” views.

This hypocrisy extends beyond partisan politics. It extends into discussions of race (i.e., the suppression of “bad news” about blacks and research findings about the intelligence of blacks). It extends into discussions of scientific matters (e.g., labeling as a “science denier” any scientist who writes objectively about the evidence against CO2 as the primary cause of a recent warming trend that is probably overstated, in any case, because of the urban-heat-island effect). It extends elsewhere, of course, but there’s no point in belaboring the odious.

The worst part of it is that the hypocrisy isn’t practiced just by lay persons who wish to signal their allegiance to “progressivism”. It’s practiced by scientists, academicians, and highly educated persons who hold important positions in the business world (witness Big Tech’s censorship practices and the “wokeness” of major corporations).

In other words, the herd instinct is powerful. It sweeps all before it. Even truth. Especially truth when it contravenes the herd’s dogmas — which are its “truths”.

And a herd that runs wild — driven hither and thither by ever-shifting “truths” — is dangerous, as we are seeing now in the suppression of actual truth, the suppression of political speech, firings for being associated with the wrong “side”, etc.

Today’s state of affairs is often likened to that which prevailed in the years leading up to the Civil War. There is a good reason for that comparison, for the two epochs are alike in a fundamental way: One side (Unionists then, the “woke” now) assumes the mantle of virtue and thus garbed presumes to dictate to the other side.

Yes, slavery was wrong. But that did not justify the (successful) attempt of the Unionists to prevent the Confederacy’s secession on the principle of self-determination — the very principle that inspired the American Revolution that led to the Union.

Yes, it is fitting and proper to treat the (relatively) poor, persons of color, and persons whose sexual proclivities are “unusual” with respect and equality under the law. But that does not justify the wholesale violation of immigration laws, the advancement of the “oppressed” at the expense of blameless others (who are mainly straight, white, males of European descent), the repudiation of America’s past (the good with the bad), or the destruction of the religious, social, and economic freedoms that have served all Americans well.

Ironically, the power of the central government, which was enabled by the victory of the Unionists, now enables “progressivism” to advance its dictatorial agenda through acquiescence and assistance.

Donald J. Trump did oppose that agenda, and opposed it with some success for four years. That is why it was imperative for the “progressive” establishment — abetted by pusillanimous “conservatives” and never-Trumpers — to undermine Trump from the outset and, in the end, to remove Trump from power by stealing the election of 2020. There has never, in American politics, been a more heinous case of wholesale corruption than was evidenced in the machinations against Trump.

Having said all of that, what will happen to America? The slide toward fascism, which has been underway (with interruptions) for more than a century, now seems to be near its destination: the dictation of myriad aspects of social and economic intercourse by our “betters” in Washington and their cronies in the academy, the media, public schools, and corporate America.

And most Americans — having been brainwashed by the “education system”, bought off by various forms of welfare, and cowed by officious officials and mobs — will simply acquiesce in their own enslavement.

Finito.


Related reading:

Matt, “Varieties of Opinion”, Imlac’s Journal, March 14, 2021

Frank Furedi, “Big Brother Comes to America”, Spiked, February 8, 2021

Victor Davis Hanson, “Our Animal Farm”, American Greatness, February 7, 2021

Arnold Kling, “Rationalist Epistemology”, askblog, February 26, 2021

Arnold Kling, “Cultural Brain Hypothesis”, askblog, March 5, 2021

Mark J. Perry, “Quotation of the Day on Truths That We Are No Longer Allowed to Speak About …”, Carpe Diem, February 2, 2021

Malcolm Pollack, “The Enemy Within”, American Greatness, February 13, 2021

Quillette editorial, “With a Star Science Reporter’s Purging, Mob Culture at The New York Times Enters a Strange New Phase”, Quillette, February 9, 2021