Politics – Politicians – Government in Action

Politics Trumps Economics

Years ago I was conversing with a hard-core economist, one of the benighted kind who assume that everyone behaves like a wealth-maximizing robot. I observed that even if he were right in his presumption that economic decisions are made rationally and in a way that comports with economic efficiency, government stands in the way of efficiency. In my pithy phrasing: Politics trumps economics.

So even if the impetus for efficiency isn’t blunted by governmental acts (laws, regulations, judicial decrees), those acts nevertheless stand in the way of efficiency, despite clever workarounds. A simple case in point is the minimum wage, which doesn’t merely drive up the wages of some workers, but also ensures that many workers are unemployed in the near term, and that many more will be unemployed in the long term. Yes, the minimum wage causes some employers to substitute capital (e.g., robots) for labor, but they do so only to reduce the bottom-line damage of the minimum wage (at least in the near term). Neither the employer nor the jobless is made better off by the employer’s machinations. Thus politics (the urge to regulate) trumps economics (the efficiency-maximizing state of affairs that would otherwise obtain).

I was reminded of my exchange with the economist by a passage in Jean-François Revel’s Last Exit to Utopia: The Survival of Socialism in a Post-Soviet Era:

Karl Jaspers, in his essay on Max Weber, records the following conversation between Weber and Joseph Schumpeter:

The two men met at a Vienna cafe… Schumpeter indicated how gratified he was by the socialist revolution in Russia. Henceforth socialism would not be just a program on paper — it would have to prove its viability.

To which Weber … replied that Communism at this stage of development in Russia virtually amounted to a crime, and that to take this path would lead to human misery without equal and to a terrible catastrophe.

“That’s exactly what will happen,” agreed Schumpeter, “but what a perfect laboratory experiment.”

“A laboratory in which mountains of corpses will be heaped!” retorted Weber….

This exchange must have occurred at the beginning of the Bolshevik regime, since Max Weber died in 1920. Thus one of the twentieth century’s greatest sociologists and one of its greatest economists were in substantial agreement about Communism: they had no illusions about it and were fully aware of its criminogenic tendencies. On one issue, though, they differed. Schumpeter was still in thrall to a belief that Weber did not share, namely the illusion that the failures and crimes of Communism would serve as a lesson to humanity. [pp. 141-142]

Weber was right, of course. Politics trumps economics because people — especially people in power — will cling to counterproductive beliefs, even despite evidence that they are counterproductive. Facts and logic don’t stand a chance against power-lust, magical thinking, virtue-signalling, and the band-wagon effect.


Related posts:
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
A Keynesian Fantasy Land
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Income Inequality and Economic Growth
A Case for Redistribution, Not Made
Ruminations on the Left in America
Academic Ignorance
Superiority
Whiners
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
The Rahn Curve Revisited
Retrospective Virtue-Signalling
Four Kinds of “Liberals”
Leftist Condescension
The Vast Left-Wing Conspiracy
Leftism As Crypto-Fascism: The Google Paradigm
What’s Going On? A Stealth Revolution

Libertarianism, Conservatism, and Political Correctness

Why do conservatives and libertarians generally eschew political correctness? Because we take individual persons as they come, and evaluate each of them on his merits.

That is to say, we reject stereotyping, and political correctness is just another form of stereotyping. Instead of insisting on something foolish like “all blacks are criminals”, political correctness leans the other way and insists that it is wrong to believe or say anything negative of blacks — or of any other group that has been condescendingly identified as “victims” by leftists.

Group differences matter mainly to the extent that they affect the likely success or (more likely) failure of government interventions aimed at defeating human nature. They also matter to the extent that human beings — including members of all racial and ethnic groups — tend to prefer like to unlike (e.g., the preference of “liberal” white yuppies to live in enclaves of “liberal” white yuppies). But such matters have nothing to do with the conservative-libertarian disposition to treat individuals, when encountered as individuals, with the respect (or disrespect) due to them — as individuals.

In that regard, the conservative disposition is especially instructive. A conservative will not rush to judgment (pro or con) based on superficial characteristics, but will judge a person by what he actually says and does in situations that test character and ability. For example, I distinguish between leftists of my acquaintance who are at bottom kind but politically naive, and those whose political views reflect their inner nastiness.

Leftists, in their usual mindless way, take the opposite view and presume that the superficial characteristics that define a group count for more than the character and ability of each member of the group. Political correctness is of a piece with the intellectual laziness that characterizes leftism.


Related posts:
Academic Bias
Intellectuals and Capitalism
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Superiority
Whiners
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Retrospective Virtue-Signalling
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
The Vast Left-Wing Conspiracy
The Left and Evergreen State: Reaping What Was Sown
Leftism As Crypto-Fascism: The Google Paradigm
Leftism (page) and related bibliography

Stagnation: ‘Tis a Tale Told by the Stock Market

I have just come across two articles about the shrinking number of firms listed on U.S. stock exchanges:

Kathleen Kahle and René M. Stulz, “Is the American Public Corporation in Trouble?”, Journal of Economic Perspectives, Volume 31, Number 3, Summer 2017

Michael J. Mauboussin, Dan Callahan, and Darius Majd, “The Incredible Shrinking Universe of Stocks: The Causes and Consequences of Fewer U.S. Equities“, Credit Suisse, Global Financial Strategies, March 22, 2017

I will refer to the first article as K&S and the second article as MC&M. (Despite the publication dates, K&S predates MC&M.) The articles tell this tale:

  • From the mid-1970s to the mid-1990s, the number of listed companies rose sharply.
  • Since the mid-1990s, the number of listed companies has dropped sharply.
  • The declining number of listed companies has been accompanied by consolidation within many industries and — among the surviving firms — greater size, higher profits, bigger payouts to shareholders, and higher average market capitalization (market value of outstanding shares).

Here are some relevant observations from K&S:

If consolidation has nothing to do with being a public firm, we should see the total number of firms decreasing, whether firms are public or private. We don’t. The United States has become an economy dominated by service industries, and so a good way to demonstrate this is to look at the service industries. Even though the number of firms in the service industries increases by 30 percent from 1995 to 2014 and employment increases by 240 percent, the number of public firms falls by 38 percent. A similar evolution occurs in the finance industry, in which the number of firms increases by 18.7 percent from 1995 to 2014, but over the same time the number of listed firms falls by 42.3 percent. Further, … the propensity of firms to be listed … falls across all firm-size categories when size is measured by employment….

The drop in the propensity to be listed suggests that there is a problem with being a public firm…. In the United States, corporate law is governed by state of incorporation, but public firms are subject to federal securities laws. As a result, Congress can regulate public firms in ways that it cannot regulate private firms….

Our data show that the fraction of small public firms has dropped dramatically…. [T]he drop in initial public offerings is particularly acute among small firms. Why are public markets no longer welcoming for small firms?… [R]esearch and development investments have become more important. Generally, R&D is financed with some form of equity rather than debt, at least in early stages before a firm has accumulated lucrative patents. Raising equity in public markets to fund R&D can be difficult. Investors want to know what they invest in, but the more a firm discloses, the more it becomes at risk of providing ammunition to its competitors. As a result, R&D-intensive firms may be better off raising equity privately from investors who then have large stakes….

There are several additional potential explanations for why small firms are staying out of public markets… First, public markets have become dominated by institutional investors…. Investing in really small firms is unattractive for institutional investors, because they cannot easily invest in a small firm on a scale that works for them. As a result, small firms receive less attention and less support from financial institutions. This makes being public less valuable for these firms. Second, developments in financial intermediation and regulatory changes have made it easier to raise funds as a private firm. Private equity and venture capital firms have grown to provide funding and other services to private firms. The internet has reduced search costs for firms searching for investors. As a result, private firms have come to have relatively easier access to funding.

… According to [the economies of scope] hypothesis, small firms have become less profitable and less able to grow on a stand-alone basis, but are more profitable as part of a larger organization that enables them to scale up quickly and efficiently. Thus, small firms are better off selling themselves to a large organization that can bring a product to market faster and realize economies of scope. This dynamic arises partly because it has become important to get big quickly as technological innovation has accelerated. Globalization also means that firms must be able to access global markets quickly. Further, network and platform effects can make it more advantageous for small firms to take advantage of these effects by being acquired. This hypothesis is consistent with our evidence that the fraction of exchange-listed firms with losses has increased and that average cash flows for smaller firms have dropped…. [M]any mergers do involve small firms, so small firms do indeed choose to be acquired rather than grow as public firms.

The increased concentration we document could also make it harder for small firms to succeed on their own, as large established firms are more entrenched and more dominant….

[Gerald] Davis … argues [in The Vanishing American Corporation] that it has become easier to put a new product on the market without hard assets…. When all the pieces necessary to produce a product can be outsourced and rented, a firm can bring a product to market without large capital requirements. Hence, the firm does not need to go public to raise vast amounts of equity to acquire the fixed assets necessary for production… Ford’s largest production facility in the 1940s, the River Rouge complex, employed more than 100,000 workers at its peak. Of today’s largest US firms, only Amazon has substantially more employees than that complex at its peak. With this evolution, there is no point in going public, except to enable owners to cash out.

These explanations imply that there are fewer public firms both because it has become harder to succeed as a public firm and also because the benefits of being public have fallen. As a result, firms are acquired rather than growing organically. This process results in fewer thriving small public firms that challenge larger firms and eventually succeed in becoming large. A possible downside of this evolution is that larger firms may be able to worry less about competition, can become more set in their ways, and do not have to innovate and invest as much as they would with more youthful competition. Further, small firms are not as ambitious and often choose the path of being acquired rather than succeeding in public markets. With these possible explanations, the developments we document can be costly, leading to less investment, less growth, and less dynamism.

This is all consistent with the creeping stagnation of the U.S. economy, as it collapses under the weight of government spending and regulation.

What’s Going On? A Stealth Revolution

UPDATED WITH A LIST OF RELATED READING

Senator Cory Booker (D-NJ) hints at the game plan:

I will be introducing a bill to remove Confederate statues from the US Capitol building. This is just one step. We have much work to do.

What work? Based on what I’ve seen since the Charleston church shooting in 2015, it’s a stealth revolution (e.g., this) piggy-backing on mass hysteria. Here’s the game plan:

  • Focus on racism — mainly against blacks, but also against Muslims and Latinos. (“Racism” covers a lot of ground these days.)
  • Throw in sexism and gender bias (i.e., bias against gender-confused persons).
  • Pin it all on conservatives.
  • Watch as normally conservative politicians, business people, and voters swing left rather than look “mean” and put up a principled fight for conservative values. (Many of them can’t put up such a fight, anyway. Trump’s proper but poorly delivered refusal to pin all of the blame on neo-Nazis for the Charlottesville riot just added momentum to the left’s cause because he’s Trump and a “fascist” by definition.)
  • Watch as Democrats play the racism-sexism-gender card to retake the White House and Congress.

With the White House in the hands of a left-wing Democrat (is there any other kind now?) and an aggressive left-wing majority in Congress, freedom of speech, freedom of association, and property rights will become not-so-distant memories. “Affirmative action” will be enforced on an unprecedented scale of ferocity. The nation will become vulnerable to foreign enemies while billions of dollars are wasted on the hoax of catastrophic anthropogenic global warming and “social services” for the indolent. The economy, already buckling under the weight of statism, will teeter on the brink of collapse as the regulatory regime goes into high gear and entrepreneurship is all but extinguished by taxation and regulation.

All of that will be secured by courts dominated by left-wing judges — from here to eternity.

And most of the affluent white dupes of the revolution will come to rue their actions. But they won’t be free to say so.

Thus will liberty — and prosperity — die in America.


Related reading (some items suggested by commenter Matt):
Roger L. Simon, “Is Charlottesville What’s Really Going On in the USA?“, PJ Media, August 12, 2017
David Horowitz, “The Real Race War“, FrontpageMag, August 16, 2017
Ben Stein, “Whose Side Is He On?“, The American Spectator, August 16, 2017
Dov Fischer, “And Yet President Trump, in His Classically Inartful Way, Was Absolutely Right“, The American Spectator, August 17, 2017
Danusha V. Goska, “Charlottesville, Selective Outrage, and Demonization of White, American Men“, FrontpageMag, August 18, 2017
Joseph Klein, “The Left’s Exploitation of Charlottesville Tragedy Continues“, FrontpageMag, August 18, 2017
Bruce Thornton, “Charlottesville, Race, and Republican Virtue-Signaling“, FrontpageMag, August 18, 2017


Related pages and posts:
Leftism and the related bibliography
Ethics and the Socialist Agenda
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
The Culture War
Ruminations on the Left in America
The Euphemism Conquers All
Superiority
Whiners
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Retrospective Virtue-Signalling
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
The Vast Left-Wing Conspiracy
The Left and Evergreen State: Reaping What Was Sown
Leftism As Crypto-Fascism: The Google Paradigm

The “Public Goods” Myth

The argument for the provision of public goods by the state goes like this:

People will free ride on a public good like a clean atmosphere because they can benefit from it without contributing to it. Mimi will enjoy more breathable air when others switch to a Prius even if she doesn’t drive one herself. So the state is justified as a means of forcing people like Mimi to contribute: for instance, by creating laws that penalize pollution….

Standard models predict that public goods will be underprovided because of free riding. Public goods are non-excludable, meaning that you cannot be excluded from enjoying them even if you didn’t contribute to them. Public goods are also non-rivalrous, meaning that my enjoyment of the good doesn’t subtract from yours. Here’s an example. A storm threatens to flood the river, a flood that would destroy your town. If the townspeople join together to build a levee with sandbags, the town will be spared. However, your individual contribution won’t make or break the effort. The levee is a public good. If it prevents the flood, your house will be saved whether or not you helped stack the sandbags. And the levee will protect the entire town, so protecting your house doesn’t detract from the protection afforded to other houses.

It’s typically assumed that people won’t voluntarily contribute to public goods like the levee. Your individual contribution is inconsequential, and if the levee does somehow get provided, you enjoy its protection whether or not you helped. You get the benefit without paying the costs. So the self-interested choice is to watch Netflix on your couch while your neighbors hurt their backs lugging sandbags around. The problem is, your neighbors have the exact same incentive to stay home — if enough others contribute to the levee, they’ll enjoy the benefits whether or not they contributed themselves. Consequently, no one has an incentive to contribute to the levee. As a result of this free-rider problem, the town will flood even though the flood is bad for everyone. [Christopher Freiman, Unequivocal Justice, 2017]
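
To make the model’s logic concrete, here is a minimal sketch in Python. The payoff numbers are hypothetical, chosen only to illustrate the standard argument that I go on to criticize below.

    # A sketch of the standard free-rider model quoted above, with hypothetical
    # numbers. Each townsperson gets a benefit if the levee is built, and bears a
    # cost only if he contributes; the levee is built if enough people pitch in.

    BENEFIT = 100     # value of a saved house (hypothetical)
    COST = 10         # personal cost of stacking sandbags (hypothetical)
    THRESHOLD = 50    # contributors needed to complete the levee (hypothetical)

    def payoff(i_contribute: bool, other_contributors: int) -> int:
        contributors = other_contributors + (1 if i_contribute else 0)
        levee_built = contributors >= THRESHOLD
        return (BENEFIT if levee_built else 0) - (COST if i_contribute else 0)

    # Whatever the others do, staying home pays at least as well as contributing,
    # except in the knife-edge case where one more contribution is pivotal --
    # a case the model assumes away ("your individual contribution won't make
    # or break the effort").
    for others in (0, 49, 50, 100):
        print(others, payoff(True, others), payoff(False, others))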

The idea is that private entities won’t provide certain things because there will be too many free riders. And yet, people do buy Priuses and similar cars, and do volunteer in emergencies, and do commit myriad acts of kindness and generosity without compensation (other than psychic). These contrary and readily observable facts should be enough to discredit public-goods theory. But I shall continue with a critical look at key terms and assumptions.

What is a public good? It’s a good that’s “underprovided”. What does that mean? It means that someone who believes that a certain good should be provided in a certain quantity at a certain price is dissatisfied with the actual quantity and/or price at which the good is provided (or not provided).

Who is that someone? Whoever happens to believe that a certain good should be provided at a certain price. Or, more likely, that it should be provided “free” by government. There are many advocates of universal health care, for example, who are certain that health care is underprovided, and that it should be made available freely to anyone who “needs” it. They are either ignorant of the track record of socialized medicine in Canada and Britain, or are among the many (usually leftists) who prefer hope to experience.

What is a free rider, and why is it bad to be a free rider? A free rider is someone who benefits from the provision and use of goods for which he (the free rider) doesn’t pay. There are free riders all around us, all the time. Any product, service, or activity that yields positive externalities is a boon to many persons who don’t buy the product or service, or engage in the activity. (Follow the link in the preceding sentence for a discussion and examples of positive externalities.) But people do buy products and services that yield positive externalities, and companies do stay in business by providing such products and services.

In sum, “free rider” is a scare term invoked for the purpose of justifying government-provided public goods. Why government-provided? Because that way the goods will be “free” to many users of them, and “the rich” will be taxed to provide the goods, of course. (“Free” is an illusion. See this.)

Health care — which people long paid for out of their own pockets or which was supported by voluntary charity — is demonstrably not a public good. If anything, the more that government has come to dominate the provision of health care (including its provision through insurance), the more costly it has become. The rising cost has served to justify greater government involvement in health care, which has further driven up the cost, etc., etc., etc. That’s what happens when government provides a so-called public good.

What about defense? As I say here,

given the present arrangement of the tax burden, those who have the most to gain from defense and justice (classic examples of “public goods”) already support a lot of free riders and “cheap riders.” Given the value of defense and justice to the orderly operation of the economy, it is likely that affluent Americans and large corporations — if they weren’t already heavily taxed — would willingly form syndicates to provide defense and justice. Most of them, after all, are willing to buy private security services, despite the taxes they already pay….

… It may nevertheless be desirable to have a state monopoly on police and justice — but only on police and justice, and only because the alternatives are a private monopoly of force, on the one hand, or a clash of warlords, on the other hand.

The environment? See this and this. Global warming? See this, and follow the links therein.

All in all, the price of “free” government goods is extremely high; government taketh away far more than it giveth. With a minimal government restricted to the defense of citizens against force and fraud there would be far fewer people in need of “public goods” and far, far more private charity available to those few who need it.


Related posts:
A Short Course in Economics
Addendum to a Short Course in Economics
Monopoly: Private Is Better than Public
Voluntary Taxation
What Free-Rider Problem?
Regulation as Wishful Thinking
Merit Goods, Positive Rights, and Cosmic Justice
More about Merit Goods
Don’t Just Stand There, “Do Something”

The Social Security Mess Revisited

Laurence Kotlikoff draws attention to the Social Security mess in his recent column, “Will Social Security Be There for You?“. He states the problem and poses two stark options for solving it:

Social Security’s trustees just released their annual report. It’s a very long document, with the most important part buried deep in appendix table VIF1.

Table VIF1 shows the system is $34.2 trillion in the red. That’s its unfunded liability. Stated differently, the system’s trust fund needs to be $37 trillion, not its actual $2.8 trillion, to permit Social Security to pay all scheduled benefits into the future. How large is $34.2 trillion? Very large. It’s almost two years of GDP!

There is, of course, more than one way to make ends meet. If we can’t get the good lord to drop $34.2 trillion into Social Security’s coffers as manna from heaven, we can raise taxes. One option is to take 4.2 percent more out of everyone’s paycheck (up to the taxable earnings ceiling, now $127,500) on a permanent basis. Since Social Security’s FICA payroll tax rate is 12.4 percent, we’re talking a 33.9 (4.2/12.4) percent immediate and permanent Social Security tax hike!

Another option is to cut all Social Security benefits (retirement, spousal, divorcee, widow(er), young child, disabled child, child-in-care spousal, mother (father), disability and parent benefits) immediately and permanently by 25 percent!
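
The arithmetic in the quoted passage is easy to check. Here is a minimal sketch in Python; every figure except GDP comes from the quotation, and the GDP figure is my assumption of roughly $19 trillion, about the 2017 level.

    # Checking the quoted Social Security figures. Only the GDP estimate is an
    # assumption; the rest are taken from the quotation.

    trust_fund_needed = 37.0   # trillions of dollars, per the trustees' report
    trust_fund_actual = 2.8    # trillions of dollars
    unfunded_liability = trust_fund_needed - trust_fund_actual
    print(f"Unfunded liability: ${unfunded_liability:.1f} trillion")   # ~34.2

    gdp = 19.0                 # trillions of dollars (assumed, ca. 2017)
    print(f"Years of GDP: {unfunded_liability / gdp:.1f}")             # ~1.8, "almost two"

    fica_rate = 12.4           # percent, current Social Security payroll tax
    added_points = 4.2         # percentage points, the quoted tax-hike option
    print(f"Relative increase: {added_points / fica_rate:.1%}")        # ~33.9%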

There are, in fact, other options. One is to keep kicking the can down the road, as long as foreign investors are willing, in effect, to underwrite Social Security’s deficit. They do this by shipping the proceeds of their “trade surplus” (our “trade deficit”) back to the U.S. in exchange for stocks, bonds, and real estate. Some of their money goes directly into U.S. government bonds; the rest helps to relieve the crowding out that occurs when the U.S. government borrows to sustain its profligate spending, which includes Social Security.

Here’s another one. The unfunded liability isn’t a current liability; it’s the present value of future Social Security deficits. Which means that another way of kicking the can down the road is to gradually increase Social Security taxes and/or reduce benefits to a sustainable level while foreigners continue to underwrite the transition.

I prefer a third option, which is usually considered politically unthinkable: eventual privatization of Social Security. How would that work? Here’s my plan:

1. Abolish Social Security payroll taxes as of a date certain (Abolition Day).

2. Pay normal benefits (those implicitly promised under the present system) to persons who are then collecting Social Security and to all other qualifying persons who have then reached the age of 62.

3. Persons who are 55 to 61 years old would receive normal benefits, pro-rated according to their contributions as of Abolition Day.

4. The retirement age for full benefits would be raised for all persons who are younger than 55 as of Abolition Day. The full retirement age is now scheduled to rise to 67 in 2027. It could rise to 70 by, say, 2025. Moreover, the minimum age for receiving partial benefits would rise from 62 to 65.

5. Persons who are 45 to 54 years old also would receive pro-rated benefits based on their contributions as of Abolition Day. But their initial benefits would be reduced on a sliding scale, so that the benefits of those persons who are 45 as of Abolition Day would be linked entirely to the CPI rather than the wage index.

6. Persons who are younger than 45 would receive a lump-sum repayment of their contributions (plus accrued interest) at full retirement age, in lieu of future benefits. That payment would automatically go to a surviving spouse or next-of-kin if the recipient dies intestate. Otherwise, the recipient could bequeath, transfer, or sell his interest in the payment at any time before it comes due.

The residual obligations outlined in steps 2-6 would be funded in part by a payroll tax, which would diminish as those obligations are paid off. The U.S. government would continue to borrow as necessary to fund the Social Security deficit, but — unlike the first two options — the borrowing would eventually come to an end. Social Security would be “saved”, there would be less crowding-out in financial markets, and — best of all — everyone’s retirement savings would be plowed into investment-inducing vehicles: stocks, bonds, CDs, savings accounts. This would push up the rate of economic growth and make privatization all the more affordable and desirable.
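
To make the age-cohort rules concrete, here is a minimal sketch in Python. It assumes a hypothetical Abolition Day, takes ages as of that date, and returns only a description of each cohort’s treatment; the actual proration formulas, the sliding scale, and interest accrual are left out.

    # A sketch of the cohort rules in the plan above. Ages are as of the
    # hypothetical Abolition Day; proration formulas and interest are omitted.

    def treatment(age: int, already_collecting: bool = False) -> str:
        if already_collecting or age >= 62:
            return "normal benefits, as implicitly promised under the present system"
        if 55 <= age <= 61:
            return "normal benefits, pro-rated by contributions as of Abolition Day"
        if 45 <= age <= 54:
            return ("pro-rated benefits with initial amounts reduced on a sliding "
                    "scale toward full CPI indexing; higher retirement ages apply")
        return ("lump-sum repayment of contributions plus accrued interest at full "
                "retirement age, in lieu of future benefits")

    for age in (70, 58, 50, 30):
        print(age, "->", treatment(age))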

Liberty in Chains

I continue to add items to the list of related readings at the bottom of this post.

Liberty is a good thing, and not just because most people like to feel unconstrained by anything but their moral obligations and principles. (And it’s a good thing for liberty that it doesn’t depend entirely on self-control.) Liberty fosters constructive competition and, in the terminology of pop psychology, self-actualization; for example: the trial-and-error emergence of social norms that advance cooperation and comity; the trial-and-error emergence (and dissolution) of businesses that advance consumers’ interests; the freedom to work where one chooses (according to one’s ability), to live where one chooses (within one’s means), to marry the person of one’s choosing (consistent with public health and the wisdom embedded in voluntarily evolved social norms), to have as many children as a couple can provide for without imposing involuntary burdens on others; and the freedom to associate with persons of one’s own choosing, including the persons with whom one does business.

It’s no secret that those manifestations of liberty haven’t held throughout the United States and throughout its history. It’s also no secret that none of them is sharply and permanently defined in practice. In some jurisdictions, for example, a businessperson is forced by law to provide certain services or pay a stiff fine. The argument for such treatment takes a one-sided view of freedom of association, and grants it only to the person seeking the services, while denigrating the wishes of the person who is forced to provide them. The liberty of the customer is advanced at the expense of the liberty of the businessperson. It is an approach, like that of civil-rights law generally, which favors positive rights and dismisses negative ones. It is therefore anti-libertarianism in the name of liberty, as I explain in a recent post.

In sum, liberty isn’t a simple thing. It’s certainly not as simple or simplistic as J.S. Mill’s harm principle (a.k.a. non-aggression principle), as I have explained many times (e.g., here). In fact, it’s impossible for everyone to be satisfied that they’re living in a state of liberty. This is partly because so many people believe that they possess positive rights — entitlements provided at the expense of others.

More generally, liberty has been and continues to be invoked as a justification for anti-libertarian acts, beyond the creation and enforcement of positive rights. There is the problem of freedom of speech, for example, which has been interpreted to allow advocacy of anti-libertarian forms of government — most notably America’s present de facto blend of socialism and fascism.

This problem, which is actually a constitutional catastrophe, is closely related to the problem of democracy. There are many who advocate unbridled democratic control of private institutions through government power. One such person is Nancy MacLean, whose recent book, Democracy in Chains, seems to be a mindless defense of majority opinion. The Wikipedia entry for the book (as of today’s writing) seems fair and balanced (links and footnotes omitted):

In June 2017 MacLean published Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America, focusing on the Nobel Prize winning political economist James McGill Buchanan, Charles Koch, George Mason University and the libertarian movement in the U.S., who, she argues, have undertaken “a stealth bid to reverse-engineer all of America, at both the state and national levels back to the political economy and oligarchic governance of midcentury Virginia, minus the segregation.” According to MacLean, Buchanan represents “the true origin story of today’s well-heeled radical right.”….

Booklist, which gave the book a starred review, wrote that Democracy in Chains … is “perhaps the best explanation to date of the roots of the political divide that threatens to irrevocably alter American government.” Publisher’s Weekly wrote that “MacLean constructs an erudite searing portrait of how the late political economist James McGill Buchanan (1919 – 2013) and his deep-pocketed conservative allies have reshaped–and undermined–American democracy.” Kirkus said “MacLean offers a cogent yet disturbing analysis of libertarians’ current efforts to rewrite the social contract and manipulate citizens’ beliefs.”

In The Atlantic, Sam Tanenhaus called Democracy in Chains, “A vibrant intellectual history of the radical right.” Genevieve Valentine wrote on NPR: “This sixty-year campaign to make libertarianism mainstream and eventually take the government itself is at the heart of Democracy in Chains.”

Democracy in Chains was criticized by libertarians. David Bernstein disputed MacLean’s portrayal of Buchanan and George Mason University, where Bernstein is and Buchanan was a professor. Jonathan H. Adler noted allegations of serious errors and misleading quotations in Democracy in Chains raised by Russ Roberts, David R. Henderson, Don Boudreaux and others. Ilya Somin disagreed that libertarianism was uniquely anti-democratic, arguing that libertarians and left-liberals alike believed in constraining democratic majorities regarding “abortion, privacy rights, robust definitions of free speech… and freedom of religion, extensive protections for criminal defendants, and limitations on the powers of law enforcement personnel”. George Vanberg argued that MacLean’s portrayal of Buchanan as wishing to “preserve (or enhance) the power of a white, wealthy elite at the expenses of marginalized social groups … represents a fundamental misunderstanding”. Michael Munger wrote that Democracy in Chains “is a work of speculative historical fiction” while Phillip Magness argued that MacLean had “simply made up an inflammatory association” concerning Buchanan and the Southern Agrarians. In response MacLean argued she was the target of a “coordinated and interlinked set of calculated hit jobs” from “the Koch team of professors who don’t disclose their conflicts of interest and the operatives who work fulltime for their project to shackle our democracy.”

Henry Farrell and Steven Teles wrote that “while we do not share Buchanan’s ideology … we think the broad thrust of the criticism is right. MacLean is not only wrong in detail but mistaken in the fundamentals of her account.”

The writers cited in the second and third paragraphs are far better qualified than I am to defend Buchanan’s integrity and ideas. (See “Related Reading”, below.) I will focus on MacLean’s ideas.

Why does MacLean claim that democracy is in chains? In what follows I draw on Alex Shephard’s interview of MacLean for The New Republic. MacLean is especially interested in preserving “liberal democracy”. What is it, according to Maclean? She doesn’t say in the interview, and mentions it only once:

[I]f you block off the political process from answering people’s needs, as the radical right managed to do throughout Barack Obama’s two terms on so many major issues, then people get frustrated. They get frustrated that politics has become so polarized between right and left and they believe that liberal democracy does not work—they start to believe that we need a radical alternative.

MacLean seems to have the same view of “liberal democracy” as her European counterparts. It is a mechanism through which government takes some people’s money, property rights, and jobs to buy other people’s votes. It is democracy only in the sense that a majority of voters can be counted on to demand other people’s money, property rights, and jobs. It confers on “liberal” politicians and their bureaucratic minions the image of being “compassionate”, and enables them to characterize their opponents — people who don’t support legalized theft — as mean-spirited. It is asymmetrical ideological warfare in action, with an unsurprising result: the victory of “liberal democracy”.

This sample of MacLean’s thinking (to dignify her knee-jerk leftism) is consistent with her assumption that she knows what “the people” really want; for example:

On issue after issue you see vast majorities of Republicans who actually agree on some of the fundamental needs of the country: They support a progressive income tax, they want to address global warming, they care about the preservation of Medicare and Social Security as originally construed as social insurance, they care about public education…. But they have been riled up by this apparatus [“radical right” libertarianism], and by very cynical Republican leaders, to support a party that is undermining the things that they seek.

What does the purported (and dubious) existence of “vast majorities of Republicans” have to do with anything? To the extent that there are voters who identify themselves as Republicans and who favor those programs that benefit them (or so they believe), that simply makes them part of the problem: another interest-group trying to spend other people’s money, and thus spending their own because every “favor” requires repayment in the realpolitik of Washington.

Also, as evidenced by many of the items listed below in “Related Reading”, MacLean’s discussion of “the fundamental needs of the country” and how they can best be met is deeply flawed. Further, her generalization about “vast majorities” is dubious given that (a) not a single Republican member of Congress voted for Obamacare but (b) Republicans made large gains in both houses of Congress in the election that followed the enactment of Obamacare.

There’s a bigger problem that MacLean doesn’t seem to grasp or acknowledge, namely, the debilitating effects of “liberal democracy” on liberty and prosperity: everyone’s liberty and prosperity, contrary to MacLean’s “right-wing conspiracy” theory. Even the poorest of today’s Americans would be far better off had the United States not become a “liberal democracy”. As for liberty, social and economic liberty are indivisible; taxation and regulation diminish prosperity and liberty (the ability to choose where one lives, with whom one associates, etc.).

In any event, by MacLean’s logic the demise of liberty and decline of prosperity are acceptable if a majority of the American people want it. Certainly, by the mid-1930s a vast majority of the German people wanted Adolf Hitler to remain in power, and it’s likely that similar majorities of the Russian and Chinese people felt the same way about Stalin and Mao. (The devil you know, etc.) The difference between “liberal democracy” and the totalitarian regimes of Hitler, Stalin, Mao, and their ilk is only a matter of degree.

MacLean would no doubt respond that there is a proper limit on government power, a limit that is respected in a “liberal democracy” but not in a dictatorship. But there is no limit, not even in a “liberal democracy”, except for the limit that those in power actually observe. In the end, it is up to those in power to observe that limit — and they won’t.

The Framers of the Constitution, who understood human nature very well, knew that venality and power-lust would prevail if the new government wasn’t constrained by rigorous checks and balances. (Those words don’t occur in the Constitution, but it is nevertheless replete with checks on the power of the central government and balances of power within the central government and between it and the States.) But the checks and balances have all but disappeared under the onslaught of decades’ worth of unconstitutional legislation, executive usurpation, and judicial malfeasance. The election of Donald Trump — leftist hysteria to the contrary notwithstanding — is all that saved America (temporarily, at least) from its continued spiral into hard despotism. Hillary Clinton would no doubt have redoubled Barack Obama’s penchant for government by executive fiat, given her expansive view of the role of the central government and her own dictatorial personality. (As far as I know, for all the hysteria about Mr. Trump’s supposed “fascism”, he has yet to defy court orders enjoining his executive actions, despite the apparent unconstitutionality of some of the judicial interventions.)

MacLean’s hysteria is badly misplaced. “Liberal democracy” isn’t under siege. Liberty and prosperity are, and have been for a long time. The siege continues, in the form of resistance to Mr. Trump’s administration by legislators, judges, the media, much of the academy, and the usual left-wing rabble. It’s all part of a vast, left-wing conspiracy.


Related reading (listed chronologically):

Jason Brennan, “Conspire Me This: Is Nancy MacLean a Hired Gun for the Establishment?“, Bleeding Heart Libertarians, June 23, 2017 (See grant number FA-57183-13, awarded to MacLean by the National Endowment for the Humanities.)

Russell Roberts, “Nancy MacLean Owes Tyler Cowen an Apology“, Medium, June 25, 2017

Don Boudreaux, Cafe Hayek — begin with “Russ Roberts on Nancy MacLean on Tyler Cowen on Democracy” (June 26, 2017) and continue through dozens of relevant and eloquent posts about MacLean’s outright errors, mental sloppiness, and argumentative slipperiness

David Henderson, “Nancy MacLean’s Distortion of James Buchanan’s Statement“, EconLog, June 27, 2017

Philip W. Magness, “How Nancy MacLean Went Whistlin’ Dixie“, Philip W. Magness, June 27, 2017

Ramesh Ponnuru, “Nancy MacLean’s Methods“, National Review Online, The Corner, June 27, 2017

Jonathan H. Adler, “Does ‘Democracy in Chains’ Paint an Accurate Picture of James Buchanan?“, The Volokh Conspiracy, June 28, 2017 (Adler updates this often)

David Bernstein, “Some Dubious Claims in Nancy MacLean’s ‘Democracy in Chains’“, The Volokh Conspiracy, June 28, 2017

Steve Horwitz, “MacLean on Nutter and Buchanan on Universal Education“, Bleeding Heart Libertarians, June 28, 2017

David Bernstein, “Some Dubious Claims in Nancy MacLean’s ‘Democracy in Chains,’ Continued“, The Volokh Conspiracy, June 29, 2017

Philip W. Magness, “Nancy MacLean’s Calhounite Imagination“, Philip W. Magness, June 29, 2017

Michael C. Munger, “On the Origins and Goals of Public Choice“, Independent Institute (forthcoming in The Independent Review), June 29, 2017

Daniel Bier, “The Juvenile ‘Research’ of ‘Historian’ Nancy MacLean“, The Skeptical Libertarian, July 5, 2017

David Boaz, “Another Misleading Quotation in Nancy MacLean’s ‘Democracy in Chains’“, Cato At Liberty, July 5, 2017

David Bernstein, “Yet More Dubious Claims in Nancy MacLean’s ‘Democracy in Chains’“, The Volokh Conspiracy, July 6, 2017

Michael Giberson, “Fun With Footnotes (A Game of Scholarly Diversity)“, Knowledge Problem, July 9, 2017

David Gordon, “MacLean on James Buchanan: Fake History for an Age of Fake News“, Mises Wire, July 10, 2017

Ilya Somin, “Who Wants to Put Democracy in Chains?“, The Volokh Conspiracy, July 10, 2017

David Bernstein, “Nancy MacLean’s Conspiratorial Response to Criticism of ‘Democracy in Chains’“, The Volokh Conspiracy, July 11, 2017

Don Boudreaux, “Quotation of the Day…“, Cafe Hayek, July 12, 2017

Don Boudreaux, “Nancy MacLean Should Get Back in Touch with Reality“, Cafe Hayek, July 12, 2017

Steve Horwitz, “A Devastating Review of Nancy MacLean’s Book on the Klan“, Bleeding Heart Libertarians, July 12, 2017

Jeffrey A. Tucker, “This Confused Conspiracy Theory Gets the Agrarians All Wrong“, FEE, Articles, July 12, 2017

David Bernstein, “Duke Professor Georg Vanberg on ‘Democracy in Chains’“, The Volokh Conspiracy, July 14, 2017

Henry Farrell and Steven Teles, “Even the Intellectual Left Is Drawn to Conspiracy Theories about the Right. Resist Them.“, Vox, July 14, 2017

Jonathan H. Adler, “‘Why Have So Many Embraced Such an Obviously Flawed Book?’“, The Volokh Conspiracy, July 15, 2017

Sarah Skwire, “MacLean Is Not Interested in a Fair Fight“, FEE, Articles, July 15, 2017

Steven Hayward, “The Scandal of the Liberal Mind“, Power Line, July 16, 2017

Steven Hayward, “When You’ve Lost Rick Perlstein…“, Power Line, July 19, 2017

Jonathan H. Adler, “Nancy MacLean Responds to Her Critics“, The Volokh Conspiracy, July 20, 2017

Charlotte Allen, “They’re Out to Get Her?“, The Weekly Standard, July 20, 2017

Dave Bernstein, “Did Nancy MacLean Make Stuff Up in ‘Democracy in Chains’?“, The Volokh Conspiracy, July 20, 2017

Brian Doherty, “What Nancy MacLean Gets Wrong about James Buchanan“, reason.com, July 20, 2017

Arnold Kling, “Nancy MacLean: Ignoring the Central Ethical Issue“, askblog, July 20, 2017

Greg Weiner, “Nancy MacLean’s Ad Hominem Ad Hominem“, Library of Law & Liberty, July 25, 2017

Jon Cassidy, “Render Them Unable: More on Nancy MacLean’s ‘Democracy in Chains’“, The American Spectator, July 27, 2017

Steve Horwitz, “Book Review — Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America“, Cato Journal, September 2017

Death of a Nation

More than 50 years ago I heard a white woman say of blacks “They’re not Americans.” I was appalled by that statement, for it contradicted what I had been taught to believe about America, namely, this:

“America is not just a country,” said the rock singer Bono, in Pennsylvania in 2004: “It’s an idea.”

That’s the opening of John O’Sullivan’s essay, “A People, Not Just an Idea” (National Review, November 19, 2015).

Bono is a decent, thoughtful, and public-spirited man. I didn’t choose his quotation to suggest that this view of America is a kind of pop opinion. It just happened that in my Google search his name came ahead of many others, from George Will to Irving Kristol to almost every recent presidential candidate, all of whom had described America either as an idea or as a “proposition nation,” to distinguish it from dynastic realms or “blood and soil” ethnicities. This philosophical definition of America is now the conventional wisdom of Left and Right, at least among people who write and talk of such things.

Indeed, we have heard variations on Bono’s formulation so many times that we probably fail to notice how paradoxical it is. But listen to how it sounds when reversed: “America is not just an idea; it is a nation.” Surely that version has much more of the ring of common sense. For a nation is plainly something larger, more complex, and richer than an idea. A nation may include ideas. It may have evolved under the influence of a particular set of ideas. But because it encompasses so many other things — notably the laws, institutions, language of the nation; the loyalties, stories, and songs of the people; and above all Lincoln’s “mystic chords of memory” — the nation becomes more than an idea with every election, every battle, every hero, every heroic tale, every historical moment that millions share.

That is not to deny that the United States was founded on some very explicit political ideas, notably liberty and equality, which Jefferson helpfully wrote down in the Declaration of Independence. To be founded on an idea, however, is not the same thing as to be an idea. A political idea is not a destination or a conclusion but the starting point of an evolution — and, in the case of the U.S., not really a starting point, either. The ideas in the Declaration on which the U.S. was founded were not original to this country but drawn from the Anglo-Scottish tradition of Whiggish liberalism. Not only were these ideas circulating well before the Revolution, but when the revolutionaries won, they succeeded not to a legal and political wasteland but to the institutions, traditions, and practices of colonial America — which they then reformed rather than abolished….

As John Jay pointed out, Americans were fortunate in having the same religion (Protestantism), the same language, and the same institutions from the first. Given the spread of newspapers, railways, and democratic debate, that broad common culture would intensify the sense of a common American identity over time. It was a cultural identity more than an ethnic one, and one heavily qualified by regional loyalties… And the American identity might have become an ethnic one in time if it had not been for successive waves of immigration that brought other ethnicities into the nation.

That early American identity was robust enough to absorb these new arrivals and to transform them into Americans. But it wasn’t an easy or an uncomplicated matter. America’s emerging cultural identity was inevitably stretched by the arrivals of millions of people from different cultures. The U.S. government, private industry, and charitable organizations all set out to “Americanize” them. It was a great historical achievement and helped to create a new America that was nonetheless the old America in all essential respects….

By World War II, … all but the most recent migrants had become culturally American. So when German commandos were wandering behind American lines in U.S. uniforms during the Battle of the Bulge, the G.I.s testing their identity asked not about … the First Amendment but questions designed to expose their knowledge (or ignorance) of American life and popular culture….

Quite a lot flows from this history. Anyone can learn philosophical Americanism in a civics class; for a deeper knowledge and commitment, living in America is a far surer recipe…. Americans are a distinct and recognizable people with their own history, culture, customs, loyalties, and other qualities that are wider and more various than the most virtuous summary of liberal values….

… If Americans are a distinct people, with their own history, traditions, institutions, and common culture, then they can reasonably claim that immigrants should adapt to them and to their society rather than the reverse. For most of the republic’s history, that is what happened. And in current circumstances, it would imply that Muslim immigrants should adapt to American liberty as Catholic immigrants once did.

If America is an idea, however, then Americans are not a particular people but simply individuals or several different peoples living under a liberal constitution.

For a long time the “particular people” were not just Protestants but white Protestants of European descent. As O’Sullivan points out, Catholics (of European descent) eventually joined the ranks of “particular people”. But there are others — mostly blacks and Hispanics — who never did and never will join those ranks. Whatever the law may say about equality, access to housing, access to public accommodations, and so on, membership in the ranks of “particular people” is up to those who are already members.

The woman who claimed that blacks weren’t Americans was a member. She was a dyed-in-the-wool Southerner, but her attitude wasn’t untypical of the attitudes of many white Americans — Northern and Southern, past and present. Like it or not, the attitude remains prevalent in the country. (Don’t believe polls that purport to demonstrate racial comity; there’s a well-known aversion to giving a “wrong” answer to a pollster.)

The revealed preference of most whites (a preference shared by most blacks) is for racial segregation. Aggregate statistics hide the real story, which is the gentrification of some parts of inner cities (i.e., the creation of white enclaves) and “white flight” from suburbs to which inner-city blacks are fleeing. (See this article, for instance.)

The taste for segregation shows up in statistics about public-school enrollment. (See this article, for instance.) White parents (and affluent blacks) are more often keeping their children out of local public schools with large “minority” enrollments by choosing one of the alternatives legally available to them (e.g., home schooling). (Presidents with school-age children — including Barack Obama — have done the same thing to avoid sending their children to the public schools of the District of Columbia, whose students are predominantly black and Hispanic.)

I have focused on voluntary racial segregation because it underscores the fact — not lost on the white, Southern woman of my acquaintance — that the United States was once built upon the “blood and soil” ethnicity of whites whose origins lay in Europe. Blacks can never be part of that nation. Neither can Hispanics, who now outnumber blacks in America. Blacks and Hispanics belong to the “proposition” nation.

They have been joined by the large numbers of Americans who no longer claim allegiance to the “blood and soil” nation, regardless of their race or ethnicity — leftists, in other words. Since the 1960s leftists have played an ever-larger, often dominant, role in the governance of America. They have rejected the “history, culture, customs, [and] loyalties” which once bound most Americans. In fact they are working daily — through the academy, the media, and the courts — to transform America fundamentally by erasing the “history, culture, customs, [and] loyalties” of Americans from the people’s consciousness and the nation’s laws.

Pat Buchanan, who is usually too strident for my taste, hits it on the head:

In Federalist No. 2, John Jay writes of them as “one united people . . . descended from the same ancestors, speaking the same language, professing the same religion, attached to the same principles of government, very similar in their manners and customs . . .”

If such are the elements of nationhood and peoplehood, can we still speak of Americans as one nation and one people?

We no longer have the same ancestors. They are of every color and from every country. We do not speak one language, but rather English, Spanish and a host of others. We long ago ceased to profess the same religion. We are Evangelical Christians, mainstream Protestants, Catholics, Jews, Mormons, Muslims, Hindus and Buddhists, agnostics and atheists.

Federalist No. 2 celebrated our unity. Today’s elites proclaim that our diversity is our strength. But is this true or a tenet of trendy ideology?

After the attempted massacre of Republican Congressmen at that ball field in Alexandria, Fareed Zakaria wrote: “The political polarization that is ripping this country apart” is about “identity . . . gender, race, ethnicity, sexual orientation (and) social class.” He might have added — religion, morality, culture and history.

Zakaria seems to be tracing the disintegration of our society to that very diversity that its elites proclaim to be its greatest attribute: “If the core issues are about identity, culture and religion … then compromise seems immoral. American politics is becoming more like Middle Eastern politics, where there is no middle ground between being Sunni or Shiite.”

Among the issues on which we Americans are at war with one another — abortion, homosexuality, same-sex marriage, white cops, black crime, Confederate monuments, LGBT rights, affirmative action.

America is no longer a nation whose inhabitants are bound mainly by “blood and soil”. Worse than that, it was — until the election of 2016 — fast becoming a nation governed by the proposition that liberty is only what leftists say it is: the liberty not to contradict the left’s positions on climate, race, intelligence, economics, religion, marriage, the right to life, and government’s intrusive role in all of those things and more. The resistance to Donald Trump is fierce and unforgiving because his ascendancy threatens what leftists have worked so hard to achieve in the last 50 years: the de-Americanization of America.

Is all of this just the grumbling of white men of European descent? I think not. Measures of national unity are hard to come by. Opinion polls, aside from their relatively brief history (compared with the age of the Union), are notoriously unreliable. Presidential elections are more meaningful because (some degree of chicanery aside) they reflect voters’ feelings about the state of the Union. Regardless of the party affiliation of the winning candidate, a strong showing usually reflects broad satisfaction with the nation’s direction; a weak showing usually reflects the opposite.

Popular votes were first recorded in the election of 1824. Here is a graphical history of the winning candidate’s percentages of the vote in each election from 1824 through 2016 (with the exclusion of 1864, when the South wasn’t in the Union):


Derived from this table in this article at Wikipedia.

Election-to-election variations reflect the personal popularity of some candidates, the strength of third-party movements, and various other transitory factors. The 5-election average smooths those effects and reveals what is (to me) an obvious story: national disunity in the years before and after the Civil War; growing unity during the first half of the 20th century, peaking during the Great Depression and World War II; modest post-war decline followed by stability through the 1980s; and rapid decline since then because of the left’s growing power and the rapid rise of the Hispanic population.
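
For readers who want to reproduce the smoothing, here is a minimal sketch in Python of a five-election moving average. I assume a trailing window (a centered window would behave similarly), and the vote shares shown are placeholders rather than the actual figures from the Wikipedia table.

    # A sketch of the five-election moving average used to smooth the series.
    # The numbers below are placeholders; the real series would be transcribed
    # from the Wikipedia table cited in the caption above.

    def five_election_average(shares):
        """Trailing mean over the current election and up to four before it."""
        out = []
        for i in range(len(shares)):
            window = shares[max(0, i - 4): i + 1]
            out.append(sum(window) / len(window))
        return out

    winning_shares = [50.0, 55.0, 45.0, 60.0, 52.0, 48.0]  # placeholder values
    print([round(x, 1) for x in five_election_average(winning_shares)])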

The graph underscores what I already knew: The America in which I was born and raised — the America of the 1940s and 1950s — has been beaten down. It is more likely to die than it is to revive. And even if it revives to some degree, it will never be the same.
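For anyone who wants to replicate the smoothing behind the graph, here is a minimal sketch in Python. The vote shares shown are placeholders standing in for the full 1824-2016 series from the Wikipedia table cited above, and the trailing five-election window is one reasonable way to compute the average, not necessarily the exact method used for the graph.

```python
# Minimal sketch: smooth winning candidates' popular-vote shares with a
# five-election moving average. The values below are placeholders; the
# full 1824-2016 series comes from the Wikipedia table cited above.
winning_share = {
    1824: 30.9, 1828: 56.0, 1832: 54.2, 1836: 50.8, 1840: 52.9,
    # ... remaining elections through 2016 ...
}

def five_election_average(shares, window=5):
    """Trailing moving average over the most recent `window` elections."""
    years = sorted(shares)
    smoothed = {}
    for i in range(window - 1, len(years)):
        recent = [shares[y] for y in years[i - window + 1 : i + 1]]
        smoothed[years[i]] = sum(recent) / window
    return smoothed

for year, avg in five_election_average(winning_share).items():
    print(year, round(avg, 1))
```

With the full series in place of the placeholders, the output traces the smoothed line shown in the graph.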


Related posts:
Academic Bias
Intellectuals and Capitalism
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
Liberty and Society
The Eclipse of “Old America”
Genetic Kinship and Society
The Culture War
Ruminations on the Left in America
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
Superiority
Whiners
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Retrospective Virtue-Signalling
The Left and Violence
Four Kinds of “Liberals”
Leftist Condescension
Class in America
A Word of Warning to Leftists (and Everyone Else)
Another Thought or Two about Class
The Vast Left-Wing Conspiracy
The Left and Evergreen State: Reaping What Was Sown

Suicidal Despair and the “War on Whites”

This entry is prompted by a recent spate of posts and articles about the rising mortality rate among non-Hispanic whites without a college degree (hereinafter working-class whites, for convenience). Thomas Lifson characterizes the trend as “a spiritual crisis”, after saying this:

White males, in large numbers, are simply losing their will to live, and as a result, they are dying so prematurely and in such large numbers that a startling demographic gap has emerged. [“Stunning Evidence that the Left Has Won its War on White Males“, American Thinker, March 26, 2017]

Later in the piece, Lifson gets to the “war” on white males:

For at least four decades, white males have been under continuous assault as bearers of “white privilege” and beneficiaries of sexism. Special preferences and privileges have been granted to other groups, but that is the least of it.  More importantly, the very basis of the psychological self-worth of white males have been under attack.  White males are frequently instructed by authority figures in education and the media that they are responsible for most of the evils of the modern world, that the achievements of Euro-American civilization are a net loss for humanity, stained by exploitation, racism, unfairness, and every other collective evil the progressive mind can manufacture.

Some white males are relatively unscathed by the psychological warfare, but others are more vulnerable. Those who have educational, financial, or employment achievements that have rewarded their efforts may be able to keep going as productive members of society, their self-esteem resting on tangible fruits of their work and social position. But other white males, especially those who work with their hands and have been seeing job opportunities contract or disappear, have been losing the basis for a robust sense of self-worth as their job opportunities disappear.

We now have statistical evidence that political correctness kills.

We have no such thing. The recent trend isn’t yet significant. But it is real, and government is the underlying cause.

To begin at the beginning, the source of the spate of articles about the rising mortality rate of working-class whites is Anne Case and Angus Deaton’s “Mortality and Morbidity in the 21st Century” (Brookings Institution, Brookings Papers on Economic Activity (conference edition), March 17, 2017). Three of the paper’s graphs set the scene. This one shows mortality trends in the United States:

The next figure indicates that the phenomenon isn’t unique to non-Hispanic whites in the age 50-54 bracket:

But the trend among American whites defies the trends in several other Western nations:

Whence the perverse trend? It seems due mainly to suicidal despair:

How do these recent trends stack up against the long view? I couldn’t find a long time series for drug, alcohol, and suicide mortality. But I did find a study by Feijun Luo et al. that traces suicide rates from just before the onset of the Great Depression to just before the onset of the Great Recession — “Impact of Business Cycles on US Suicide Rates, 1928–2007” (American Journal of Public Health, June 2011). Here are two key graphs from the report:

The graphs don’t reproduce well, so the following quotations will be of help:

The overall suicide rate fluctuated from 10.4 to 22.1 over the 1928–2007 period. It peaked in 1932, the last full year of the Great Depression, and bottomed in 2000. The overall suicide rate decreased from 18.0 in 1928 to 11.2 in 2007. However, most of the decline occurred before 1945; after that it fluctuated until the mid-1950s, and then it gradually moved up until the late 1970s. The overall suicide rate resumed its downward trend from the mid-1980s to 2000, followed by a trend reversal in the new millennium.

Figure 1a [top] shows that the overall suicide rate generally increased in recessions, especially in severe recessions that lasted longer than 1 year. The largest increase in the overall suicide rate occurred during the Great Depression (1929–1933), when it surged from 18.0 in 1928 to 22.1 (the all-time high) in 1932, the last full year of the Great Depression. [The Great Depression actually lasted until 1940: TEA.] This increase of 22.8% was the highest recorded for any 4-year interval during the study period. The overall suicide rate also rose during 3 other severe recessions: [the recession inside the Great Depression] (1937–1938), the oil crisis (1973–1975), and the double-dip recession (1980–1982). Not only did the overall suicide rate generally rise during recessions; it also mostly fell during expansions…. However, the overall suicide rate did not fall during the 1960s (i.e., 1961–1969), a notable phenomenon that will be explained by the different trends of age-specific suicide rates.

The age-specific suicide rates displayed more variations than did the overall suicide rate, and the trends of those age-specific suicide rates were largely different. As shown in Figure 1b [bottom], from 1928–2007, the suicide rates of the 2 elderly groups (65–74 years and 75 years and older) and the oldest middle-age group (55–64 years) experienced the most remarkable decline. The suicide rates of those groups declined in both pre- and postwar periods. The suicide rates of the other 2 middle-aged groups (45–54 years and 35–44 years) also declined from 1928–2007, which we attributed to the decrease during the war period more than offsetting the increase in the postwar period. In contrast with the declining suicide rates of the 2 elderly and 3 middle-age groups, the suicide rates of the 2 young groups (15–24 years and 25–34 years) increased or just marginally decreased from 1928–2007. The 2 young groups experienced a marked increase in suicide rates in the postwar period. The suicide rate of the youngest group (5–14 years) also increased from 1928–2007. However, because of its small magnitude, we do not include this increase in the subsequent discussion.

We noted that the suicide rate of the group aged 65–74 years, the highest of all age groups until 1936, declined the most from 1928 to 2007. That rate started at 41.2 in 1928 and dropped to 12.6 in 2007, peaking at 52.3 in 1932 and bottoming at 12.3 in 2004. By contrast, the suicide rate of the group aged 15–24 years increased from 6.7 in 1928 to 9.7 in 2007. That rate peaked at 13.7 in 1994 and bottomed at 3.8 in 1944, and it generally trended upward from the late 1950s to the mid-1990s. The suicide rate differential between the group aged 65–74 years and the group aged 15–24 years generally decreased until 1994, from 34.5 in 1928 to 1.6 in 1994.

All age groups experienced a substantial increase in their suicide rates during the Great Depression, and most groups (35–44 years, 45–54 years, 55–64 years, 65–74 years, and 75 years and older) set record-high suicide rates in 1932; but they reacted differently to many other recessions, including severe recessions such as the [1937-1938 recession] and the oil crisis. Their reactions were different during expansions as well, most notably in the 1960s, when the suicide rates of the 3 oldest groups (75 years and older, 65–74 years, and 55–64 years) declined moderately, and those of the 3 youngest groups (15–24 years, 25–34 years, and 35–44 years) rose noticeably….

[T]he overall suicide rate and the suicide rate of the group aged 45–54 years were associated with business cycles at the significance level of 1%; the suicide rates of the groups aged 25–34 years, 35–44 years, and 55–64 years were associated with business cycles at the significance level of 5%; and the suicide rates of the groups aged 15–24 years, 65–74 years, and 75 years and older were associated with business cycles at nonsignificant levels. To summarize, the overall suicide rate was significantly countercyclical; the suicide rates of the groups aged 25–34 years, 35–44 years, 45–54 years, and 55–64 years were significantly countercyclical; and the suicide rates of the groups aged 15–24 years, 65–74 years, and 75 years and older were not significantly countercyclical.

The following graph, obtained from the website of the American Foundation for Suicide Prevention, extends the age-related analysis to 2015:

And this graph, from the same source, shows that the rising suicide rate is concentrated among whites and American Indians:

Though this graph encompasses deaths from all causes, the opposing trends for blacks and whites suggest strongly that working-class whites in all age groups have become much more prone to suicidal despair in the past 20 years. Moreover, the despair has persisted through periods of economic decline and economic growth (slow as it has been).

Why? Case and Deaton opine:

[S]ome of the most convincing discussions of what has happened to working class whites emphasize a long-term process of decline, or of cumulative deprivation, rooted in the steady deterioration in job opportunities for people with low education…. This process … worsened over time, and caused, or at least was accompanied by, other changes in society that made life more difficult for less-educated people, not only in their employment opportunities, but in their marriages, and in the lives of and prospects for their children. Traditional structures of social and economic support slowly weakened; no longer was it possible for a man to follow his father and grandfather into a manufacturing job, or to join the union. Marriage was no longer the only way to form intimate partnerships, or to rear children. People moved away from the security of legacy religions or the churches of their parents and grandparents, towards churches that emphasized seeking an identity, or replaced membership with the search for connections…. These changes left people with less structure when they came to choose their careers, their religion, and the nature of their family lives. When such choices succeed, they are liberating; when they fail, the individual can only hold him or herself responsible….

As technical change and globalization reduced the quantity and quality of opportunity in the labor market for those with no more than a high school degree, a number of things happened that have been documented in an extensive literature. Real wages of those with only a high school degree declined, and the college premium increased….

Lower wages made men less marriageable, marriage rates declined, and there was a marked rise in cohabitation, then much less frowned upon than had been the case a generation before…. [B]eyond the cohort of 1940, men and women with less than a BA degree are less likely to have ever been married at any given age. Again, this is not occurring among those with a four-year degree. Unmarried cohabiting partnerships are less stable than marriages. Moreover, among those who do marry, those without a college degree are also much more likely to divorce than are those with a degree….

These accounts share much, though not all, with Murray’s … account [in Coming Apart] of decline among whites in his fictional “Fishtown.” Murray argues that traditional American virtues are being lost among working-class white Americans, especially the virtue of industriousness. The withdrawal of men from the labor force reflects this loss of industriousness; young men in particular prefer leisure—which is now more valuable because of video games … —though much of the withdrawal of young men is for education…. The loss of virtue is supported and financed by government payments, particularly disability payments….

In our account here, we emphasize the labor market, globalization and technical change as the fundamental forces, and put less focus on any loss of virtue, though we certainly accept that the latter could be a consequence of the former. Virtue is easier to maintain in a supportive environment. Yet there is surely general agreement on the roles played by changing beliefs and attitudes, particularly the acceptance of cohabitation, and of the rearing of children in unstable cohabiting unions.

These slow-acting and cumulative social forces seem to us to be plausible candidates to explain rising morbidity and mortality, particularly their role in suicide, and with the other deaths of despair, which share much with suicides. As we have emphasized elsewhere, … purely economic accounts of suicide have consistently failed to explain the phenomenon. If they work at all, they work through their effects on family, on spiritual fulfillment, and on how people perceive meaning and satisfaction in their lives in a way that goes beyond material success. At the same time, cumulative distress, and the failure of life to turn out as expected is consistent with people compensating through other risky behaviors such as abuse of alcohol, overeating, or drug use that predispose towards the outcomes we have been discussing….

What our data show is that the patterns of mortality and morbidity for white non-Hispanics without a college degree move together over lifetimes and birth cohorts, and that they move in tandem with other social dysfunctions, including the decline of marriage, social isolation, and detachment from the labor force…. Whether these factors (or factor) are “the cause” is more a matter of semantics than statistics. The factor could certainly represent some force that we have not identified, or we could try to make a case that declining real wages is more fundamental than other forces. Better, we can see globalization and automation as the underlying deep causes. Ultimately, we see our story as about the collapse of the white, high school educated, working class after its heyday in the early 1970s, and the pathologies that accompany that decline. [Op. cit., pp. 29-38]

The seemingly rigorous and well-reasoned analyses reported by Case-Deaton and Luo et al. are seriously flawed, for these reasons:

  • Case and Deaton’s focus on events since 1990 is analogous to a search for lost keys under a street lamp because that’s where the light is. As shown in the graphs taken from Luo et al., suicide rates have at various times risen (and dropped) as sharply as they have in recent years.
  • Luo et al. address a much longer time span but miss an important turning point, which came during World War II. Because of that, they resort to a strained, non-parametric analysis of the relationship between the suicide rate and business cycles.
  • It is misleading to focus on age groups, as opposed to birth-year cohorts. For example, persons in the 50-54 age group in 1990 were born between 1936 and 1940, but in 2010 persons in the 50-54 age group were born between 1956 and 1960. The groups, in other words, don’t represent the same cohort. The only meaningful suicide rate for a span of more than a few years is the rate for the entire population.
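The cohort arithmetic in the third bullet is easy to make explicit. Here is a minimal sketch in Python; the helper function is mine, purely illustrative.

```python
# Illustrative: the same age bracket observed in different years
# captures entirely different birth cohorts.
def birth_cohort(observation_year, age_low=50, age_high=54):
    """Return the (earliest, latest) birth years for an age bracket."""
    return observation_year - age_high, observation_year - age_low

print(birth_cohort(1990))  # (1936, 1940)
print(birth_cohort(2010))  # (1956, 1960)
```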

I took a fresh look at the overall suicide rate and its relationship to the state of the economy. First, I extended the overall, age-adjusted suicide rate for 1928-2007 provided by Luo et al. in Supplementary Table B (purchase required) by splicing it with a series for 1999-2014 from Centers for Disease Control and Prevention, National Center for Health Statistics, Data Brief 241. I then drew on the database at Measuring Worth to derive year-over-year changes in real GDP for 1928-2014. Here’s an overview of the two time series:

The suicide rate doesn’t drop below 15 until 1942. From 1943 through 2014 it vacillates in a narrow range between 10.4 (2000) and 13.6 (1975). Despite the rise since 2000, the overall rate still hasn’t returned to the 1975 peak. And only in recent years has the overall rate reached figures that occurred often between 1943 and 1994.

Moreover, the suicide rate from 1928 through 1942 is strongly correlated with changes in real GDP. But the rate from 1943 through 2014 is not:

Something happened during the war years to loosen the connection between the state of the economy and the suicide rate. That something was the end of the pervasive despair that the Great Depression inflicted on huge numbers of Americans. It’s as if America had a mood transplant, one which has lasted for more than 70 years. The recent uptick in the rate of suicide (and the accompanying rise in slow-motion suicide) is sad because it represents wasted lives. But it is within one standard deviation of the 1943-2014 average of 12.2 suicides per 100,000 persons:

(It seems to me that researchers ought to be asking why the rate was so low for about 20 years, beginning in the 1980s.)
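For readers who want to check the splice, the split-period correlations, and the one-standard-deviation claim, here is a minimal sketch in Python. Every number in it is a placeholder; the real inputs are the annual series from Luo et al.’s Supplementary Table B, CDC Data Brief 241, and Measuring Worth, and the splice point shown is an assumption of mine.

```python
# Minimal sketch (placeholder data): splice the Luo et al. series (1928-2007)
# with the CDC series (1999-2014), then compare pre-1943 and post-1943
# correlations with year-over-year real GDP growth.
# statistics.correlation requires Python 3.10 or later.
from statistics import mean, stdev, correlation

# Age-adjusted suicide rates per 100,000 -- placeholder values only; the
# real inputs are the full annual series cited in the text.
luo_rate = {1928: 18.0, 1929: 18.1, 1930: 19.2, 1931: 20.5, 1932: 22.1,
            1940: 16.1, 1941: 15.2, 1942: 14.8,
            # ... annual values through 1998 ...
            1997: 11.4, 1998: 11.3}
cdc_rate = {1999: 10.5, 2000: 10.4,
            # ... annual values through 2014 ...
            2013: 12.6, 2014: 13.0}

# Year-over-year change in real GDP, percent (placeholders).
gdp_growth = {1928: 1.1, 1929: 6.1, 1930: -8.5, 1931: -6.4, 1932: -12.9,
              1940: 8.8, 1941: 17.7, 1942: 18.9,
              1997: 4.4, 1998: 4.5, 1999: 4.8, 2000: 4.1,
              2013: 1.7, 2014: 2.5}

# One simple splice: keep Luo et al. through 1998 and the CDC series from
# 1999 on. (The splice point used for the graphs above may differ.)
suicide_rate = {y: r for y, r in luo_rate.items() if y <= 1998}
suicide_rate.update(cdc_rate)

def corr_with_gdp(years):
    """Pearson correlation between the suicide rate and GDP growth."""
    common = [y for y in years if y in gdp_growth]
    return correlation([gdp_growth[y] for y in common],
                       [suicide_rate[y] for y in common])

early = sorted(y for y in suicide_rate if y <= 1942)
late = sorted(y for y in suicide_rate if y >= 1943)

print("1928-1942 correlation:", round(corr_with_gdp(early), 2))
print("1943-2014 correlation:", round(corr_with_gdp(late), 2))

post_war = [suicide_rate[y] for y in late]
print("1943-2014 mean:", round(mean(post_war), 1),
      "std dev:", round(stdev(post_war), 1))
```

With the full annual series in place of the placeholders, the two correlations and the post-war mean and standard deviation discussed above can be reproduced directly.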

Perhaps the recent uptick among working-class whites can be blamed, in part, on loss of “virtue”, technological change, and globalization — as Case and Deaton claim. But they fail to notice the bigger elephant in the room: the destructive role of government.

Technological change and globalization simply reinforce the disemployment effects of the long-term decline in the rate of economic growth. I’ve addressed the decline many times, most recently in “Presidents and Economic Growth“. The decline has three main causes, all attributable to government action, which I’ve assessed in “The Rahn Curve Revisited“: the rise in government spending as a fraction of GDP, the rise in the number of regulations on the books, and the (unsurprising) effect of those variables on private business investment. The only silver lining has been a decline in the rate of inflation, which is unsurprising in view of the general slow-down of economic growth. Many jobs may have disappeared because of technological change and many jobs may have been “shipped overseas”, but there would be a lot more jobs if government had kept its hands off the economy and out of Americans’ wallets.

Moreover, the willingness of Americans — especially low-skill Americans — to seek employment has been eroded by various government programs: aid to families with dependent children (a boon to unwed mothers and a bane to family stability), food stamps, disability benefits, the expansion of Medicaid, subsidized health-care for “children” under the age of 26, and various programs that encourage women to work outside the home, thus fostering male unemployment.

Economist Edward Glaeser puts it this way:

The rise of joblessness—especially among men—is the great American domestic crisis of the twenty-first century. It is a crisis of spirit more than of resources. The jobless are far more prone to self-destructive behavior than are the working poor. Proposed solutions that focus solely on providing material benefits are a false path. Well-meaning social policies—from longer unemployment insurance to more generous disability diagnoses to higher minimum wages—have only worsened the problem; the futility of joblessness won’t be solved with a welfare check….

The New Deal saw the rise of public programs that worked against employment. Wage controls under the National Recovery Act made it difficult for wages to fall enough to equilibrate the labor market. The Wagner Act strengthened the hand of unions, which kept pay up and employment down. Relief efforts for the unemployed, including federal make-work jobs, eased the pressure on the jobless to find private-sector work….

… In 2011, more than one in five prime-age men were out of work, a figure comparable with the Great Depression. But while employment came back after the Depression, it hasn’t today. The unemployment rate may be low, but many people have quit the labor force entirely and don’t show up in that number. As of December 2016, 15.2 percent of prime-age men were jobless—a figure worse than at any point between World War II and the Great Recession, except during the depths of the early 1980s recession….

Joblessness is disproportionately a condition of the poorly educated. While 72 percent of college graduates over age 25 have jobs, only 41 percent of high school dropouts are working. The employment-rate gap between the most and least educated groups has widened from about 6 percent in 1977 to almost 15 percent today….

Both Franklin Roosevelt and Lyndon Johnson aggressively advanced a stronger safety net for American workers, and other administrations largely supported these efforts. The New Deal gave us Social Security and unemployment insurance, which were expanded in the 1950s. National disability insurance debuted in 1956 and was made far more accessible to people with hard-to-diagnose conditions, like back pain, in 1984. The War on Poverty delivered Medicaid and food stamps. Richard Nixon gave us housing vouchers. During the Great Recession, the federal government temporarily doubled the maximum eligibility time for receiving unemployment insurance.

These various programs make joblessness more bearable, at least materially; they also reduce the incentives to find work. Consider disability insurance. Industrial work is hard, and plenty of workers experience back pain. Before 1984, however, that pain didn’t mean a disability check for American workers. After 1984, though, millions went on the disability rolls. And since disability payments vanish if the disabled person starts earning more than $1,170 per month, the disabled tend to stay disabled…. Disability insurance alone doesn’t entirely explain the rise of long-term joblessness—only one-third or so of jobless males get such benefits. But it has surely played a role.

Other social-welfare programs operate in a similar way. Unemployment insurance stops completely when someone gets a job, … [thus] the unemployed tend to find jobs just as their insurance payments run out. Food-stamp and housing-voucher payments drop 30 percent when a recipient’s income rises past a set threshold by just $1. Elementary economics tells us that paying people to be or stay jobless will increase joblessness….

The rise of joblessness among the young has been a particularly pernicious effect of the Great Recession. Job loss was extensive among 25–34-year-old men and 35–44-year-old men between 2007 and 2009. The 25–34-year-olds have substantially gone back to work, but the number of employed 35–44-year-olds, which dropped by 2 million at the start of the Great Recession, hasn’t recovered. The dislocated workers in this group seem to have left the labor force permanently.

Unfortunately, policymakers seem intent on making the joblessness crisis worse. The past decade or so has seen a resurgent progressive focus on inequality—and little concern among progressives about the downsides of discouraging work. Advocates of a $15 minimum hourly wage, for example, don’t seem to mind, or believe, that such policies deter firms from hiring less skilled workers. The University of California–San Diego’s Jeffrey Clemens examined states where higher federal minimum wages raised the effective state-level minimum wage during the last decade. He found that the higher minimum “reduced employment among individuals ages 16 to 30 with less than a high school education by 5.6 percentage points,” which accounted for “43 percent of the sustained, 13 percentage point decline in this skill group’s employment rate.”

The decision to prioritize equality over employment is particularly puzzling, given that social scientists have repeatedly found that unemployment is the greater evil…. One recent study estimated that unemployment leads to 45,000 suicides worldwide annually. Jobless husbands have a 50 percent higher divorce rate than employed husbands. The impact of lower income on suicide and divorce is much smaller. The negative effects of unemployment are magnified because it so often becomes a semipermanent state.

Time-use studies help us understand why the unemployed are so miserable. Jobless men don’t do a lot more socializing; they don’t spend much more time with their kids. They do spend an extra 100 minutes daily watching television, and they sleep more. The jobless also are more likely to use illegal drugs….

Joblessness and disability are also particularly associated with America’s deadly opioid epidemic…. The strongest correlate of those deaths is the share of the population on disability. That connection suggests a combination of the direct influence of being disabled, which generates a demand for painkillers; the availability of the drugs through the health-care system; and the psychological misery of having no economic future.

Increasing the benefits received by nonemployed persons may make their lives easier in a material sense but won’t help reattach them to the labor force. It won’t give them the sense of pride that comes from economic independence. It won’t give them the reassuring social interactions that come from workplace relationships. When societies sacrifice employment for a notion of income equality, they make the wrong choice. [“The War on Work — And How to End It“, City Journal, special issue: The Shape of Work to Come 2017]

In sum, the rising suicide rate — whatever its significance — is a direct and indirect result of government policies. “We’re from the government and we’re here to help” is black humor, at best. The left is waging a war on white males. But the real war — the war that kills — is hidden from view behind the benign facade of governmental “compassion”.

Having said all of that, I will end on a cautiously positive note. There still is upward mobility in America. Not all working-class people are destined for suicidal despair, only those at the margin who have pocketed the fool’s gold of government handouts.


Other related posts:
Bubbling Along
Economic Mobility Is Alive and Well in America
H.L. Mencken’s Final Legacy
The Problem with Political Correctness
“They Deserve to Die”?
Mencken’s Pearl of Wisdom
Class in America
Another Thought or Two about Class
The Midwest Is a State of Mind

Another Case of Cultural Appropriation

Maverick Philosopher makes an excellent case for cultural appropriation. I am here to make a limited case against it.

There is an eons-old tradition that marriage is a union of man and woman, which was shared by all religions and ethnicities until yesterday, on the time-scale of human existence. Then along came some homosexual “activists” and their enablers (mainly leftists, always in search of “victims”) to claim that homosexuals can marry.

This claim ignores the biological and deep social basis of marriage, which is the procreative pairing of male and female and the resulting formation of the basic social unit: the biologically bonded family.

Homosexual “marriage” is, by contrast, a wholly artificial conception. It is the ultimate act of cultural appropriation. Its artificiality is underscored by the fact that a homosexual “marriage” seems to consist of two “wives” or two “husbands”, in a rather risible bow to traditional usage. Why not “wusbands” or “hives”?


Related posts:
In Defense of Marriage
The Myth That Same-Sex “Marriage” Causes No Harm
Getting “Equal Protection” Right
Equal Protection in Principle and Practice

Lincoln Was Wrong

Michael Stokes Paulsen and his son Luke opine:

[A]t the heart of the Civil War, the crisis that triggered it, and the changes that it brought were enormous constitutional issues. Indeed, it is no exaggeration to say that the Civil War was fought over the meaning of the Constitution, and over who would have the ultimate power to decide that meaning. The Civil War decided—on the battlefields rather than in the courts—the most important constitutional questions in our nation’s history: the nature of the Union under the Constitution, the status and future of slavery, the powers of the national government versus the states, the supremacy of the Constitution, and the wartime powers of the president as commander in chief. It was the Civil War, not any subsequent judicial decision, that “overruled” the Supreme Court’s atrocious decision in Dred Scott v. Sandford creating a national constitutional right to own slaves….

The United States is the nation it is today because of Lincoln’s unwavering commitment to the Constitution as governing a single, permanent nation and forbidding secession. Lincoln’s vision of Union is so thoroughly accepted today that we forget how hotly disputed it was for the first seventy years of our nation’s history. The result was hardly inevitable. Lincoln’s vision and resolve saved the nation. Lincoln’s nationalist views have shaped every issue of federalism and sovereignty for the past one hundred fifty years. Compared with the constitutional issues over which the Civil War was fought, today’s disputes over federal- versus-state power are minor-league ball played out on a field framed by Lincoln’s prevailing constitutional vision of the United States as one nation, indivisible.

On the president’s constitutional duty: Lincoln understood his oath to impose an absolute personal moral and legal duty not to cave in to wrong, destructive views of the Constitution. He fought on the campaign trail for his understanding of Union and of the authority of the national government to limit the spread of slavery. Once in office, he understood his oath to impose on him an irreducible moral and legal duty of faithful execution of the laws, throughout the Union. It was a duty he could not abandon for any reason. [“The Great Interpreter”, University of St. Thomas (Minnesota) Research Paper No. 15-09, April 17, 2017]

Whence Lincoln’s view of the Union? This is from the Paulsens’ book, The Constitution: An Introduction:

Lincoln was firmly persuaded that secession was unconstitutional. Immediately upon taking office as President, in his First Inaugural Address, Lincoln— a careful constitutional lawyer— laid out in public his argument as to why secession was unconstitutional: The Constitution was the supreme law of the land, governing all the states. The Constitution did not provide that states could withdraw from the Union, and to infer such a right was contrary to the letter and spirit of the document. The Constitution’s Preamble announced the objective of forming a “more perfect Union” of the states than had existed under the Articles of Confederation, which themselves had said that the Union would be “perpetual.” Moreover, the Constitution created a true national government, not a mere “compact,” league, or confederacy— in fact, it explicitly forbade states from entering into alliances, confederacies, or treaties outside of national authority. The people of the United States, taken as a whole, were sovereign, not the states.

It followed from these views, Lincoln argued, that “no State upon its own mere motion can lawfully get out of the Union; that resolves and ordinances to that effect are legally void, and that acts of violence within any State or States against the authority of the United States are insurrectionary or revolutionary, according to circumstances.” Purported secession was simply an illegal— unconstitutional— rebellion against the Union.

Lincoln’s position, which the Paulsens seem to applaud, is flawed at its root. The Constitution did not incorporate the Articles of Confederation; it supplanted them. The “perpetual Union” of the Articles vanished into thin air upon the adoption of the Constitution. Moreover, the “more perfect Union” of the Constitution’s preamble is merely aspirational, as are the desiderata that follow it:

establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity.

“More perfect”, if it means anything, means that the Constitution created a central government where there was none before. The Constitution is silent about perpetuity. It is silent about secession. Therefore, one must turn elsewhere to find (or reject) a legal basis for secession, but not to the Civil War.

The Civil War “decided” the issue of secession in the same way that World War I “decided” the future of war. It was the “war to end all wars”, was it not? Therefore, tens of millions of deaths to the contrary notwithstanding, there have been no wars since the Armistice of 1918. By the same logic, the thief who steals your car or the vandal who defaces your home or the scam artist who takes your life savings has “decided” that you don’t own a car, or that your home should be ugly, or that your savings are really his. Thus does might make right, as the Paulsens would have it.

There is in fact a perfectly obvious and straightforward case for unilateral secession, which I have made elsewhere, including “A Resolution of Secession”. You should read all of it if you are a rabid secessionist — or a rabid anti-secessionist. Here are some key passages:

The Constitution is a contract — a compact in the language of the Framers. The parties to the compact are not only the States but also the central government….

Lest there be any question about the status of the Constitution as a compact, we turn to James Madison, who is often called the Father of the Constitution. Madison, in a letter to Daniel Webster dated March 15, 1833, addresses

the question whether the Constitution of the U.S. was formed by the people or by the States, now under a theoretic discussion by animated partizans.

Madison continues:

It is fortunate when disputed theories, can be decided by undisputed facts. And here the undisputed fact is, that the Constitution was made by the people, but as imbodied into the several states, who were parties to it and therefore made by the States in their highest authoritative capacity….

[I]n The Federalist No. 39, which informed the debates in the various States about ratification….

Madison leaves no doubt about the continued sovereignty of each State and its people. The remaining question is this: On what grounds, if any, may a State withdraw from the compact into which it entered voluntarily?

There is a judicial myth — articulated by a majority of the United States Supreme Court in Texas v. White (1869) — that States may not withdraw from the compact because the union of States is perpetual….

The Court’s reasoning is born of mysticism, not legality. Similar reasoning might have been used — and was used — to assert that the Colonies were inseparable from Great Britain. And yet, some of the people of the Colonies put an end to the union of the Colonies and Great Britain, on the moral principle that the Colonies were not obliged to remain in an abusive relationship. That moral principle is all the more compelling in the case of the union known as the United States, which — mysticism aside — is nothing more than the creature of the States, as authorized by the people thereof.

In fact, the Constitution supplanted the Articles of Confederation and Perpetual Union, by the will of only nine of the thirteen States….

[I]n a letter to Alexander Rives dated January 1, 1833, Madison says that

[a] rightful secession requires the consent of the others [other States], or an abuse of the compact, absolving the seceding party from the obligations imposed by it.

An abuse of the compact most assuredly necessitates withdrawal from it, on the principle of the preservation of liberty, especially if that abuse has been persistent and shows no signs of abating. The abuse, in this instance, has been and is being committed by the central government.

The central government is both a creature of the Constitution and a de facto party to it, as co-sovereign with the States and supreme in its realm of enumerated and limited powers. One of those powers enables the Supreme Court of the United States to decide “cases and controversies” arising under the Constitution, which alone makes the central government a responsible party. More generally, the high officials of the central government acknowledge the central government’s role as a party to the compact — and the limited powers vested in them — when they take oaths of office requiring them to uphold the Constitution.

Many of those high officials have nevertheless committed myriad abuses of the central government’s enumerated and limited powers. The abuses are far too numerous to list in their entirety. The following examples amply justify the withdrawal of the State of _______________ from the compact….

We, therefore, the representatives of the people of _______________ do solemnly publish and declare that this State ought to be free and independent; that it is absolved from all allegiance to the government of the United States; that all political connection between it and government of the United States is and ought to be totally dissolved; and that as a free and independent State it has full power to levy war, conclude peace, contract alliances, establish commerce, and to do all other acts and things which independent States may of right do. And for the support of this Declaration, with a firm reliance on the protection of divine Providence, we mutually pledge to each other our lives, our fortunes and our sacred honor.


Related posts:
Secession
Secession Redux
A New Cold War or Secession?
The Real Constitution and Civil Disobedience
A Declaration of Independence
First Principles
The Constitution: Original Meaning, Corruption, and Restoration
The Southern Secession Reconsidered
A Declaration of Civil Disobedience
Our Perfect, Perfect Constitution
Reclaiming Liberty throughout the Land
Secession, Anyone?
Secession for All Seasons
A New Constitution for a New Republic
Restoring Constitutional Government: The Way Ahead
Secession Made Easy
More about “Secession Made Easy”
How Libertarians Ought to Think about the Constitution
The States and the Constitution
Judicial Supremacy: Judicial Tyranny
The Answer to Judicial Supremacy
Turning Points
A Resolution of Secession
Polarization and De-facto Partition

Roundup: Civil War, Solitude, Transgenderism, Academic Enemies, and Immigration

Civil War II

Are Americans really in the midst of Civil War II or a Cold Civil War? It has seemed that way for many years. I have written about it in “A New (Cold) Civil War or Secession?”, “The Culture War“, “Polarization and De-facto Partition“, and “Civil War?“.* Andrew Sullivan, whom I quit following several years ago for reasons that are evident in the following quotation (my irrepressible comments are in boldface and bracketed), has some provocative things to say about the situation:

Certain truths about human beings have never changed. We are tribal creatures in our very DNA; we have an instinctive preference for our own over others, for “in-groups” over “out-groups”; for hunter-gatherers, recognizing strangers as threats was a matter of life and death. We also invent myths and stories to give meaning to our common lives. Among those myths is the nation — stretching from the past into the future, providing meaning to our common lives in a way nothing else can. Strip those narratives away, or transform them too quickly, and humans will become disoriented. Most of us respond to radical changes in our lives, especially changes we haven’t chosen, with more fear than hope. We can numb the pain with legal cannabis or opioids, but it is pain nonetheless.

If we ignore these deeper facts about ourselves, we run the risk of fatal errors. It’s vital to remember that multicultural, multiracial, post-national societies are extremely new for the human species [but they are not “societies”], and keeping them viable and stable is a massive challenge. Globally, social trust is highest in the homogeneous Nordic countries, and in America, Pew has found it higher in rural areas than cities. The political scientist Robert Putnam has found that “people living in ethnically diverse settings appear to ‘hunker down,’ that is, to pull in like a turtle.” Not very encouraging about human nature — but something we can’t wish away, either. In fact, the American elite’s dismissal of these truths, its reduction of all resistance to cultural and demographic change as crude “racism” or “xenophobia,” only deepens the sense of siege many other Americans feel….

… Within the space of 50 years, America has gone from segregation to dizzying multiculturalism; … from homosexuality as a sin [or dangerous aberration] to homophobia as a taboo; from Christianity being the common culture to a secularism no society has ever sustained before ours [but mainly within the confines of the internet-media-academic complex, except where they have successfully enlisted government in the task of destroying social norms]….

And how can you seriously regard our political system and culture as worse than ever before in history? How self-centered do you have to be to dismiss the unprecedented freedom for women, racial minorities, and homosexuals? [How self-centered do you have to be to dismiss the fact that much of that “unprecedented freedom” has been bought at the expense of freedom of speech, freedom of association, property rights, and advancement based on merit — things that are at the very heart of liberty?]….

If the neo-reactionaries were entirely right, the collapse of our society would surely have happened long before now [Strawman alert: How does Sullivan know when “society” would have collapsed?]. But somehow, an historically unprecedented mix of races and cultures hasn’t led to civil war in the United States. [Not a shooting war, but a kind of civil war nevertheless.] … America has assimilated so many before, its culture churning into new forms, without crashing into incoherence. [Strawman alert 2: “America”, not being a “society”, doesn’t have a “culture”. But some “cultures” (e.g., welfare-dependency, “hate whitey”, drugs, political correctness) are ascendant, for those with eyes to see.] [“The Reactionary Temptation“, New York, April 30, 2017]

All in all, I would say that Mr. Sullivan protests too much. He protests so much that he confirms my view that America is smack in the middle of a Cold Civil War. (Despite that, and the fatuousness of Mr. Sullivan’s commentary, I am grateful to him for a clear explanation of the political philosophy of Leo Strauss,** the theme of which had heretofore been obscure to me.)

For other, more realistic views of the current state of affairs, see the following (listed in chronological order):

David French, “A Blue State ‘Secession’ Model I Can Get Behind” (National Review, March 19, 2017)

Daniel Greenfield, “The Civil War Is Here” (Frontpage Magazine, March 27, 2017)

Daniel Greenfield, “Winning the Civil War of Two Americas” (Frontpage Magazine, April 4, 2017)

Rick Moran, “War Between U.S. Government and Sanctuary Cities Heating Up” (American Thinker, April 10, 2017)

Angelo M. Codevilla, “The Cold Civil War” (Claremont Review of Books, April 25, 2017)


Solitude for the Masses

Paul Kingsnorth reviews Michael Harris’s Solitude in “The End of Solitude: In a Hyperconnected World, Are We Losing the Art of Being Alone?” (New Statesman, April 26, 2017):

Harris has an intuition that being alone with ourselves, paying attention to inner silence and being able to experience outer silence, is an essential part of being human….

What happens when that calm separateness is destroyed by the internet of everything, by big-city living, by the relentless compulsion to be with others, in touch, all the time? Plenty of people know the answer already, or would do if they were paying attention to the question. Nearly half of all Americans, Harris tells us, now sleep with their smartphones on their bedside table, and 80 per cent are on their phone within 15 minutes of waking up. Three-quarters of adults use social networking sites regularly. But this is peanuts compared to the galloping development of the so-called Internet of Things. Within the next few years, anything from 30 to 50 billion objects, from cars to shirts to bottles of shampoo, will be connected to the net. The internet will be all around you, whether you want it or not, and you will be caught in its mesh like a fly. It’s not called the web for nothing….

What is the problem here? Why does this bother me, and why does it bother Harris? The answer is that all of these things intrude upon, and threaten to destroy, something ancient and hard to define, which is also the source of much of our creativity and the essence of our humanity. “Solitude,” Harris writes, “is a resource.” He likens it to an ecological niche, within which grow new ideas, an understanding of the self and therefore an understanding of others.

The book is full of examples of the genius that springs from silent and solitary moments. Beethoven, Dostoevsky, Kafka, Einstein, Newton – all developed their ideas and approach by withdrawing from the crowd….

Yet it is not only geniuses who have a problem: ordinary minds like yours and mine are threatened by the hypersocial nature of always-on urbanity….

So, what is to be done about all this? That’s the multibillion-dollar question, but it is one the book cannot answer. Harris spends many pages putting together a case for the importance of solitude and examining the forces that splinter it today….

Under the circumstances – and these are our circumstances – the only honest conclusion to draw is that the problem, which is caused primarily by the technological direction of our society, is going to get worse. There is no credible scenario in which we can continue in the same direction and not see the problem of solitude, or lack of it, continue to deepen….

… Short of a collapse so severe that the electricity goes off permanently, there is no escape from what the tech corporations and their tame hive mind have planned for us. The circle is closed, and the net is being hauled in. May as well play another round of Candy Crush while we wait to be dragged up on to the deck.

Well, the answer doesn’t lie in the kind of defeatism exemplified by Harris (whose book is evidently full of diagnosis and empty of remedy) or Kingsnorth. It’s up to each person to decide whether to enlarge his scope of solitude or to be defeated by the advance of technology and the breakdown of truly human connections.

But it’s not an all-or-nothing choice. Compromise is obviously necessary when it comes to making a living these days. That still leaves a lot of room for solitude, the practice and benefits of which I have addressed in “Flow“, “In Praise of Solitude“, “There’s Always Solitude“, and “The Glory of the Human Mind“.


More about the Transgender Fad

Is the transgender fad fading away, or is it just that I’m spending more time in solitude? Anyway, I was reminded of the fad by “Most Children Who Identify As Transgender Are Faking It, Says ‘Gender Clinic’ Psychiatrist” (The College Fix, April 17, 2017). It’s a brief post and the title tells the tale. So I’ll turn to my own post on the subject, “The Transgender Fad and Its Consequences“. Following a preamble and some long quotations from authoritative analysis of transgenderism, I continue with this:

Harm will come not only to those who fall prey to the transgender delusion, but also to those who oppose its inevitable manifestations:

  • mandatory sex mingling in bathrooms, locker rooms, and dorm rooms — an invitation to predators and a further weakening of the norms of propriety that help to instill respect toward other persons
  • quotas for hiring self-described transgender persons, and for admitting them to universities, and for putting them in the ranks of police and armed forces, etc.
  • government-imposed penalties for saying “hateful and discriminatory” things about gender, the purpose of which will be to stifle dissent about the preceding matters
  • government-imposed penalties for attempts to exercise freedom of association, which is an unenumerated right under the Constitution that, properly understood, includes the right to refuse business from anyone at any time and for any reason (including but far from limited to refusing to serve drug-addled drag queens whose presence will repel other customers)….

How did America get from the pre-Kinsey view of sex as a private matter, kept that way by long-standing social norms, to the let-it-all-hang-out (literally) mentality being pushed by elites in the media, academy, and government?

I attribute much of it to the capitalist paradox. Capitalism — a misnomer for an economic system that relies mainly on free markets and private-property rights — encourages innovation, entrepreneurship, and economic growth. One result is that a “capitalist” economy eventually produces enough output to support large numbers of persons who don’t understand that living off the system and regulating it heavily will bring it down….

The social paradox is analogous to the capitalist paradox. Social relations are enriched and made more productive by the toleration of some new behaviors. But to ensure that a new behavior is enriching and productive, it must be tested in the acid of use. Shortcuts — activism cloaked in academese, punditry, and political posturing — lead to the breakdown of the processes by which behaviors become accepted because they are enriching and productive.

In sum, the capitalist paradox breeds the very people who are responsible for the social paradox: those who are rich enough to be insulated from the vicissitudes of daily life, where living among and conversing with similar folk reinforces a distorted view of the real world.

It is the cosseted beneficiaries of capitalism who lead the way in forcing Americans to accept as “natural” and “of right” behavior that in saner times was rarely engaged in and even more rarely flaunted. That restraint wasn’t just a matter of prudery. It was a matter of two things: respect for others, and the preservation of norms that foster restraint.

How quaint. Avoiding offense to others, and teaching one’s children that normal behavior helps them to gain the acceptance and trust of others. Underlying those understood motivations was a deeper one: Children are susceptible creatures, easily gulled and led astray — led into making mistakes that will haunt them all their lives. There was, in those days, an understanding that “one thing leads to another.”…

… If the Kennedy Court of Social Upheaval continues to hold sway, its next “logical” steps will be to declare the illegality of sexual identifiers and the prima facie qualification of any person for any job regardless of “its” mental and physical fitness for the job….

… [T]he parents of yesteryear didn’t have to worry about the transgender fad, but they did have to worry about drinking, drug-taking, and sex. Not everyone who “experimented” with those things went on to live a life of dissolution, shame, and regret. But many did. And so, too, will the many young children, adolescents, and young adults who succumb to the fad of transgenderism….

When did it all begin to go wrong? See “1963: The Year Zero.”

Thank you for working your way through this very long quotation from my own blog. But it just has to be said again and again: Transgenderism is a fad, a destructive fad, and a fad that is being used by the enemies of liberty to destroy what little of it is left in America.


The Academic Enemies of Liberty

Kurt Schlichter quite rightly says that “Academia Is Our Enemy So We Should Help It Commit Suicide“:

If Animal House were to be rebooted today, Bluto – who would probably be updated into a differently–abled trans being of heft – might ask, “See if you can guess what am I now?” before expelling a whole mass of pus-like root vegetable on the WASPrivileged villains and announcing, “I’m a university – get it?”

At least popping a zit gets rid of the infection and promotes healing. But today, the higher education racket festers on the rear end of our culture, a painful, useless carbuncle of intellectual fraud, moral bankruptcy, and pernicious liberal fascism that impoverishes the young while it subsidizes a bunch of old pinkos who can’t hack it at Real World U….

If traditional colleges performed some meaningful function that only they could perform, then there might be a rationale for them in the 21st Century. But there’s not. What do four-year colleges do today?

Well, they cater to weenies who feel “unsafe” that Mike Pence is speaking to their graduates. Seventy-some years ago, young people that age were feeling unsafe because the Wehrmacht was trying to kill them on Omaha Beach….

And in their quest to ensure their students’ perpetual unemployment, colleges are now teaching that punctuality is a social construct. Somewhere, a Starbucks manager is going to hear from Kaden the Barista that, “I like, totally couldn’t get here for my shift on time because, like intersectionality of my experience as a person of Scandinavianism and stuff. I feel unsafe because of your racist vikingaphobia and tardiness-shaming.”

Academia is pricing itself out of reach even as the antics of its inhabitants annoy and provoke those of us whose taxes already pick up a big chunk of the bill even without the “free college” okie-doke….

The quarter million dollar academic vacation model is economically unsustainable and poisonous to our culture. The world of Animal House was a lot more fun when it didn’t mean preemptive bankruptcy for its graduates and the fostering of a tyrannical training ground for future libfascists. It’s time to get all Bluto on the obsolete boil that is academia; time to give it a squeeze. [Townhall, April 13, 2017]

Cue my post, “Subsidizing the Enemies of Liberty“:

If there is a professional class that is almost solidly aligned against liberty it is the teachers and administrators who control the ideas that are pumped into the minds of students from kindergarten through graduate school. How are they aligned against liberty? Most of them are leftists, which means that they are statists who are dedicated to the suppression of liberty in favor of current left-wing orthodoxies. These almost always include the coddling of criminals, unrequited love for America’s enemies, redistribution of income and jobs toward less-productive (and non-productive) persons, restrictions on speech, and the destruction of civil society’s bulwarks: religion, marriage, and family.

In any event, spending on education in the United States amounted to $1.1 trillion in 2010, about 8 percent of GDP.  Most of that $1.1 trillion — $900 billion, in fact — was spent on public elementary and secondary schools and public colleges and universities. In other words, your tax dollars support the leftists who teach your children and grandchildren to bow at the altar of the state, to placate the enemies of liberty at home and abroad, and to tear down the traditions that have bound people in mutual trust and respect….

And what do tax-paying Americans get for their money? A strong left-wing bias, which is inculcated at universities and spreads throughout public schools (and a lot of private schools). This has been going on, in earnest, since the end of World War II. And, yet, the populace is roughly divided between hard-headed conservatives and squishy-minded “liberals.” The persistence of the divide speaks well for the dominance of nature over nurture. But it does not change the fact that American taxpayers have been subsidizing the enemies of liberty who dominate the so-called education system in this country.

See also “Academic Bias“, “Politics, Sophistry, and the Academy“, “Academic Ignorance“, and John C. Goodman’s “Brownshirts, Subsidized with Your Tax Dollars” (Townhall, May 20, 2017).


The High Cost of Untrammeled Immigration

The third entry in “Not-So-Random Thoughts (XVIII)” is about illegal immigration. It opens with this:

Ten years ago, I posted “An Immigration Roundup”, a collection of 13 posts dated March 29 through September 22, 2006. The bottom line: to encourage and allow rampant illegal immigration borders on social and economic suicide. I remain a hardliner because of the higher crime rate among Hispanics (“Immigration and Crime“), and because of Steven Camarota’s “So What Is the Fiscal and Economic Impact of Immigration?“ [National Review, September 22, 2016].

I suggest that you go to Camarota’s article, which I quote at length, to see the evidence that he has compiled. For more facts — as opposed to leftish magical thinking about immigration — see also “Welfare: Who’s on It, Who’s Not” (Truth Is Justice, April 16, 2017), which draws on

a report called “Welfare Use by Immigrant and Native Households.” The report’s principle finding is that fully 51 percent of immigrant households receive some form of welfare, compared to an already worrisomely high 30 percent of American native households. The study is based on the most accurate data available, the Census Bureau’s Survey of Income and Program Participation (SIPP). It also reports stark racial differences in the use of welfare programs.

I’ll throw in some excerpts:

Needless to say, the percentage of immigrants using some form of welfare varies enormously according to the part of the world from which they come. Rates are highest for households from Central America and Mexico (73 percent), the Caribbean (51 percent), and Africa (48 percent). Those from East Asia (32 percent), Europe (26 percent), and South Asia (17 percent) have the lowest rates….

A majority of native black and Hispanic households are on some form of means-tested welfare, compared to just 23 percent of native white households….

A striking 82 percent of black households with children receive welfare–double the white rate. Hispanic families are not far behind blacks….

Among natives, blacks receive cash handouts at more than three times the white rate; Hispanics at more than twice the white rate. Rates for black and Hispanic immigrants are relatively lower due to often-ignored restrictions on immigrant use of these programs….

Among all households, native blacks and Hispanics receive food handouts at three times the white rate; for Hispanic immigrants, the figure is four times the white rate. Among households with children, nearly all immigrant Hispanics–86 percent–get food aid. Native blacks and Hispanics aren’t far behind, with rates of 75 and 72 percent, respectively.

The takeaway: Tax-paying citizens already heavily subsidize native-born blacks and Hispanics. Adding welfare-dependent immigrants — especially from south of the border — adds injury to injury.

As long as the welfare state exists, immigration should be tightly controlled so that the United States admits only those persons (with their families) who have verifiable offers of employment from employers in the United States. Further, an immigrant’s income should be high enough to ensure that (a) he is unlikely to become dependent on any welfare program (federal, State, or local) and (b) he is likely to pay at least as much in taxes as he is likely to absorb in the way of schooling for his children, Social Security and Medicare benefits, etc.
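To make the two-part test concrete, here is a minimal sketch of my own (not part of the original post); the welfare-risk cutoff and the dollar figures are hypothetical placeholders, not numbers drawn from the post.

```python
# A minimal, purely illustrative sketch of the two-part admission test
# described above. The welfare-risk cutoff and dollar amounts are
# hypothetical placeholders.

def admissible(has_verified_job_offer: bool,
               expected_annual_taxes: float,
               expected_annual_public_costs: float,
               welfare_dependency_risk: float) -> bool:
    """True only if, given a verifiable offer of employment,
    (a) dependence on any welfare program is unlikely, and
    (b) expected taxes at least cover expected public costs
        (schooling, Social Security, Medicare, etc.)."""
    return (has_verified_job_offer
            and welfare_dependency_risk < 0.05            # hypothetical cutoff
            and expected_annual_taxes >= expected_annual_public_costs)

# Hypothetical applicant:
print(admissible(True, expected_annual_taxes=12_000,
                 expected_annual_public_costs=10_000,
                 welfare_dependency_risk=0.02))           # True
```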

(See also: Bob le Flambeur, “Against Open Borders“, Rightly Considered, February 8, 2017.)


* Sharp-eyed readers will notice that with this post I am adopting a “new” way of using quotation marks. The American convention is to enclose commas and periods within quotation marks, even where the commas and periods are not part of the quoted text or other material that belongs inside quotation marks (e.g., the title of a post). The American convention creates some ambiguity and awkwardness that is avoided by the British convention, which is to enclose inside quotation marks only that punctuation which is part of the quoted text or other material.

** This is from the article by Sullivan cited in the first section of this post:

[Leo] Strauss’s idiosyncratic genius defies easy characterization, but you could argue, as Mark Lilla did in his recent book The Shipwrecked Mind, that he was a reactionary in one specific sense: A Jewish refugee from Nazi Germany, Strauss viewed modernity as collapsing into nihilism and relativism and barbarism all around him. His response was to go back to the distant past — to the works of Plato, Aristotle, and Maimonides, among others — to see where the West went wrong, and how we could avoid the horrific crimes of the 20th century in the future.

One answer was America, where Strauss eventually found his home at the University of Chicago. Some of his disciples — in particular, the late professor Harry Jaffa — saw the American Declaration of Independence, with its assertion of the self-evident truth of the equality of human beings, as a civilizational high point in human self-understanding and political achievement. They believed it revived the ancient Greek and Roman conception of natural law. Yes, they saw the paradox of a testament to human freedom having been built on its opposite — slavery — but once the post–Civil War constitutional amendments were ratified, they believed that the American constitutional order was effectively set forever, and that the limited government that existed in the late-19th and early-20th centuries required no fundamental change.

The Impeachment Trap?

Old adage: Be careful what you wish for; you may just get it.


Here’s the recipe for Impeachment à la Mode 2017:

Take a massive, vocal, determined, and politically experienced resistance — spearheaded (symbolically, at least) by a former president and a defeated presidential candidate, and funded by leftists with deep pockets (e.g., George Soros).

Add one unwitting president — a political neophyte who isn’t used to having his every word and deed challenged and psychoanalyzed, and who arms his enemies and hands them the ammunition they need for a political assassination.

Add a pinch of intramural hate, discontent, and demoralization.

Combine with anti-Trump conservatives whose opposition has survived Trump’s many early successes.

Stir with senior GOP leaders in Congress who don’t want their majorities to go down with Trump, and who will desert him to avoid that fate.

Bake in the oven of leftist-dominated media for a few months, and Bob’s your uncle.

I chose the epigram for this post before I came across “The Impeachment Trap: Be Careful What You Wish For,” by a blogger (Jeff Alson, In These Times) whom I would characterize as a member of the “resistance.” In fact, he has anticipated much of what I planned to say here, so I will now quote him at some length:

I believe it would be a major strategic blunder for the Democratic Party to fall for what I call the Impeachment Trap—the powerful temptation to lead the charge for impeachment without considering the strategic implications….

The simple majority necessary to impeach in the House of Representatives, as well as the two-thirds majority that is required to convict in the Senate, can be achieved with the support of most or all Democrats and a minority of Republicans. Unfortunately, this scenario would offer enormous political benefits to the Republicans.

If Trump were impeached and convicted, Vice President Mike Pence, a right-wing, evangelical ideologue, would be a much more reliable and competent rubber stamp for the conservative policy agenda. Trump, for all his failings, cannot be counted on to support conservative Republican orthodoxy. While his cabinet picks and early policy proposals have largely catered to right-wing ideology, his policy flip-flops and incompetence make him a very unreliable partner for congressional Republicans…. Pence, on the other hand, who was given a 99 percent rating from the American Conservative Union, would be much more likely to cut Social Security, push National Right to Work, and try to restrict gay marriage, and would probably treat immigrants and refugees just as badly, in order to court the Trump base.

Impeachment would also help restore the damaged Republican brand. Trump lost the popular vote by the largest margin of any incoming president in history. His administration is mired in incompetence, chaos, and suspicion, and has already sparked a massive public resistance. His public approval rating hovers around 40 percent, by far a record low for a new president. If these trends continue, his presidency will be a massive albatross around the GOP’s neck in future elections.

By contrast, the robot-like Pence—despite his extreme right-wing views—would be packaged as a comforting return to normalcy. The relief at no longer having an egotistical lunatic at the helm could provide Pence with a long and generous public opinion honeymoon. Republicans could claim that Trump was “never one of theirs,” and approach the 2020 campaign with the benefit of incumbency and without Trump’s liabilities.

Democratic ownership of impeachment would also cement the loyalty of working-class Trump voters to the Republican Party….

Of course, Republicans may well decide that impeachment is in their best interests and lead the charge. This is a slightly better scenario for Democrats.

… With Republicans owning impeachment, Trump supporters would be livid with the Republican Party, some withdrawing from politics altogether or splintering off to support minor parties, others perhaps willing to reconsider a Democratic Party refocused on economic justice. The combination of Republicans losing core Trump supporters and ongoing demographic trends would put Democrats in a very favorable position for 2018 and 2020 and beyond….

Paradoxical as it may seem, however, the best scenario for Democrats is one in which they resist the impeachment trap, the Republicans stand by their president, and Trump, odious as he may be, remains in office…. From a policy perspective, a paralyzed Trump administration would be far better than a more competent and reliably right-wing Pence presidency. Politically, Trump would become a black eye for the GOP, and the Democratic opposition would remain energized, all of which would favor the Democrats in both 2018 and 2020….

It won’t be easy to resist the temptation to humiliate the worst president in modern history, but Democrats must muster the discipline to resist the Impeachment Trap, insist that Republicans be the ones to take responsibility for their shameful president, and mobilize to build real grassroots democratic power for 2018, 2020 and beyond.

A key issue, for Republicans, is whether Trump Democrats would go "home" to the Democrat Party. I am less convinced of that than Alson is. The sooner Trump is removed, the more time Pence has to do things that will keep Trump Democrats in the Republican fold. Further, it seems unlikely that more than a small fraction of Trump Democrats would revert to a party whose next presidential candidate is likely to be Elizabeth Warren.

Most important, from the GOP’s point of view, is Pence’s image as sedate and “presidential” compared with Trump. This would go down well with a lot of voters in the center and center-right. It was their abandonment of Trump, I believe, that caused him to win several reliably Red States by smaller margins than Romney did in 2012.

Where does this leave me? All signs point to a completely ineffective Trump presidency from here on out. I doubt that he could now replace a retiring or deceased Supreme Court justice, for example. There’s much in Trump’s agenda worth pursuing (and some that isn’t). But if the agenda is to be rescued, Republicans should act quickly, replace Trump with Pence, and get on with moving the federal government’s policies rightward in an orderly way.

The early "chaos" bruited about by the left-wing media has become real chaos, and it's hurting the conservative cause. That cause is what I care about, not Donald J. Trump.

Impeachment may be a trap for Democrats, but it may be Republicans’ only way out of a trap.


Related reading: Rod Dreher, “Shut Your Mouth, Do Your Job,” The American Conservative, May 19, 2017

A Word of Warning to Leftists (and Everyone Else)

It is customary in democratic countries to deplore expenditure on armaments as conflicting with the requirements of the social services. There is a tendency to forget that the most important social service that a government can do for its people is to keep them alive and free.

Marshal of the Royal Air Force Sir John Cotesworth Slessor,
Strategy for the West

I am not a pessimist by nature, but I do consider myself a realist. Despite Trump, the U.S. is rapidly moving in the direction of the British Isles and continental Europe: welfare-dependent, nannied to the nth degree, and politically correct to the point of feminization.

This "ambiance" will dominate the military-diplomatic sphere under the next Obama — who will arrive four or eight years from now, given the fickleness of the American electorate. The U.S. will then be open to military blackmail by a determined and strong regime in search of economic gains (e.g., de facto control of oil-rich regions) and political dominance over vast regions if not the globe. Inasmuch as the U.S. is properly and profitably engaged in the world (for the ultimate benefit of American consumers), the result will be vast harm to Americans' interests.

Military blackmail by a state (or states acting in opportunistic coordination) might be accompanied by or follow in the wake of a major terrorist attack by an emboldened non-state actor. (I remember reading of evidence that bin Laden was emboldened to order the 9/11 strikes because previous U.S. responses to terrorism suggested softness.)

But such developments will come as a surprise to most Americans, whose solipsism blinds them to the realities of a world (out there) that hasn't been — and won't be — turned into a kindergarten by left-wing university administrators and faculty, generations of indoctrinated public-school teachers, tens of millions of their indoctrinees, politicians and otherwise hard-headed businessmen eager to signal their virtue by favoring political correctness over actual ability and accomplishment, and the media chorus that eagerly encourages and sustains all of this with its skillfully slanted reportage.

When the surprise occurs, it probably will be too late to do anything about it, despite the sudden mugging by reality that will convert many leftists into conservatives.


Recommended reading and viewing:

Simon Sinek, “Millennials in the Workplace,” Inside Quest, October 29, 2016

Chenchen Zhang, “The Curious Rise of the ‘White Left’ as a Chinese Internet Insult,” openDemocracy, May 11, 2017

Mark Steyn, “Tomorrow By the Numbers,” Steyn Posts, May 10, 2017

Katie Kieffer, “Tick, Tock: EMP War Looms,” Townhall, May 15, 2017

Bruce Schneier, "Who Is Publishing NSA and CIA Secrets, and Why?" Schneier on Security, May 15, 2017


Related posts:
Liberalism and Sovereignty
The Media, the Left, and War
A Grand Strategy for the United States
The Folly of Pacifism
Why We Should (and Should Not) Fight
Rating America’s Wars
Transnationalism and National Defense
The Next 9/11?
The Folly of Pacifism, Again
Patience as a Tool of Strategy
The War on Terror, As It Should Have Been Fought
Preemptive War
Preemptive War and Iran
Some Thoughts and Questions about Preemptive War
Defense as an Investment in Liberty and Prosperity
Mission Not Accomplished
The World Turned Upside Down
Defense Spending: One More Time
Presidential Treason
Walking the Tightrope Reluctantly
My Defense of the A-Bomb
Pacifism
Presidents and War

The JFK Standard

I believe that the media’s treatment of a president has more to do with his party affiliation and various “cosmetic” factors than with his policies or executive competence. To test this hypothesis (unscientifically), I constructed a table listing six factors, and gave JFK a 10 for each of them. (He was the last president to enjoy an extended honeymoon with the media, and it had to have been based more on “cosmetic” factors than anything else.) I then quickly assigned relative scores to each of JFK’s successors — and didn’t go back to change any scores. I didn’t add the scores until I had assigned every president a score for all six factors. Here’s the result:

Given the media’s long-standing bias toward Democrats, I arbitrarily gave each Democrat 10 points for party affiliation, as against zero for the Republicans. “Looks,” “wit,” and “elegance” (as seen through the eyes of the media) should be self-explanatory. “Wife” and “children” refer to contemporaneous media perceptions of each president’s wife and child or children. Jackie was the perfect First Lady, from the standpoint of looks, poise, and “culture.” And Caroline and John Jr. epitomized “adorable,” unlike the older and often unattractive (or notorious) offspring of later presidents.
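For readers who want the bookkeeping spelled out, here is a minimal sketch of the tally — not my actual table. Only JFK's scores (a 10 on every factor, as noted above) come from the text; the other presidents' scores are not reproduced.

```python
# A minimal sketch of the scoring scheme described above. Each president
# gets 0-10 on six factors; Democrats automatically get 10 for party
# affiliation, Republicans 0. Only JFK's all-10 scores are shown here.

FACTORS = ("party", "looks", "wit", "elegance", "wife", "children")

def total_score(scores: dict) -> int:
    """Sum a president's six factor scores."""
    return sum(scores[factor] for factor in FACTORS)

jfk = {factor: 10 for factor in FACTORS}   # JFK: a 10 on every factor
print(total_score(jfk))                    # 60 -- the top of the scale
```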

I’d say that the total scores are a good indication of the relative treatment — favorable, middling, and unfavorable — given each president by the media.


Related:
Facts about Presidents
Is Character Really an Issue?
Blindsided by the Truth
Rating the Presidents, Again
The Modern Presidency: A Tour of American History
Nonsense about Presidents, IQ, and War
1963: The Year Zero

Where There’s Smoke

The wrathful Gersh Kuntzman has more than two things to say about Hillary Clinton's latest p.r. push:

I voted for Clinton on Nov. 8 and thought she’d be a good president.

But she lost. And she still wants us to feel bad about that. And, worse, she’s still blaming everyone else.

On Tuesday at the Women for Women conference, she reminded us again what a flawed candidate she was last year — and what a flawed person she has always been….

She … said she would discuss the mistakes she made during the campaign — then declined to mention even one. Instead, she fell back on the usual suspects: The Russians and FBI Director James Comey, who indeed meddled in the election at the last minute.

“If the election had been on Oct. 27, I would be your President,” she said.

Boo hoo.

Sorry, Simon & Schuster may want Hillary Clinton to write the history, but I’m not about to let her re-write it. No one deserves more blame for the election debacle than Hillary Rodham Clinton.

Let us count the ways:

1. She was, indeed, untrustworthy: Remember her fainting spell at the 9/11 ceremony? Remember how long it took for her to tell the truth? Remember how that reminded every voter in America that Hillary Clinton’s first instinct is to lie? Just like she did when she claimed she had taken sniper fire during a First Lady trip to Bosnia. Just as she did when she said she never sent classified documents over her private email server.

Beyond that, she was too close to the Clinton Foundation, and didn’t have a good answer when the Associated Press reported that donors to the Foundation got an open channel to then-Secretary of State Clinton.

2. She ran a very poor campaign….

[W]hen she called half the country the “basket of deplorables,” it was pretty much over. As Mitt Romney learned four years earlier when he said 47% of the country was “freeloaders,” you’re not the smartest guy in the room if you make a gaffe as dumb as that.

3. She set up a private email server: It’s basic. The only reason to set up a private email server — and delete some of the emails on it — is because you want to hide something from the public. Clinton never provided a good answer to the simple question, “Why would you do that?”

4. Those Goldman-Sachs speeches. You can’t be a prostitute on Wall Street and then go to church on Main Street….

I don’t understand why a publishing firm would give Hillary Clinton millions of dollars to not even admit her mistakes. (Full disclosure: I have three far-more-interesting books that Simon and Schuster can have for a fraction of Clinton’s advance, including “Bad Seeds” (an unpublished novel), “Hitler Would Have Double-Parked” (an unpublished novel) and “Publish My Unpublished Novel” (an unpublished novel). So I don’t see why we can’t make a deal.)

She got what she deserved: She lost.

Now she needs to shut up and go home. [“Hillary Clinton Shouldn’t Be Writing a Book — She Should Be Drafting a Long Apology to America,” New York Daily News, May 2, 2017]

And as he makes clear elsewhere in the piece, he hates Trump.

Kuntzman omits a great deal. He could have mentioned Madame Rodham Clinton’s

internship at the law firm of Treuhaft, Walker and Burnstein, a firm well known for its support of radical causes (two of its four partners were current or former Communist Party members)

conduct as a defense attorney in a child-rape case

mysteriously prescient ability to trade cattle futures

participation in the fraudulent Whitewater land-flipping scheme

involvement in trumped-up, politically based firing of White House travel-office employees (“Travelgate“)

involvement in the illegal procuring of background-check files on persons who had been White House employees during previous GOP administrations (“Filegate“)

enabling of Bill’s sexual predation, and attempt to deflect blame from him by concocting a “vast, right-wing conspiracy,” when the real problem was the truth about Bill

appropriation of gifts that had been made to the White House, not to her personally

solicitation of gifts while running for the Senate

dereliction of duty regarding the protection of the U.S. embassy in Benghazi, Libya

I’m sure there’s more, but that’s all I can think of at the moment.

It is a depressing commentary on the state of politics in America that such a venal, mendacious, corrupt, destructively ambitious person could have risen as far as Hillary Rodham Clinton rose. I often give thanks to the Framers for inserting the Electoral College between the voters and the presidency.

Presidents and Economic Growth

There’s an old and recurring claim that Democrat presidents produce greater economic growth than Republican ones. I addressed and debunked such a claim nine years ago, saying this (in part):

Given the long, downward trend in the real rate of GDP growth, it is statistical nonsense to pin the growth rate in any given year to a particular year of a particular president’s term. It is evident that GDP growth has been influenced mainly by the cumulative, anti-growth effects of government regulation. And GDP growth, in any given year, has been an almost-random variation on a downward theme.

How random? This random:


Derived from Bureau of Economic Analysis, “Current dollar and ‘real’ GDP,” as of April 28, 2017.

The one-year lag (which is usual in such analyses) allows for the delayed effects (if any) of a president’s economic policies. The usual suspects are claiming, laughably, that the tepid growth rate in the first calendar quarter of 2017 is somehow Trump’s fault.

Anyway, here’s the real story:

This is an updated version of a graph in "The Rahn Curve Revisited", from which the following equation is taken:

Yg = 0.0275 – 0.347F + 0.0769A – 0.000327R – 0.135P, where

Yg = real rate of GDP growth in a 10-year span (annualized)

F = fraction of GDP spent by governments at all levels during the preceding 10 years

A = the constant-dollar value of private nonresidential assets (business assets) as a fraction of GDP, averaged over the preceding 10 years

R = average number of Federal Register pages, in thousands, for the preceding 10-year period

P = growth in the CPI-U during the preceding 10 years (annualized).
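For anyone who wants to plug in numbers, here is a minimal sketch that simply evaluates the fitted equation as written above. The coefficients are the ones quoted from the post; the sample inputs are hypothetical placeholders, not historical values, and nothing about the underlying estimation is reproduced here.

```python
# A minimal sketch of the fitted equation quoted above -- not the original
# estimation code. Coefficients come from the post; the sample inputs below
# are hypothetical placeholders.

def predicted_growth(F: float, A: float, R: float, P: float) -> float:
    """Annualized 10-year real GDP growth (Yg) implied by the fitted equation.

    F -- government spending at all levels as a fraction of GDP (prior 10 years)
    A -- private nonresidential assets as a fraction of GDP (10-year average)
    R -- average Federal Register pages, in thousands (prior 10 years)
    P -- annualized CPI-U growth over the prior 10 years
    """
    return 0.0275 - 0.347 * F + 0.0769 * A - 0.000327 * R - 0.135 * P

# Hypothetical illustration only:
yg = predicted_growth(F=0.40, A=2.0, R=75.0, P=0.02)
print(f"Predicted annualized real growth: {yg:.2%}")   # about 1.5%
```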

Random, short-run fluctuations in GDP growth have almost nothing to do with the policies of a particular president (see the first graph). But there’s nothing random about the steady growth of government spending, the steadier growth of the regulatory burden, and the combined investment-killing and inflationary effects of both (see the second graph).

The long-run trend in GDP growth reflects the cumulative effects of policies carried out by the "deep state" — the apparatus that churns on with little change in direction from president to president: the special interests represented in the many committees of Congress, the agencies that administer Social Security, Medicare, and Medicaid, and the entire alphabet soup of federal regulatory agencies. Most of those entities became committed, long ago, to the growth of government spending and regulation. It will take more than a slogan to drain the swamp.

John H. Arnold characterizes war as “long periods of boredom punctuated by short moments of excitement.” I would say that the economy of the United States has been on a long slide into stagnation punctuated by brief periods of misplaced optimism.

Advice to Live By: Know Thyself

A good example of Hillary Clinton's mind at work is given in this post about a new book by Jonathan Allen and Amie Parnes, Shattered: Inside Hillary Clinton's Doomed Campaign. Clinton is quoted as saying to an aide, "I know I engender bad reactions from people." That's a roundabout way of saying "I irritate people." But Clinton, unsurprisingly, tries to water down her self-criticism because it's too hard for her to confront her own defects.

She’s far from alone in that respect. Self-delusion is a common trait, especially among politicians, who seem to be especially allergic to truth. But self-delusion is a counterproductive trait. It’s an essential ingredient in failure because it sets a person on an unsustainable course. Failure occurs at many levels, including the highest levels of American politics.

Hillary wouldn’t have made it to dogcatcher if she hadn’t married Bill and leveraged her position to carpetbag her way into the Senate. Look what happened to her in 2008 against a rival (Obama) who was more skilled at presenting himself to the public (though obviously a narcissist to those who weren’t beguiled by his rhetoric or smitten with the feel-good notion of electing America’s first semi-black president).

How did Hillary get the nomination in 2016? By being married to Bill and lining up her party’s big-wigs to support her as the “inevitable” candidate and, most important, a female one. (I expect that in 2020 there will be a big push in the Democrat Party to nominate an openly homosexual candidate for vice president, if not for president.)

Despite Clinton's high-level backing — coupled with her (obviously contrived) leftward lurch and the political correctness of her gender — her march to the Democrat nomination was almost halted again by a sincere-sounding leftist.

I believe that she lost the general election because of her tone-deafness, which makes her uncannily able to irritate other people. Knowing that the presidency is won in the electoral college, not in the national popular vote, and knowing that the outcome in States with large blocs of blue-collar voters could swing the election to Trump — but secure in her self-delusional arrogance — she referred to Trump supporters as “deplorables.”

Talk about engendering bad reactions. And she didn’t have to do it; she was safely ahead in the polls at that point. She had nothing to gain — the effete elite were safely locked up — and a lot to lose. But she couldn’t help herself because she gave too little thought to her effect on others.

Know thyself. Very old advice that remains good advice.

How to Pay for Streets and Highways

Tolls. Yes, everyone hates them. That’s an overstatement, of course. I love toll roads, as do a lot of people who either enjoy the less-stressful experience of driving on them or just want to get somewhere faster than they otherwise could.

Tolls are the way to go because users — and only users — should pay for roads, in accordance with how often they use them and the wear and tear to which they subject them. Sure, there are some taxes that are supposed to pay for streets and highways, but they don't cover the full cost of construction and maintenance, and they're often diverted to other uses.

It's a simple matter to issue everyone a registration sticker that includes a toll tag. Big rigs get one kind of tag (which charges at the highest rate), and so on down to motorcycles and bicycles. Yes, that means you, the traffic-clogging bicyclist. From now on you'll have to pay for the privilege of mixing with motor vehicles or cavorting in your own little-used lane. It's a privilege that contributes to congestion by reducing the space available for motor-vehicle lanes and flow-enhancing features (e.g., turning lanes).

With mandatory toll tags, there would be no more mail-in payments, which means no more deadbeats. Everyone who wants to use a public highway would have to register a valid payment method: credit card, debit card, or direct debit to a checking account. No checking account? Too bad. Take the bus. If you don’t have a checking account, you probably can’t afford auto insurance. So you’re driving without it, and driving up other people’s insurance rates.

What about all the tag readers that would be required? I forgot to mention that registration stickers would have GPS trackers embedded in them.
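As a purely illustrative sketch of how per-class billing against a registered payment method might be computed — the vehicle classes and per-mile rates below are hypothetical, not a proposed schedule:

```python
# A purely illustrative sketch of per-vehicle-class toll billing as described
# above. The classes and per-mile rates are hypothetical placeholders.

RATE_PER_MILE = {      # dollars per mile; the heaviest vehicles pay the most
    "big_rig": 0.50,
    "car": 0.10,
    "motorcycle": 0.05,
    "bicycle": 0.02,   # yes, the bicyclist pays too
}

def toll_charge(vehicle_class: str, miles: float) -> float:
    """Amount billed to the registered payment method for one trip."""
    return round(RATE_PER_MILE[vehicle_class] * miles, 2)

print(toll_charge("big_rig", 120.0))   # 60.0 -- a 120-mile haul
print(toll_charge("bicycle", 5.0))     # 0.1  -- even short rides register
```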

Problem solved, except for the matter of “privacy,” that all-purpose excuse for the subversion of social norms. But “privacy” worshipers might be persuaded to go along, given these considerations:

  • “Unnecessary” trips would be discouraged, thus reducing the use of fossil fuels.
  • Businesses, as buyers of goods shipped over highways, would pay their "fair share." (The cost of tolls would be passed on to customers, of course, but that wouldn't negate the feel-good effect for the anti-business crowd.)
  • “Sprawl” would be discouraged.
  • “Buy local” would be encouraged.
  • Internet retail would grow even faster than it has been growing (a negation of the preceding point, but the “buy local” crowd wouldn’t notice), which would further reduce the use of fossil fuels.

I suspect that the net effect of all this would be next to zero, but it would please me no end if users (and only users) paid for roads, and if bicyclists were forced to pay for the privilege of adding to traffic congestion — and for their smugness.