Unorthodox Economics: 5. Economic Progress, Microeconomics, and Macroeconomics

This is the fifth entry in what I hope will become a book-length series of posts. That result, if it comes to pass, will amount to an unorthodox economics textbook. Here are the chapters that have been posted to date:

1. What Is Economics?
2. Pitfalls
3. What Is Scientific about Economics?
4. A Parable of Political Economy
5. Economic Progress, Microeconomics, and Macroeconomics

What is economic progress? It is usually measured as an increase in gross domestic product (GDP) or, better yet, per-capita GDP. But such measures say nothing about the economic status or progress of particular economic units. In fact, the economic progress of some economic units will be accompanied by the economic regress of others. GDP captures the net monetary effect of those gains and losses. And if the net effect is positive, the nation under study is said to have made economic progress. But that puts the cart of aggregate measures (macroeconomics) before the horse of underlying activity (microeconomics). This chapter puts them in the right order.

The economy of the United States (or any large political entity) consists of myriad interacting units. Some of them contribute to the output of the economy; some of them constrain the output; some of them are a drain upon it. The contributing units are the persons, families, private charities, and businesses (small and large) that produce economic goods (products and services) which are voluntarily exchanged for the mutual benefit of the trading parties. (Voluntary, private charities are among the contributing units because they help willing donors attain the satisfaction of improving the lot of persons in need. Voluntary charity — there is no other kind — is not a drain on the economy.)

Government is also a contributing unit to the extent that it provides a safe zone for the production and exchange of economic goods, to eliminate or reduce the debilitating effects of force and fraud. The safe zone is international as well as domestic when the principals of the U.S. government have the wherewithal and will to protect Americans’ overseas interests. The provision of a safe zone is usually referred to as the “rule of law”.

Most other governmental functions constrain or drain the economy. Those functions consist mainly of regulatory hindrances and forced “charity,” which includes Social Security, Medicare, Medicaid, and other federal, State, and local “welfare” programs. In “The Rahn Curve Revisited,” I estimate the significant negative effects of regulation and government spending on GDP.

There is a view that government contributes directly to economic progress by providing “infrastructure” (e.g., the interstate highway system) and underwriting innovations that are adopted and adapted by the private sector (e.g., the internet). Any such positive effects are swamped by the negative ones (see “The Rahn Curve Revisited”). Diverting resources to government uses in return for the occasional “social benefit” is like spending one’s paycheck on lottery tickets in return for the occasional $5 winner. Moreover, when government commandeers resources for any purpose — including the occasional ones that happen to have positive payoffs — the private sector is deprived of opportunities to put those resources to work in ways that more directly advance the welfare of consumers.

I therefore dismiss the thrill of occasionally discovering a gold nugget in the swamp of government, and turn to the factors that underlie steady, long-term economic progress: hard work; smart work; saving and investment; invention and innovation; implementation (entrepreneurship); specialization and trade; population growth; and the rule of law. These are defined in the first section of “Economic Growth Since World War II”.

It follows that economic progress — or a lack thereof — is a microeconomic phenomenon, even though it is usually treated as a macroeconomic one. One cannot write authoritatively about macroeconomic activity without understanding the microeconomic activity that underlies it. Moreover, macroeconomic aggregates (e.g., aggregate demand, aggregate supply, GDP) are essentially meaningless because they represent disparate phenomena.

Consider A and B, who discover that, together, they can have more clothing and more food if each specializes: A in the manufacture of clothing, B in the production of food. Through voluntary exchange and bargaining, they find a jointly satisfactory balance of production and consumption. A makes enough clothing to cover himself adequately, to keep some clothing on hand for emergencies, and to trade the balance to B for food. B does likewise with food. Both balance their production and consumption decisions against other considerations (e.g., the desire for leisure).

A and B’s respective decisions and actions are microeconomic; the sum of their decisions, macroeconomic. The microeconomic picture might look like this:

  • A produces 10 units of clothing a week, 5 of which he trades to B for 5 units of food a week, 4 of which he uses each week, and 1 of which he saves for an emergency.
  • B, like A, uses 4 units of clothing each week and saves 1 for an emergency.
  • B produces 10 units of food a week, 5 of which she trades to A for 5 units of clothing a week, 4 of which she consumes each week, and 1 of which she saves for an emergency.
  • A, like B, consumes 4 units of food each week and saves 1 for an emergency.

Given the microeconomic picture, it is trivial to depict the macroeconomic situation:

  • Gross weekly output = 10 units of clothing and 10 units of food
  • Weekly consumption = 8 units of clothing and 8 units of food
  • Weekly saving = 2 units of clothing and 2 units of food
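For readers who want to see the tallying spelled out, here is a minimal Python sketch (the data structure and names are mine, purely illustrative) that derives the macroeconomic summary from the microeconomic records above:

```python
# Each tuple is one person's weekly disposition of one good:
# (person, good, units_produced, units_consumed, units_saved).
weekly_records = [
    ("A", "clothing", 10, 4, 1),  # A trades the other 5 units of clothing to B
    ("B", "food",     10, 4, 1),  # B trades the other 5 units of food to A
    ("B", "clothing",  0, 4, 1),  # B's use of the 5 units received from A
    ("A", "food",      0, 4, 1),  # A's use of the 5 units received from B
]

for good in ("clothing", "food"):
    output = sum(r[2] for r in weekly_records if r[1] == good)
    consumption = sum(r[3] for r in weekly_records if r[1] == good)
    saving = sum(r[4] for r in weekly_records if r[1] == good)
    print(f"{good}: output {output}, consumption {consumption}, saving {saving}")
# clothing: output 10, consumption 8, saving 2
# food: output 10, consumption 8, saving 2
```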

You will note that the macroeconomic metrics add no useful information; they merely summarize the salient facts of A and B’s economic lives — though not the essential facts of their lives, which include (but are far from limited to) the degree of satisfaction that A and B derive from their consumption of food and clothing.

The customary way of getting around the aggregation problem is to sum the dollar values of microeconomic activity. But this simply masks the aggregation problem by assuming that it is possible to add the marginal valuations (i.e., prices) of disparate products and services being bought and sold at disparate moments in time by disparate individuals and firms for disparate purposes. One might as well add two bananas to two apples and call the result four bapples.

The essential problem is that A and B will derive different kinds and amounts of enjoyment from clothing and food, and those different kinds and amounts of enjoyment cannot be summed in any meaningful way. If meaningful aggregation is impossible for A and B, how can it be possible for an economy that consists of millions of economic actors and an untold, constantly changing, often improving variety of goods and services?

GDP, in other words, is nothing more than what it seems to be on the surface: an estimate of the dollar value of economic output. It is not a measure of “social welfare” because there is no such thing. (See “Social Welfare” in Chapter 2.) And yet “social welfare” is a concept that infests both microeconomics and macroeconomics.

Aggregate demand and aggregate supply are nothing but aggregations of the dollar values of myriad transactions. Aggregate demand is an after-the-fact representation of the purchases made by economic units; aggregate supply is an after-the-fact representation of the sales made by economic units. There is no “aggregate demander” or “aggregate supplier”.

Interest rates, though they tend to move in concert, are set at the microeconomic level by lenders and borrowers. They tend to move in concert because the same factors bear on all of them: inflation, economic momentum, and the supply of money.

Inflation is a microeconomic phenomenon which is arbitrarily estimated by sampling the prices of defined “baskets” of products and services. The arithmetic involved doesn’t magically transform inflation into a macroeconomic phenomenon.

Economic momentum, as measured by changes in GDP, is likewise a microeconomic phenomenon disguised as a macroeconomic one, as previously discussed.

The supply of money, over which the Federal Reserve has some control, is the closest thing there is to a truly macroeconomic phenomenon. But the Fed’s control of the supply of money, and therefore of interest rates, is tenuous.

Macroeconomic models of the economy are essentially worthless because they can’t replicate the billions of transactions that are the flesh and blood of the real economy. (See “Economic Modeling: A Case of Unrewarded Complexity”.) One of the simplest macroeconomic models — the Keynesian multiplier — is nothing more than a mathematical trick. (See “The Keynesian Multiplier: Fiction vs. Fact”.)

Macroeconomics is a sophisticated form of mental masturbation — nothing more, nothing less.

Econ 101

Whether you call it economics, econ, or eco — and whether the course number is 1, 10, 101, or something similar — one of the first things you learn (or did learn before PC-ness set in) is that unionization and the minimum wage reduce employment. Why? Because the objective of unionization and the minimum wage is to dictate a wage that is higher than workers would otherwise attain. The result, of course, is that employers hire fewer workers than they otherwise would.
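The logic can be made concrete with a toy model. The linear demand and supply schedules below are invented for illustration; they are assumptions, not estimates of any real labor market:

```python
# Toy labor market with assumed linear schedules (illustrative only).
def jobs_demanded(wage):        # employers' demand for workers at a given wage
    return 100 - 2 * wage

def workers_supplied(wage):     # workers seeking jobs at a given wage
    return 10 + 4 * wage

market_wage = 15                           # 100 - 2*15 = 70 = 10 + 4*15
print(jobs_demanded(market_wage))          # 70 employed at the market-clearing wage

wage_floor = 20                            # a dictated wage above the market wage
employed = min(jobs_demanded(wage_floor), workers_supplied(wage_floor))
print(employed)                            # 60: employers hire fewer workers
print(workers_supplied(wage_floor) - employed)  # 30 would-be workers left jobless
```

Raising the dictated wage raises pay for those who keep their jobs and eliminates the jobs at the margin, which are typically held by the least-skilled workers.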

Ed Rensi, former CEO of McDonald’s USA, offers this testimony:

As of 2020, self-service ordering kiosks will be implemented at all U.S. McDonald’s locations. Other chains, including fast-casual brands like Panera and casual-dining brands like Chili’s, have already embraced this trend. Some restaurant concepts have even automated the food-preparation process; earlier this year, NBC News profiled “Flippy,” a robot hamburger flipper. Other upcoming concepts include virtual restaurants which eliminate the need for full-service restaurants (and staff) by only offering home delivery….

My concern about this is personal. Without my opportunity to start as a grill man, I would have never ended up running one of the largest fast food chains in the world. I started working at McDonald’s making the minimum wage of 85 cents an hour. I worked hard and earned a promotion to restaurant manager within just one year, then went on to hold almost every position available throughout the company, eventually rising to CEO of McDonald’s USA.

The kind of job that allowed me and many others to rise through the ranks is now being threatened by a rising minimum wage that’s pricing jobs out of the market. Without sacrificing food quality or taste, or abandoning the much-loved value menu, franchise owners must keep labor costs under control. One way to combat rising labor costs is by reducing the number of employees needed.

Low-skill workers have to start somewhere. Unionization and the minimum wage kill entry-level jobs. This should be taught in 9th grade social studies (or whatever it’s called now), but it won’t be because the unionized lefties who control public schools wouldn’t allow it. They are greedy, hypocritical idiots who proclaim compassion for the poor while ensuring that they stay poor.

(H/T Dyspepsia Generation)


Related posts:
A Short Course in Economics
Addendum to a Short Course in Economics
Does the Minimum Wage Increase Unemployment?
The Public-School Swindle (a parallel treatment of the effects of unionization on teachers’ wages and employment)
Why I Am Anti-Union

Suicide or Destiny?

The list of related reading at the bottom of this post is updated occasionally.

The suicide to which I refer is the so-called suicide of the West, about which Jonah Goldberg has written an eponymous book. This is from Goldberg’s essay based on the book, “Suicide of the West” (National Review, April 12, 2018):

Almost everything about modernity, progress, and enlightened society emerged in the last 300 years. If the last 200,000 years of humanity were one year, nearly all material progress came in the last 14 hours. In the West, and everywhere that followed our example, incomes rose, lifespans grew, toil lessened, energy and water became ubiquitous commodities.

Virtually every objective, empirical measure that capitalism’s critics value improved with the emergence of Western liberal-democratic capitalism. Did it happen overnight? Sadly, no. But in evolutionary terms, it did….

Of course, material prosperity isn’t everything. But the progress didn’t stop there. Rapes, deaths by violence and disease, slavery, illiteracy, torture have all declined massively, while rights for women, minorities, the disabled have expanded dramatically. And, with the exception of slavery, which is a more recent human innovation made possible by the agricultural revolution, material misery was natural and normal for us. Then suddenly, almost overnight, that changed.

What happened? We stumbled into a different world. Following sociologist Robin Fox and historian Ernest Gellner, I call this different world “the Miracle.”…

Why stress that the Miracle was both unnatural and accidental? Because Western civilization generally, and America particularly, is on a suicidal path. The threats are many, but beneath them all is one constant, eternal seducer: human nature. Modernity often assumes that we’ve conquered human nature as much as we’ve conquered the natural world. The truth is we’ve done neither….

The Founders closely studied human nature, recognizing the dangers of despots and despotic majorities alike. They knew that humans would coalesce around common interests, forming “factions.” They also understood that you can’t repeal human nature. So, unlike their French contemporaries, they didn’t try. Instead, they established our system of separated powers and enumerated rights so that no faction, including a passionate majority, could use the state’s power against other factions.

But the Founders’ vision assumed many preconditions, the two most important of which were the people’s virtue and the role of civil society. “The general government . . . can never be in danger of degenerating into a monarchy, an oligarchy, an aristocracy, or any despotic or oppressive form so long as there is any virtue in the body of the people,” George Washington argued.

People learn virtue first and most importantly from family, and then from the myriad institutions family introduces them to: churches, schools, associations, etc. Every generation, Western civilization is invaded by barbarians, Hannah Arendt observed: “We call them children.” Civil society, starting with the family, civilizes barbarians, providing meaning, belonging, and virtue.

But here’s the hitch. When that ecosystem breaks down, people still seek meaning and belonging. And it is breaking down. Its corruption comes from reasons too numerous and complex to detail here, but they include family breakdown, mass immigration, the war on assimilation, and the rise of virtual communities pretending to replace real ones.

First, the market, as Joseph Schumpeter argued, maximizes efficiency with relentless rationality, tending to break down the sinews of tradition and the foundations of civil society that enable and instill virtue. Yet those pre-rational virtues make capitalism possible in the first place.

Second, capitalism also creates a mass class of resentful intellectuals, artists, journalists, and bureaucrats who are professionally, psychologically, and ideologically committed to undermining capitalism’s legitimacy (as noted by Schumpeter and James Burnham, the author of another book titled “Suicide of the West”). This adversarial elite is its own coalition.

Thus, people increasingly look to Washington and national politics for meaning and belonging they can’t find at home. As Mary Eberstadt recently argued, the rise in identity politics coincided with family breakdown, as alienated youth looked to the artificial tribes of racial or sexual solidarity for meaning. Populism, which always wants the national government to solve local problems, is in vogue on left and right precisely because local institutions and civil society generally no longer do their jobs. Indeed, populism is its own tribalism, because “We the People” invariably means “my people.” As Jan-Werner Müller notes in his book What Is Populism?: “Populism is always a form of identity politics.”

A video at the 2012 Democratic National Convention proclaimed that “government is the only thing we all belong to.” For conservatives, this was Orwellian. But for many Americans, it was an invitation to belong. That was the subtext of “The Life of Julia” and President Obama’s call for Americans to emulate SEAL Team Six and strive in unison — towards his goals….

The American Founding’s glory is that those English colonists took their cousins’ tradition, purified it into a political ideology, and extended it farther than the English ever dreamed. And they wrote it down, thank God. The Founding didn’t apply these principles as universally as its rhetoric implied. But that rhetoric was transformative. When the Declaration of Independence was written, some dismissed the beginning as flowery boilerplate; what mattered was the ending: Independence! But the boilerplate became a creed, and America’s story is the story of that creed — those mere words — unfolding to its logical conclusion….

It seems axiomatic to me that whatever words can create, they can destroy. And ingratitude is the destroyer’s form. We teach children that the moral of the Goose that Lays the Golden Egg is the danger of greed. But the real moral of the story is ingratitude. A farmer finds an animal, which promises to make him richer than he ever imagined. But rather than nurture and protect this miracle, he resents it for not doing more. In one version, the farmer demands two golden eggs per day. When the goose politely demurs, he kills it out of a sense of entitlement — the opposite of gratitude.

The Miracle is our goose. And rather than be grateful for it, our schools, our culture, and many of our politicians say we should resent it for not doing more. Conservatism is a form of gratitude, because we conserve only what we are grateful for. Our society is talking itself out of gratitude for the Miracle and teaching our children resentment. Our culture affirms our feelings as the most authentic sources of truth when they are merely the expressions of instincts, and considers the Miracle a code word for white privilege, greed, and oppression.

This is corruption. And it is a choice. Collectively, we are embracing entitlement over gratitude. That is suicidal.

I would put it this way: About 300 years ago there arose in the West the idea of innate equality and inalienable rights. At the same time, and not coincidentally, there arose the notion of economic betterment through free markets. The two concepts — political and economic liberty — are in fact inseparable. One cannot have economic liberty without political liberty; political liberty — the ownership of oneself — implies the ownership of the fruits of one’s own labor and the right to strive for prosperity. This latter striving, as Adam Smith pointed out, works not only for the betterment of the striver but also for the betterment of those who engage in trade with him. The forces of statism are on the march (and have been for a long time). The likely result is the loss of liberty and of the vibrancy and prosperity that arise from it.

I want to be clear about liberty. It is not a spiritual state of bliss. It is, as I have written,

a modus vivendi, not the result of a rational political scheme. Though a rational political scheme, such as the one laid out in the Constitution of the United States, could promote liberty.

The key to a libertarian modus vivendi is the evolutionary development and widespread observance of social norms that foster peaceful coexistence and mutually beneficial cooperation.

Liberty, in sum, is not an easy thing to attain or preserve because it depends on social comity: mutual trust, mutual respect, and mutual forbearance. These are hard to inculcate and sustain in the relatively small groupings of civil society (family, church, club, etc.). They are almost impossible to attain or sustain in a large, diverse nation-state. Interests clash and factions clamor and claw for ascendancy over other factions. (It is called tribalism, and even anti-tribalists are tribal in their striving to impose their values on others). The Constitution, as Goldberg implies, has proved unequal to the task of preserving liberty, for reasons to which I will come.

I invoke the Constitution deliberately. This essay is about the United States, not the West in general. (Goldberg gets to the same destination after a while.) Much of the West has already committed “suicide” by replacing old-fashioned (“classical”) liberalism with oppressive statism. The U.S. is far down the same path. The issue at hand, therefore, is whether America’s “suicide” can be avoided.

Perhaps, but only if the demise of liberty is a choice. It may not be a choice, however, as Goldberg unwittingly admits when he writes about human nature.

On that point I turn to John Daniel Davidson, writing in “The West Isn’t Committing Suicide, It’s Dying of Natural Causes” (The Federalist, May 18, 2018):

Perhaps the Miracle, wondrous as it is, needs more than just our gratitude to sustain it. Perhaps the only thing that can sustain it is an older order, one that predates liberal democratic capitalism and gave it its vitality in the first place. Maybe the only way forward is to go back and rediscover the things we left behind at the dawn of the Enlightenment.

Goldberg is not very interested in all of that. He does not ask whether there might be some contradictions at the heart of the liberal order, whether it might contain within it the seeds of its undoing. Instead, Goldberg makes his stand on rather narrow grounds. He posits that the Enlightenment Miracle can be defended in purely secular, utilitarian terms, which he supposes are the only terms skeptics of liberal democratic capitalism will accept.

That forces him to treat the various illiberal ideologies that came out of Enlightenment thought (like communism) as nothing more than a kind of tribalism rather than a natural consequence of the hyper-rational scientism embedded in the liberal order itself. As Richard M. Reinsch II noted last week in an excellent review of Goldberg’s book over at Law and Liberty, “If you are going to set the Enlightenment Miracle as the standard of human excellence, one that we are losing, you must also clearly state the dialectic it introduces of an exaltation of reason, power, and science that can become something rather illiberal.”

That is to say, we mustn’t kid ourselves about the Miracle. We have to be honest, not just about its benefits but also its costs….

What about science and medical progress? What about the eradication of disease? What about technological advances? Isn’t man’s conquest of nature a good thing? Hasn’t the Enlightenment and the scientific revolution and the invention of liberal democratic capitalism done more to alleviate poverty and create wealth than anything in human history? Shouldn’t we preserve this liberal order and pass it on to future generations? Shouldn’t we inculcate in our children a profound sense of gratitude for all this abundance and prosperity?

This is precisely Goldberg’s argument. Yes, he says, man’s conquest of nature is a good thing. It’s the same species of argument raised earlier this year in reaction to Patrick Deneen’s book, “Why Liberalism Failed,” which calls into question the entire philosophical system that gave us the Miracle….

[Deneen] is not chiefly interested in the problems of the modern progressive era or the contemporary political Left. He isn’t alarmed merely by political tribalism and the fraying of the social order. Those things are symptoms, not the cause, of the illness he’s diagnosing. Even the social order at its liberal best—the Miracle itself—is part of the illness.

Deneen’s argument reaches back to the foundations of the liberal order in the sixteenth and seventeenth centuries—prior to the appearance of the Miracle, in Goldberg’s telling—when a series of thinkers embarked on a fundamentally revisionist project “whose central aim was to disassemble what they concluded were irrational religious and social norms in the pursuit of civil peace that might in turn foster stability and prosperity, and eventually individual liberty of conscience and action.”

The project worked, as Goldberg has chronicled at length, but only up to a point. Today, says Deneen, liberalism is a 500-year-old experiment that has run its course and now “generates endemic pathologies more rapidly and pervasively than it is able to produce Band-Aids and veils to cover them.”

Taking the long view of history, Deneen’s book could be understood as an extension of Lewis’s argument in “The Abolition of Man.” The replacement of moral philosophy and religion with liberalism and applied science has begun, in our lifetimes, to manifest the dangers that Lewis warned about. Deneen, writing more than a half-century after Lewis, declares that the entire liberal project manifestly has failed.

Yes, the Miracle gave us capitalism and democracy, but it also gave us hyper-individualism, scientism, and communism. It gave us liberty and universal suffrage, but it also gave us abortion, euthanasia, and transgenderism. The abolition of man was written into the Enlightenment, in other words, and the suicide of the West that Goldberg warns us about isn’t really a suicide at all, because it isn’t really a choice: we aren’t committing suicide, we’re dying of natural causes.

Goldberg is correct that we have lost our sense of gratitude, that we don’t really feel like things are as good as all that. But a large part of the reason is that the liberal order itself has robbed us of our ability to articulate what constitutes human happiness. We have freedom, we have immense wealth, but we have nothing to tell us what we should do with it, nothing to tell us what is good.

R.R. Reno, in “The Smell of Death” (First Things, May 31, 2018), comes at it this way:

At every level, our elites oppose traditional regulation of behavior based on clear moral norms, preferring a therapeutic and bureaucratic approach. They seek to decriminalize marijuana. They have deconstructed male and female roles for children. They correct anyone who speaks of “sex,” preferring to speak of “gender,” which they insist is “socially constructed.” They have ushered in a view of free speech that makes it impossible to prevent middle school boys from watching pornography on their smart phones. They insist upon a political correctness that rejects moral correctness.

The upshot is American culture circa 2018. Our ideal is a liquid world of self-definition, characterized by plenary acceptance and mutual affirmation. In practice, the children of our elites are fortunate: Their families and schools carefully socialize them into the disciplines of twenty-first-century meritocratic success while preaching openness, inclusion, and diversity. But the rest are not so fortunate. Most Americans gasp for air as they tread water. More and more drown….

Liberalism has always been an elite project of deregulation. In the nineteenth century, it sought to deregulate pre-modern economies and old patterns of social hierarchy. It worked to the advantage of the talented, enterprising, and ambitious, who soon supplanted the hereditary aristocracy.

In the last half-century, liberalism has focused on deregulating personal life. This, too, has been an elite priority. It makes options available to those with the resources to exploit them. But it has created a world in which disordered souls kill themselves with drugs and alcohol—and in which those harboring murderous thoughts feel free to act upon them.

The penultimate word goes to Malcolm Pollack (“The Magic Feather”, Motus Mentis, July 6, 2018):

Our friend Bill Vallicella quoted this, from Michael Anton, on Independence Day:

For the founders, government has one fundamental purpose: to protect person and property from conquest, violence, theft and other dangers foreign and domestic. The secure enjoyment of life, liberty and property enables the “pursuit of happiness.” Government cannot make us happy, but it can give us the safety we need as the condition for happiness. It does so by securing our rights, which nature grants but leaves to us to enforce, through the establishment of just government, limited in its powers and focused on its core responsibility.

Bill approves, and adds:

This is an excellent statement. Good government secures our rights; it does not grant them. Whether they come from nature, or from God, or from nature qua divine creation are further questions that can be left to the philosophers. The main thing is that our rights are not up for democratic grabs, nor are they subject to the whims of any bunch of elitists that manages to insinuate itself into power.

I agree all round. I hope that my recent engagement with Mr. Anton about the ontology of our fundamental rights did not give readers the impression that I doubt for a moment the importance of Americans believing they possess them, or of the essential obligation of government to secure them (or of the people to overthrow a government that won’t).

My concerns are whether the popular basis for this critically important belief is sustainable in an era of radical and corrosive secular doubt (and continuing assault on those rights), and whether the apparently irresistible tendency of democracy to descend into faction, mobs, and tyranny was in fact a “poison pill” baked into the nation at the time of the Founding. I am inclined to think it was, but historical contingency and inevitability are nearly impossible to parse with any certainty.

Arnold Kling (“Get the Story Straight”, Library of Economics and Liberty, July 9, 2018) is more succinct:

Lest we fall back into a state of primitive tribalism, we need to understand the story of the Miracle. We need to understand that it is unnatural, and we should be grateful for the norms and institutions that restrained human nature in order to make the Miracle possible.

All of the writers I have quoted are on to something, about which I have written in “Constitution: Myths and Realities”. I call it the Framers’ fatal error.

The Framers held a misplaced faith in the Constitution’s checks and balances (see Madison’s Federalist No. 51 and Hamilton’s Federalist No. 81). The Constitution’s wonderful design — containment of a strictly limited central government through horizontal and vertical separation of powers — worked rather well until the Progressive Era. The design then cracked under the strain of greed and the will to power, as the central government began to impose national economic regulation at the behest of muckrakers and do-gooders. The design then broke during the New Deal, which opened the floodgates to violations of constitutional restraint (e.g., Medicare, Medicaid, Obamacare, the vast expansion of economic regulation, and the destruction of civilizing social norms), as the Supreme Court has enabled the national government to impose its will in matters far beyond its constitutional remit.

In sum, the “poison pill” baked into the nation at the time of the Founding is human nature, against which no libertarian constitution is proof unless it is enforced resolutely by a benign power.

Barring that, it may be too late to rescue liberty in America. I am especially pessimistic because of the unraveling of social comity since the 1960s, and because of a related development: the frontal assault on freedom of speech, which is the final constitutional bulwark against oppression.

Almost overnight, it seems, the nation was catapulted from the land of Ozzie and Harriet, Father Knows Best, and Leave It to Beaver to the land of the free-speech/filthy-speech movement, Altamont, Woodstock, Hair, and the unspeakably loud, vulgar, and violent offerings that are now plastered all over the air waves, the internet, theater screens, and “entertainment” venues.

The 1960s and early 1970s were a tantrum-throwing time, and many of the tantrum-throwers moved into positions of power, influence, and wealth, having learned from the success of their main ventures: the end of the draft and the removal of Nixon from office. They schooled their psychological descendants well, and sometimes literally on college campuses. Their successors on the campuses of today — students, faculty, and administrators — carry on the tradition of reacting with violent hostility toward persons and ideas that they oppose, and supporting draconian punishments for infractions of their norms and edicts. (For myriad examples, see The College Fix.)

Adherents of the ascendant culture esteem protest for its own sake, and have stock explanations for all perceived wrongs (whether or not they are wrongs): racism, sexism, homophobia, Islamophobia, hate, white privilege, inequality (of any kind), Wall Street, climate change, Zionism, and so on. All of these are to be combated by state action that deprives citizens of economic and social liberties.

In particular danger are the freedoms of speech and association. The purported beneficiaries of the campaign to destroy those freedoms are “oppressed minorities” (women, Latinos, blacks, Muslims, the gender-confused, etc.) and the easily offended. The true beneficiaries are leftists. Free speech is speech that is acceptable to the left. Otherwise, it’s “hate speech”, and must be stamped out. Freedom of association is bigotry, except when it is practiced by leftists in anti-male, anti-conservative, pro-ethnic, and pro-racial causes. This is McCarthyism on steroids. McCarthy, at least, was pursuing actual enemies of liberty; today’s leftists are the enemies of liberty.

The organs of the state have been enlisted in an unrelenting campaign against civilizing social norms. We now have not just easy divorce, subsidized illegitimacy, and legions of non-mothering mothers, but also abortion, concerted (and deluded) efforts to defeminize females and to neuter or feminize males, forced association (with accompanying destruction of property and employment rights), suppression of religion, absolution of pornography, and the encouragement of “alternative lifestyles” that feature disease, promiscuity, and familial instability.

The state, of course, doesn’t act of its own volition. It acts at the behest of special interests — interests with a “cultural” agenda. They are bent on the eradication of civil society — nothing less — in favor of a state-directed Rousseauvian dystopia from which Judeo-Christian morality and liberty will have vanished, except in Orwellian doublespeak.

If there are unifying themes in this petite histoire, they are the death of common sense and the rising tide of moral vacuity. The history of the United States since the 1960s supports the proposition that the nation is indeed going to hell in a handbasket.

In fact, the speed at which it is going to hell seems to have accelerated since the Charleston church shooting and the legal validation of same-sex “marriage” in 2015. It’s a revolution (e.g., this) piggy-backing on mass hysteria. Here’s the game plan:

  • Define opposition to illegal immigration, Islamic terrorism, same-sex marriage, transgenderism, and other kinds of violent and anti-social behavior as “hate”.
  • Associate “hate” with conservatism.
  • Watch as normally conservative politicians, business people, and voters swing left rather than look “mean” and put up a principled fight for conservative values. (Many of them can’t put up such a fight, anyway. Trump’s proper but poorly delivered refusal to pin all of the blame on neo-Nazis for the Charlottesville riot just added momentum to the left’s cause because he’s Trump and a “fascist” by definition.)
  • Watch as Democrats play the “hate” card to retake the White House and Congress.

With the White House in the hands of a left-wing Democrat (is there any other kind now?) and an aggressive left-wing majority in Congress, freedom of speech, freedom of association, and property rights will become not-so-distant memories. “Affirmative action” (a.k.a. “diversity”) will be enforced on an unprecedented scale of ferocity. The nation will become vulnerable to foreign enemies while billions of dollars are wasted on the hoax of catastrophic anthropogenic global warming and “social services” for the indolent. The economy, already buckling under the weight of statism, will teeter on the brink of collapse as the regulatory regime goes into high gear and entrepreneurship is all but extinguished by taxation and regulation.

All of that will be secured by courts dominated by left-wing judges — from here to eternity.

And most of the affluent white enablers and dupes of the revolution will come to rue their actions. But they won’t be free to say so.

Thus will liberty — and prosperity — die in America.

And it is possible that nothing can prevent it because it is written in human nature; specifically, a penchant for the kind of mass hysteria that seems to dominate campuses, the “news” and “entertainment” media, and the Democrat Party.

Christopher Booker describes this phenomenon presciently in his book about England and America of the 1950s and 1960s, The Neophiliacs (1970):

[T]here is no dream so powerful as one generated and subscribed to by a whole mass of people simultaneously — one of those mass projections of innumerable individual neuroses which we may call a group fantasy. This is why the twentieth century has equally been dominated by every possible variety of collective make-believe — whether expressed through mass political movements and forms of nationalism, or through mass social movements….

Any group fantasy is in some sense a symptom of social disintegration, of the breaking down of the balance and harmony between individuals, classes, generations, the sexes, or even nations. For the organic relationships of a stable and secure community, in which everyone may unself-consciously exist in his own separate place and right, a group fantasy substitutes the elusive glamor of identification with a fantasy community, of being swept along as part of a uniform mass united in a common cause. But the individuals making up the mass are not, of course, united in any real sense, except through their common dress, catch phrases, slogans, and stereotyped attitudes. Behind their conformist exteriors they remain individually as insecure as ever — and indeed become even more so, for the collective dream, such as that expressed through mass advertising or the more hysterical forms of fashion, is continually aggravating their fantasy-selves and appealing to them through their insecurities to merge themselves in the mass ever more completely….

This was the phenomenon of mass psychology which was portrayed in an extreme version by George Orwell in his 1984…. But in fact the pattern described was that of every group fantasy; exactly the same that we can see, for instance, in the teen age subculture of the fifties and sixties, … or that of the left-wing progressive intellectuals, with their dream heroes such as D. H. Lawrence or Che Guevera and their ritual abuse of the “reactionaries”….

… Obviously no single development in history has done more to promote both social disintegration and unnatural conformity than the advance and ubiquity of machines and technology. Not only must the whole pressure of an industrialized, urbanized, mechanized society tend to weld its members into an ever more rootless uniform mass, by the very nature of its impersonal organization and of the processes of mass-production and standardization. But in addition the twentieth century has also provided two other factors to aggravate and to feed the general neurosis; the first being the image-conveying apparatus of films, radio, television, advertising, mass-circulation newspapers and magazines; the second the feverishly increased pace of life, from communications and transport to the bewildering speed of change and innovation, all of which has created a profound subconscious restlessness which neurotically demands to be assuaged by more speed and more change of every kind….

The essence of fantasy is that it feeds on a succession of sensations or unresolved images, each one of which arouses anticipation, followed by inevitable frustration, leading to the demand for a new image to be put in its place. But the very fact that each sensation is fundamentally unsatisfying means that the fantasy itself becomes progressively more jaded…. And so we arrive at the fantasy spiral.

Whatever pattern of fantasy we choose to look at … we shall find that it is straining through a spiral of increasingly powerful sensations toward some kind of climax…. What happens therefore is simply that, in its pursuit of the elusive image of life, freedom, and self-assertion, the fantasy pushes on in an ever-mounting spiral of demand, ever more violent, more dream-like and fragmentary, and ever more destructive of the framework of order. Further and further pushes the fantasy, always in pursuit of the elusive climax, always further from reality — until it is actually bringing about the very opposite of its aims.

That, of course, is what will happen when the left and its dupes bring down the Constitution and all that it was meant to stand for: the protection of citizens and their voluntary institutions and relationships from predators, including not least governmental predators and the factions they represent.

The Constitution, in short, was meant to shield Americans from human nature. But it seems all too likely that human nature will destroy the shield.

Thus my call for a “Preemptive (Cold) Civil War”.


Related reading:
Fred Reed, “The Symptoms Worsen”, Fred on Everything, March 15, 2015
Christopher Booker, Global Warming: A Case Study in Groupthink, Global Warming Policy Foundation, 2018
Michael Mann, “Have Wars and Violence Declined?”, Theory and Society, February 2018
John Gray, “Steven Pinker Is Wrong about Violence and War”, The Guardian, March 13, 2015
Nikita Vladimirov, “Scholar Traces Current Campus Intolerance to 60’s Radicals”, Campus Reform, March 14, 2018
Nick Spencer, “Enlightenment and Progress: Why Steven Pinker Is Wrong”, Mercatornet, March 19, 2018
Steven Hayward, “Deja Vu on Campus?”, PowerLine, April 15, 2018
William A. Nitze, “The Tech Giants Must Be Stopped”, The American Conservative, April 16, 2018
Steven Hayward, “Jonah’s Suicide Hotline, and All That Stuff”, PowerLine, May 15, 2018
Jeff Groom, “40 Years Ago Today: When Solzhenitsyn Schooled Harvard”, The American Conservative, June 8, 2018
Graham Allison, “The Myth of the Liberal Order: From Historical Accident to Conventional Wisdom”, Foreign Affairs, July/August 2018
Gilbert T. Sewall, “The America That Howard Zinn Made”, The American Conservative, July 10, 2018
Mary Eberstadt, “Two Nations, Revisited”, National Affairs, Summer 2018

Related posts and pages:
Constitution: Myths and Realities
Leftism
The Psychologist Who Played God
We, the Children of the Enlightenment
Society and the State
The Eclipse of “Old America”
Genetic Kinship and Society
The Fallacy of Human Progress
The Culture War
Ruminations on the Left in America
1963: The Year Zero
Academic Ignorance
The Euphemism Conquers All
Defending the Offensive
Superiority
Whiners
A Dose of Reality
Turning Points
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Social Justice vs. Liberty
The Left and “the People”
Liberal Nostrums
Liberty and Social Norms Re-examined
Equality
Academic Freedom, Freedom of Speech, and the Demise of Civility
Leftism As Crypto-Fascism: The Google Paradigm
What’s Going On? A Stealth Revolution
Disposition and Ideology
Down the Memory Hole
“Tribalists”, “Haters”, and Psychological Projection
Mass Murder: Reaping What Was Sown
Utopianism, Leftism, and Dictatorship
The Framers, Mob Rule, and a Fatal Error
Abortion, the “Me” Generation, and the Left
Abortion Q and A
Whence Polarization?
Negative Rights, Etc.
Social Norms, the Left, and Social Disintegration
Order vs. Authority
Can Left and Right Be Reconciled?
Rage on the Left
Rights, Liberty, the Golden Rule, and Leviathan

Driving Is an IQ Test

An automobile is a deadly weapon. Driving carelessly — in ways that are dangerous or confusing to others — signifies either a death-wish (a rare thing) or stupidity (much more likely).

I would estimate a driver’s real-life IQ by deducting 5 points from his pencil-and-paper IQ for each of the following habits:

1. driving in the middle of a street or parking-lot lane, even as another vehicle approaches

2. waiting until the last split-second to cross or turn onto a street, despite an oncoming vehicle which has the right-of-way

3. ignoring stop signs, not just by failing to stop at them but also by failing to look before not stopping

4. failing to plan a trip, which often leads to the practice of turning abruptly without giving a signal

5. looping to the left for a right turn, and looping to the right for a left turn

6. changing lanes or crossing lanes of traffic without looking around …

7. or without signaling

8. crossing the center line while taking a curve

9. taking a corner by cutting across the oncoming lane of traffic

10. zipping through a parking lot as if no child, other pedestrian, or vehicle might suddenly appear

11. yielding the right of way to a driver who doesn’t have it

12. parking off-center in a parking space when there’s not an off-center vehicle in an adjacent space

13. tailgating

14 & 15. (double points) using a cell-phone while driving (even if it’s a hands-free phone)

There are many drivers whose habits would put them in the class of moron, imbecile, or idiot. They are rightly called such names — and worse.


Related posts (with some bonus material thrown in because bad driving and Austin seem to be synonymous):
Driving and Politics (1)
Life in Austin (1)
Life in Austin (2)
Life in Austin (3)
Driving and Politics (2)
AGW in Austin?
Democracy in Austin
AGW in Austin? (II)
The Hypocrisy of “Local Control”

The Keynesian Multiplier: Fiction vs. Fact

There are a few economic concepts that are widely cited (if not understood) by non-economists. Certainly, the “law” of supply and demand is one of them. The Keynesian (fiscal) multiplier is another; it is

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier.

MULTIPLIER MATH

To show why the math is phony, I’ll start with a derivation of the multiplier. The derivation begins with the accounting identity Y = C + I + G, which means that total output (Y) = consumption (C) + investment (I) + government spending (G). I could use a more complex identity that involves taxes, exports, and imports. But no matter; the bottom line remains the same, so I’ll keep it simple and use Y = C + I + G.

Keep in mind that the aggregates that I’m writing about here — Y, C, I, G, and later S — are supposed to represent real quantities of goods and services, not mere money. Keep in mind, also, that Y stands for gross domestic product (GDP); there is no real income unless there is output, that is, product.

Now for the derivation, starting with the identity and the consumption function:

Y = C + I + G
C = bY, where b is the marginal propensity to consume
Y = bY + I + G
Y(1 – b) = I + G
Y = (I + G)/(1 – b) = k(I + G), where k = 1/(1 – b)

So far, so good. Now, let’s say that b = 0.8. This means that income-earners, on average, will spend 80 percent of their additional income on consumption goods (C), while holding back (saving, S) 20 percent of their additional income. With b = 0.8, k = 1/(1 – 0.8) = 1/0.2 = 5. That is, every $1 of additional spending — let us say additional government spending (∆G) rather than investment spending (∆I) — will yield ∆Y = $5. In short, ∆Y = k(∆G), as a theoretical maximum. (Even if the multiplier were real, there are many things that would cause it to fall short of its theoretical maximum; see this, for example.)

How is it supposed to work? The initial stimulus (∆G) creates income (don’t ask how), a fraction of which (b) goes to C. That spending creates new income, a fraction of which goes to C. And so on. Thus the first round = ∆G, the second round = b(∆G), the third round = b²(∆G), and so on. The sum of the “rounds” asymptotically approaches k(∆G). (What happens to S, the portion of income that isn’t spent? That’s part of the complicated phony story that I’ll examine in a future post.)
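The “rounds” story is just a geometric series, as a few lines of Python confirm. This sketch merely sums the series that the story posits; it takes no position on whether the story is true:

```python
b = 0.8      # marginal propensity to consume, as assumed in the text
dG = 1.0     # the initial $1 of additional government spending

total, round_spending = 0.0, dG
for _ in range(200):             # 200 rounds is effectively the limit
    total += round_spending      # income generated in this round
    round_spending *= b          # the fraction re-spent in the next round

print(total)                     # ~5.0: the cumulative sum of the rounds
print(dG / (1 - b))              # 5.0: k(dG), with k = 1/(1 - b) = 5
```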

Note well, however, that the resulting ∆Y isn’t properly an increase in Y, which is an annual rate of output; rather, it’s the cumulative increase in total output over an indefinite number and duration of ever-smaller “rounds” of consumption spending.

The cumulative effect of a sustained increase in government spending might, after several years, yield a new Y — call it Y’ = Y + ∆Y. But it would do so only if ∆G persisted for several years. To put it another way, ∆Y persists only for as long as the effects of ∆G persist. The multiplier effect disappears after the “rounds” of spending that follow ∆G have played out.

The multiplier effect is therefore (at most) temporary; it vanishes after the withdrawal of the “stimulus” (∆G). The idea is that ∆G need only be temporary because a downturn will be followed by a recovery — weak or strong, later or sooner.

An aside is in order here: Proponents of big government like to trumpet the supposedly stimulating effects of G on the economy when they propose programs that would lead to permanent increases in G, holding other things constant. And other things (other government programs) do remain constant, at the least, because they have powerful patrons and constituents, and are harder to kill than Hydra. Even if the proponents of big government were aware of the economically debilitating effects of G and the things that accompany it (e.g., regulations), most of them would simply defend their favorite programs all the more fiercely.

WHY MULTIPLIER MATH IS PHONY MATH

Now for my exposé of the phony math. I begin with Steven Landsburg, who borrows from the late Murray Rothbard:

. . . We start with an accounting identity, which nobody can deny:

Y = C + I + G

. . . Since all output ends up somewhere, and since households, firms and government exhaust the possibilities, this equation must be true.

Next, we notice that people tend to spend, oh, say about 80 percent of their incomes. What they spend is equal to the value of what ends up in their households, which we’ve already called C. So we have

C = .8Y

Now we use a little algebra to combine our two equations and quickly derive a new equation:

Y = 5(I+G)

That 5 is the famous Keynesian multiplier. In this case, it tells you that if you increase government spending by one dollar, then economy-wide output (and hence economy-wide income) will increase by a whopping five dollars. What a deal!

. . . [I]t was Murray Rothbard who observed that the really neat thing about this argument is that you can do exactly the same thing with any accounting identity. Let’s start with this one:

Y = L + E

Here Y is economy-wide income, L is Landsburg’s income, and E is everyone else’s income. No disputing that one.

Next we observe that everyone else’s share of the income tends to be about 99.999999% of the total. In symbols, we have:

E = .99999999 Y

Combine these two equations, do your algebra, and voila:

Y = 100,000,000 L

That 100,000,000 there is the soon-to-be-famous “Landsburg multiplier”. Our equation proves that if you send Landsburg a dollar, you’ll generate $100,000,000 worth of income for everyone else.

The policy implications are unmistakable. It’s just Eco 101!! [“The Landsburg Multiplier: How to Make Everyone Rich”, The Big Questions blog, June 25, 2013]

Landsburg attributes the nonsensical result to the assumption that

equations describing behavior would remain valid after a policy change. Lucas made the simple but pointed observation that this assumption is almost never justified.

. . . None of this means that you can’t write down [a] sensible Keynesian model with a multiplier; it does mean that the Eco 101 version of the Keynesian cross is not an example of such. This in turn calls into question the wisdom of the occasional pundit [Paul Krugman] who repeatedly admonishes us to be guided in our policy choices by the lessons of Eco 101. [“Multiple Comments”, op. cit., June 26, 2013]

It’s worse than that, as Landsburg almost acknowledges when he observes (correctly) that Y = C + I + G is an accounting identity. That is to say, it isn’t a functional representation — a model — of the dynamics of the economy. Assigning a value to b (the marginal propensity to consume) — even if it’s an empirical value — doesn’t alter the fact that the derivation is nothing more than the manipulation of a non-functional relationship, that is, an accounting identity.

Consider, for example, the equation for converting temperature Celsius (C) to temperature Fahrenheit (F): F = 32 + 1.8C. It follows that an increase of 10 degrees C implies an increase of 18 degrees F. This could be expressed as ∆F/∆C = k*, where k* represents the “Celsius multiplier”. There is no mathematical difference between the derivation of the investment/government-spending multiplier (k) and the derivation of the Celsius multiplier (k*). And yet we know that the Celsius multiplier is nothing more than a tautology; it tells us nothing about how the temperature rises by 10 degrees C or 18 degrees F. It simply tells us that when the temperature rises by 10 degrees C, the equivalent rise in temperature F is 18 degrees. The rise of 10 degrees C doesn’t cause the rise of 18 degrees F.
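The tautology can be checked mechanically (a trivial sketch; the function is just the conversion formula):

```python
def fahrenheit(c):               # F = 32 + 1.8C
    return 32 + 1.8 * c

dF = fahrenheit(35) - fahrenheit(25)   # a rise of 10 degrees C ...
print(dF)                              # ... is a rise of 18.0 degrees F
print(dF / 10)                         # 1.8: the "Celsius multiplier" k*
# Neither rise causes the other; they are one event expressed on two scales.
```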

Similarly, the Keynesian investment/government-spending multiplier simply tells us that if ∆Y = $5 trillion, and if b = 0.8, then it is a matter of mathematical necessity that ∆C = $4 trillion and ∆I + ∆G = $1 trillion. In other words, a rise in I + G of $1 trillion doesn’t cause a rise in Y of $5 trillion; rather, Y must rise by $5 trillion for C to rise by $4 trillion and I + G to rise by $1 trillion. If there’s a causal relationship between ∆G and ∆Y, the multiplier doesn’t portray it.

PHONY MATH DOESN’T EVEN ADD UP

Recall the story that’s supposed to explain how the multiplier works: The initial stimulus (∆G) creates income, a fraction of which (b) goes to C. That spending creates new income, a fraction of which goes to C. And so on. Thus the first round = ∆G, the second round = b(∆G), the third round = b²(∆G), and so on. The sum of the “rounds” asymptotically approaches k(∆G). So, if b = 0.8, k = 5, and ∆G = $1 trillion, the resulting cumulative ∆Y = $5 trillion (in the limit). And it’s all in addition to the output that would have been generated in the absence of ∆G, as long as many conditions are met. Chief among them is the condition that the additional output in each round is generated by resources that had been unemployed.

Not only is the math behind the multiplier phony, as explained above; it also yields contradictory results. If one can derive an investment/government-spending multiplier, one can also derive a “consumption multiplier”:

C = bY
Y = (1/b)C
∆Y = (1/b)(∆C) = kc(∆C), where kc = 1/b

Taking b = 0.8, as before, the resulting value of kc is 1.25. Suppose the initial round of spending is generated by C instead of G. (I won’t bother with a story to explain it; you can easily imagine one involving underemployed factories and unemployed persons.) If ∆C = $1 trillion, shouldn’t cumulative ∆Y = $5 trillion? After all, there’s no essential difference between spending $1 trillion on a government project and $1 trillion on factory output, as long as both bursts of spending result in the employment of underemployed and unemployed resources (among other things).

But with kc = 1.25, the initial $1 trillion burst of spending (in theory) results in additional output of only $1.25 trillion. Where’s the other $3.75 trillion? Nowhere. The $5 trillion is phony. What about the $1.25 trillion? It’s phony, too. The “consumption multiplier” of 1.25 is simply the inverse of b, where b = 0.8. In other words, Y must rise by $1.25 trillion if C is to rise by $1 trillion. More phony math.

CAN AN INCREASE IN G HELP IN THE SHORT RUN?

Can an exogenous increase in G really yield a short-term, temporary increase in GDP? Perhaps, but there’s many a slip between cup and lip. The following example goes beyond the bare theory of the Keynesian multiplier to address several practical and theoretical shortcomings (some of which are discussed “here” and “here“):

  1. Annualized real GDP (Y) drops from $16.5 trillion a year to $14 trillion a year because of the unemployment of resources. (How that happens is a different subject.)
  2. Government spending (G) is temporarily and quickly increased by an annual rate of $500 billion; that is, ∆G = $0.5 trillion. The idea is to restore Y to $16.5 trillion, given a multiplier of 5. (In standard multiplier math: ∆Y = (k)(∆G), where k = 1/(1 – MPC); k = 5 when MPC = 0.8.)
  3. The ∆G is financed in a way that doesn’t reduce private-sector spending. (This is almost impossible, given Ricardian equivalence — the tendency of private actors to take into account the long-term, crowding-out effects of government spending as they make their own spending decisions. The closest approximation to neutrality can be attained by financing additional G through money creation, rather than additional taxes or borrowing that crowds out the financing of private-sector consumption and investment spending.)
  4. To have the greatest leverage, ∆G must be directed so that it employs only those resources that are idle, which then acquire purchasing power that they didn’t have before. (This, too, is almost impossible, given the clumsiness of government.)
  5. A fraction of the new purchasing power flows, through consumption spending (C), to the employment of other idle resources. That fraction is called the marginal propensity to consume (MPC), which is the rate at which the owners of idle resources spend additional income on so-called consumption goods. (As many economists have pointed out, the effect could also occur as a result of investment spending. A dollar spent is a dollar spent, and investment spending has the advantage of directly enabling economic growth, unlike consumption spending.)
  6. A remainder goes to saving (S) and is therefore available for investment (I) in future production capacity. But S and I are ignored in the multiplier equation. One story goes like this: S doesn’t elicit I because savers hoard cash and investment is discouraged by the bleak economic outlook. Here is a more likely story: the multiplier would be infinite (and therefore embarrassingly inexplicable) if S generated an equivalent amount of I, because the marginal propensity to spend (MPS) would equal 1, and the multiplier equation would become k = 1/(1 – MPS) = ∞, where MPS = 1.
  7. In any event, the initial increment of C (∆C) brings forth a new “round” of production, which yields another increment of C, and so on, ad infinitum. If MPC = 0.8, then assuming away “leakage” to taxes and imports, the multiplier = k = 1/(1 – MPC), or k = 5 in this example. (The multiplier rises with MPC and reaches infinity if MPC = 1. This suggests that a very high MPC is economically beneficial, even though a very high MPC implies a very low rate of saving and therefore a very low rate of growth-producing investment.)
  8. Given k = 5, ∆G = $0.5 trillion would cause an eventual increase in real output of $2.5 trillion (assuming no “leakage” or offsetting reductions in private consumption and investment); that is, ∆Y = (k)(∆G) = $2.5 trillion. However, because G and Y usually refer to annual rates, this result is mathematically incoherent; a single year’s ∆G of $0.5 trillion does not restore that year’s Y to $16.5 trillion.
  9. In any event, the increase in Y isn’t permanent; the multiplier effect disappears after the “rounds” resulting from ∆G have played out. If the theoretical multiplier is 5, and if transactional velocity is 4 (i.e., 4 “rounds” of spending in a year), more than half of the multiplier effect would be felt within a year from each injection of spending, and about five-sixths would be felt within two years of each injection (see the sketch following this list). It seems unlikely, however, that the multiplier effect would be felt for much longer, because of changing conditions (e.g., an exogenous boost in private investment, private reemployment of resources, discouraged workers leaving the labor force, shifts in expectations about inflation and returns on investment).
  10. All of this ignores the fact that the likely cause of the drop in Y is not insufficient “aggregate demand”, but a “credit crunch” (Michael D. Bordo and Joseph G. Haubrich in “Credit Crises, Money, and Contractions: A Historical View”, Federal Reserve Bank of Cleveland, Working Paper 09-08, September 2009). “Aggregate demand” doesn’t exist, except as an after-the-fact measurement of the money value of goods and services comprised in Y. “Aggregate demand”, in other words, is merely the sum of millions of individual transactions, the rate and total money value of which decline for specific reasons, “credit crunch” being chief among them. Given that, an exogenous increase in G is likely to yield a real increase in Y only if the increase in G leads to an increase in the money supply (as it is bound to do when the Fed, in effect, prints money to finance it). But because of cash hoarding and a bleak investment outlook, the increase in the money supply is unlikely to generate much additional economic activity.
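
Item 9 can be verified with a few lines of arithmetic; the only inputs are the assumptions stated in the list (b = 0.8 and a transactional velocity of 4 rounds a year):

    # The cumulative share of the total multiplier effect felt after n
    # rounds is 1 - b**n (the geometric series, normalized by its limit).
    b = 0.8
    for years in (1, 2):
        rounds = 4 * years            # 4 "rounds" of spending per year
        felt = 1 - b**rounds
        print(years, round(felt, 3))  # 1 -> 0.59, 2 -> 0.832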

So much for that.

THE THEORETICAL MAXIMUM

A somewhat more realistic version of multiplier math — as opposed to the version addressed earlier — yields a maximum value of k = 1:

More rigorous derivation of Keynesian multiplier
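
The derivation appeared as an image in the original post. A reconstruction consistent with the description below (note step 3, in which C is a function of P rather than Y), with a representing autonomous consumption:

1. Y = C + I + G

2. P = Y – G

3. C = a + bP = a + b(Y – G)

4. Y = a + b(Y – G) + I + G

5. (1 – b)Y = a + I + (1 – b)G

6. Y = (a + I)/(1 – b) + G

7. k = ∆Y/∆G = 1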

How did I do that? In step 3, I made C a function of P (private-sector GDP) instead of Y (usually taken as the independent variable). Why? C is more closely linked to P than to Y, as an analysis of GDP statistics will prove. (Go here, download the statistics for the post-World War II era from tables 1.1.5 and 3.1, and see for yourself.)

THE TRUE MULTIPLIER

In fact, a sustained increase in government spending crowds out other output — a multiplier of less than 1 — and, as I show below, its net effect on real output is negative.

Robert J. Barro of Harvard University opens an article in The Wall Street Journal with the statement that “economists have not come up with explanations … for multipliers above one”. Barro continues:

A much more plausible starting point is a multiplier of zero. In this case, the GDP is given, and a rise in government purchases requires an equal fall in the total of other parts of GDP — consumption, investment and net exports….

What do the data show about multipliers? Because it is not easy to separate movements in government purchases from overall business fluctuations, the best evidence comes from large changes in military purchases that are driven by shifts in war and peace. A particularly good experiment is the massive expansion of U.S. defense expenditures during World War II. The usual Keynesian view is that the World War II fiscal expansion provided the stimulus that finally got us out of the Great Depression. Thus, I think that most macroeconomists would regard this case as a fair one for seeing whether a large multiplier ever exists.

I have estimated that World War II raised U.S. defense expenditures by $540 billion (1996 dollars) per year at the peak in 1943-44, amounting to 44% of real GDP. I also estimated that the war raised real GDP by $430 billion per year in 1943-44. Thus, the multiplier was 0.8 (430/540). The other way to put this is that the war lowered components of GDP aside from military purchases. The main declines were in private investment, nonmilitary parts of government purchases, and net exports — personal consumer expenditure changed little. Wartime production siphoned off resources from other economic uses — there was a dampener, rather than a multiplier….

There are reasons to believe that the war-based multiplier of 0.8 substantially overstates the multiplier that applies to peacetime government purchases. For one thing, people would expect the added wartime outlays to be partly temporary (so that consumer demand would not fall a lot). Second, the use of the military draft in wartime has a direct, coercive effect on total employment. Finally, the U.S. economy was already growing rapidly after 1933 (aside from the 1938 recession), and it is probably unfair to ascribe all of the rapid GDP growth from 1941 to 1945 to the added military outlays. [“Government Spending Is No Free Lunch”, The Wall Street Journal, January 22, 2009]

This is from a paper by Valerie A. Ramey:

… [I]t appears that a rise in government spending does not stimulate private spending; most estimates suggest that it significantly lowers private spending. These results imply that the government spending multiplier is below unity. Adjusting the implied multiplier for increases in tax rates has only a small effect. The results imply a multiplier on total GDP of around 0.5. [“Government Spending and Private Activity”, National Bureau of Economic Research, January 2012]

There is a key component of government spending which usually isn’t captured in estimates of the multiplier: transfer payments, which are mainly “social benefits” (e.g., Social Security, Medicare, and Medicaid). In fact, actual government spending in the U.S., including transfer payments, is about double the nominal amount that is represented in G, the standard measure of government spending (the actual cost of government operations, buildings, equipment, etc.). But transfer payments — like other government spending — are financed by diverting resources from persons who are directly productive (active workers) and whose investments are directly productive (innovators, entrepreneurs, stockholders, etc.) to persons who (for the most part) are economically unproductive or counterproductive. It follows that transfer payments must depress real economic output.

Other factors are also important to economic growth, namely, private investment in business assets, the rate at which regulations are issued, and inflation. The combined effects of these factors and aggregate government spending have been estimated. I borrow from that estimate, with a slight, immaterial change in nomenclature:

gr = 0.0275 – 0.347F + 0.0769A – 0.000327R – 0.135P

Where,

gr = real rate of GDP growth in a 10-year span (annualized)

F = fraction of GDP spent by governments at all levels during the preceding 10 years [including transfer payments]

A = the constant-dollar value of private nonresidential assets (business assets) as a fraction of GDP, averaged over the preceding 10 years

R = average number of Federal Register pages, in thousands, for the preceding 10-year period

P = growth in the CPI-U during the preceding 10 years (annualized).

The r-squared of the equation is 0.73, and the p-value of the F-statistic is 2.00E-12. The p-values of the intercept and coefficients are 0.099, 1.75E-07, 1.96E-08, 8.24E-05, and 0.0096. The standard error of the estimate is 0.0051, that is, about half a percentage point.

Assume, for the sake of argument, that F rises while the other independent variables remain unchanged. A rise in F from 0.24 to 0.33 (the actual change from 1947 to 2007) would reduce the real rate of economic growth by 0.031, that is, by 3.1 percentage points. The real rate of growth from 1947 to 1957 was 4 percent. Other things being the same, the rate of growth would have dropped to 0.9 percent in the period 2008-2017. It actually dropped to 1.4 percent, which is within the standard error of the equation. And the discrepancy could be the result of changes in the other variables — a disproportionate increase in business assets (A), for example.
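
For readers who want to check that arithmetic, here is the computation, using only the equation’s coefficient on F:

    # Effect of the 1947-2007 rise in the government share of GDP (F) on
    # the annualized 10-year growth rate, holding A, R, and P fixed.
    dF = 0.33 - 0.24              # rise in F
    d_gr = -0.347 * dF            # change in gr
    print(round(d_gr, 4))         # -0.0312, i.e., about -3.1 points
    print(round(0.04 + d_gr, 4))  # 0.0088: growth falls from 4% to ~0.9%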

Given that the F term of the growth equation is –0.347F, other things being the same, then

Y1 = Y0(c – 0.347F)

Where,

Y1 = real GDP in the period after a change in F, other things being the same

Y0 = real GDP in the period during which F changes

c = a constant, representing the sum of 1 + 0.0275 + the terms obtained from fixed values of A, R, and P

The true F multiplier, kT, is therefore negative:

kT = ∆Y/∆F = -0.347Y0

For example, with Y0 = 1000, c = 1 (for simplicity), and other things being the same,

Y1 = 1000 – (0.347)(0)(1000) = 1000, when F = 0

Y1 = 1000 – (0.347)(1)(1000) = 653, when F = 1

so that ∆Y = –347 = (–0.347)(Y0) when F rises from 0 to 1.

Keeping in mind that the equation is based on an analysis of successive 10-year periods, the true F multiplier should be thought of as representing the effect of a change in the average value of F in a 10-year period on the average value of Y in a subsequent 10-year period.

This is not to minimize the deleterious effect of F (and other government-related factors) on Y. If the 1947-1957 rate of growth (4 percent) had been sustained through 2017, Y would have risen from $1.9 trillion in 1957 to $20 trillion in 2017. But because F, R, and P rose markedly over the years, the real rate of growth dropped sharply and Y reached only $17.1 trillion in 2017. That’s a difference of almost $3 trillion in a single year.
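
The counterfactual is simple compounding. A quick check, using the values in the paragraph above:

    # $1.9 trillion compounded at 4 percent a year over the 60 years from
    # 1957 to 2017, versus the $17.1 trillion actually reached.
    y_1957 = 1.9
    y_2017_counterfactual = y_1957 * 1.04**60
    print(round(y_2017_counterfactual, 1))         # ~20.0 trillion
    print(round(y_2017_counterfactual - 17.1, 1))  # ~2.9 trillion in 2017 alone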

Such losses, summed over several decades, represent millions of jobs that weren’t created, significantly lower standards of living, greater burdens on the workers who support retirees and subsidize their medical care, and the loss of liberty that inevitably results when citizens are subjugated to tax collectors and regulators.

ADDENDUM: A REAL ECONOMIC EXPLANATION FOR THE INEFFECTIVENESS OF “STIMULUS” SPENDING

Consider a static, full-employment economy, in which the same goods and services are produced year after year, yielding the same incomes to the same owners of the same factors of production, which do not change in character (capital goods are maintained and replaced in kind). The owners of the factors of production spend and save their incomes in the same way year after year, so that the same goods and services are produced year after year, and those goods and services encompass the maintenance and in-kind replacement of capital goods. Further, the production cycle is such that all goods and services become available to buyers on the last day of the year, for use by the buyers during the coming year. (If that seems far-fetched, just change all instances of “year” in this post to “month”, “week”, “day”, “hour”, “minute”, or “second.” The analysis applies in every case.)

What would happen if there were a sudden alteration in this circular flow of production (supply), on the one hand, and consumption and investment (demand), on the other hand? Specifically, suppose that a component of the circular flow is a bilateral exchange between a gunsmith and a dairyman who produces butter: one rifle for ten pounds of butter. If the gunsmith decides that he no longer wants ten pounds of butter, and therefore doesn’t produce a rifle to trade for butter, the dairyman would reduce his output of butter by ten pounds.

A Keynesian would describe the situation as a drop in aggregate demand. There is no such thing as “aggregate demand”, of course; it’s just an abstraction for the level of economic activity, which really consists of a host of disparate transactions, the dollar value of which can be summed. Further, those disparate transactions represent not just demand, but demand and supply, which are two sides of the same coin.

In the case of the gunsmith and the dairyman, aggregate output drops by one rifle and ten pounds of butter. The reduction of output by one rifle is voluntary and can’t be changed by government action. The reduction of output by ten pounds of butter would be considered involuntary and subject to remediation by government – in the Keynesian view.

What can government do about the dairyman’s involuntary underemployment? Keynesians would claim that the federal government could print some money and buy the dairyman’s butter. This would not, however, result in the production of a rifle; that is, it would not restore the status quo ante. If the gunsmith has decided not to produce a rifle for reasons having nothing to do with the availability of ten pounds of butter, the government can’t change that by buying ten pounds of butter.

But … a Keynesian would say … if the government buys the ten pounds of butter, the dairyman will have money with which to buy other things, and that will stimulate the economy to produce additional goods and services worth at least as much as the rifle that’s no longer being produced. The Keynesian would have to explain how it’s possible to produce additional goods and services of any kind if only the gunsmith and dairyman are underemployed (one voluntarily, the other involuntarily). The gunsmith has declined to produce a rifle for reasons of his own, and it would be pure Keynesian presumption to assert that he could be lured into producing a rifle for newly printed money when he wouldn’t produce it for something real, namely, ten pounds of butter.

Well, what about the dairyman, who now has some newly printed money in his pocket? Surely, he can entice other economic actors to produce additional goods and services with the money, and trade those goods and services for his ten pounds of butter. The offer of newly printed money might entice some of them to divert some of their production to the dairyman, so that he would have buyers for his ten pounds of butter. Thus the dairyman might become fully employed, but the diversion of output in his direction would cause some other economic actors to be less than fully employed.

Would the newly printed money entice the entry of new producers, some combination of whom might buy the dairyman’s ten pounds of butter and restore him to full employment? It might, but so would private credit expansion in the normal course of events. The Keynesian money-printing solution would lead to additional output only where (a) private credit markets wouldn’t finance new production and (b) new production would be forthcoming despite the adverse conditions implied by (a). And the fact would remain that economic output has declined by one rifle, which fact can’t be changed by deficit spending or monetary expansion.

This gets us to the heart of the problem. Deficit spending (or expansionary monetary policy) can entice additional output only if there is involuntary underemployment, as in the case of the dairyman who would prefer to continue making and selling the ten pounds of butter that he had been selling to the gunsmith. And how do resources become involuntarily underemployed? Here are the causes, which aren’t mutually exclusive:

  • changes in perceived wants, tastes, and preferences, as in the case of the gunsmith’s decision to make one less rifle and forgo ten pounds of butter
  • reductions in output that are occasioned by forecasts of lower demand for particular goods and services
  • changes in perceptions of or attitudes toward risk, which reduce producers’ demand for resources, buyers’ demand for goods and services, or financiers’ willingness to extend credit to producers and buyers.

I am unaware of claims that deficit spending or monetary expansion can affect the first cause of underemployment, though there is plenty of government activity aimed at changing wants, tastes, and preferences for paternalistic reasons.

What about the second and third causes? Can government alleviate them by buying things or making more money available with which to buy things? The answer is no. What signal is sent by deficit spending or monetary expansion? This: times are tough, demand is falling, credit is tight. In those circumstances, why would newly printed money in the pockets of buyers (e.g., the dairyman) or in the hands of banks entice additional production, purchases, lending, or borrowing?

The evidence of the Great Recession suggests strongly that printing money and spending it or placing it with banks does little if any good. The passing of the Great Recession — and of the Great Depression seventy years earlier — was owed to the eventual restoration of the confidence of buyers and sellers in the future course of the economy. In the case of the Great Depression, confidence was restored when the entry of the United States into World War II put an end to the New Deal, which actually prolonged the depression.

In the case of the Great Recession, confidence was restored (though not as fully) by the end of “stimulus” spending. The lingering effort on the part of the Fed to stimulate the economy through quantitative easing probably undermined confidence rather than restoring it. In fact, the Fed announced that it would begin to raise interest rates in an effort to convince the business community that the Great Recession really was coming to an end.


Related reading:

Robert Higgs, “Regime Uncertainty: Why the Great Depression Lasted So Long and Why Prosperity Resumed after the War”, Independent Review, Spring 1997

Harold L. Cole and Lee E. Ohanian, “New Deal Policies and the Persistence of the Great Depression: A General Equilibrium Analysis”, Federal Reserve Bank of Minneapolis, Working Paper 597, revised May 2001 (later published in the Journal of Political Economy, August 2004)

Casey B. Mulligan, “Simple Analytics and Empirics of the Government Spending Multiplier and Other ‘Keynesian’ Paradoxes”, National Bureau of Economic Research, Working Paper 15800, March 2010

Daniel J. Mitchell, “Data in New World Bank Report Shows that Large Public Sectors Reduce Economic Growth“, Cato at Liberty, February 9, 2012

Steven Kates et al., “Reassessing the Political Economy of John Stuart Mill”, Online Library of Liberty, July 2015

Related posts:

A Keynesian Fantasy Land
The Keynesian Fallacy and Regime Uncertainty
Why the “Stimulus” Failed to Stimulate
Say’s Law, Government, and Unemployment
Regime Uncertainty and the Great Recession
The Rahn Curve Revisited

Utilitarianism vs. Liberty

Utilitarianism is an empty concept. And it is inimical to liberty.

What is utilitarianism, as I use the term? This:

1. (Philosophy) the doctrine that the morally correct course of action consists in the greatest good for the greatest number, that is, in maximizing the total benefit resulting, without regard to the distribution of benefits and burdens

To maximize the total benefit is to maximize social welfare, which is the well-being of all persons, somehow measured and aggregated. A true social-welfare maximizer would strive to maximize the social welfare of the planet. But schemes to maximize social welfare usually are aimed at maximizing it for the persons in a particular country, so they really are schemes to maximize national welfare.

National welfare may conflict with planetary welfare; the former may be increased (by some arbitrary measure) at the expense of the latter. Suppose, for example, that Great Britain had won the Revolutionary War and forced Americans to live on starvation wages while making things for the enjoyment of the British people. A lot of Britons would have been better off materially (though perhaps not spiritually), while most Americans certainly would have been worse off. The national welfare of Great Britain would have been improved, if not maximized, “without regard to the distribution of benefits and burdens.” On a contemporary note, anti-globalists assert (wrongly) that globalization of commerce exploits the people of poor countries. If they were right, they would at least have the distinction of striving to maximize planetary welfare. (Though there is no such thing, as I will show.)

THE UTILITARIAN WORLD VIEW

A utilitarian will favor a certain policy if a comparison of its costs and benefits shows that the benefits exceed the costs — even though the persons bearing the costs are often not the persons who accrue the benefits. That is to say, utilitarianism authorizes the redistribution of income and wealth for the “greater good”. Thus the many governmental schemes that are redistributive by design, for example, the “progressive” income tax (i.e., the taxation of income at graduated rates), Social Security (which yields greater “returns” to low-income workers than to high-income workers, and which taxes current workers for the benefit of retirees), and Medicaid (which is mainly for the benefit of persons whose tax burden is low or nil).

One utilitarian justification of such schemes is the fallacious and short-sighted assertion that persons with higher incomes gain less “utility” as their incomes rise, whereas the persons to whom that income is transferred gain much more “utility” because their incomes are lower. This principle is sometimes stated as “a dollar means more to a poor man than to a rich one”.

Utilitarians embrace such assertions because they are accountants of the soul, who believe (implicitly, at least) that it is within their power to balance the unhappiness of those who bear costs against the happiness of those who accrue benefits. The precise formulation, according to John Stuart Mill, is “the greatest amount of happiness altogether” (Utilitarianism, Chapter II, Section 16).

UTILITARIANISM AS ECONOMIC FALLACY, ARROGANCE, AND HYPOCRISY

It follows — if you accept the assumption of diminishing marginal utility and ignore the negative effect of redistribution on economic growth — that overall utility (a.k.a. the social welfare function) will be raised if income is redistributed from high-income earners to low-income earners, and if wealth is redistributed from the wealthier to the less wealthy. But in order to know when to stop redistributing income or wealth, you must be able to measure the utility of individuals with some precision, and you must be able to sum those individual views of utility across the entire nation. Nay, across the entire world, if you truly want to maximize social welfare.

Most leftists (and not a few economists) don’t rely on the assumption of diminishing marginal utility as a basis for redistributing income and wealth. To them, it’s just a matter of “fairness” or “social justice”. It’s odd, though, that affluent leftists seem unable to support redistributive schemes that would reduce their income and wealth to, say, the global median for each measure. “Fairness” and “social justice” are all right in their place — in lecture halls and op-ed columns — but the affluent leftist will keep them at a comfortable distance from his luxurious abode.

In any event, leftists (including some masquerading as economists) who deign to offer an economic justification for redistribution usually fall back on the assumption of the diminishing marginal utility (DMU) of income and wealth. In doing so, they commit (at least) four errors.

The first error is the fallacy of misplaced concreteness which is found in the notion of utility. Have you ever been able to measure your own state of happiness? I mean measure it, not just say that you’re feeling happier today than you were when your pet dog died. It’s an impossible task, isn’t it? If you can’t measure your own happiness, how can you (or anyone) presume to measure — and aggregate — the happiness of millions or billions of individual human beings? It can’t be done.

Which brings me to the second error, which is an error of arrogance. Given the impossibility of measuring one person’s happiness, and the consequent impossibility of measuring and comparing the happiness of many persons, it is pure arrogance to insist that “society” would be better off if X amount of income or wealth were transferred from Group A to Group B.

Think of it this way: A tax levied on Group A for the benefit of Group B doesn’t make Group A better off. It may make some smug members of Group A feel superior to other members of Group A, but it doesn’t make all members of Group A better off. In fact, most members of Group A are likely to feel worse off. It takes an arrogant so-and-so to insist that “society” is somehow better off even though a lot of persons (i.e., members of “society”) have been made worse off.

The third error lies in the implicit assumption embedded in the idea of DMU. The assumption is that as one’s income or wealth rises one continues to consume the same goods and services, but more of them. Thus the example of chocolate cake: The first slice is enjoyed heartily, the second slice is enjoyed but less heartily, the third slice is consumed reluctantly, and the fourth slice is rejected.

But that’s a bad example. The fact is that having more income or wealth enables a person to consume goods and services of greater variety and higher quality. Given that, it is possible to increase one’s utility by shifting from a “third helping” of a cheap product to a “first helping” of an expensive one, and to keep on doing so as one’s income rises. Perhaps without limit, given the profusion of goods and services available to consumers.

And should you run out of new and different things to buy (an unlikely event), you can make yourself happier by acquiring more income to amass more wealth, and (if it makes you happy) by giving away some of your wealth. How much happier? Well, if you’re a “scorekeeper” (as very wealthy persons seem to be), your happiness rises immeasurably when your wealth rises from, say, $10 million to $100 million to $1 billion — and if your wealth-based income rises proportionally. How much happier is “immeasurably happier”? Who knows? That’s why I say “immeasurably” — there’s no way of telling. Which is why it’s arrogant to say that a wealthy person doesn’t “need” his next $1 million or $10 million, or that they don’t give him as much happiness as the preceding $1 million or $10 million.

All of that notwithstanding, the committed believer in DMU will shrug and say that at some point DMU must set in. Which leads me to the fourth error, which is introspective failure. If you’re like most mere mortals (as I am), your income during your early working years barely covered your bills. If you’re more than a few years into your working career, subsequent pay raises probably made you feel better about your financial state — not just a bit better but a whole lot better. Those raises enabled you to enjoy newer, better things (as discussed above). And if your real income has risen by a factor of two or three or more — and if you haven’t messed up your personal life (which is another matter) — you’re probably incalculably happier than when you were just able to pay your bills. And you’re especially happy if you put aside a good chunk of money for your retirement, the anticipation and enjoyment of which adds a degree of utility (such a prosaic word) that was probably beyond imagining when you were in your twenties, thirties, and forties.

In sum, the idea that one’s marginal utility (an unmeasurable abstraction) diminishes with one’s income or wealth is nothing more than an assumption that simply doesn’t square with my experience. And I’m sure that my experience is far from unique, though I’m not arrogant enough to believe that it’s universal.

UTILITARIANISM VS. LIBERTY

I have defined liberty as

the general observance of social norms that enables a people to enjoy…peaceful, willing coexistence and its concomitant: beneficially cooperative behavior.

Where do social norms come into it? The observance of social norms — society’s customs and morals — creates mutual trust, respect, and forbearance, from which flow peaceful, willing coexistence and beneficially cooperative behavior. In such conditions, only a minimal state is required to deal with those who will not live in peaceful coexistence, that is, foreign and domestic aggressors. And prosperity flows from cooperative economic behavior — the exchange of goods and services for the mutual benefit of the parties to the exchange.

Society isn’t to be confused with nation or any other kind of geopolitical entity. Society — true society — is

3a :  an enduring and cooperating social group whose members have developed organized patterns of relationships through interaction with one another.

A close-knit group, in other words. It should go without saying that the members of such a group will be bound by culture: language, customs, morals, and (usually) religion. Their observance of a common set of social norms enables them to enjoy peaceful, willing coexistence and beneficially cooperative behavior.

Free markets mimic some aspects of society, in that they are physical and virtual places where buyers and sellers meet peacefully (almost all of the time) and willingly, to cooperate for their mutual benefit. Free markets thus transcend (or can transcend) the cultural differences that delineate societies.

Large geopolitical areas also mimic some aspects of society, in that their residents meet peacefully (most of the time). But when “cooperation” in such matters as mutual aid (care for the elderly, disaster recovery, etc.) is forced by government, it isn’t true cooperation, which is voluntary.

In any event, the United States is not a society. Even aside from the growing black-white divide, the bonds of nationhood are far weaker than those of a true society (or a free market), and are therefore easier to subvert. Even persons of the left agree that mutual trust, respect, and forbearance are at a low ebb — probably their lowest ebb since the Civil War.

Therein lies a clue to the emptiness of utilitarianism. Why should a qualified white person care about or believe in the national welfare when, in furtherance of national welfare (or something), a job or university slot for which the white person applies is given, instead, to a less qualified black person because of racial quotas that are imposed or authorized by government? Why should a taxpayer care about or believe in the national welfare if he is forced by government to share the burden of enlarging it through government-enforced transfer payments to those who don’t pay taxes? By what right or gift of omniscience is a social engineer able to intuit the feelings of 300-plus million individual persons and adjudge that the national welfare will be maximized if some persons are forced to cede privileges or money to other persons?

Consider Robin Hanson’s utilitarian scheme, which he calls futarchy:

In futarchy, democracy would continue to say what we want, but betting markets would now say how to get it. That is, elected representatives would formally define and manage an after-the-fact measurement of national welfare, while market speculators would say which policies they expect to raise national welfare….

Futarchy is intended to be ideologically neutral; it could result in anything from an extreme socialism to an extreme minarchy, depending on what voters say they want, and on what speculators think would get it for them….

A betting market can estimate whether a proposed policy would increase national welfare by comparing two conditional estimates: national welfare conditional on adopting the proposed policy, and national welfare conditional on not adopting the proposed policy.

Get it? “Democracy would say what we want” and futarchy “could result in anything from an extreme socialism to an extreme minarchy, depending on what voters say they want.” Hanson the social engineer believes that the “values” to be maximized should be determined “democratically,” that is, by majorities (however slim) of voters. Further, it’s all right with Hanson if those majorities lead to socialism. So Hanson envisions national welfare that isn’t really national; it’s determined by what’s approved by one-half-plus-one of the persons who vote. Scratch that. It’s determined by the politicians who are elected by as few as one-half-plus-one of the persons who vote, and in turn by unelected bureaucrats and judges — many of whom were appointed by politicians long out of office. It is those unelected relics of barely elected politicians who really establish most of the rules that govern much of Americans’ economic and social behavior.

Hanson’s version of national welfare amounts to this: whatever is is right. If Hitler had been elected by a slim majority of Germans, thereby legitimating him in Hanson’s view, his directives would have expressed the national will of Germans and, to the extent that they were carried out, would have maximized the national welfare of Germany.

Hanson’s futarchy is so bizarre as to be laughable. Ralph Merkle nevertheless takes the ball from Hanson and runs with it:

We choose to be more specific [than Hanson] about the definition of what we shall call the “collective welfare”, for the very simple reason that “voting on values” retains the dubious voting mechanism as a core component of futarchy….

We can create a DAO Democracy capable of self-improvement which has unlimited growth potential by modifying futarchy to use an unmodifiable democratic collective welfare metric, adapting it to work as a Decentralized Autonomous Organization, implementing an initial system using simple components (these components including the democratic collective welfare metric, a mechanism for adopting legislation (bills)) and using a built-in prediction market to filter through and adopt proposals for improved components….

1) Anyone can propose a bill at any time….

8) Any existing law can be amended or repealed with the same ease with which a new law can be proposed….

13) The only time this governance process would support “the tyranny of the majority” would be if oppression of some minority actually made the majority better off, and the majority was made sufficiently better off that it outweighed the resulting misery to the minority.

So, for example, we should trust that the super-majority of voters whose incomes are below the national median wouldn’t further tax the voters whose incomes are above the national median? And we should assume that the below-median voters would eventually notice that the heavy-taxation policy is causing their real incomes to decline? And we should assume that those below-median voters would care in any event, given the psychic income they derive from sticking it to “the rich”? What a fairy tale. The next thing I would expect Merkle to assert is that the gentile majority of Germans didn’t applaud or condone the oppression of the Jewish minority, that Muslim hordes that surround Israel aren’t scheming to annihilate it, and on into the fantasy-filled night.

How many times must I say it? There is no such thing as a national, social, cosmic, global, or aggregate welfare function of any kind. (Go here for a long but probably not exhaustive list of related posts.)

To show why there’s no such thing as an aggregate welfare function, I usually resort to a homely example:

  • A dislikes B and punches B in the nose.
  • A is happier; B is unhappier.
  • Someone (call him Omniscient Social Engineer) somehow measures A’s gain in happiness, compares it with B’s loss of happiness, and declares that the former outweighs the latter. Thus it is a socially beneficial thing if A punches B in the nose, or the government takes money from B and gives it to A, or the government forces employers to hire people who look like A at the expense of people who look like B, etc.

If you’re a B — and there are a lot of them out there — do you believe that A’s gain somehow offsets your loss? Unless you’re a masochist or a victim of the Stockholm syndrome, you’ll be ticked off about what A has done to you, or government has done to you on A’s behalf. Who is an Omniscient Social Engineer — a Hanson or Merkle — to say that your loss is offset by A’s gain? That’s just pseudo-scientific hogwash, also known as utilitarianism. But that’s exactly what Hanson, Merkle, etc., are peddling when they invoke social welfare, national welfare, planetary welfare, or any other aggregate measure of welfare.

What about GDP as a measure of national welfare? Even economists — or most of them — admit that GDP doesn’t measure aggregate happiness, well-being, or any similar thing. To begin with, a lot of stuff is omitted from GDP, including so-called household production, which is the effort (not to mention love) that Moms (it’s usually Moms) put into the care, feeding, and hugging of their families. And for reasons hinted at in the preceding paragraph, the income that’s earned by A, B, C, etc., not only buys different things, but A, B, C, etc., place unique (and changing) values on those different things and derive different and unmeasurable degrees of happiness (and sometimes remorse) from them.

If GDP, which is relatively easy to estimate (within a broad range of error), doesn’t measure national welfare, what could? Certainly not systems of the kind proposed by Hanson or Merkle, both of which pretend to aggregate that which can’t be aggregated: the happiness of an entire population. (Try it with one stranger, and see if you can arrive at a joint measure of happiness.)

The worst thing about utilitarian schemes and their real-world counterparts (regulation, progressive taxation, affirmative action, etc.) is that they are anti-libertarian. As I say here,

utilitarianism compromises liberty because it accords no value to individual decisions about preferred courses of action. Decisions, to a utilitarian, are valid only if they comply with the views of the utilitarian, who feigns omniscience about the (incommensurable) happiness of individuals.

No system can be better than the “system” of liberty, in which a minimal government protects its citizens from each other and from foreign enemies — and nothing else. Liberty was lost in the instant that government was empowered not only to protect A from B (and vice versa) but to inflict A’s preferences on B (and vice versa).

Futarchy — and every other utilitarian scheme — exhibits total disregard for liberty, and for the social norms on which it ultimately depends. That’s no surprise. Social or national welfare is obviously more important to utilitarians than liberty. If half of all Americans (or American voters) want something, all of us should have it, by God, even if “it” is virtual enslavement by the regulatory-welfare state, a declining rate of economic growth, and fewer jobs for young black men, who then take it out on each other, their neighbors, and random whites.

Patrick Henry didn’t say “Give me maximum national welfare or give me death,” he said “Give me liberty or give me death.” Liberty enables people to make their own choices about what’s best for them. And if they make bad choices, they can learn from them and go on to make better ones.

No better “system” has been invented or will ever be invented. Those who second-guess liberty — utilitarians, reformers, activists, social justice warriors, and all the rest — only undermine it. And in doing so, they most assuredly diminish the welfare of most people just to advance their own smug view of how the world should be arranged.

UTILITARIANISM AND GUN CONTROL VS. LIBERTY

Gun control has been much in the news in recent years and decades. The real problem isn’t guns, but morality, as discussed here. But arguments for gun control are utilitarian, and gun control is a serious threat to liberty.

Consider the relationship between guns and crime. Here is John Lott’s controversial finding (as summarized at Wikipedia several years ago):

[A]llowing adults to carry concealed weapons significantly reduces crime in America. [Lott] supports this position by an exhaustive tabulation of various social and economic data from census and other population surveys of individual United States counties in different years, which he fits into a large multifactorial mathematical model of crime rate. His published results generally show a reduction in violent crime associated with the adoption by states of laws allowing the general adult population to freely carry concealed weapons.

Suppose Lott is right. (There is good evidence that he isn’t wrong. RAND’s recent meta-study is laughably subjective.)

If more concealed weapons lead to less crime, then the proper utilitarian policy is for governments to be more lenient about owning and bearing firearms. A policy of leniency would also be consistent with two tenets of libertarian-conservatism:

  • the right of self-defense
  • taking responsibility for one’s own safety beyond that provided by guardians (be they family, friends, passing strangers, or minions of the state), because guardians can’t be everywhere, all the time, and aren’t always effective when they are present.

Only a foolish, extreme pacifist denies the first tenet. No one (but the same foolish pacifist) can deny the second tenet in good faith.

However, if Lott is right and government policy were to veer toward greater leniency, it is possible that more innocent persons would be killed by firearms than would otherwise be the case. The incidence of accidental shootings could rise, even as the rate of crime drops.

Which is worse, more crime or more accidental shootings? Not even a utilitarian can say, because no formula can objectively weigh the two things. (Not that that will stop a utilitarian from making up some weights, to arrive at a formula that supports his prejudice in the matter.) Both have psychological aspects (victimization, wounds, grief) that defy quantification. The only reasonable way out of the dilemma is to favor liberty and punish wrong-doing where it occurs. The alternative — more restrictions on gun ownership — punishes the many (those who would defend themselves), instead of punishing actual wrong-doers.

Suppose Lott is wrong, and more guns mean more crime. Would that militate against the right to own and bear arms? Only if utilitarianism is allowed to override liberty. Again, I would favor liberty, and punish wrong-doing where it occurs, instead of preventing some persons from defending themselves.

In sum, the ownership and carrying of guns isn’t a problem that’s amenable to a utilitarian solution. (Few problems are, and none of them involves government.) The ownership and carrying of guns is an emotional issue (and not only on the part of gun-grabbers). The natural reaction to highly publicized mass-shootings is to “do something”.

In fact, the “something” isn’t within the power of government to do, unless it undoes many policies that have subverted civil society over the past several decades. Mass shootings — and mass killings, in general — arise from social decay. Mass killings will not stop, or slow to a trickle, until the social decay stops and is reversed — which may be never.

So when the next restriction on guns fails to stop mass murder, the next restriction on guns (or explosives, etc.) will be adopted in an effort to stop it. And so on until self-defense, personal responsibility — and liberty — are fainter memories than they already are.

My point is that it doesn’t matter whether Lott is right or wrong. Utilitarianism has no place in it for liberty. My right to self-defense and my willingness to take personal responsibility for it — given the likelihood that government will fail to defend me — shouldn’t be compromised by hysterical responses to notorious cases of mass murder. The underlying aim of the hysterics (and the left-wingers who encourage them) is the disarming of the populace. The necessary result will be the disarming of law-abiding citizens, so that they become easier prey for criminals and psychopaths.

A proper libertarian* eschews utilitarianism as a basis for government policy. The decision whether to own and carry a weapon for self-defense belongs to the individual, who (by his decision) accepts responsibility for his actions**. The role of the state in the matter is to deter aggressive acts on the part of criminals and psychopaths by visiting swift and certain justice upon them when they commit such acts.

CONCLUSION

Utilitarianism compromises liberty because it accords no value to the abilities, knowledge, and preferences of individuals. Decisions, to a utilitarian, are valid only if they serve to increase collective happiness, which is a mere fiction. Utilitarianism is nothing more than an excuse for imposing the utilitarian’s prejudices about the way the world ought to be.


* Libertarianism, by my reckoning, spans anarchism and the two major strains of minarchism: left-minarchism and right-minarchism. The internet-dominant strains of libertarianism (anarchism and left-minarchism) are, in fact, antithetical to liberty because they denigrate civil society. (For more on the fatuousness of the dominant strains of “libertarianism,” see “On Liberty” and “The Meaning of Liberty”.) The conservative traditionalist (right-minarchist) is both a true libertarian and a true conservative.

** Criminals and psychopaths accept responsibility de facto, as persons subject to the laws that forbid the acts that they perform. Sane, law-abiding adults accept responsibility knowingly and willingly. Restricting the ownership of firearms necessarily puts sane, law-abiding adults at the mercy of criminals and psychopaths.

The Folly of Pacifism (III)

This is a reworking of two earlier posts (here and here). Follow the second link to see a long list of related posts.

Winston Churchill said, “An appeaser is one who feeds the crocodile, hoping that it will eat him last.” I say that a person who promotes pacifism as state policy is one who offers himself and his fellow citizens as crocodile food.

Bryan Caplan, an irritating twit who professes economics at George Mason University, is an outspoken pacifist. He is also an outspoken advocate of open borders.

Caplan, like Linus of Peanuts, loves mankind; it’s people he can’t stand. In fact, his love of mankind isn’t love at all, but rather a kind of utilitarianism in which the “good of all” somehow outweighs the specific (though by no means limited) harms caused by lying down at an enemy’s feet or enabling illegal immigrants to feed at the public trough.

As Gregory Cochran puts it in the first installment of his review of Caplan’s The Case Against Education,

I don’t like Caplan. I think he doesn’t understand – can’t understand – human nature, and although that sometimes confers a different and interesting perspective, it’s not a royal road to truth. Nor would I want to share a foxhole with him: I don’t trust him.

That’s it, in a nutshell. Caplan’s pacifism reflects his untrustworthiness. He is a selective anti-tribalist:

I identify with my nuclear family, with my friends, and with a bunch of ideas.  I neither need nor want any broader identity.  I was born in America to a Democratic Catholic mother and a Republican Jewish father, but none of these facts define me.  When Americans, Democrats, Republicans, Catholics, and Jews commit misdeeds – as they regularly do – I feel no shame and offer no excuses.  Why?  Because I’m not with them.

Hollow words from a man who, in large part, owes his freedom and comfortable life to the armed forces and police of the country that he disdains. And — more fundamentally — to the mostly peaceful and productive citizens in whose midst he lives, and whose taxes support the armed forces and police.

Caplan is a man out of place. His attitude toward his country would be justified if he lived in the Soviet Union, Communist China, North Korea, Cuba, or any number of other nation-states past and present. His family, friends, and “bunch of ideas” will be of little help to him when, say, Kim Jong-un (or his successor) lobs an ICBM in the vicinity of Washington, D.C., which is uncomfortably close to Caplan’s residence and workplace.

In his many writings on pacifism, Caplan has pooh-poohed the idea that “if you want peace, prepare for war”:

This claim is obviously overstated.  Is North Korea really pursuing the smart path to peace by keeping almost 5% of its population on active military duty?  How about Hitler’s rearmament?  Was the Soviet Union preparing for peace by spending 15-20% of its GDP on the Red Army?

Note the weasel-word, “overstated”, which gives Caplan room to backtrack in the face of evidence that preparedness for war can foster peace by deterring an enemy. (The defense buildup in the 1980s is arguably such a case, in which the Soviet Union was not only deterred but also brought to its knees.) Weasel-wording is typical of Caplan’s method of argumentation. He is harder to pin down than Jell-O.

In any event, Caplan’s pronouncement only attests to the fact that there are aggressive people and regimes out there, and that non-aggressors are naive to believe that those people and regimes will not attack them if they are not armed against them.

The wisdom of preparedness is nowhere better illustrated than in the world of the internet, where every innocent user is a target for the twisted and vicious purveyors of malware. Think of the millions of bystanders (myself included) whose sensitive personal information has been scooped by breaches of massive databases. Internet predators differ from armed ones only in their choice of targets and weapons, not in their essential disregard for the lives and property of others.

Interestingly, although Caplan foolishly decries preparedness, he isn’t against retaliation (which seems a strange position for a pacifist):

[D]oesn’t pacifism contradict the libertarian principle that people have a right to use retaliatory force?  No. I’m all for revenge against individual criminals.  My claim is that in practice, it is nearly impossible to wage war justly, i.e., without trampling on the rights of the innocent.

Why is it “nearly impossible to wage war justly”? Caplan puts it this way:

1. The immediate costs of war are clearly awful.  Most wars lead to massive loss of life and wealth on at least one side.  If you use a standard value of life of $5M, every 200,000 deaths is equivalent to a trillion dollars of damage.

2. The long-run benefits of war are highly uncertain.  Some wars – most obviously the Napoleonic Wars and World War II – at least arguably deserve credit for decades of subsequent peace.  But many other wars – like the French Revolution and World War I – just sowed the seeds for new and greater horrors.  You could say, “Fine, let’s only fight wars with big long-run benefits.”  In practice, however, it’s very difficult to predict a war’s long-run consequences.  One of the great lessons of Tetlock’s Expert Political Judgment is that foreign policy experts are much more certain of their predictions than they have any right to be.

3. For a war to be morally justified, its long-run benefits have to be substantially larger than its short-run costs.  I call this “the principle of mild deontology.”  Almost everyone thinks it’s wrong to murder a random person and use his organs to save the lives of five other people.  For a war to be morally justified, then, its (innocent lives saved/innocent lives lost) ratio would have to exceed 5:1.  (I personally think that a much higher ratio is morally required, but I don’t need that assumption to make my case).

It would seem that Caplan is not entirely opposed to war — as long as the ratio of lives saved to lives lost is acceptably high. But Caplan gets to choose the number of persons who may die for the sake of those who may thus live. He wears his God-like omniscience with such modesty.

Caplan’s soul-accountancy implies a social-welfare function, wherein A’s death cancels B’s survival. I wonder if Caplan would feel the same way if A were Osama bin Laden (before 9/11) and B were Bryan Caplan or one of his family members or friends. He would feel the same way if he were a true pacifist. But he is evidently not one. His pacifism is selective, and his arguments for it are slippery.

What Caplan wants, I suspect, is the best of both worlds: freedom and prosperity for himself (and family members and friends) without the presence of police and armed forces, and without the messy (but unavoidable) business of using them. Using them is an imperfect business; mistakes are sometimes made. It is the mistakes that Caplan (and his ilk) recoil from, because they indulge in the nirvana fallacy. In this instance, it is a belief that there is a more-perfect world to be had if only “we” would forgo violence. Which gets us back to Caplan’s unwitting admission that there are people out there who will do bad things even if they aren’t provoked.

National defense, like anything less than wide-open borders, violates another of Caplan’s pernicious principles. He seems to believe that the tendency of geographically proximate groups to band together in self-defense is a kind of psychological defect. He refers to it as “group-serving bias”.

That’s just a pejorative term which happens to encompass mutual self-defense. And who better to help you defend yourself than the people with whom you share space, be it a neighborhood, a city-state, a principality, or even a vast nation? As a member of one or the other, you may be targeted for harm by outsiders who wish to seize your land and control your wealth, or who simply dislike your way of life, even if it does them no harm.

Would it be “group-serving bias” if Caplan were to provide for the defense of his family members (and even some friends) by arming them if they happened to live in a high-crime neighborhood? If he didn’t provide for their defense, he would quickly learn the folly of pacifism, as family members and friends are robbed, maimed, and killed.

Pacifism is a sophomoric fantasy on a par with anarchism. It is sad to see Caplan’s intelligence wasted on the promulgation and defense of such a fantasy.

The Kennedy-Roberts Court in Retrospect

Despite Justice Kennedy’s return to the Court’s conservative wing in the term just concluded (details below), he was a central player in the Court’s war on federalism and long-standing social norms. Chief Justice Roberts has (nominally) presided over the Court for the past 13 terms. But Justice Kennedy — far more often than any justice of his era — has been the Court’s main (and inconsistent) “decider”.

Kennedy’s legacy has been dissected almost ad infinitum in the several days since he announced his retirement. I will offer just two samples of the (rightly) negative commentary about Kennedy before turning to a statistical summary of the Kennedy-Roberts years.

Christopher Roach offers this in “Kennedy’s Departure Diminishes Supreme Court . . . And That’s a Good Thing” (American Greatness, June 29, 2018):

Since the Earl Warren era, the Supreme Court has assumed enormous power over our politics, and this has become a significant obstacle to the constitutional design of Americans living as a self-governing people….

[T]he Supreme Court routinely has interfered with American self-government, either undoing or forcing results at various levels of government in accordance with its idiosyncratic and elitist views….

The Court undid California’s referendum on gay marriage after having earlier reversed Colorado’s referendum preventing gays from being added to the long list of “protected classes” in employment laws. Using the broad and vague mandates of “substantive due process” and “equal protection,” the Court simply decided the people were wrong and “irrational,” and Justice Kennedy authored opinions that accorded with the views of his friends and neighbors in Washington, D.C. In the process, the Court forbade the people of California and Colorado from undertaking the most quintessentially self-governing act for which the Constitution was designed: passing laws on controversial matters through a referendum.

This is merely an example. The Supreme Court has also second-guessed how wars are conducted, how schools are run, … has created new rights while ignoring those enshrined in the Constitution itself, and generally assumed the role of “super legislature.”

In addressing salient social issues, the Supreme Court has functioned as something of a Delphic Oracle, divining hidden mysteries in the otherwise prosaic constitutional text that disallows historically permitted practices on immigration, the treatment of enemy prisoners, abortion, and much else where the Constitution’s text is either silent or agnostic.

While preempting legislative supremacy and the broad powers of the executive, the Court is, in fact, unrepresentative in all meaningful ways. It is not, of course, supposed to be a representative institution. It is supposed to be a technical and intellectual job, devoted to the analysis of laws in light of other laws and our general law in the form of the Constitution. But it hasn’t been that since the 1930s.

So, in that milieu, it should be, if not representative, at least faithful to and sympathetic with the American people. But far from being sympathetic, its progressivism has been hostile to the mass of people and their views, labeling them irrational and bigoted when they deviate from the very narrow consensus formed among the almost exclusively Ivy League pedigreed justices. The retiring Justice Kennedy mostly embraced this snobbish and busy-body ethos….

[H]e was central to the developing “gay marriage” jurisprudence, which short-circuited the development of such rules (and limits) through legislatures. The left is probably right that this (and other anti-majoritarian rulings) shaped public opinion and pulled it beyond what might have happened using legislative means by themselves. But, at the same time, this approach generated significant backlash and resentment. These types of decisions have also made presidential elections, which should be about governance, instead into potential proxy fights on every social issue under the sun, when such issues otherwise could be resolved organically and diversely through political processes among the various states.

Here is Elizabeth Slattery, writing in “The Legacy of Justice Kennedy” (The Daily Signal, June 27, 2018):

It’s not always been easy for Supreme Court watchers to pigeonhole Kennedy’s jurisprudence. In fact, one mainstay of his jurisprudence and view of the Constitution was its inconsistency.

He authored the majority opinion in Gonzales v. Carhart and co-authored the plurality in Planned Parenthood of Southeastern Pennsylvania v. Casey, where abortion regulations were upheld under the most deferential standard of review (rational basis).

But then he joined the liberals in Whole Women’s Health v. Hellerstedt, requiring Texas to meet a higher standard of review for its commonsense regulation of abortion providers.

In Schuette v. BAMN, a case about a state’s ability to prohibit racial preferences in college admissions, Kennedy wrote: “It is demeaning to the democratic process to presume that voters are not capable of deciding an issue of this sensitivity on decent and rational grounds. … Freedom embraces the right, indeed the duty, to engage in a rational, civic discourse in order to determine how best to form a consensus to shape the destiny of the Nation and its people.”

Yet the following year, in Obergefell v. Hodges, Kennedy was unwilling to extend the same goodwill to voters to decide through the democratic process whether their states should recognize same-sex marriages, cutting short a vibrant public debate over the issue.

Writing for the majority in Fisher v. University of Texas at Austin in 2013, Kennedy held that the university must prove that its use of race in admissions met the requirements of the 14th Amendment’s Equal Protection Clause and sent the case back to the lower court. When the case returned in 2016, Kennedy wrote for the majority again, gutting his 2013 decision and allowing the university to continue sorting students by race without defining its diversity goals or proving that race was necessary to meet its goals.

Do the numbers bear out the impression of Kennedy as an unreliable “conservative”? Yes.

In “U.S. Supreme Court: Lines of Succession and Ideological Alignment“, I have drawn on statistics provided by SCOTUSblog to summarize the degree of disagreement among the various justices in non-unanimous cases during each of the Court’s past 13 terms. (The use of non-unanimous cases highlights the degree of disagreement among justices, which would be blurred if all cases were included in the analysis.) The statistics yield an index of defection (D) for each justice, by term:

D = percentage disagreement with members of own wing/percentage disagreement with members of opposite wing.

The wings are the “conservative” wing (Gorsuch, Alito, Thomas, Scalia, Roberts, and Kennedy) and the “liberal” wing (Breyer, Ginsburg, Kagan, Sotomayor, Souter, and Stevens).
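To make the computation concrete, here is a minimal sketch in Python, with invented disagreement percentages standing in for the real per-justice numbers from SCOTUSblog's term statistics:

    # Hypothetical disagreement rates (percent of non-unanimous cases) between
    # one justice and his colleagues in a single term; the real inputs come
    # from SCOTUSblog's term statistics.
    own_wing = {"Alito": 20.0, "Thomas": 25.0, "Gorsuch": 30.0}
    opposite_wing = {"Breyer": 55.0, "Ginsburg": 60.0, "Kagan": 65.0}

    def defection_index(own: dict, opposite: dict) -> float:
        """D = average disagreement with own wing /
               average disagreement with opposite wing."""
        avg_own = sum(own.values()) / len(own)
        avg_opposite = sum(opposite.values()) / len(opposite)
        return avg_own / avg_opposite

    print(round(defection_index(own_wing, opposite_wing), 2))
    # 0.42 -- well below 1: this hypothetical justice votes mainly with his wing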

The lower the index, the more prone is a justice to vote with the other members of his or her wing; the higher the index, the more prone is a justice to vote with members of the opposing wing. Here’s a graph of the indices, by term:

Kennedy’s long-standing proneness to defect more often than his colleagues grew markedly in the 2014-2015 terms and receded a bit in the 2016 term. His turnaround in the 2017 term restored him to the Court’s “conservative” wing.

Roberts slipped a bit in the 2017 term but is more in step with the “conservative” wing than he had been in the 2014-2015 terms.

Gorsuch started out strongly in his abbreviated 2016 term (he joined the Court in April 2017). His slippage in the 2017 term may have been due to the mix of cases at stake.

Perhaps that’s the reason for Roberts’s slippage in the 2017 term — or perhaps Roberts is “growing in office”, as leftists like to say about apostate conservatives. Time will tell.

What’s most striking about the preceding graphs, other than Kennedy’s marked departure from the “conservative” wing after the 2010 term, is the increasing coherence (ideological, not logical) of the “liberal” wing. This graph captures the difference between the wings:

The record of the past 6 terms is clear. The “liberals” stick together much more often than the “conservatives”. Perhaps that will change with the replacement of Kennedy by (one hopes) a real conservative.


See also the page “Constitution: Myths and Realities“, and these posts:
Substantive Due Process, Liberty of Contract, and the States’ Police Power
Substantive Due Process and the Limits of Privacy
Rethinking the Constitution: “Freedom of Speech, and of the Press”
Abortion and the Fourteenth Amendment
Obamacare: Neither Necessary nor Proper
Privacy Is Not Sacred
Our Perfect, Perfect Constitution
Constitutional Confusion
Obamacare, Slopes, Ratchets, and the Death-Spiral of Liberty
Another Thought or Two about the Obamacare Decision
The Court in Retrospect and Prospect (II)
Abortion Rights and Gun Rights
Getting “Equal Protection” Right
Does the Power to Tax Give Congress Unlimited Power? (II)
The Beginning of the End of Liberty in America
Substantive Due Process, Liberty of Contract, and States’ “Police Power”
Why Liberty of Contract Matters
Equal Protection in Principle and Practice
Freedom of Speech and the Long War for Constitutional Governance
Restoring the Contract Clause
The Kennedy Retirement: Hope Springs Eternal
Freedom of Speech: Getting It Right
Justice Thomas on Masterpiece Cakeshop

Analytical and Scientific Arrogance

It is customary in democratic countries to deplore expenditures on armaments as conflicting with the requirements of the social services. There is a tendency to forget that the most important social service that a government can do for its people is to keep them alive and free.

Marshal of the Royal Air Force Sir John Slessor, Strategy for the West

I’m returning to the past to make a timeless point: Analysis is a tool of decision-making, not a substitute for it.

That’s a point to which every analyst will subscribe, just as every judicial candidate will claim to revere the Constitution. But analysts past and present have tended to read their policy preferences into their analytical work, just as too many judges read their political preferences into the Constitution.

What is an analyst? Someone whose occupation requires him to gather facts bearing on an issue, discern robust relationships among the facts, and draw conclusions from those relationships.

Many professionals — from economists to physicists to so-called climate scientists — are more or less analytical in the practice of their professions. That is, they are not just seeking knowledge, but seeking to influence policies which depend on that knowledge.

There is also in this country (and in the West, generally) a kind of person who is an analyst first and a disciplinary specialist second (if at all). Such a person brings his pattern-seeking skills to the problems facing decision-makers in government and industry. Depending on the kinds of issues he addresses or the kinds of techniques that he deploys, he may be called a policy analyst, operations research analyst, management consultant, or something of that kind.

It is one thing to say, as a scientist or analyst, that a certain option (a policy, a system, a tactic) is probably better than the alternatives, when judged against a specific criterion (most effective for a given cost, most effective against a certain kind of enemy force). It is quite another thing to say that the option is the one that the decision-maker should adopt. The scientist or analyst is looking at a small slice of the world; the decision-maker has to take into account things that the scientist or analyst did not (and often could not) take into account (economic consequences, political feasibility, compatibility with other existing systems and policies).

It is (or should be) unconscionable for a scientist or analyst to state or imply that he has the “right” answer. But the clever arguer avoids coming straight out with the “right” answer; instead, he slants his presentation in a way that makes the “right” answer seem right.

A classic case in point is the hysteria surrounding the increase in “global” temperature in the latter part of the 20th century, and the coincidence of that increase with the rise in CO2. I have had much to say about the hysteria and the pseudo-science upon which it is based. (See links at the end of this post.) Here, I will take as a case study an event to which I was somewhat close: the treatment of the Navy’s proposal, made in the early 1980s, for an expansion to what was conveniently characterized as the 600-ship Navy. (The expansion would have involved personnel, logistics systems, ancillary war-fighting systems, stockpiles of parts and ammunition, and aircraft of many kinds — all in addition to a 25-percent increase in the number of ships in active service.)

The usual suspects, of an ilk I profiled here, wasted no time in making the 600-ship Navy seem like a bad idea. Of the many studies and memos on the subject, two by the Congressional Budget Office stand out as exemplars of slanted analysis by innuendo: “Building a 600-Ship Navy: Costs, Timing, and Alternative Approaches” (March 1982), and “Future Budget Requirements for the 600-Ship Navy: Preliminary Analysis” (April 1985). What did the “whiz kids” at CBO have to say about the 600-ship Navy? Here are excerpts of the concluding sections:

The Administration’s five-year shipbuilding plan, containing 133 new construction ships and estimated to cost over $80 billion in fiscal year 1983 dollars, is more ambitious than previous programs submitted to the Congress in the past few years. It does not, however, contain enough ships to realize the Navy’s announced force level goals for an expanded Navy. In addition, this plan—as has been the case with so many previous plans—has most of its ships programmed in the later out-years. Over half of the 133 new construction ships are programmed for the last two years of the five-year plan. Achievement of the Navy’s expanded force level goals would require adhering to the out-year building plans and continued high levels of construction in the years beyond fiscal year 1987. [1982 report, pp. 71-72]

Even the budget increases estimated here would be difficult to achieve if history is a guide. Since the end of World War II, the Navy has never sustained real increases in its budget for more than five consecutive years. The sustained 15-year expansion required to achieve and sustain the Navy’s present plans would result in a historic change in budget trends. [1985 report, p. 26]

The bias against the 600-ship Navy drips from the pages. The “argument” goes like this: If it hasn’t been done, it can’t be done and, therefore, shouldn’t be attempted. Why not? Because the analysts at CBO were a breed of cat that emerged in the 1960s, when Robert Strange McNamara and his minions used simplistic analysis (“tablesmanship”) to play “gotcha” with the military services:

We [I was one of the minions] did it because we were encouraged to do it, though not in so many words. And we got away with it, not because we were better analysts — most of our work was simplistic stuff — but because we usually had the last word. (Only an impassioned personal intercession by a service chief might persuade McNamara to go against SA [the Systems Analysis office run by Alain Enthoven] — and the key word is “might.”) The irony of the whole process was that McNamara, in effect, substituted “civilian judgment” for oft-scorned “military judgment.” McNamara revealed his preference for “civilian judgment” by elevating Enthoven and SA a level in the hierarchy in 1965, even though (or perhaps because) the services and JCS had been open in their disdain of SA and its snotty young civilians.

In the case of the 600-ship Navy, civilian analysts did their best to derail it by sending the barely disguised message that it was “unaffordable”. I was reminded of this “insight” by a colleague of long standing who recently proclaimed that “any half-decent cost model would show a 600-ship Navy was unsustainable into this century.” How could a cost model show such a thing when the sustainability (affordability) of defense is a matter of political will, not arithmetic?

Defense spending fluctuates as a function of perceived necessity. Consider, for example, this graph (misleadingly labeled “Recent Defense Spending”) from usgovernmentspending.com, which shows defense spending as a percentage of GDP from fiscal year (FY) 1792 to FY 2017:

What was “unaffordable” before World War II suddenly became affordable. And so it has gone throughout the history of the republic. Affordability (or sustainability) is a political issue, not a line drawn in the sand by a smart-ass analyst who gives no thought to the consequences of spending too little on defense.

I will now zoom in on the era of interest.

CBO’s “Building a 600-Ship Navy: Costs, Timing, and Alternative Approaches“, which crystallized opposition to the 600-ship Navy, estimates the long-run, annual obligational authority required to sustain a 600-ship Navy (of the Navy’s design) to be about 20 percent higher in constant dollars than the FY 1982 Navy budget. (See Options I and II in Figure 2, p. 50.) The long run would have begun around FY 1994, following several years of higher spending associated with the buildup of forces. I don’t have a historical breakdown of the Department of Defense (DoD) budget by service, but I found values for all-DoD spending on military programs in the Office of Management and Budget’s Historical Tables. Drawing on Tables 5.2 and 10.1, I constructed a constant-dollar index of DoD’s obligational authority (FY 1982 = 1):

FY Index
1983 1.08
1984 1.13
1985 1.21
1986 1.17
1987 1.13
1988 1.11
1989 1.10
1990 1.07
1991 0.97
1992 0.97
1993 0.90
1994 0.82
1995 0.82
1996 0.80
1997 0.80
1998 0.79
1999 0.84
2000 0.86
2001 0.92
2002 0.98
2003 1.23
2004 1.29
2005 1.28
2006 1.36
2007 1.50
2008 1.65
2009 1.61
2010 1.66
2011 1.62
2012 1.51
2013 1.32
2014 1.32
2015 1.25
2016 1.29
2017 1.34
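For readers who want to replicate the index, the arithmetic is simple deflation and normalization. A minimal sketch, with invented figures standing in for the OMB numbers:

    # The arithmetic behind the index, with invented figures standing in for
    # the OMB numbers (Table 5.2 for budget authority, Table 10.1 for deflators).
    nominal_ba = {1982: 215.0, 1983: 245.0}   # $ billions, illustrative only
    deflator = {1982: 1.00, 1983: 1.05}       # price index, illustrative only

    real_ba = {fy: nominal_ba[fy] / deflator[fy] for fy in nominal_ba}
    index = {fy: round(real_ba[fy] / real_ba[1982], 2) for fy in real_ba}
    print(index)  # {1982: 1.0, 1983: 1.09} -- constant-dollar index, FY 1982 = 1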

There was no inherent reason that defense spending couldn’t have remained on the trajectory of the middle 1980s. The slowdown of the late 1980s was a reflection of improved relations between the U.S. and USSR. Those improved relations had much to do with the Reagan defense buildup, of which the goal of attaining a 600-ship Navy was an integral part.

The Reagan buildup helped to convince Soviet leaders (Gorbachev in particular) that trying to keep pace with the U.S. was futile and (actually) unaffordable. The rest — the end of the Cold War and the dissolution of the USSR — is history. The buildup, in other words, sowed the seeds of its own demise. But that couldn’t have been predicted with certainty in the early-to-middle 1980s, when CBO and others were doing their best to undermine political support for more defense spending. Had CBO and the other nay-sayers succeeded in their aims, the Cold War and the USSR might still be with us.

The defense drawdown of the mid-1990s was a deliberate response to the end of the Cold War and lack of other serious threats, not a historical necessity. It was certainly not on the table in the early 1980s, when the 600-ship Navy was being pushed. Had the Cold War not thawed and ended, there is no reason that U.S. defense spending couldn’t have continued at the pace of the middle 1980s, or higher. As is evident in the index values for recent years, even after drastic force reductions in Iraq, defense spending is now about one-third higher than it was in FY 1982.

John Lehman, Secretary of the Navy from 1981 to 1987, was rightly incensed that analysts — some of them on his payroll as civilian employees and contractors — were, in effect, undermining a deliberate strategy of pressing against a key Soviet weakness — the unsustainability of its defense strategy. There was much lamentation at the time about Lehman’s “war” on the offending parties, one of which was the think-tank for which I then worked. I can now admit openly that I was sympathetic to Lehman and offended by the arrogance of analysts who believed that it was their job to suggest that spending more on defense was “unaffordable”.

When I was a young analyst I was handed a pile of required reading material. One of the items was Methods of Operations Research, by Philip M. Morse and George E. Kimball. Morse, in the early months of America’s involvement in World War II, founded the civilian operations-research organization from which my think-tank evolved. Kimball was a leading member of that organization. Their book is notable not just as a compendium of analytical methods that were applied, with much success, to the war effort. It is also introspective — and properly humble — about the power and role of analysis.

Two passages, in particular, have stuck with me for the more than 50 years since I first read the book. Here is one of them:

[S]uccessful application of operations research usually results in improvements by factors of 3 or 10 or more…. In our first study of any operation we are looking for these large factors of possible improvement…. They can be discovered if the [variables] are given only one significant figure,…any greater accuracy simply adds unessential detail.

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. [p. 38]

Morse and Kimball — two brilliant scientists and analysts, who had worked with actual data (pardon the redundancy) about combat operations — counseled against making too much of quantitative estimates given the uncertainties inherent in combat. But, as I have seen over the years, analysts eager to “prove” something nevertheless make a huge deal out of minuscule differences in quantitative estimates — estimates based not on actual combat operations but on theoretical values derived from models of systems and operations yet to see the light of day. (I also saw, and still see, too much “analysis” about soft subjects, such as domestic politics and international relations. The amount of snake oil emitted by “analysts” — sometimes called scholars, journalists, pundits, and commentators — would fill the Great Lakes. Their perceptions of reality have an uncanny way of supporting their unabashed decrees about policy.)

The second memorable passage from Methods of Operations Research goes directly to the point of this post:

Operations research done separately from an administrator in charge of operations becomes an empty exercise. [p. 10].

In the case of CBO and other opponents of the 600-ship Navy, substitute “cost estimate” for “operations research”, “responsible defense official” for “administrator in charge”, and “strategy” for “operations”. The principle is the same: The CBO and its ilk knew the price of the 600-ship Navy, but had no inkling of its value.

Too many scientists and analysts want to make policy. On the evidence of my close association with scientists and analysts over the years — including a stint as an unsparing reviewer of their products — I would say that they should learn to think clearly before they inflict their views on others. But too many of them — even those with Ph.D.s in STEM disciplines — are incapable of thinking clearly, and more than capable of slanting their work to support their biases. Exhibit A: Michael Mann, James Hansen (more), and their co-conspirators in the catastrophic-anthropogenic-global-warming scam.


Related posts:
The Limits of Science
How to View Defense Spending
Modeling Is Not Science
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
The McNamara Legacy: A Personal Perspective
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
Verbal Regression Analysis, the “End of History,” and Think-Tanks
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
AGW in Austin? (II)
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Words Fail Us
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Babe Ruth and the Hot-Hand Hypothesis
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable
A (Long) Footnote about Science
Further Thoughts about Probability
Climate Scare Tactics: The Mythical Ever-Rising 30-Year Average
A Grand Strategy for the United States

The Probability That Something Will Happen

A SINGLE EVENT DOESN’T HAVE A PROBABILITY

A believer in single-event probabilities takes the view that a single flip of a coin or roll of a die has a probability. I do not. A probability represents the frequency with which an outcome occurs over the very long run, and it is only an average that conceals random variations.

The outcome of a single coin flip can’t be reduced to a percentage or probability. It can only be described in terms of its discrete, mutually exclusive possibilities: heads (H) or tails (T). The outcome of a single roll of a die or pair of dice can only be described in terms of the number of points that may come up, 1 through 6 or 2 through 12.

Yes, the expected frequencies of H, T, and the various point totals can be computed by simple mathematical operations. But those are only expected frequencies. They say nothing about the next coin flip or dice roll, nor do they more than approximate the actual frequencies that will occur over the next 100, 1,000, or 10,000 such events.

Of what value is it to know that the probability of H is 0.5 when H fails to occur in 11 consecutive flips of a fair coin? Of what value is it to know that the probability of rolling a 7 is 0.167 — meaning that 7 comes up once every 6 rolls, on average — when 7 may not appear for 56 consecutive rolls? These examples are drawn from simulations of 10,000 coin flips and 1,000 dice rolls. They are simulations that I ran once, not simulations that I cherry-picked from many runs. (The Excel file is at https://drive.google.com/open?id=1FABVTiB_qOe-WqMQkiGFj2f70gSu6a82. Coin flips are at the first tab, dice rolls at the second tab.)
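Readers without the Excel file can reproduce the flavor of those runs with a few lines of Python. Every run produces different droughts, which is the point:

    # A quick re-creation of the kind of runs described above. Results differ
    # from run to run -- which is the point.
    import random

    flips = "".join(random.choice("HT") for _ in range(10_000))
    longest_no_heads = max(len(run) for run in flips.split("H"))
    print("longest run without H:", longest_no_heads)  # runs of 11+ are unremarkable

    rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(1_000)]
    drought, longest_no_seven = 0, 0
    for total in rolls:
        drought = 0 if total == 7 else drought + 1
        longest_no_seven = max(longest_no_seven, drought)
    print("longest drought between 7s:", longest_no_seven)  # droughts of 30+ happen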

Let’s take another example, one that is more interesting and has generated much controversy over the years. It’s the Monty Hall problem,

a brain teaser, in the form of a probability puzzle, loosely based on the American television game show Let’s Make a Deal and named after its original host, Monty Hall. The problem was originally posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975…. It became famous as a question from a reader’s letter quoted in Marilyn vos Savant’s “Ask Marilyn” column in Parade magazine in 1990 … :

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Vos Savant’s response was that the contestant should switch to the other door…. Under the standard assumptions, contestants who switch have a 2/3 chance of winning the car, while contestants who stick to their initial choice have only a 1/3 chance.

Vos Savant’s answer is correct, but only if the contestant is allowed to play an unlimited number of games. A player who adopts a strategy of “switch” in every game will, in the long run, win about 2/3 of the time (explanation here). That is, the player has a better chance of winning if he chooses “switch” rather than “stay”.

Read the preceding paragraph carefully and you will spot the logical defect that underlies the belief in single-event probabilities: The long-run winning strategy (“switch”) is transformed into a “better chance” to win a particular game. What does that mean? How does an average frequency of 2/3 improve one’s chances of winning a particular game? It doesn’t. Game results are utterly random; that is, the average frequency of 2/3 has no bearing on the outcome of a single game.
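The long-run 2/3 figure is easy to verify by simulation, and the simulation says nothing about any single game. A minimal sketch:

    # "Switch" wins about 2/3 of a long series of games; any one game is
    # simply won or lost.
    import random

    def play(switch: bool) -> bool:
        doors = [0, 0, 0]
        doors[random.randrange(3)] = 1      # 1 marks the car
        pick = random.randrange(3)
        # Host opens a goat door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and doors[d] == 0)
        if switch:
            pick = next(d for d in range(3) if d not in (pick, opened))
        return doors[pick] == 1

    games = 100_000
    wins = sum(play(switch=True) for _ in range(games))
    print(wins / games)  # ~0.667 in the long run; each game is win-or-lose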

I’ll try to drive the point home by returning to the coin-flip game, with money thrown into the mix. A $1 bet on H means a gain of $1 if H turns up, and a loss of $1 if T turns up. The expected value of the bet — if repeated over a very large number of trials — is zero. The bettor expects to win and lose the same number of times, and to walk away no richer or poorer than when he started. And over a very large number of games, the bettor will walk away approximately (but not necessarily exactly) as well off as when he started. How many games? In the simulation of 10,000 games mentioned earlier, H occurred 50.6 percent of the time. A very large number of games is probably at least 100,000.

Let us say, for the sake of argument, that a bettor has played 100,000 coin-flip games at $1 a game and come out exactly even. What does that mean for the play of the next game? Does it have an expected value of zero?

To see why the answer is “no”, let’s make it interesting and say that the bet on the next game — the next coin flip — is $10,000. The size of the bet should wonderfully concentrate the bettor’s mind. He should now see the situation for what it really is: There are two possible outcomes, and only one of them will be realized. An average of the two outcomes is meaningless. The single coin flip doesn’t have a “probability” of 0.5 H and 0.5 T and an “expected payoff” of zero. The coin will come up either H or T, and the bettor will either lose $10,000 or win $10,000.

To repeat: The outcome of a single coin flip doesn’t have an expected value for the bettor. It has two possible values, and the bettor must decide whether he is willing to lose $10,000 on the single flip of a coin.

By the same token (or coin), the outcome of a single roll of a pair of dice doesn’t have a 1-in-6 probability of coming up 7. It has 36 possible outcomes and 11 possible point totals, and the bettor must decide how much he is willing to lose if he puts his money on the wrong combination or outcome.

In summary, it is a logical fallacy to ascribe a probability to a single event. A probability represents the observed or computed average value of a very large number of like events. A single event cannot possess that average value. A single event has a finite number of discrete and mutually exclusive possible outcomes. Those outcomes will not “average out” in that single event. Only one of them will obtain, like Schrödinger’s cat.

To say or suggest that the outcomes will average out — which is what a probability implies — is tantamount to saying that Jack Sprat and his wife were neither skinny nor fat because their body-mass indices averaged to a normal value. It is tantamount to saying that one can’t drown by walking across a pond with an average depth of 1 foot, when that average conceals the existence of a 100-foot-deep hole.

It should go without saying that a specific event that might occur — rain tomorrow, for example — doesn’t have a probability.

WHAT ABOUT THE PROBABILITY OF PRECIPITATION?

Weather forecasters (meteorologists) are constantly saying things like “there’s an 80-percent probability of precipitation (PoP) in __________ tomorrow”. What do such statements mean? Not much:

It is not surprising that this issue is difficult for the general public, given that it is debated even within the scientific community. Some propose a “frequentist” interpretation: there will be at least a minimum amount of rain on 80% of days with weather conditions like they are today. Although preferred by many scientists, this explanation may be particularly difficult for the general public to grasp because it requires regarding tomorrow as a class of events, a group of potential tomorrows. From the perspective of the forecast user, however, tomorrow will happen only once. A perhaps less abstract interpretation is that PoP reflects the degree of confidence that the forecaster has that it will rain. In other words, an 80% chance of rain means that the forecaster strongly believes that there will be at least a minimum amount of rain tomorrow. The problem, from the perspective of the general public, is that when PoP is forecasted, none of these interpretations is specified.

There are clearly some interpretations that are not correct. The percentage expressed in PoP neither refers directly to the percent of area over which precipitation will fall nor does it refer directly to the percent of time precipitation will be observed on the forecast day. Although both interpretations are clearly wrong, there is evidence that the general public holds them to varying degrees. Such misunderstandings are critical because they may affect the decisions that people make. If people misinterpret the forecast as percent time or percent area, they may be more inclined to take precautionary action than are those who have the correct probabilistic interpretation, because they think that it will rain somewhere or some time tomorrow. The negative impact of such misunderstandings on decision making, both in terms of unnecessary precautions as well as erosion in user trust, could well eliminate any potential benefit of adding uncertainty information to the forecast. [Susan Joslyn, Limor Nadav-Greenberg, and Rebecca M. Nichols, “Probability of Precipitation: Assessment and Enhancement of End-User Understanding“, Bulletin of the American Meteorological Society, February 2009, citations omitted]

The frequentist interpretation is close to being correct, but it still involves a great deal of guesswork. Rainfall in a particular location is influenced by many variables (e.g., atmospheric pressure, direction and rate of change of atmospheric pressure, ambient temperature, local terrain, presence or absence of bodies of water, vegetation, moisture content of the atmosphere, height of clouds above the terrain, depth of cloud cover). It is nigh unto impossible to say that today’s (or tomorrow’s or next week’s) weather conditions are like (or will be like) those that in the past resulted in rainfall in a particular location 80 percent of the time.

That leaves the Bayesian interpretation, in which the forecaster combines some facts (e.g., the presence or absence of a low-pressure system in or toward the area, the presence or absence of a flow of water vapor in or toward the area) with what he has observed in the past to arrive at a guess about future weather. He then attaches a probability to his guess to indicate the strength of his confidence in it.

Thus:

Bayesian probability represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials.

An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.”

And thus:

The Bayesian approach to learning is based on the subjective interpretation of probability. The value of the proportion p is unknown, and a person expresses his or her opinion about the uncertainty in the proportion by means of a probability distribution placed on a set of possible values of p.
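A minimal sketch of that subjective machinery, using the standard conjugate (Beta-binomial) update and invented observations:

    # Uncertainty about an unknown proportion p expressed as a Beta
    # distribution and updated by (invented) observations -- the standard
    # conjugate sketch, not anything prescribed by the quoted text.
    prior_a, prior_b = 1, 1        # Beta(1,1): initial ignorance about p
    successes, failures = 8, 2     # hypothetical data

    post_a, post_b = prior_a + successes, prior_b + failures
    posterior_mean = post_a / (post_a + post_b)
    print(posterior_mean)  # 0.75 -- a degree of belief about p, not a frequency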

It is impossible to attach a probability — as properly defined in the first part of this article — to something that hasn’t happened, and may not happen. So when you read or hear a statement like “the probability of rain tomorrow is 80 percent”, you should mentally translate it into language like this:

X guesses that Y will (or will not) happen at time Z, and the “probability” that he attaches to his guess indicates his degree of confidence in it.

The guess may be well-informed by systematic observation of relevant events, but it remains a guess, as most Americans have learned and relearned over the years when rain has failed to materialize or has spoiled an outdoor event that was supposed to be rain-free.

BUT AREN’T SOME THINGS MORE LIKELY TO HAPPEN THAN OTHERS?

Of course. But only one thing will happen at a given time and place.

If a person walks across a shooting range where live ammunition is being used, he is more likely to be killed than if he walks across the same patch of ground when no one is shooting. And a clever analyst could concoct a probability of a person’s being shot by writing an equation that includes such variables as his size, the speed with which he walks, the number of shooters, their rate of fire, and the distance across the shooting range.

What would the probability estimate mean? It would mean that if a very large number of persons walked across the shooting range under identical conditions, approximately S percent of them would be shot. But the clever analyst cannot specify which of the walkers would be among the S percent.

Here’s another way to look at it. One person wearing head-to-toe bullet-proof armor could walk across the range a large number of times and expect to be hit by a bullet on S percent of his crossings. But the hardy soul wouldn’t know on which of the crossings he would be hit.

Suppose the hardy soul became a foolhardy one and made a bet that he could cross the range without being hit. Further, suppose that S is estimated to be 0.75; that is, 75 percent of a string of walkers would be hit, or a single (bullet-proof) walker would be hit on 75 percent of his crossings. Knowing the value of S, the foolhardy fellow stakes $1 million on a single unprotected crossing: if he makes it unscathed, he claims $4 million (his $1 million stake plus $3 million in winnings); if he is shot, he (or his estate) forfeits the stake. With the odds against him at 3-to-1 and the payoff at 3-to-1, that’s an even-money bet, isn’t it?

No it isn’t. This situation is exactly analogous to the $10,000 bet on a single coin flip, discussed above. But I will dissect this one in a different way, to the same end.

The bet should be understood for what it is, an either-or proposition. The foolhardy walker will either lose $1 million or win $3 million. The bettor (or bettors) who take the other side of the bet will either win $1 million or lose $3 million.

As anyone with elementary reading and reasoning skills should be able to tell, those possible outcomes are not the same as the outcome that would obtain (approximately) if the foolhardy fellow could walk across the shooting range 1,000 times. If he could, he would come very close to breaking even, as would those who bet against him.
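The contrast is easy to see in simulation form, using the numbers above: a long series of crossings clusters near break-even, while any single crossing is strictly lose-or-win.

    # 1,000 crossings come close to breaking even; a single crossing is
    # strictly lose $1 million or win $3 million. Numbers from the example.
    import random

    S = 0.75                 # chance of being hit on any one crossing
    stake, winnings = 1, 3   # $ millions: forfeit stake if hit, net gain if not

    net = sum(-stake if random.random() < S else winnings for _ in range(1_000))
    print("net after 1,000 crossings ($M):", net)  # small relative to the $1,000M staked

    single = -stake if random.random() < S else winnings
    print("one crossing ($M):", single)            # exactly -1 or +3, never 0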

To put it as simply as possible:

When an event has more than one possible outcome, a single trial cannot replicate the average outcome of a large number of trials (replications of the event).

It follows that the average outcome of a large number of trials — the probability of each possible outcome — cannot occur in a single trial.

It is therefore meaningless to ascribe a probability to any possible outcome of a single trial.

MODELING AND PROBABILITY

Sometimes, when things interact, the outcome of the interactions will conform to an expected value — if that value is empirically valid. For example, if a pot of pure water is put over a flame at sea level, the temperature of the water will rise to 212 degrees Fahrenheit and water molecules will then begin to escape into the air in a gaseous form (steam).  If the flame is kept hot enough and applied long enough, the water in the pot will continue to vaporize until the pot is empty.

That isn’t a probabilistic description of boiling. It’s just a description of what’s known to happen to water under certain conditions.

But it bears a similarity to a certain kind of probabilistic reasoning. For example, in a paper that I wrote long ago about warfare models, I said this:

Consider a five-parameter model, involving the conditional probabilities of detecting, shooting at, hitting, and killing an opponent — and surviving, in the first place, to do any of these things. Such a model might easily yield a cumulative error of a hemibel [a factor of 3], given a twenty five percent error in each parameter.

Mathematically, 1.25^5 ≈ 3.05. Which is true enough, but also misleadingly simple.
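A quick check of that arithmetic, plus a Monte Carlo view of the spread it implies (the 25-percent error size comes from the quoted passage; everything else here is invented):

    # Five multiplicative parameters, each off by up to 25 percent, can
    # compound to roughly a hemibel (a factor of ~3) in either direction.
    import random

    print(1.25 ** 5)   # ~3.05: the worst case, when every error runs the same way

    products = []
    for _ in range(100_000):
        p = 1.0
        for _ in range(5):
            p *= 1 + random.uniform(-0.25, 0.25)
        products.append(p)
    print(min(products), max(products))  # spread from roughly 0.24x to 3x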

A mathematical model of that kind rests on the crucial assumption that the component probabilities are based on observations of actual events occurring in similar conditions. It is safe to say that the values assigned to the parameters of warfare models, econometric models, sociological models, and most other models outside the realm of physics, chemistry, and other “hard” sciences fail to satisfy that assumption.

Further, a mathematical model yields only the expected (average) outcome of a large number of events occurring under conditions similar to those from which the component probabilities were derived. (A Monte Carlo model merely yields a quantitative estimate of the spread around the average outcome.) Again, this precludes most models outside the “hard” sciences, and even some within that domain.

The moral of the story: Don’t be gulled by a statement about the expected outcome of an event, even when the statement seems to be based on a rigorous mathematical formula. Look behind the formula for an empirical foundation. And not just any empirical foundation, but one that is consistent with the situation to which the formula is being applied.

And when you’ve done that, remember that the formula expresses a point estimate around which there’s a wide — very wide — range of uncertainty. Which was the real point of the passage quoted above. The only sure things in life are death, taxes, and regulation.

PROBABILITY VS. OPPORTUNITY

Warfare models, as noted, deal with interactions among large numbers of things. If a large unit of infantry encounters another large unit of enemy infantry, and the units exchange gunfire, it is reasonable to expect the following consequences:

  • As the numbers of infantrymen increase, more of them will be shot, for a given rate of gunfire.
  • As the rate of gunfire increases, more of the infantrymen will be shot, for a given number of infantrymen.

These consequences don’t represent probabilities, though an inveterate modeler will try to represent them with a probabilistic model. They represent opportunities — opportunities for bullets to hit bodies. It is entirely possible that some bullets won’t hit bodies and some bodies won’t be hit by bullets. But more bullets will hit bodies if there are more bodies in a given space. And a higher proportion of a given number of bodies will be hit as more bullets enter a given space.

That’s all there is to it.
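The scaling described in the two bullet points can be illustrated with a toy counting exercise. All numbers are invented, and no claim is made that the ratios involved are "probabilities" in the sense criticized here:

    # Bullets and bodies sharing a space: hit counts rise with more bodies
    # (for a given number of bullets) and with more bullets (for a given
    # number of bodies). All numbers are invented.
    import random

    def hits(bodies: int, bullets: int, cells: int = 10_000) -> int:
        occupied = set(random.sample(range(cells), bodies))
        return sum(random.randrange(cells) in occupied for _ in range(bullets))

    for bodies, bullets in [(100, 1_000), (400, 1_000), (100, 4_000)]:
        print(bodies, "bodies,", bullets, "bullets ->", hits(bodies, bullets), "hits")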

It has nothing to do with probability. The actual outcome of a past encounter is the actual outcome of that encounter, and the number of casualties has everything to do with the minutiae of the encounter and nothing to do with probability. A fortiori, the number of casualties resulting from a possible future encounter would have everything to do with the minutiae of that encounter and nothing to do with probability. Given the uniqueness of any given encounter, it would be wrong to characterize its outcome (e.g., number of casualties per infantryman) as a probability.


Related posts:
Understanding the Monty Hall Problem
The Compleat Monty Hall Problem
Some Thoughts about Probability
My War on the Misuse of Probability
Scott Adams Understands Probability
Further Thoughts about Probability

The Balderdash Chronicles

Balderdash is nonsense, to put it succinctly. Less succinctly, balderdash is stupid or illogical talk; senseless rubbish. Rather thoroughly, it is

balls, bull, rubbish, shit, rot, crap, garbage, trash, bunk, bullshit, hot air, tosh, waffle, pap, cobblers, bilge, drivel, twaddle, tripe, gibberish, guff, moonshine, claptrap, hogwash, hokum, piffle, poppycock, bosh, eyewash, tommyrot, horsefeathers, or buncombe.

I have encountered innumerable examples of balderdash in my 35 years of full-time work, 14 subsequent years of blogging, and many overlapping years as an observer of the political scene. This essay documents some of the worst balderdash that I have come across.

THE LIMITS OF SCIENCE

Science (or what too often passes for it) generates an inordinate amount of balderdash. Consider an article in The Christian Science Monitor: “Why the Universe Isn’t Supposed to Exist”, which reads in part:

The universe shouldn’t exist — at least according to a new theory.

Modeling of conditions soon after the Big Bang suggests the universe should have collapsed just microseconds after its explosive birth, the new study suggests.

“During the early universe, we expected cosmic inflation — this is a rapid expansion of the universe right after the Big Bang,” said study co-author Robert Hogan, a doctoral candidate in physics at King’s College in London. “This expansion causes lots of stuff to shake around, and if we shake it too much, we could go into this new energy space, which could cause the universe to collapse.”

Physicists draw that conclusion from a model that accounts for the properties of the newly discovered Higgs boson particle, which is thought to explain how other particles get their mass; faint traces of gravitational waves formed at the universe’s origin also inform the conclusion.

Of course, there must be something missing from these calculations.

“We are here talking about it,” Hogan told Live Science. “That means we have to extend our theories to explain why this didn’t happen.”

No kidding!

Though there’s much more to come, this example should tell you all that you need to know about the fallibility of scientists. If you need more examples, consider these.

MODELS LIE WHEN LIARS MODEL

Not that there’s anything wrong with being wrong, but there’s a great deal wrong with seizing on a transitory coincidence between two variables (CO2 emissions and “global” temperatures in the late 1900s) and spurring a massively wrong-headed “scientific” mania — the mania of anthropogenic global warming.

What it comes down to is modeling, which is simply a way of baking one’s assumptions into a pseudo-scientific mathematical concoction. Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.
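The second of those tortures, period selection, is easy to demonstrate with synthetic data: a trendless random walk almost always contains a window with an impressive "trend" if the analyst is free to choose the window. A minimal sketch:

    # Period selection in miniature: mine a trendless random walk for the
    # 50-step window with the steepest slope. All data are synthetic.
    import random

    random.seed(42)
    series = [0.0]
    for _ in range(499):
        series.append(series[-1] + random.gauss(0, 1))  # no underlying trend

    best_slope, best_window = 0.0, None
    for start in range(len(series) - 50):
        end = start + 50
        slope = (series[end - 1] - series[start]) / 49  # crude per-step slope
        if abs(slope) > abs(best_slope):
            best_slope, best_window = slope, (start, end)

    print(best_window, round(best_slope, 3))  # a striking "trend," mined from noise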

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed?

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a general explanation of the sham, see this.

SCIENCE VS. SCIENTISM: STEVEN PINKER’S BALDERDASH

The examples that I’ve adduced thus far (and most of those that follow) demonstrate a mode of thought known as scientism: the application of the tools and language of science to create a pretense of knowledge.

No less a personage than Steven Pinker defends scientism in “Science Is Not Your Enemy”. Actually, Pinker doesn’t overtly defend scientism, which is indefensible; he just redefines it to mean science:

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.

After that slippery performance, it’s all smooth sailing — or so Pinker thinks — because all he has to do is point out all the good things about science. And if scientism=science, then scientism is good, right?

Wrong. Scientism remains indefensible, and there’s a lot of scientism in what passes for science. Pinker says this, for example:

The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation.

There is nothing new in this, as Pinker admits by adverting to Madison. Nor was the understanding of human nature “submerged” except in the writings of scientistic social “scientists”. We ordinary mortals were never fooled. Moreover, Pinker’s idea of scientific political science seems to be data-dredging:

With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively.

As explained here, data-dredging is about as scientistic as it gets:

When enough hypotheses are tested, it is virtually certain that some falsely appear statistically significant, since every data set with any degree of randomness contains some spurious correlations. Researchers using data mining techniques if they are not careful can be easily misled by these apparently significant results, even though they are mere artifacts of random variation.
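The quoted point can be demonstrated in a few lines with purely synthetic data: correlate one random "target" variable with many random "explanatory" variables, and roughly 5 percent will clear the conventional significance bar by chance. A minimal sketch:

    # Test enough random hypotheses and some will look "significant" purely
    # by chance. Synthetic data; no real variables involved.
    import random

    random.seed(0)
    n, candidates = 100, 200
    target = [random.gauss(0, 1) for _ in range(n)]

    def corr(x, y):
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    # |r| > ~0.197 corresponds to p < 0.05 (two-tailed) at n = 100.
    false_hits = sum(
        abs(corr(target, [random.gauss(0, 1) for _ in range(n)])) > 0.197
        for _ in range(candidates)
    )
    print(false_hits, "of", candidates, "random variables look significant")  # ~10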

Turning to the humanities, Pinker writes:

[T]here can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate [sic] and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of a progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

What on earth is Pinker talking about? This is over-the-top bafflegab worthy of Professor Irwin Corey. But because it comes from the keyboard of a noted (self-promoting) academic, we are meant to take it seriously.

Yes, art, culture, and society are products of human brains. So what? Poker is, too, and it’s a lot more amenable to explication by the mathematical tools of science. But the successful application of those tools depends on traits that are more art than science (e.g., bluffing, spotting “tells”, and avoiding “tells”).

More “explanatory depth” in the humanities means a deeper pile of B.S. Great art, literature, and music aren’t concocted formulaically. If they could be, modernism and postmodernism wouldn’t have yielded mountains of trash.

Oh, I know: It will be different next time. As if the tools of science are immune to misuse by obscurantists, relativists, and practitioners of political correctness. Tell it to those climatologists who dare to challenge the conventional wisdom about anthropogenic global warming. Tell it to the “sub-human” victims of the Third Reich’s medical experiments and gas chambers.

Pinker anticipates this kind of objection:

At a 2011 conference, [a] colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

But the Tuskegee study was only a one-time failure in the sense that it was the only Tuskegee study. As a type of failure — the misuse of science (witting and unwitting) — it goes hand-in-hand with the advance of scientific knowledge. Should science be abandoned because of that? Of course not. But the hard fact is that science, qua science, is powerless against human nature.

Pinker plods on by describing ways in which science can contribute to the visual arts, music, and literary scholarship:

The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

I wonder how Rembrandt and the Impressionists (among other pre-moderns) managed to create visual art of such evident excellence without relying on the kinds of scientific mechanisms invoked by Pinker. I wonder what music scholars would learn about excellence in composition that isn’t already evident in the general loathing of audiences for most “serious” modern and contemporary music.

As for literature, great writers know instinctively and through self-criticism how to tell stories that realistically depict character, social psychology, culture, conflict, and all the rest. Scholars (and critics), at best, can acknowledge what rings true and has dramatic or comedic merit. Scientistic pretensions in scholarship (and criticism) may result in promotions and raises for the pretentious, but they do not add to the sum of human enjoyment — which is the real test of literature.

Pinker inveighs against critics of scientism (science, in Pinker’s vocabulary) who cry “reductionism” and “simplification”. With respect to the former, Pinker writes:

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

It is reductionist to explain a complex happening in terms of a deeper principle when that principle fails to account for the complex happening. Pinker obscures that essential point by offering a silly and irrelevant example about World War I. This bit of misdirection is unsurprising, given Pinker’s foray into reductionism, The Better Angels of Our Nature: Why Violence Has Declined, discussed later.

As for simplification, Pinker says:

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic.

Pinker again dodges the issue. Simplification is simplistic when the “general principles” fail to account adequately for the phenomenon in question.

Much of the problem arises because of a simple fact that is too often overlooked: Scientists, for the most part, are human beings with a particular aptitude for pattern-seeking and the manipulation of abstract ideas. They can easily get lost in such pursuits and fail to notice that their abstractions have taken them a long way from reality (e.g., Einstein’s special theory of relativity).

In sum, scientists are human and fallible. It is in the best tradition of science to distrust their scientific claims and to dismiss their non-scientific utterances.

ECONOMICS: PHYSICS ENVY AT WORK

Economics is rife with balderdash cloaked in mathematics. Economists who rely heavily on mathematics like to say (and perhaps even believe) that mathematical expression is more precise than mere words. But, as Arnold Kling points out in “An Important Emerging Economic Paradigm”, mathematical economics is a language of faux precision, which is useful only when applied to well-defined, narrow problems. It can’t address the big issues — such as economic growth — which depend on variables, such as the rule of law and social norms, that defy mathematical expression and quantification.

I would go a step further and argue that mathematical economics borders on obscurantism. It’s a cult whose followers speak an arcane language not only to communicate among themselves but to obscure the essentially bankrupt nature of their craft from others. Mathematical expression actually hides the assumptions that underlie it. It’s far easier to identify and challenge the assumptions of “literary” economics than it is to identify and challenge the assumptions of mathematical economics.

I daresay that this is true even for persons who are conversant in mathematics. They may be able to manipulate the equations of mathematical economics with ease, but they can do so without grasping the deeper meanings — the assumptions and complexities — hidden by those equations. In fact, the ease of manipulating the equations gives them a false sense of mastery of the underlying, real concepts.

Much of the economics profession is nevertheless dedicated to the protection and preservation of the essential incompetence of mathematical economists. This is from “An Important Emerging Economic Paradigm”:

One of the best incumbent-protection rackets going today is for mathematical theorists in economics departments. The top departments will not certify someone as being qualified to have an advanced degree without first subjecting the student to the most rigorous mathematical economic theory. The rationale for this is reminiscent of fraternity hazing. “We went through it, so should they.”

Mathematical hazing persists even though there are signs that the prestige of math is on the decline within the profession. The important Clark Medal, awarded to the most accomplished American economist under the age of 40, has not gone to a mathematical theorist since 1989.

These hazing rituals can have real consequences. In medicine, the controversial tradition of long work hours for medical residents has come under scrutiny over the last few years. In economics, mathematical hazing is not causing immediate harm to medical patients. But it probably is working to the long-term detriment of the profession.

The hazing ritual in economics has at least two real and damaging consequences. First, it discourages entry into the economics profession by persons who, like Kling, can discuss economic behavior without resorting to the sterile language of mathematics. Second, it leads to economics that’s irrelevant to the real world — and dead wrong.

How wrong? Economists are notoriously bad at constructing models that adequately predict near-term changes in GDP. That task should be easier than sorting out the microeconomic complexities of the labor market.

Take Professor Ray Fair, for example. Professor Fair teaches macroeconomic theory, econometrics, and macroeconometric models at Yale University. He has been plying his trade since 1968, first at Princeton, then at M.I.T., and (since 1974) at Yale. Those are big-name schools, so I assume that Prof. Fair is a big name in his field.

Well, since 1983, Prof. Fair has been forecasting changes in real GDP over the next four quarters. He has made 80 such forecasts based on a model that he has undoubtedly tweaked over the years. The current model is here. His forecasting track record is here. How has he done? Here’s how:

1. The median absolute error of his forecasts is 30 percent.

2. The mean absolute error of his forecasts is 70 percent.

3. His forecasts are rather systematically biased: too high when real four-quarter GDP growth is less than 4 percent; too low when real four-quarter GDP growth is greater than 4 percent.

4. His forecasts have grown generally worse — not better — with time.
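
For concreteness, here is a sketch of how such error statistics can be computed from a table of predicted and actual four-quarter growth rates. The arrays below are hypothetical placeholders, not Prof. Fair’s published values, which are in Table 4 at his website.

    import numpy as np

    # Hypothetical placeholders for predicted and actual four-quarter
    # real GDP growth rates (percent); substitute the values from Table 4
    # to reproduce the statistics above.
    predicted = np.array([3.1, 2.4, 4.0, 1.8, 2.9])
    actual    = np.array([2.2, 3.5, 2.6, 1.1, 4.4])

    # Absolute forecast error as a percentage of actual growth.
    abs_pct_error = np.abs(predicted - actual) / np.abs(actual) * 100

    print(f"median absolute error: {np.median(abs_pct_error):.0f} percent")
    print(f"mean absolute error:   {np.mean(abs_pct_error):.0f} percent")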

Prof. Fair is still at it. And his forecasts continue to grow worse with time:

[Figure: Fair model forecasting errors vs. time]
This and later graphs pertaining to Prof. Fair’s forecasts were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing; 8 percent of the errors exceed 200 percent.

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.
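
To see what such a model is, structurally, consider a toy version: three equations with made-up coefficients, solved simultaneously. It is in the spirit of, but vastly smaller than, Fair’s 188-equation system.

    import numpy as np

    # A toy three-equation "macro model" (coefficients entirely made up):
    #   C = 0.6*Y + 20     (consumption)
    #   I = 0.2*Y + 10     (investment)
    #   Y = C + I + G      (income identity), with G = 50 set exogenously
    # Rearranged as A @ [C, I, Y] = b and solved simultaneously:
    A = np.array([
        [ 1.0,  0.0, -0.6],
        [ 0.0,  1.0, -0.2],
        [-1.0, -1.0,  1.0],
    ])
    b = np.array([20.0, 10.0, 50.0])

    C, I, Y = np.linalg.solve(A, b)
    print(f"C = {C:.1f}, I = {I:.1f}, Y = {Y:.1f}")  # C = 260.0, I = 90.0, Y = 400.0

Scale that up to six sectors and 188 equations, many of them estimated from data, and you have the flavor of the exercise.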

And what does that get you? A weak predictive model:

[Figure: Fair model estimated vs. actual growth rate]

It fails the most important test; that is, it doesn’t reflect the downward trend in economic growth:

[Figure: Fair model year-over-year growth, estimated and actual]

THE INVISIBLE ELEPHANT IN THE ROOM

Professor Fair and his prognosticating ilk are pikers compared with John Maynard Keynes and his disciples. The Keynesian multiplier is the fraud of all frauds, not just in economics but in politics, where it is too often invoked as an excuse for taking money from productive uses and pouring it down the rathole of government spending.

The Keynesian (fiscal) multiplier is defined as

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?
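
For reference, here is the standard textbook algebra that produces such numbers, reproduced only so that the reader can see exactly what is being criticized:

    Y = C + I + G            (income accounting identity)
    C = a + bY               (consumption function, 0 < b < 1)

    => Y = (a + I + G)/(1 - b)
    => dY/dG = 1/(1 - b)

With a marginal propensity to consume of b = 0.8, the “multiplier” works out to 1/(1 - 0.8) = 5, the common mathematical estimate mentioned in the list below.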

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier. Please go to “The Keynesian Multiplier: Fiction vs. Fact” for a detailed explanation of the phony math and a derivation of the true multiplier, which is decidedly negative. Here’s the short version:

  • The phony math involves the use of an accounting identity that can be manipulated in many ways, to “prove” many things. But the accounting identity doesn’t express an operational (or empirical) relationship between a change in government spending and a change in GDP.
  • The true value of the multiplier isn’t 5 (a common mathematical estimate), 1.5 (a common but mistaken empirical estimate used for government purposes), or any positive number. The true value represents the negative relationship between the change in government spending (including transfer payments) as a fraction of GDP and the change in the rate of real GDP growth. Specifically, where F represents government spending as a fraction of GDP,

a rise in F from 0.24 to 0.33 (the actual change from 1947 to 2007) would reduce the real rate of economic growth by 3.1 percentage points. The real rate of growth from 1947 to 1957 was 4 percent. Other things being the same, the rate of growth would have dropped to 0.9 percent in the period 2008-2017. It actually dropped to 1.4 percent, which is within the standard error of the estimate.

  • That kind of drop makes a huge difference in the incomes of Americans. In 10 years, GDP rises by almost 50 percent when the rate of growth is 4 percent, but by only about 15 percent when the rate of growth is 1.4 percent. (The arithmetic is checked below.) Think of the tens of millions of people who would be living in comfort rather than squalor were it not for Keynesian balderdash, which turns reality on its head in order to promote big government.
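
The compounding arithmetic in the last bullet is easy to check:

    # Cumulative GDP growth over 10 years at 4.0 percent vs. 1.4 percent per year.
    for rate in (0.04, 0.014):
        cumulative = (1 + rate) ** 10 - 1
        print(f"{rate:.1%} annual growth -> {cumulative:.0%} over 10 years")

    # Output:
    # 4.0% annual growth -> 48% over 10 years
    # 1.4% annual growth -> 15% over 10 years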

MANAGEMENT “SCIENCE”

A hot new item in management “science” a few years ago was the Candle Problem. Graham Morehead describes the problem and discusses its broader, “scientifically” supported conclusions:

The Candle Problem was first presented by Karl Duncker. Published posthumously in 1945, “On problem solving” describes how Duncker provided subjects with a candle, some matches, and a box of tacks. He told each subject to affix the candle to a cork board wall in such a way that when lit, the candle won’t drip wax on the table below (see figure at right). Can you think of the answer?

The only answer that really works is this: 1. Dump the tacks out of the box, 2. Tack the box to the wall, 3. Light the candle and affix it atop the box as if it were a candle-holder. Incidentally, the problem was much easier to solve if the tacks weren’t in the box at the beginning. When the tacks were in the box the participant saw it only as a tack-box, not something they could use to solve the problem. This phenomenon is called “Functional fixedness.”

Sam Glucksberg added a fascinating twist to this finding in his 1962 paper, “Influence of strength of drive on functional fixedness and perceptual recognition” (Journal of Experimental Psychology, 1962, Vol. 63, No. 1, 36-41). He studied the effect of financial incentives on solving the candle problem. To one group he offered no money. To the other group he offered an amount of money for solving the problem fast.

Remember, there are two candle problems. Let the “Simple Candle Problem” be the one where the tacks are outside the box — no functional fixedness. The solution is straightforward. Here are the results for those who solved it:

Simple Candle Problem Mean Times :

  • WITHOUT a financial incentive : 4.99 min
  • WITH a financial incentive : 3.67 min

Nothing unexpected here. This is a classical incentivization effect anybody would intuitively expect.

Now, let “In-Box Candle Problem” refer to the original description where the tacks start off in the box.

In-Box Candle Problem Mean Times :

  • WITHOUT a financial incentive : 7:41 min
  • WITH a financial incentive : 11:08 min

How could this be? The financial incentive made people slower? It gets worse — the slowness increases with the incentive. The higher the monetary reward, the worse the performance! This result has been repeated many times since the original experiment.

Glucksberg and others have shown this result to be highly robust. Daniel Pink calls it a legally provable “fact.” How should we interpret the above results?

When your employees have to do something straightforward, like pressing a button or manning one stage in an assembly line, financial incentives work. It’s a small effect, but they do work. Simple jobs are like the simple candle problem.

However, if your people must do something that requires any creative or critical thinking, financial incentives hurt. The In-Box Candle Problem is the stereotypical problem that requires you to think “Out of the Box,” (you knew that was coming, didn’t you?). Whenever people must think out of the box, offering them a monetary carrot will keep them in that box.

A monetary reward will help your employees focus. That’s the point. When you’re focused you are less able to think laterally. You become dumber. This is not the kind of thing we want if we expect to solve the problems that face us in the 21st century.

All of this is found in a video (to which Morehead links), wherein Daniel Pink (an author and journalist whose actual knowledge of science and business appears to be close to zero) expounds the lessons of the Candle Problem. Pink displays his (no-doubt-profitable) conviction that the Candle Problem and related “science” reveal (a) the utter bankruptcy of capitalism and (b) the need to replace managers with touchy-feely gurus (like himself, I suppose). That Pink has worked for two of the country’s leading anti-capitalist airheads — Al Gore and Robert Reich — should tell you all that you need to know about Pink’s real agenda.

Here are my reasons for sneering at Pink and his ilk:

1. I have been there and done that. That is to say, as a manager, I lived through (and briefly bought into) the touchy-feely fads of the ’80s and ’90s. Think In Search of Excellence, The One Minute Manager, The Seven Habits of Highly Effective People, and so on. What did anyone really learn from those books and the lectures and workshops based on them? A perceptive person would have learned that it is easy to make up plausible stories about the elements of success, and having done so, it is possible to make a lot of money peddling those stories. But the stories are flawed because (a) they are based on exceptional cases; (b) they attribute success to qualitative assessments of behaviors that seem to be present in those exceptional cases; and (c) they do not properly account for the surrounding (and critical) circumstances that really led to success, among which are luck and rare combinations of personal qualities (e.g., high intelligence, perseverance, people-reading skills). In short, Pink and his predecessors are guilty of reductionism and the post hoc ergo propter hoc fallacy.

2. Also at work is an undue generalization about the implications of the Candle Problem. It may be true that workers will perform better — at certain kinds of tasks (very loosely specified) — if they are not distracted by incentives that are related to the performance of those specific tasks. But what does that have to do with incentives in general? Not much, because the Candle Problem is unlike any work situation that I can think of. Tasks requiring creativity are not performed under deadlines of a few minutes; tasks requiring creativity are (usually) assigned to persons who have demonstrated a creative flair, not to randomly picked subjects; most work, even in this day, involves the routine application of protocols and tools that were designed to produce a uniform result of acceptable quality; it is the design of protocols and tools that requires creativity, and that kind of work is not done under the kind of artificial constraints found in the Candle Problem.

3. The Candle Problem, with its anti-incentive “lesson”, is therefore inapplicable to the real world, where incentives play a crucial and positive role:

  • The profit incentive leads firms to invest resources in the development and/or production of things that consumers are willing to buy because those things satisfy wants at the right price.
  • Firms acquire resources to develop and produce things by bidding for those resources, that is, by offering monetary incentives to attract the resources required to make the things that consumers are willing to buy.
  • The incentives (compensation) offered to workers of various kinds (from scientists with doctorates to burger-flippers) are generally commensurate with the contributions made by those workers to the production of things of value to consumers, and to the value placed on those things by consumers.
  • Workers agree to the terms and conditions of employment (including compensation) before taking a job. The incentive for most workers is to keep a job by performing adequately over a sustained period — not by demonstrating creativity in a few minutes. Some workers (but not a large fraction of them) are striving for performance-based commissions, bonuses, and profit-sharing distributions. But those distributions are based on performance over a sustained period, during which the striving workers have plenty of time to think about how they can perform better.
  • Truly creative work is done, for the most part, by persons who are hired for such work on the basis of their credentials (education, prior employment, test results). Their compensation is based on their credentials, initially, and then on their performance over a sustained period. If they are creative, they have plenty of psychological space in which to exercise and demonstrate their creativity.
  • On-the-job creativity — the improvement of protocols and tools by workers using them — does not occur under conditions of the kind assumed in the Candle Problem. Rather, on-the-job creativity flows from actual work and insights about how to do the work better. It happens when it happens, and has nothing to do with artificial time constraints and monetary incentives to be “creative” within those constraints.
  • Pink’s essential pitch is that incentives can be replaced by offering jobs that yield autonomy (self-direction), mastery (the satisfaction of doing difficult things well), and purpose (the satisfaction of contributing to the accomplishment of something important). Well, good luck with that, but I (and millions of other consumers) want what we want, and if workers want to make a living they will just have to provide what we want, not what turns them on. Yes, there is a lot to be said for autonomy, mastery, and purpose, but there is also a lot to be said for getting a paycheck. And, contrary to Pink’s implication, getting a paycheck does not rule out autonomy, mastery, and purpose — where those happen to go with the job.

Pink and company’s “insights” about incentives and creativity are 180 degrees off-target. McDonald’s could use the Candle Problem to select creative burger-flippers who will perform well under tight deadlines because their compensation is unrelated to the creativity of their burger-flipping. McDonald’s customers should be glad that McDonald’s has taken creativity out of the picture by reducing burger-flipping to the routine application of protocols and tools.

In summary:

  • The Candle Problem is an interesting experiment, and probably valid with respect to the performance of specific tasks against tight deadlines. I think the results apply whether the stakes are money or any other kind of prize. The experiment illustrates the “choke” factor, and nothing more profound than that.
  • I question whether the experiment applies to the usual kind of incentive (e.g., a commission or bonus), where the “incentee” has ample time (months, years) for reflection and research that will enable him to improve his performance and attain a bigger commission or bonus (which usually isn’t an all-or-nothing arrangement).
  • There’s also the dissimilarity of the Candle Problem — which involves more-or-less randomly chosen subjects, working against an artificial deadline — and actual creative thinking — usually involving persons who are experts (even if the expertise is as mundane as ditch-digging), working against looser deadlines or none at all.

PARTISAN POLITICS IN THE GUISE OF PSEUDO-SCIENCE

There’s plenty of it to go around, but this one is a whopper. Peter Singer outdoes his usual tendentious self in this review of Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. In the course of the review, Singer writes:

Pinker argues that enhanced powers of reasoning give us the ability to detach ourselves from our immediate experience and from our personal or parochial perspective, and frame our ideas in more abstract, universal terms. This in turn leads to better moral commitments, including avoiding violence. It is just this kind of reasoning ability that has improved during the 20th century. He therefore suggests that the 20th century has seen a “moral Flynn effect, in which an accelerating escalator of reason carried us away from impulses that lead to violence” and that this lies behind the long peace, the new peace, and the rights revolution. Among the wide range of evidence he produces in support of that argument is the tidbit that since 1946, there has been a negative correlation between an American president’s I.Q. and the number of battle deaths in wars involving the United States.

Singer does not give the source of the IQ estimates on which Pinker relies, but the supposed correlation points to a discredited piece of historiometry by Dean Keith Simonton, who jumps through various hoops to assess the IQ of every president from Washington to Bush II — to one decimal place. That is a feat on a par with reconstructing the final thoughts of Abel, ere Cain slew him.

Before I explain the discrediting of Simonton’s obviously discreditable “research”, there is some fun to be had with the Pinker-Singer story of presidential IQ (Simonton-style) for battle deaths. First, of course, there is the convenient cutoff point of 1946. Why 1946? Well, it enables Pinker-Singer to avoid the inconvenient fact that the Civil War, World War I, and World War II happened while the presidency was held by three men who (in Simonton’s estimation) had high IQs: Lincoln, Wilson, and FDR.

The next several graphs depict best-fit relationships between Simonton’s estimates of presidential IQ and the U.S. battle deaths that occurred during each president’s term of office.* The presidents, in order of their appearance in the titles of the graphs, are Harry S Truman (HST), George W. Bush (GWB), Franklin Delano Roosevelt (FDR), (Thomas) Woodrow Wilson (WW), Abraham Lincoln (AL), and George Washington (GW). The number of battle deaths is rounded to the nearest thousand, so that the prevailing value is 0, even in the case of the Spanish-American War (385 U.S. combat deaths) and George H.W. Bush’s Gulf War (147 U.S. combat deaths).

This is probably the relationship referred to by Singer, though Pinker may show a linear fit, rather than the tighter polynomial fit used here:

[Graph: polynomial best fit, presidential “IQ” vs. U.S. battle deaths, HST through GWB]

It looks bad for the low “IQ” presidents — if you believe Simonton’s estimates of IQ, which you shouldn’t, and if you believe that battle deaths are a bad thing per se, which they aren’t. I will come back to those points. For now, just suspend your well-justified disbelief.

If the relationship for the HST-GWB era were statistically meaningful, it would not change much with the introduction of additional statistics about “IQ” and battle deaths, but it does:

[Graphs: the HST-GWB best fit re-estimated as FDR, WW, AL, and GW are added; the one-shot negative relationship gives way to a consistent “humpback” shape]

If you buy the brand of snake oil being peddled by Pinker-Singer, you must believe that the “dumbest” and “smartest” presidents are unlikely to get the U.S. into wars that result in a lot of battle deaths, whereas some (but, mysteriously, not all) of the “medium-smart” presidents (Lincoln, Wilson, FDR) are likely to do so.

In any event, if you believe in Pinker-Singer’s snake oil, you must accept the consistent “humpback” relationship that is depicted in the preceding four graphs, rather than the highly selective, one-shot negative relationship of the HST-GWB graph.
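
The instability is easy to reproduce in miniature. In the sketch below the (“IQ”, battle-deaths) pairs are made up (they are not Simonton’s numbers); the point is only that a curve fitted to a handful of observations swings wildly when one observation is added.

    import numpy as np

    # Made-up ("IQ", battle deaths in 000s) pairs -- NOT Simonton's estimates.
    iq     = np.array([120.0, 126.0, 132.0, 140.0, 147.0])
    deaths = np.array([34.0, 5.0, 0.0, 0.0, 0.0])

    coef_before = np.polyfit(iq, deaths, deg=2)

    # Add a single high-"IQ", high-deaths observation (an FDR-like point)
    # and refit; the coefficients change drastically.
    coef_after = np.polyfit(np.append(iq, 150.0), np.append(deaths, 292.0), deg=2)

    print("quadratic coefficients before:", np.round(coef_before, 2))
    print("quadratic coefficients after: ", np.round(coef_after, 2))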

More seriously, the relationship in the HST-GWB graph is an evident ploy to discredit certain presidents (especially GWB, I suspect), which is why it covers only the period since WWII. Why not just say that you think GWB is a chimp-like, war-mongering moron and be done with it? Pseudo-statistics of the kind offered up by Pinker-Singer are nothing more than a talking point for those already convinced that Bush=Hitler.

But as long as this silly game is in progress, let us continue it, with a new rule. Let us advance from one to two explanatory variables. The second explanatory variable that strongly suggests itself is political party. And because it is not good practice to omit relevant statistics (a favorite gambit of liars), I estimated an equation based on “IQ” and battle deaths for the 27 men who served as president from the first Republican presidency (Lincoln’s) through the presidency of GWB. The equation looks like this:

U.S. battle deaths (000) “owned” by a president = -80.6 + 0.841 x “IQ” - 31.3 x party (where party: 0 = Dem, 1 = GOP)

In other words, battle deaths rise at the rate of 841 per IQ point (so much for Pinker-Singer). But there will be fewer deaths with a Republican in the White House (so much for Pinker-Singer’s implied swipe at GWB).
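
For the record, a regression of that form takes only a few lines to estimate. The rows below are placeholders; to replicate the estimate, fill in the 27 presidents’ “IQ” scores, parties, and battle-death totals from the sources described in the footnote.

    import numpy as np

    # Placeholder rows ("IQ", party: 0 = Dem, 1 = GOP, battle deaths in 000s);
    # fill in all 27 presidents' values to replicate the estimated equation.
    iq     = np.array([128.0, 140.0, 132.0, 119.0])
    party  = np.array([1.0, 0.0, 0.0, 1.0])
    deaths = np.array([140.0, 292.0, 53.0, 5.0])

    # Ordinary least squares: deaths = b0 + b1*IQ + b2*party.
    X = np.column_stack([np.ones_like(iq), iq, party])
    b0, b1, b2 = np.linalg.lstsq(X, deaths, rcond=None)[0]

    print(f"deaths (000) = {b0:.1f} + {b1:.3f} x IQ + {b2:.1f} x party")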

All of this is nonsense, of course, for two reasons: Simonton’s estimates of IQ are hogwash, and the number of U.S. battle deaths is a meaningless number, taken by itself.

With regard to the hogwash, Simonton’s estimates of presidents’ IQs put every one of them — including the “dumbest,” U.S. Grant — in the top 2.3 percent of the population. And the mean of Simonton’s estimates puts the average president in the top 0.1 percent (one-tenth of one percent) of the population. That is literally incredible. Good evidence of the unreliability of Simonton’s estimates is found in an entry by Thomas C. Reeves at George Mason University’s History News Network. Reeves is the author of A Question of Character: A Life of John F. Kennedy, the negative reviews of which are evidently the work of JFK idolators who refuse to be disillusioned by facts. Anyway, here is Reeves:

I’m a biographer of two of the top nine presidents on Simonton’s list and am highly familiar with the histories of the other seven. In my judgment, this study has little if any value. Let’s take JFK and Chester A. Arthur as examples.

Kennedy was actually given an IQ test before entering Choate. His score was 119…. There is no evidence to support the claim that his score should have been more than 40 points higher [i.e., the IQ of 160 attributed to Kennedy by Simonton]. As I described in detail in A Question Of Character [link added], Kennedy’s academic achievements were modest and respectable, his published writing and speeches were largely done by others (no study of Kennedy is worthwhile that downplays the role of Ted Sorensen)….

Chester Alan Arthur was largely unknown before my Gentleman Boss was published in 1975. The discovery of many valuable primary sources gave us a clear look at the president for the first time. Among the most interesting facts that emerged involved his service during the Civil War, his direct involvement in the spoils system, and the bizarre way in which he was elevated to the GOP presidential ticket in 1880. His concealed and fatal illness while in the White House also came to light.

While Arthur was a college graduate, and was widely considered to be a gentleman, there is no evidence whatsoever to suggest that his IQ was extraordinary. That a psychologist can rank his intelligence 2.3 points ahead of Lincoln’s suggests access to a treasure of primary sources from and about Arthur that does not exist.

This historian thinks it impossible to assign IQ numbers to historical figures. If there is sufficient evidence (as there usually is in the case of American presidents), we can call people from the past extremely intelligent. Adams, Wilson, TR, Jefferson, and Lincoln were clearly well above average intellectually. But let us not pretend that we can rank them by tenths of a percentage point or declare that a man in one era stands well above another from a different time and place.

My educated guess is that this recent study was designed in part to denigrate the intelligence of the current occupant of the White House….

That is an excellent guess.

The meaninglessness of battle deaths as a measure of anything — but battle deaths — should be evident. But in case it is not evident, here goes:

  • Wars are sometimes necessary, sometimes not. (I give my views about the wisdom of America’s various wars at this post.) Necessary or not, presidents usually act in accordance with popular and elite opinion about the desirability of a particular war. Imagine, for example, the reaction if FDR had not gone to Congress on December 8, 1941, to ask for a declaration of war against Japan, or if GWB had not sought the approval of Congress for action in Afghanistan.
  • Presidents may have a lot to do with the decision to enter a war, but they have little to do with the external forces that help to shape that decision. GHWB, for example, had nothing to do with Saddam’s decision to invade Kuwait and thereby threaten vital U.S. interests in the Middle East. GWB, to take another example, was not a party to the choices of earlier presidents (GHWB and Clinton) that enabled Saddam to stay in power and encouraged Osama bin Laden to believe that America could be brought to its knees by a catastrophic attack.
  • The number of battle deaths in a war depends on many things outside the control of a particular president; for example, the size and capabilities of enemy forces, the size and capabilities of U.S. forces (which have a lot to do with the decisions of earlier administrations and Congresses), and the scope and scale of a war (again, largely dependent on the enemy).
  • Battle deaths represent personal tragedies, but — in and of themselves — are not a measure of a president’s wisdom or acumen. Whether the deaths were in vain is a separate issue that depends on the aforementioned considerations. To use battle deaths as a single, negative measure of a president’s ability is rank cynicism — the rankness of which is revealed in Pinker’s decision to ignore Lincoln and FDR and their “good” but deadly wars.

To put the last point another way, if the number of battle deaths is a bad thing, Lincoln and FDR should be rotting in hell for the wars that brought an end to slavery and Hitler.
__________
* The numbers of U.S. battle deaths, by war, are available at infoplease.com, “America’s Wars: U.S. Casualties and Veterans”. The deaths are “assigned” to presidents as follows (numbers in parentheses indicate thousands of deaths):

All of the deaths (2) in the War of 1812 occurred on Madison’s watch.

All of the deaths (2) in the Mexican-American War occurred on Polk’s watch.

I count only Union battle deaths (140) during the Civil War; all are “Lincoln’s.” Let the Confederate dead be on the head of Jefferson Davis. This is a gift, of sorts, to Pinker-Singer, because if the Confederate dead were counted as Lincoln’s (he of the high “IQ”), it would make Pinker-Singer’s hypothesis even more ludicrous than it is.

WW is the sole “owner” of WWI battle deaths (53).

Some of the U.S. battle deaths in WWII (292) occurred while HST was president, but Truman was merely presiding over the final months of a war that was almost won when FDR died. Truman’s main role was to hasten the end of the war in the Pacific by electing to drop the A-bombs on Hiroshima and Nagasaki. So FDR gets “credit” for all WWII battle deaths.

The Korean War did not end until after Eisenhower succeeded Truman, but it was “Truman’s war,” so he gets “credit” for all Korean War battle deaths (34). This is another “gift” to Pinker-Singer because Ike’s “IQ” is higher than Truman’s.

Vietnam was “LBJ’s war,” but I’m sure that Singer would not want Nixon to go without “credit” for the battle deaths that occurred during his administration. Moreover, LBJ had effectively lost the Vietnam war through his gradualism, but Nixon chose nevertheless to prolong the agony. So I have shared the “credit” for Vietnam War battle deaths between LBJ (deaths in 1965-68: 29) and RMN (deaths in 1969-73: 17). To do that, I apportioned total Vietnam War battle deaths, as given by infoplease.com, according to the total number of U.S. deaths in each year of the war, 1965-1973.

The wars in Afghanistan and Iraq are “GWB’s wars,” even though Obama has continued them. So I have “credited” GWB with all the battle deaths in those wars, as of May 27, 2011 (5).

The relative paucity of U.S. combat deaths in other post-WWII actions (e.g., Lebanon, Somalia, Persian Gulf) is attested to by “Post-Vietnam Combat Casualties”, at infoplease.com.

A THIRD APPEARANCE BY PINKER

Steven Pinker, whose ignominious outpourings I have addressed twice here, deserves a third strike (which he shall duly be awarded). Pinker’s The Better Angels of Our Nature is cited gleefully by leftists and cockeyed optimists as evidence that human beings, on the whole, are becoming kinder and gentler because of:

  • The Leviathan – the rise of the modern nation-state and judiciary “with a monopoly on the legitimate use of force,” which “can defuse the [individual] temptation of exploitative attack, inhibit the impulse for revenge, and circumvent…self-serving biases”;
  • Commerce – the rise of “technological progress [allowing] the exchange of goods and services over longer distances and larger groups of trading partners,” so that “other people become more valuable alive than dead” and “are less likely to become targets of demonization and dehumanization”;
  • Feminization – increasing respect for “the interests and values of women”;
  • Cosmopolitanism – the rise of forces such as literacy, mobility, and mass media, which “can prompt people to take the perspectives of people unlike themselves and to expand their circle of sympathy to embrace them”;
  • The Escalator of Reason – an “intensifying application of knowledge and rationality to human affairs,” which “can force people to recognize the futility of cycles of violence, to ramp down the privileging of their own interests over others’, and to reframe violence as a problem to be solved rather than a contest to be won.”

I can tell you that Pinker’s book is hogwash because two very bright leftists — Peter Singer and Will Wilkinson — have strongly and wrongly endorsed some of its key findings. I dispatched Singer earlier. As for Wilkinson, he praises statistics adduced by Pinker that show a decline in the use of capital punishment:

In the face of such a decisive trend in moral culture, we can say a couple different things. We can say that this is just change and says nothing in particular about what is really right or wrong, good or bad. Or we can say this is evidence of moral progress, that we have actually become better. I prefer the latter interpretation for basically the same reasons most of us see the abolition of slavery and the trend toward greater equality between races and sexes as progress and not mere morally indifferent change. We can talk about the nature of moral progress later. It’s tricky. For now, I want you to entertain the possibility that convergence toward the idea that execution is wrong counts as evidence that it is wrong.

I would count convergence toward the idea that execution is wrong as evidence that it is wrong, if that idea were (a) increasingly held by individuals who (b) had arrived at their “enlightenment” uninfluenced by operatives of the state (legislatures and judges), who take it upon themselves to flout popular support of the death penalty. What we have, in the case of the death penalty, is moral regress, not moral progress.

Moral regress because the abandonment of the death penalty puts innocent lives at risk. Capital punishment sends a message, and the message is effective when it is delivered: it deters homicide. And even if it didn’t, it would at least remove killers from our midst, permanently. By what standard of morality can one claim that it is better to spare killers than to protect innocents? For that matter, by what standard of morality is it better to kill innocents in the womb than to spare killers? Proponents of abortion (like Singer and Wilkinson) — who by and large oppose capital punishment — are completely lacking in moral authority.

Returning to Pinker’s thesis that violence has declined, I quote a review at Foseti:

Pinker’s basic problem is that he essentially defines “violence” in such a way that his thesis that violence is declining becomes self-fulfilling. “Violence” to Pinker is fundamentally synonymous with behaviors of older civilizations. On the other hand, modern practices are defined to be less violent than older practices.

A while back, I linked to a story about a guy in my neighborhood who’s been arrested over 60 times for breaking into cars. A couple hundred years ago, this guy would have been killed for this sort of vandalism after he got caught the first time. Now, we feed him and shelter him for a while and then we let him back out to do this again. Pinker defines the new practice as a decline in violence – we don’t kill the guy anymore! Someone from a couple hundred years ago would be appalled that we let the guy continue destroying other peoples’ property without consequence. In the mind of those long dead, “violence” has in fact increased. Instead of a decline in violence, this practice seems to me like a decline in justice – nothing more or less.

Here’s another example: Pinker uses creative definitions to show that the conflicts of the 20th Century pale in comparison to previous conflicts. For example, all the Mongol Conquests are considered one event, even though they cover 125 years. If you lump all these various conquests together and you split up WWI, WWII, Mao’s takeover in China, the Bolshevik takeover of Russia, the Russian Civil War, and the Chinese Civil War (yes, he actually considers this a separate event from Mao), you unsurprisingly discover that the events of the 20th Century weren’t all that violent compared to events in the past! Pinker’s third most violent event is the “Mideast Slave Trade” which he says took place between the 7th and 19th Centuries. Seriously. By this standard, all the conflicts of the 20th Century are related. Is the Russian Revolution or the rise of Mao possible without WWII? Is WWII possible without WWI? By this consistent standard, the 20th Century wars of Communism would have seen the worst conflict by far. Of course, if you fiddle with the numbers, you can make any point you like.

There’s much more to the review, including some telling criticisms of Pinker’s five reasons for the (purported) decline in violence. That the reviewer somehow still wants to believe in the rightness of Pinker’s thesis says more about the reviewer’s optimism than it does about the validity of Pinker’s thesis.

That thesis is fundamentally flawed, as Robert Epstein points out in a review at Scientific American:

[T]he wealth of data [Pinker] presents cannot be ignored—unless, that is, you take the same liberties as he sometimes does in his book. In two lengthy chapters, Pinker describes psychological processes that make us either violent or peaceful, respectively. Our dark side is driven by an evolution-based propensity toward predation and dominance. On the angelic side, we have, or at least can learn, some degree of self-control, which allows us to inhibit dark tendencies.

There is, however, another psychological process—confirmation bias—that Pinker sometimes succumbs to in his book. People pay more attention to facts that match their beliefs than those that undermine them. Pinker wants peace, and he also believes in his hypothesis; it is no surprise that he focuses more on facts that support his views than on those that do not. The SIPRI arms data are problematic, and a reader can also cherry-pick facts from Pinker’s own book that are inconsistent with his position. He notes, for example, that during the 20th century homicide rates failed to decline in both the U.S. and England. He also describes in graphic and disturbing detail the savage way in which chimpanzees—our closest genetic relatives in the animal world—torture and kill their own kind.

Of greater concern is the assumption on which Pinker’s entire case rests: that we look at relative numbers instead of absolute numbers in assessing human violence. But why should we be content with only a relative decrease? By this logic, when we reach a world population of nine billion in 2050, Pinker will conceivably be satisfied if a mere two million people are killed in war that year.

The biggest problem with the book, though, is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Pinker’s belief that violence is on the decline reminds me of “it’s different this time”, a phrase that was on the lips of hopeful stock-pushers, stock-buyers, and pundits during the stock-market bubble of the late 1990s. That bubble ended, of course, in the spectacular crash of 2000.

Predictions about the future of humankind are better left in the hands of writers who see human nature whole, and who are not out to prove that it can be shaped or contained by the kinds of “liberal” institutions that Pinker so obviously favors.

Consider this, from an article by Robert J. Samuelson at The Washington Post:

[T]he Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar. Amid the controversy over leaks from the National Security Agency, this looms as an even bigger downside.

By cyberwarfare, I mean the capacity of groups — whether nations or not — to attack, disrupt and possibly destroy the institutions and networks that underpin everyday life. These would be power grids, pipelines, communication and financial systems, business record-keeping and supply-chain operations, railroads and airlines, databases of all types (from hospitals to government agencies). The list runs on. So much depends on the Internet that its vulnerability to sabotage invites doomsday visions of the breakdown of order and trust.

In a report, the Defense Science Board, an advisory group to the Pentagon, acknowledged “staggering losses” of information involving weapons design and combat methods to hackers (not identified, but probably Chinese). In the future, hackers might disarm military units. “U.S. guns, missiles and bombs may not fire, or may be directed against our own troops,” the report said. It also painted a specter of social chaos from a full-scale cyberassault. There would be “no electricity, money, communications, TV, radio or fuel (electrically pumped). In a short time, food and medicine distribution systems would be ineffective.”

But Pinker wouldn’t count the resulting chaos as violence, as long as human beings were merely starving and dying of various diseases. That violence would ensue, of course, is another story, which is told by John Gray in The Silence of Animals: On Progress and Other Modern Myths. Gray’s book — published 18 months after Better Angels — could be read as a refutation of Pinker’s book, though Gray doesn’t mention Pinker or his book.

The gist of Gray’s argument is faithfully recounted in a review of Gray’s book by Robert W. Merry at The National Interest:

The noted British historian J. B. Bury (1861–1927) … wrote, “This doctrine of the possibility of indefinitely moulding the characters of men by laws and institutions . . . laid a foundation on which the theory of the perfectibility of humanity could be raised. It marked, therefore, an important stage in the development of the doctrine of Progress.”

We must pause here over this doctrine of progress. It may be the most powerful idea ever conceived in Western thought—emphasizing Western thought because the idea has had little resonance in other cultures or civilizations. It is the thesis that mankind has advanced slowly but inexorably over the centuries from a state of cultural backwardness, blindness and folly to ever more elevated stages of enlightenment and civilization—and that this human progression will continue indefinitely into the future…. The U.S. historian Charles A. Beard once wrote that the emergence of the progress idea constituted “a discovery as important as the human mind has ever made, with implications for mankind that almost transcend imagination.” And Bury, who wrote a book on the subject, called it “the great transforming conception, which enables history to define her scope.”

Gray rejects it utterly. In doing so, he rejects all of modern liberal humanism. “The evidence of science and history,” he writes, “is that humans are only ever partly and intermittently rational, but for modern humanists the solution is simple: human beings must in future be more reasonable. These enthusiasts for reason have not noticed that the idea that humans may one day be more rational requires a greater leap of faith than anything in religion.” In an earlier work, Straw Dogs: Thoughts on Humans and Other Animals, he was more blunt: “Outside of science, progress is simply a myth.”

… Gray has produced more than twenty books demonstrating an expansive intellectual range, a penchant for controversy, acuity of analysis and a certain political clairvoyance.

He rejected, for example, Francis Fukuyama’s heralded “End of History” thesis—that Western liberal democracy represents the final form of human governance—when it appeared in this magazine in 1989. History, it turned out, lingered long enough to prove Gray right and Fukuyama wrong….

Though for decades his reputation was confined largely to intellectual circles, Gray’s public profile rose significantly with the 2002 publication of Straw Dogs, which sold impressively and brought him much wider acclaim than he had known before. The book was a concerted and extensive assault on the idea of progress and its philosophical offspring, secular humanism. The Silence of Animals is in many ways a sequel, plowing much the same philosophical ground but expanding the cultivation into contiguous territory mostly related to how mankind—and individual humans—might successfully grapple with the loss of both metaphysical religion of yesteryear and today’s secular humanism. The fundamentals of Gray’s critique of progress are firmly established in both books and can be enumerated in summary.

First, the idea of progress is merely a secular religion, and not a particularly meaningful one at that. “Today,” writes Gray in Straw Dogs, “liberal humanism has the pervasive power that was once possessed by revealed religion. Humanists like to think they have a rational view of the world; but their core belief in progress is a superstition, further from the truth about the human animal than any of the world’s religions.”

Second, the underlying problem with this humanist impulse is that it is based upon an entirely false view of human nature—which, contrary to the humanist insistence that it is malleable, is immutable and impervious to environmental forces. Indeed, it is the only constant in politics and history. Of course, progress in scientific inquiry and in resulting human comfort is a fact of life, worth recognition and applause. But it does not change the nature of man, any more than it changes the nature of dogs or birds. “Technical progress,” writes Gray, again in Straw Dogs, “leaves only one problem unsolved: the frailty of human nature. Unfortunately that problem is insoluble.”

That’s because, third, the underlying nature of humans is bred into the species, just as the traits of all other animals are. The most basic trait is the instinct for survival, which is placed on hold when humans are able to live under a veneer of civilization. But it is never far from the surface. In The Silence of Animals, Gray discusses the writings of Curzio Malaparte, a man of letters and action who found himself in Naples in 1944, shortly after the liberation. There he witnessed a struggle for life that was gruesome and searing. “It is a humiliating, horrible thing, a shameful necessity, a fight for life,” wrote Malaparte. “Only for life. Only to save one’s skin.” Gray elaborates:

Observing the struggle for life in the city, Malaparte watched as civilization gave way. The people the inhabitants had imagined themselves to be—shaped, however imperfectly, by ideas of right and wrong—disappeared. What were left were hungry animals, ready to do anything to go on living; but not animals of the kind that innocently kill and die in forests and jungles. Lacking a self-image of the sort humans cherish, other animals are content to be what they are. For human beings the struggle for survival is a struggle against themselves.

When civilization is stripped away, the raw animal emerges. “Darwin showed that humans are like other animals,” writes Gray in Straw Dogs, expressing in this instance only a partial truth. Humans are different in a crucial respect, captured by Gray himself when he notes that Homo sapiens inevitably struggle with themselves when forced to fight for survival. No other species does that, just as no other species has such a range of spirit, from nobility to degradation, or such a need to ponder the moral implications as it fluctuates from one to the other. But, whatever human nature is—with all of its capacity for folly, capriciousness and evil as well as virtue, magnanimity and high-mindedness—it is embedded in the species through evolution and not subject to manipulation by man-made institutions.

Fourth, the power of the progress idea stems in part from the fact that it derives from a fundamental Christian doctrine—the idea of providence, of redemption….

“By creating the expectation of a radical alteration in human affairs,” writes Gray, “Christianity . . . founded the modern world.” But the modern world retained a powerful philosophical outlook from the classical world—the Socratic faith in reason, the idea that truth will make us free; or, as Gray puts it, the “myth that human beings can use their minds to lift themselves out of the natural world.” Thus did a fundamental change emerge in what was hoped of the future. And, as the power of Christian faith ebbed, along with its idea of providence, the idea of progress, tied to the Socratic myth, emerged to fill the gap. “Many transmutations were needed before the Christian story could renew itself as the myth of progress,” Gray explains. “But from being a succession of cycles like the seasons, history came to be seen as a story of redemption and salvation, and in modern times salvation became identified with the increase of knowledge and power.”

Thus, it isn’t surprising that today’s Western man should cling so tenaciously to his faith in progress as a secular version of redemption. As Gray writes, “Among contemporary atheists, disbelief in progress is a type of blasphemy. Pointing to the flaws of the human animal has become an act of sacrilege.” In one of his more brutal passages, he adds:

Humanists believe that humanity improves along with the growth of knowledge, but the belief that the increase of knowledge goes with advances in civilization is an act of faith. They see the realization of human potential as the goal of history, when rational inquiry shows history to have no goal. They exalt nature, while insisting that humankind—an accident of nature—can overcome the natural limits that shape the lives of other animals. Plainly absurd, this nonsense gives meaning to the lives of people who believe they have left all myths behind.

In the Silence of Animals, Gray explores all this through the works of various writers and thinkers. In the process, he employs history and literature to puncture the conceits of those who cling to the progress idea and the humanist view of human nature. Those conceits, it turns out, are easily punctured when subjected to Gray’s withering scrutiny….

And yet the myth of progress is so powerful in part because it gives meaning to modern Westerners struggling, in an irreligious era, to place themselves in a philosophical framework larger than just themselves….

Much of the human folly catalogued by Gray in The Silence of Animals makes a mockery of the earnest idealism of those who later shaped and molded and proselytized humanist thinking into today’s predominant Western civic philosophy.

RACE AS A SOCIAL CONSTRUCT

David Reich’s hot new book, Who We Are and How We Got Here, is causing a stir in genetic-research circles. Reich, who takes great pains to assure everyone that he isn’t a racist, and who deplores racism, is nevertheless candid about race:

I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”

Groundbreaking advances in DNA sequencing technology have been made over the last two decades. These advances enable us to measure with exquisite accuracy what fraction of an individual’s genetic ancestry traces back to, say, West Africa 500 years ago — before the mixing in the Americas of the West African and European gene pools that were almost completely isolated for the last 70,000 years. With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real….

Self-identified African-Americans turn out to derive, on average, about 80 percent of their genetic ancestry from enslaved Africans brought to America between the 16th and 19th centuries. My colleagues and I searched, in 1,597 African-American men with prostate cancer, for locations in the genome where the fraction of genes contributed by West African ancestors was larger than it was elsewhere in the genome. In 2006, we found exactly what we were looking for: a location in the genome with about 2.8 percent more African ancestry than the average.

When we looked in more detail, we found that this region contained at least seven independent risk factors for prostate cancer, all more common in West Africans. Our findings could fully account for the higher rate of prostate cancer in African-Americans than in European-Americans. We could conclude this because African-Americans who happen to have entirely European ancestry in this small section of their genomes had about the same risk for prostate cancer as random Europeans.

Did this research rely on terms like “African-American” and “European-American” that are socially constructed, and did it label segments of the genome as being probably “West African” or “European” in origin? Yes. Did this research identify real risk factors for disease that differ in frequency across those populations, leading to discoveries with the potential to improve health and save lives? Yes.

While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another.

But whether we like it or not, that line has already been crossed. A recent study led by the economist Daniel Benjamin compiled information on the number of years of education from more than 400,000 people, almost all of whom were of European ancestry. After controlling for differences in socioeconomic background, he and his colleagues identified 74 genetic variations that are over-represented in genes known to be important in neurological development, each of which is incontrovertibly more common in Europeans with more years of education than in Europeans with fewer years of education.

It is not yet clear how these genetic variations operate. A follow-up study of Icelanders led by the geneticist Augustine Kong showed that these genetic variations also nudge people who carry them to delay having children. So these variations may be explaining longer times at school by affecting a behavior that has nothing to do with intelligence.

This study has been joined by others finding genetic predictors of behavior. One of these, led by the geneticist Danielle Posthuma, studied more than 70,000 people and found genetic variations in more than 20 genes that were predictive of performance on intelligence tests.

Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly. And since all traits influenced by genetics are expected to differ across populations (because the frequencies of genetic variations are rarely exactly the same across populations), the genetic influences on behavior and cognition will differ across populations, too.

You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work. Indeed, the study led by Dr. Kong showed that in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century….

So how should we prepare for the likelihood that in the coming years, genetic studies will show that many traits are influenced by genetic variations, and that these traits will differ on average across human populations? It will be impossible — indeed, anti-scientific, foolish and absurd — to deny those differences. [“How Genetics Is Changing Our Understanding of ‘Race’“, The New York Times, March 23, 2018]
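
Strip away the controversy and the genome-scan logic in the passage above is simple: estimate each cohort member’s ancestry fraction within each genomic window, average across the cohort, and flag windows whose average exceeds the genome-wide mean. Here is a minimal sketch of that comparison step. The window names, cohort values, and the 0.02 threshold are hypothetical, and real admixture mapping infers local ancestry statistically from dense genotype data rather than averaging raw fractions.

```python
import statistics

def elevated_ancestry_windows(local_ancestry, threshold=0.02):
    """Flag genomic windows whose cohort-average ancestry fraction
    exceeds the genome-wide mean by more than `threshold`.

    `local_ancestry` maps a window ID to the per-individual West
    African ancestry fractions (0.0 to 1.0) estimated for that window.
    All identifiers and values here are hypothetical.
    """
    window_means = {w: statistics.mean(v) for w, v in local_ancestry.items()}
    genome_mean = statistics.mean(window_means.values())
    return {w: m - genome_mean
            for w, m in window_means.items()
            if m - genome_mean > threshold}

# Toy cohort: most windows sit near the ~0.80 average ancestry Reich
# reports for self-identified African-Americans; one window shows the
# kind of excess that flagged the prostate-cancer region.
cohort = {
    "chr8:win1": [0.80, 0.79, 0.81],
    "chr8:win2": [0.80, 0.81, 0.80],
    "chr8:win3": [0.84, 0.83, 0.83],
}
print(elevated_ancestry_windows(cohort))  # flags chr8:win3
```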

Reich engages in a lot of non-scientific wishful thinking about racial differences and how they should be treated by “society” — none of which is in his purview as a scientist. Reich’s forays into psychobabble have been addressed at length by Steve Sailer (here and here) and Gregory Cochran (here, here, here, here, and here). Suffice it to say that Reich is trying in vain to minimize the scientific fact of racial differences that show up crucially in intelligence and rates of violent crime.

The lesson here is that it’s all right to show that race isn’t a social construct as long as you proclaim that it is a social construct. This is known as talking out of both sides of one’s mouth — another manifestation of balderdash.

DIVERSITY IS GOOD, EXCEPT WHEN IT ISN’T

I now invoke Robert Putnam, a political scientist known mainly for his book Bowling Alone: The Collapse and Revival of American Community (2000), in which he

makes a distinction between two kinds of social capital: bonding capital and bridging capital. Bonding occurs when you are socializing with people who are like you: same age, same race, same religion, and so on. But in order to create peaceful societies in a diverse multi-ethnic country, one needs to have a second kind of social capital: bridging. Bridging is what you do when you make friends with people who are not like you, like supporters of another football team. Putnam argues that those two kinds of social capital, bonding and bridging, do strengthen each other. Consequently, with the decline of the bonding capital mentioned above inevitably comes the decline of the bridging capital leading to greater ethnic tensions.

In later work on diversity and trust within communities, Putnam concludes that

other things being equal, more diversity in a community is associated with less trust both between and within ethnic groups….

Even when controlling for income inequality and crime rates, two factors which conflict theory states should be the prime causal factors in declining inter-ethnic group trust, more diversity is still associated with less communal trust.

Lowered trust in areas with high diversity is also associated with:

  • Lower confidence in local government, local leaders and the local news media.
  • Lower political efficacy – that is, confidence in one’s own influence.
  • Lower frequency of registering to vote, but more interest and knowledge about politics and more participation in protest marches and social reform groups.
  • Higher political advocacy, but lower expectations that it will bring about a desirable result.
  • Less expectation that others will cooperate to solve dilemmas of collective action (e.g., voluntary conservation to ease a water or energy shortage).
  • Less likelihood of working on a community project.
  • Less likelihood of giving to charity or volunteering.
  • Fewer close friends and confidants.
  • Less happiness and lower perceived quality of life.
  • More time spent watching television and more agreement that “television is my most important form of entertainment”.

It’s not as if Putnam is a social conservative who is eager to impart such news. To the contrary, as Michael Jonas writes in “The Downside of Diversity”, Putnam’s

findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis … , he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

“Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk,” wrote conservative commentator Ilana Mercer….

After releasing the initial results in 2001, Putnam says he spent time “kicking the tires really hard” to be sure the study had it right. Putnam realized, for instance, that more diverse communities tended to be larger, have greater income ranges, higher crime rates, and more mobility among their residents — all factors that could depress social capital independent of any impact ethnic diversity might have.

“People would say, ‘I bet you forgot about X,’” Putnam says of the string of suggestions from colleagues. “There were 20 or 30 X’s.”

But even after statistically taking them all into account, the connection remained strong: Higher diversity meant lower social capital. In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.”

“People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes….

In a recent study, [Harvard economist Edward] Glaeser and colleague Alberto Alesina demonstrated that roughly half the difference in social welfare spending between the US and Europe — Europe spends far more — can be attributed to the greater ethnic diversity of the US population. Glaeser says lower national social welfare spending in the US is a “macro” version of the decreased civic engagement Putnam found in more diverse communities within the country.

Economists Matthew Kahn of UCLA and Dora Costa of MIT reviewed 15 recent studies in a 2003 paper, all of which linked diversity with lower levels of social capital. Greater ethnic diversity was linked, for example, to lower school funding, census response rates, and trust in others. Kahn and Costa’s own research documented higher desertion rates in the Civil War among Union Army soldiers serving in companies whose soldiers varied more by age, occupation, and birthplace.

Birds of different feathers may sometimes flock together, but they are also less likely to look out for one another. “Everyone is a little self-conscious that this is not politically correct stuff,” says Kahn….

In his paper, Putnam cites the work done by Page and others, and uses it to help frame his conclusion that increasing diversity in America is not only inevitable, but ultimately valuable and enriching. As for smoothing over the divisions that hinder civic engagement, Putnam argues that Americans can help that process along through targeted efforts. He suggests expanding support for English-language instruction and investing in community centers and other places that allow for “meaningful interaction across ethnic lines.”

Some critics have found his prescriptions underwhelming. And in offering ideas for mitigating his findings, Putnam has drawn scorn for stepping out of the role of dispassionate researcher. “You’re just supposed to tell your peers what you found,” says John Leo, senior fellow at the Manhattan Institute, a conservative think tank. [Michael Jonas, “The downside of diversity,” The Boston Globe (boston.com), August 5, 2007]
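
The tire-kicking that Jonas describes is, at bottom, multiple regression: regress trust on diversity plus the candidate confounders and see whether the diversity coefficient survives. A minimal sketch with simulated data follows; the variable names and effect sizes are invented for illustration and have nothing to do with Putnam’s actual dataset or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated community-level confounders of the sort Putnam checked.
size = rng.normal(size=n)
income_range = rng.normal(size=n)
crime = rng.normal(size=n)
mobility = rng.normal(size=n)

# Diversity is correlated with some confounders, as in real data.
diversity = 0.5 * size + 0.3 * crime + rng.normal(size=n)

# Trust is built with a direct negative diversity effect (-0.4), so a
# properly controlled regression should recover roughly -0.4.
trust = -0.4 * diversity - 0.3 * size - 0.2 * crime + rng.normal(size=n)

# OLS with controls: a column of ones for the intercept, then regressors.
X = np.column_stack([np.ones(n), diversity, size, income_range, crime, mobility])
coef, *_ = np.linalg.lstsq(X, trust, rcond=None)
print(f"diversity coefficient, net of controls: {coef[1]:.2f}")
```

If the diversity coefficient collapsed toward zero once the controls entered, the confounders would explain the association; Putnam reports that it did not.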

What is it about academics like Reich and Putnam that they can’t bear to face the very facts they have uncovered? The magic word is “academics”. They are denizens of a milieu in which the facts of life about race, guns, sex, and many other things are habitually suppressed in favor of “hope and change”, and the facts be damned.

ONE MORE BIT OF RACE-RELATED BALDERDASH

I was unaware of the Implicit Association Test (IAT) until a few years ago, when I took a test at YourMorals.Org that purported to measure my implicit racial preferences. The IAT has since been exposed as junk. John J. Ray puts it this way:

Psychologists are well aware that people often do not say what they really think.  It is therefore something of a holy grail among them to find ways that WILL detect what people really think. A very popular example of that is the Implicit Associations test (IAT).  It supposedly measures racist thoughts whether you are aware of them or not.  It sometimes shows people who think they are anti-racist to be in fact secretly racist.

I dismissed it as a heap of junk long ago (here and here) but it has remained very popular and is widely accepted as revealing truth.  I am therefore pleased that a very long and thorough article has just appeared which comes to the same conclusion that I did.

The article in question (which has the same title as Ray’s post) is by Jesse Singal. It appeared at Science of Us on January 11, 2017. Here are some excerpts:

Perhaps no new concept from the world of academic psychology has taken hold of the public imagination more quickly and profoundly in the 21st century than implicit bias — that is, forms of bias which operate beyond the conscious awareness of individuals. That’s in large part due to the blockbuster success of the so-called implicit association test, which purports to offer a quick, easy way to measure how implicitly biased individual people are….

Since the IAT was first introduced almost 20 years ago, its architects, as well as the countless researchers and commentators who have enthusiastically embraced it, have offered it as a way to reveal to test-takers what amounts to a deep, dark secret about who they are: They may not feel racist, but in fact, the test shows that in a variety of intergroup settings, they will act racist….

[The] co-creators are Mahzarin Banaji, currently the chair of Harvard University’s psychology department, and Anthony Greenwald, a highly regarded social psychology researcher at the University of Washington. The duo introduced the test to the world at a 1998 press conference in Seattle — the accompanying press release noted that they had collected data suggesting that 90–95 percent of Americans harbored the “roots of unconscious prejudice.” The public immediately took notice: Since then, the IAT has been mostly treated as a revolutionary, revelatory piece of technology, garnering overwhelmingly positive media coverage….

Maybe the biggest driver of the IAT’s popularity and visibility, though, is the fact that anyone can take the test on the Project Implicit website, which launched shortly after the test was unveiled and which is hosted by Harvard University. The test’s architects reported that, by October 2015, more than 17 million individual test sessions had been completed on the website. As will become clear, learning one’s IAT results is, for many people, a very big deal that changes how they view themselves and their place in the world.

Given all this excitement, it might feel safe to assume that the IAT really does measure people’s propensity to commit real-world acts of implicit bias against marginalized groups, and that it does so in a dependable, clearly understood way….

Unfortunately, none of that is true. A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior — even the test’s creators have now admitted as such.

How does the IAT work? Singal summarizes:

You sit down at a computer where you are shown a series of images and/or words. First, you’re instructed to hit ‘i’ when you see a “good” term like pleasant, or to hit ‘e’ when you see a “bad” one like tragedy. Then, hit ‘i’ when you see a black face, and hit ‘e’ when you see a white one. Easy enough, but soon things get slightly more complex: Hit ‘i’ when you see a good word or an image of a black person, and ‘e’ when you see a bad word or an image of a white person. Then the categories flip to black/bad and white/good. As you peck away at the keyboard, the computer measures your reaction times, which it plugs into an algorithm. That algorithm, in turn, generates your score.

If you were quicker to associate good words with white faces than good words with black faces, and/or slower to associate bad words with white faces than bad words with black ones, then the test will report that you have a slight, moderate, or strong “preference for white faces over black faces,” or some similar language. You might also find you have an anti-white bias, though that is significantly less common. By the normal scoring conventions of the test, positive scores indicate bias against the out-group, while negative ones indicate bias against the in-group.

The rough idea is that, as humans, we have an easier time connecting concepts that are already tightly linked in our brains, and a tougher time connecting concepts that aren’t. The longer it takes to connect “black” and “good” relative to “white” and “good,” the thinking goes, the more your unconscious biases favor white people over black people.
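
In computational terms, the scoring step Singal describes comes down to comparing reaction times across the two pairing blocks. Here is a toy version with hypothetical latencies; the published scoring algorithm (the D score of Greenwald and colleagues) adds error penalties, trial filtering, and other refinements omitted here.

```python
import statistics

def toy_iat_score(compatible_ms, incompatible_ms):
    """Difference in mean reaction time between pairing blocks, scaled
    by the standard deviation of all trials. Positive values mean
    slower responses in the 'incompatible' block, which the test would
    report as an implicit preference. (Toy version only; the published
    algorithm filters trials and penalizes errors.)
    """
    diff = statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)
    spread = statistics.stdev(compatible_ms + incompatible_ms)
    return diff / spread

# Hypothetical latencies (milliseconds) for the two pairings.
white_good_block = [640, 655, 630, 700, 615]  # white/good, black/bad
black_good_block = [720, 695, 760, 705, 730]  # black/good, white/bad
print(f"{toy_iat_score(white_good_block, black_good_block):.2f}")
```

Note that a respondent fast enough to answer both blocks at near-reflexive speed compresses the mean difference in the numerator toward zero, which bears on the point about reflexes made below.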

Singal continues (at great length) to pile up the mountain of evidence against the IAT, and to caution against reading anything into the results it yields.

Having become aware of the debunking of the IAT, I went to the website of Project Implicit. When I reached this page, I was surprised to learn that I could not only find out whether I’m a closet racist but also whether I prefer dark or light skin tones, Asians or non-Asians, Trump or a previous president, and several other things or their opposites. I chose to discover my true feelings about Trump vs. a previous president, and was faced with a choice between Trump and Clinton.

What was the result of my several minutes of tapping “e” and “i” on the keyboard of my PC? This:

Your data suggest a moderate automatic preference for Bill Clinton over Donald Trump.

Balderdash! Though Trump is obviously not of better character than Clinton, he’s obviously not of worse character. And insofar as policy goes, the difference between Trump and Clinton is somewhat like the difference between a non-silent Calvin Coolidge and an FDR without the patriotism. (With apologies to the memory of Coolidge, my favorite president.)

What did I learn from the IAT? I must have very good reflexes. A person who processes information rapidly and then almost instantly translates it into a physical response should be able to “beat” the IAT. And that’s probably what I did in the Trump vs. Clinton test.

Perhaps the IAT for racism could be used to screen candidates for fighter-pilot training. Only “non-racists” would be admitted. Anyone who isn’t quick enough to avoid the “racist” label isn’t quick enough to win a dogfight.

OTHER “LIBERAL” DELUSIONS

There are plenty of them under the heading of balderdash. It’s also known as magical thinking, in which “ought” becomes “is” and the forces of nature and human nature can be held in abeyance by edict. The following examples revisit some ground already covered here:

  • Men are unnecessary.
  • Women can do everything that men can do, but it doesn’t work the other way … just because.
  • Mothers can work outside the home without damage to their children.
  • Race is a “social construct”; there is no such thing as intelligence; women and men are mentally and physically equal in all respects; and the under-representation of women and blacks in certain fields is therefore due to rank discrimination (but it’s all right if blacks dominate certain sports and women now far outnumber men on college campuses).
  • A minimum wage can be imposed without an increase in unemployment, a “fact” which can be “proven” only by concocting special cases of limited applicability.
  • Taxes can be raised without discouraging investment and therefore reducing the rate of economic growth.
  • Regulation doesn’t reduce the rate of economic growth or foster “crony capitalism”. There can be “free lunches” all around.
  • Health insurance premiums will go down while the number of mandates is increased.
  • The economy can be stimulated through the action of the Keynesian multiplier, which is nothing but phony math.
  • “Green” programs create jobs (but only because they are inefficient).
  • Every “right” under the sun can be granted without cost (e.g., affirmative action racial-hiring quotas, which penalize blameless whites; the Social Security Ponzi scheme, which burdens today’s workers and cuts into growth-inducing saving).

There’s much more in a different vein here.

BALDERDASH AS EUPHEMISTIC THINKING

Balderdash, as I have sampled it here, isn’t just nonsense — it’s nonsense in the service of an agenda. The agenda is too often the expansion of government power. Those who favor the expansion of government power don’t like to think that it hurts people. (“We’re from the government and we’re here to help.”) This is a refusal to face facts, which is amply if not exhaustively illustrated in the preceding entries.

But there’s a lot more where that comes from; for example:

  • Crippled became handicapped, which became disabled and then differently abled or something-challenged.
  • Stupid became learning disabled, which became special needs (a euphemistic category that houses more than the stupid).
  • Poor became underprivileged, which became economically disadvantaged, which became (though it isn’t overtly so called) entitled (as in entitled to other people’s money).
  • Colored persons became Negroes, who became blacks, then African-Americans, and now (often) persons of color.

Why do lefties — lovers of big government — persist in varnishing the truth? They are — they insist — strong supporters of science, which is (ideally) the pursuit of truth. Well, that’s because they aren’t really supporters of science (witness their devotion to the “unsettled” science of AGW, among many fabrications). Nor do they really want the truth. They simply want to portray the world as they would like it to be, or to lie about it so that they can strive to reshape it to their liking.

BALDERDASH IN THE SERVICE OF SLAVERY, MODERN STYLE

I will end with this one, which is less conclusive than what has gone before, but which further illustrates the left’s penchant for evading reality in the service of growing government.

Thomas Nagel writes:

Some would describe taxation as a form of theft and conscription as a form of slavery — in fact some would prefer to describe taxation as slavery too, or at least as forced labor. Much might be said against these descriptions, but that is beside the point. For within proper limits, such practices when engaged in by governments are acceptable, whatever they are called. If someone with an income of $2,000 a year trains a gun on someone with an income of $100,000 a year and makes him hand over his wallet, that is robbery. If the federal government withholds a portion of the second person’s salary (enforcing the laws against tax evasion with threats of imprisonment under armed guard) and gives some of it to the first person in the form of welfare payments, food stamps, or free health care, that is taxation. In the first case it is (in my opinion) an impermissible use of coercive means to achieve a worthwhile end. In the second case the means are legitimate, because they are impersonally imposed by an institution designed to promote certain results. Such general methods of distribution are preferable to theft as a form of private initiative and also to individual charity. This is true not only for reasons of fairness and efficiency, but also because both theft and charity are disturbances of the relations (or lack of them) between individuals and involve their individual wills in a way that an automatic, officially imposed system of taxation does not. [Mortal Questions, “Ruthlessness in Public Life,” pp. 87-88]

How many logical and epistemic errors can a supposedly brilliant philosopher make in one (long) paragraph? Too many:

  • “For within proper limits” means that Nagel is about to beg the question by shaping an answer that fits his idea of proper limits.
  • Nagel then asserts that the use by government of coercive means to achieve the same end as robbery is “legitimate, because [those means] are impersonally imposed by an institution designed to promote certain results.” Balderdash! Nagel’s vision of government as some kind of omniscient, benevolent arbiter is completely at odds with reality.  The “certain results” (redistribution of income) are achieved by functionaries, armed or backed with the force of arms, who themselves share in the spoils of coercive redistribution. Those functionaries act under the authority of bare majorities of elected representatives, who are chosen by bare majorities of voters. And those bare majorities are themselves coalitions of interested parties — hopeful beneficiaries of redistributionist policies, government employees, government contractors, and arrogant statists — who believe, without justification, that forced redistribution is a proper function of government.
  • On the last point, Nagel ignores the sordid history of the unconstitutional expansion of the powers of government. Without justification, he aligns himself with proponents of the “living Constitution.”
  • Nagel’s moral obtuseness is fully revealed when he equates coercive redistribution with “fairness and efficiency,” as if property rights and liberty were of no account.
  • The idea that coercive redistribution fosters efficiency is laughable. It does quite the opposite because it removes resources from productive uses — including job-creating investments. The poor are harmed by coercive redistribution because it drastically curtails economic growth, from which they would benefit as job-holders and (where necessary) recipients of private charity (the resources for which would be vastly greater in the absence of coercive redistribution).
  • Finally (though not exhaustively), Nagel’s characterization of private charity as a “disturbance of the relations … among individuals” is so wrong-headed that it leaves me dumbstruck. Private charity arises from real relations among individuals — from a sense of community and feelings of empathy. It is the “automatic, officially imposed system of taxation” that distorts and thwarts (“disturbs”) the social fabric.

In any event, taxation for the purpose of redistribution is slavery: the subjection of one person to others, namely, agents of the government and the recipients of the taxes extracted from the person who pays them under threat of punishment. It’s slavery without whips and chains, but slavery nevertheless.

Rights, Liberty, the Golden Rule, and Leviathan

Rights arise from voluntary and enduring social relationships. In that respect, they are natural because they represent the accommodations that a people make with each other in order to coexist peacefully and to their mutual benefit. (Natural rights, as I define them, are not the same thing as the kind of “natural rights” that many philosophers, political theorists, mystics and opportunistic politicians claim to find hovering in human beings like Platonic essences. See this, this, this, and this, for example.)

Natural rights, in sum, are the interpersonal claims that a people agree upon and (mainly) observe in their daily interactions. The claims can be negative (do not kill, except in self-defense) or positive (children must be clothed, fed, and taught about rights). For reasons discussed later, such claims are valid and generally honored even if there isn’t a superior power (a chieftain, monarch, or state apparatus) to enforce them.

Liberty is the condition in which agreed rights are generally observed, and enforced when they are violated. Liberty, in other words, is the condition of peaceful, willing coexistence and its concomitant: beneficially cooperative behavior. Peaceful, willing coexistence does not imply “an absence of constraints, impediments, or interference”, which is a standard definition of liberty. Rather, it implies that there is necessarily a degree of compromise (voluntary constraint) for the sake of beneficially cooperative behavior. Even happy marriages are replete with voluntary constraints on behavior, constraints that enable the partners to enjoy the blessings of union.

That’s all there is to it. Liberty isn’t a nirvana-like state of euphoria; it’s just what everyday life is like when people are able to coexist by their own lights, perhaps under the aegis of a superior power which does nothing but ensure that they are able to do so.

The persistence of natural rights and liberty among a people is fostered primarily by mutual trust, respect, and forbearance. Punishment of violations of rights (and therefore of liberty) helps, too, as long as the punishment is generally agreed upon and applied consistently.

Natural rights, as discussed thus far, are distinct from “rights” (sometimes “natural rights”) that people demand of a superior power. (See, for example, the UN Declaration of Human Rights, which is a wish-list of things that people are “entitled” to.) Those are really privileges. Government can (and sometimes does) recognize and protect truly natural rights, but it doesn’t manufacture them. The Bill of Rights, for example, consists of a hodge-podge of actual rights (e.g., the right to bear arms), and privileges (e.g., protection from self-incrimination). Some of the latter are special dispensations made necessary by the existence of government itself, that is, promises made by the government to protect the people from its superior power.

As mentioned in passing earlier, rights are usually divided into two categories: negative and positive. Negative rights are natural rights that can be exercised without requiring anything of others but reciprocal forbearance [1]. Wikipedia puts it this way:

Adrian has a negative right to x against Clay if and only if Clay is prohibited from acting upon Adrian in some way regarding x…. A case in point, if Adrian has a negative right to life against Clay, then Clay is required to refrain from killing Adrian….

To spin out the example, there is a negative right not to be harmed (killed in this case) as long as Clay is forbidden to kill Adrian, Adrian is forbidden to kill Clay, both are forbidden to kill others, and others are forbidden to kill anyone. This is a widely understood and accepted negative right. But it is not an unconditional right. There are also widely understood and accepted exceptions to it, such as killing in self-defense.

In any event, the textbook explanation of negative rights, such as the one given by Wikipedia, is appealing. But it is simplistic, like John Stuart Mill’s harm principle.

“Negative rights” and “harm”, by themselves, are mere abstractions. It seems obvious that a person shouldn’t be harmed as long as he is doing no harm to others, which is the essence of Wikipedia‘s explanation. But “harm” is the operative word. Harm isn’t an abstraction; it’s a real thing — many real things — with concrete meanings. And those concrete meanings arise from social interactions and the norms born of them.

For example, libertarians consider it a negative right to sell one’s home to another person without interference by one’s neighbors (or the state acting on their behalf). One’s neighbors must forbear intervention, just as the seller must forbear intervention against the sales of the neighbors’ homes. But intervention may be necessary to prevent harm.

The part that libertarians usually get wrong is forbearance. Libertarians assume forbearance. They assume forbearance because they assume away — or simply ignore — the possibility that a voluntary transaction between two parties may result in harm to third parties.

But what if the buyer is an absentee owner who rents rooms to all and sundry (resulting in parking problems, an eyesore property, etc.)? Libertarians reject zoning as an infringement on the negative right of property ownership. So what are put-upon neighbors supposed to do about the absentee landlord who rents rooms to all and sundry? Well, the neighbors can always complain to the city government if things get out of hand, can’t they? Yes, but in the meantime harm will have been done, and the police may not be able to put a stop to it unless the harm actually violates a statute or ordinance that the police and courts are willing and able to enforce without being attacked as racist pigs, or some such thing.

Does the libertarian conception of negative rights have room in it for homeowners’ associations that actually allow neighborhoods to define harm, as it applies to their particular circumstances, and act to prevent it? In my experience, the libertarian conception of negative property rights — thou shalt not interfere in the sale of a house — has become enshrined in statutes and ordinances that de-fang homeowners’ associations, making them powerless to prevent harm by enforcing restrictive covenants (e.g., against renting rooms) that libertarians decry as infringements of negative rights.

The only negative rights worthy of the name are specific rights that are recognized within a voluntary and enduring association of persons. Violations of those rights undermine the fabric of mutual trust and mutual forbearance that enable a people to coexist in beneficial, voluntary cooperation. That — not some imaginary nirvana — is liberty.

By the same token, a voluntary and enduring association of persons can recognize positive rights. That is to say, positive rights — those broadly accepted as part and parcel of peaceful, willing coexistence and its concomitant: beneficially cooperative behavior — are just as much an aspect of liberty as are negative rights. (Doctrinaire libertarians, who aren’t really libertarians, mistakenly decry all positive rights as antithetical to liberty.)

Returning to the Wikipedia article quoted above, and the example of Adrian and Clay,

Adrian has a positive right to x against Clay if and only if Clay is obliged to act upon Adrian in some way regarding x…. [I]f Adrian has a positive right to life against Clay, then Clay is required to act as necessary to preserve the life of Adrian.

Negative and positive rights are compatible with each other in the context of the Golden Rule, or ethic of reciprocity: One should treat others as one would expect others to treat oneself. This is a truly natural law, for reasons I will come to.

The Golden Rule can be expanded into two, complementary sub-rules:

  • Do no harm to others, lest they do harm to you.
  • Be kind and charitable to others, and they will be kind and charitable to you.

The first sub-rule fosters negative rights. The second sub-rule fosters positive rights. But, as discussed earlier, the rights in question are specific — not abstract injunctions — because they are understood and recognized in the context of voluntary and enduring social relationships.

I call the Golden Rule a natural law because it’s neither a logical construct (e.g., the “given-if-then” formulation discussed here) nor a state-imposed one. Its long history and widespread observance (if only vestigial) suggest that it embodies an understanding that arises from the similar experiences of human beings across time and place. The resulting behavioral convention, the ethic of reciprocity, arises from observations about the effects of one’s behavior on that of others and mutual agreement (tacit or otherwise) to reciprocate preferred behavior, in the service of self-interest and empathy.

That is to say, the convention is a consequence of the observed and anticipated benefits of adhering to it. Those benefits accrue not only to the person who complies with the Golden Rule in a particular situation (the actor), but also to the person (or persons) who benefit from compliance (the beneficiary). The consequences of compliance don’t usually redound immediately to the actor, but they redound indirectly over the long term because the actor (and many more like him) does his part to preserve the convention. It follows that the immediate impetus for observance of the convention is a mixture of two considerations: (a) an understanding of the importance of preserving the convention and (b) empathy on the part of the actor toward the beneficiary.

The Golden Rule will be widely observed within a group only if the members of the group (a) are generally agreed about the definition of harm, (b) value kindness and charity (in the main), and (c), perhaps most importantly, see that their acts have beneficial consequences. If those conditions are not met, the Golden Rule descends from convention to slogan.

Is the Golden Rule susceptible of varying interpretations across groups, and is it therefore a vehicle for moral relativism? Yes, with qualifications. It’s true that groups vary in their conceptions of permissible behavior. For example, the idea of allowing, encouraging, or aiding the death of old persons is not everywhere condemned. (Many — with whom I wouldn’t choose to coexist voluntarily — embrace it as a concomitant of a government-run or government-regulated health-care “system” that treats the delivery of medical services as a matter of rationing.) Infanticide has a long history in many cultures; modern, “enlightened” cultures have simply replaced it with abortion. (More behavior that is beyond the pale of my preferred society.) Slavery is still an acceptable practice in some places, though those enslaved (as in the past) usually are outsiders. Homosexuality has a long history of condemnation, and occasional acceptance. (To be pro-homosexual nowadays — and especially to favor homosexual “marriage” — has joined the litany of “causes” that connote membership in the tribe of “enlightened” “progressives” [a.k.a., “liberals” and leftists], along with being for abortion [i.e., pre-natal infanticide] and against the consumption of fossil fuels — except for one’s McMansion and SUV, of course.)

The foregoing recitation suggests a mixture of reasons for favoring or disfavoring various behaviors, that is, regarding them as beneficial or harmful. Those reasons range from utilitarianism (calculated weighing of costs and benefits) to status-signaling. In between, there are religious and consequentialist reasons for favoring or disfavoring various behaviors. Consequentialist reasoning goes like this: Behavior X can be indulged responsibly and without harm to others, but there is a strong risk that it will not be indulged responsibly, or that it will lead to behavior Y, which has repercussions for others. Therefore, it’s better to put X off-limits, or to severely restrict and monitor it.

Consequentialist reasoning applies to euthanasia (it’s easy to slide from voluntary to involuntary acts, especially when the state controls the delivery of medical care); infanticide and abortion (forms of involuntary euthanasia and signs of disdain for life); homosexuality (a depraved, risky practice — especially among males — that can ensnare impressionable young persons who see it as an “easy” way to satisfy sexual urges); alcohol and drugs (addiction carries a high cost, for the addict, the addict’s family, and sometimes for innocent bystanders). In the absence of governmental edicts to the contrary, long-standing attitudes toward such behaviors would prevail in most places. (Socially and geographically isolated enclaves are welcome to kill themselves off and purify the gene pool.)

The exceptions discussed above to the contrary notwithstanding, there’s a mainstream interpretation of the Golden Rule — one that still holds in many places — which rules out certain kinds of behavior, except in extreme situations, and permits certain other kinds of behavior. There is, in other words, a “core” Golden Rule that comes down to this:

  • Killing is wrong, except in self-defense. (Capital punishment is just that: punishment. It’s also a deterrent to murder. It isn’t “murder,” muddle-headed defenders of baby-murder to the contrary notwithstanding.)
  • Various kinds of unauthorized “takings” are wrong, including theft (outright and through deception). (This explains popular resistance to government “takings”, especially when they are done on behalf of private parties. The view that it’s all right to borrow money from a bank and not repay it arises from the mistaken beliefs that (a) it’s not tantamount to theft and (b) it harms no one because banks can “afford it”.)
  • Libel and slander are wrong because they are “takings” by word instead of deed.
  • It is wrong to turn spouse against spouse, child against parent, or friend against friend. (And yet, such things are commonly portrayed in books, films, and plays as if they are normal occurrences, often desirable ones. And it seems to me that reality increasingly mimics “art”.)
  • It is right to be pleasant and kind to others, even under provocation, because “a mild answer breaks wrath: but a harsh word stirs up fury” (Proverbs 15:1).
  • Charity is a virtue, but it should begin at home, where the need is most certain and the good deed is most likely to have its intended effect. (Leftists turn a virtue into an imposition when they insist that “charity” — as in income redistribution — is a proper job of government.)

None of these observations would be surprising to a person raised in the Judeo-Christian tradition, or even in the less vengeful branches of Islam. The observations would be especially unsurprising to an American who was raised in a rural, small-town, or small-city setting, well removed from a major metropolis, or who was raised in an ethnic enclave in a major metropolis. For it is such persons and, to some extent, their offspring who are the principal heirs and keepers of the Golden Rule in America.

An ardent individualist — particularly an anarcho-capitalist — might insist that social comity can be based on the negative sub-rule, which is represented by the first five items in the “core” list. I doubt it. There’s but a short psychological distance from mean-spiritedness — failing to be kind and charitable — to sociopathy, a preference for harmful acts. Ardent individualists will disagree with me because they view kindness and charity as their business, and no one else’s. They’re right about that, but kindness and charity are nevertheless indispensable to the development of mutual trust among people who are in an enduring social relationship. Without mutual trust, mutual restraint becomes problematic and co-existence becomes a matter of “getting the other guy before he gets you” — a convention that I hereby dub the Radioactive Rule.

Nevertheless, the positive sub-rule, which is represented by the final two items in the “core” list, can be optional for the occasional maverick. An extreme individualist (or introvert or grouch) could be a member in good standing of a society that lives by the Golden Rule. He would be a punctilious practitioner of the negative rule, and would not care that his unwillingness to offer kindness and charity resulted in coldness toward him. Coldness is all he would receive (and want) because, as a punctilious practitioner of the negative rule, his actions wouldn’t necessarily invite harm.

But too many extreme individualists would threaten the delicate balance of self-interested and voluntarily beneficial behavior that’s implied in the Golden Rule. Even if lives and livelihoods did not depend on acts of kindness and charity — and they probably would — mistrust would set in. And from there, it would be a short distance to the Radioactive Rule.

Of course, the delicate balance would be upset if the Golden Rule were violated with impunity. For that reason, the rule must be backed by sanctions. Non-physical sanctions would range from reprimands to ostracism. For violations of the negative sub-rule, imprisonment and corporal punishment would not be out of the question.

Now comes a dose of reality. Self-governance is possible only for a group of about 25 to 150 persons: the size of a hunter-gatherer band or Hutterite colony. It seems that self-governance breaks down when a group is larger than 150 persons. Why should that happen? Because mutual trust, mutual respect, and mutual forbearance — the things implied in the Golden Rule — depend very much on personal connections. A person who is loath to say a harsh word to an acquaintance, friend, or family member — even when provoked — often waxes abusive toward strangers, especially in this era of e-mail and comment threads, where face-to-face encounters aren’t involved.

More generally, it’s a human tendency to treat family members, friends, and acquaintances differently than strangers; the former are accorded more trust, more cooperation, and more kindness than the latter. Why? Because there’s usually a difference between the consequences of behavior that’s directed toward strangers and the consequences of behavior that’s directed toward persons one knows, lives among, and depends upon for restraint, cooperation, and help. The allure of doing harm without penalty (“getting away with something”) or receiving without giving (“getting something for nothing”) becomes harder to resist as one’s social distance from others increases.

The preference of like for like is derided by libertarians and leftists as tribalism, which is like the pot calling the kettle black. There’s no one who is more tribal than a leftist, who weighs every word spoken by another person to ensure that person’s alignment with the left’s current dogmas. (Libertarians have it easier, inasmuch as most of them are loners by disposition, and thrive on contrariness.) But the preference of like for like is quite rational: Cooperation and help include mutual defense (and concerted attack, in the case of leftists).

When self-governance breaks down, it becomes necessary to spin off a new group or to establish a central power (a state) to establish and enforce rules of behavior (negative and positive). The problem, of course, is that those vested with the power of the state quickly learn to use it to advance their own preferences and interests, and to perpetuate their power by granting favors to those who can keep them in office. It is a rare state that is created for the sole purpose of protecting its citizens from one another (as the referee of last resort) and from outsiders, and rarer still is the state that remains true to such purposes.

In sum, the Golden Rule — as a uniting way of life — is quite unlikely to survive the passage of a group from a self-governing community to a component of a state. Nor does the Golden Rule as a uniting way of life have much chance of revival or survival where the state already dominates. The Golden Rule may operate within non-kinship groups (e.g., parishes, clubs, urban enclaves) by regulating the interactions among the members of such groups. It may have a vestigial effect on face-to-face interactions between stranger and stranger, but that effect arises in part from the fear of giving offense that will be met with hostility or harm, not from a communal bond.

In any event, the dominance of the state distorts behavior. For example, the state may enable and encourage acts (e.g., abortion, homosexuality) that had been discouraged as harmful by group norms. And the state will diminish the ability of members of a group to bestow charity on one another through the loss of income to taxes and the displacement of private charity by state-run schemes that mimic charity (e.g., Social Security).

The all-powerful state destroys liberty, even while sometimes defending it. This is done not just by dictating how people must live their lives, which is bad enough. It is also done by eroding the social bonds that liberty is built upon — the bonds that secure peaceful, willing coexistence and its concomitant: beneficially cooperative behavior.
__________
[1] Here is a summary of negative rights, by Randy Barnett:

A libertarian … favors the rigorous protection of certain individual rights that define the space within which people are free to choose how to act. These fundamental rights consist of (1) the right of private property, which includes the property one has in one’s own person; (2) the right of freedom of contract by which rights are transferred by one person to another; (3) the right of first possession, by which property comes to be owned from an unowned state; (4) the right to defend oneself and others when fundamental rights are being threatened; and (5) the right to restitution or compensation from those who violate another’s fundamental rights. [“Is the Constitution Libertarian?”, Georgetown Public Law Research Paper No. 1432854 (posted at SSRN July 14, 2009), p. 3]

Borrowing from and elaborating on Barnett’s list, I come to the following set of negative rights:

  • freedom from force and fraud (including the right of self-defense against force)
  • property ownership (including the right of first possession)
  • freedom of contract (including contracting to employ/be employed)
  • freedom of association and movement
  • restitution or compensation for violations of the foregoing rights.

This set of negative rights would obtain in a state which devolves political decisions to the level of socially cohesive groups, while serving only as the defender of such rights (in the last resort) against domestic and foreign predators.

Suicide

Suicide has garnered a lot of attention in recent days. As noted in a study by the Centers for Disease Control and Prevention, the rate has been rising steadily since it bottomed out in 2000. I discussed suicide at some length in “Suicidal Despair and the ‘War on Whites’” (June 26, 2017). I have updated a few graphs and a bit of text to accommodate the latest figures. But the bottom line remains unchanged. What is it? The “war on whites” is a red herring. Go there and see for yourself.

Climate Scare Tactics: The Mythical Ever-Rising 30-Year Average

Our local weather Nazi, of whom I have written before, jumped on his high horse yesterday and lectured viewers about the 401 (?) consecutive months of above-average temperatures. He didn’t come out and say it — this time — but his thoughts and prayers are running in the direction of a climatic version of gun confiscation. Just take away those fossil fuels, etc., and Earth will return to its “correct” temperature. In his case, because anything above 75 in Austin is too warm for him, attaining the “correct” temperature would require a return to something like a real ice age.

In any event, the statistic that has the weather Nazi — and other climate hysterics — all a-twitter goes like this. The global temperature for every month since February 1985 has exceeded the rolling, 30-year average for that month. This statistic must be derived from surface thermometer readings, inasmuch as satellite readings didn’t begin until the late 1970s. We know all about those surface thermometer readings: spotty coverage, poor siting (often in locations surrounded by concrete and traffic), missing data, and — worst of all — frequent downward adjustments of historical numbers to make recent decades look hotter than they were. There’s no mention of the “pause” between the El Niños of the late 1990s and mid-2010s, of course.
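
For clarity about what is actually being claimed, here is what a rolling-baseline anomaly amounts to. The function and data layout are a hypothetical sketch, not any agency’s actual procedure.

```python
import statistics

def rolling_monthly_anomalies(temps, window_years=30):
    """Subtract from each (year, month) reading the mean of the same
    calendar month over the preceding `window_years` years. An
    "above-average month" in the claim cited above is one whose
    anomaly, computed this way, is positive. `temps` maps (year, month)
    tuples to temperatures; the layout is hypothetical.
    """
    anomalies = {}
    for (year, month), reading in temps.items():
        baseline = [temps[(y, month)]
                    for y in range(year - window_years, year)
                    if (y, month) in temps]
        if len(baseline) == window_years:  # require a full 30-year baseline
            anomalies[(year, month)] = reading - statistics.mean(baseline)
    return anomalies

# Example: synthetic data with a slow, steady warming trend.
temps = {(y, m): 15.0 + 0.01 * (y - 1900)
         for y in range(1900, 2020) for m in range(1, 13)}
anoms = rolling_monthly_anomalies(temps)
print(all(a > 0 for a in anoms.values()))  # True: every month "above average"
```

Note the built-in feature of a trailing baseline: any steady upward trend produces an unbroken run of positive anomalies, because each month is compared with an average over cooler decades. A long streak of “above-average” months is therefore evidence of a trend, not necessarily of anything more.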

Here’s something closer to the truth, based on satellite-recorded temperatures in the lower troposphere, from the Global Temperature Report by the Earth System Science Center of the University of Alabama-Huntsville:

Here are the yearly averages:

Three comments:

  • The anomalies are small.
  • There are many negative values before the onset of the major El Niño that began late in 2014 and lasted until mid-2016. (A negative value means that the reading for a month was below the 30-year average for that month.)
  • The effects of that El Niño are wearing off.

Note to local weather Nazi: Give it a rest.


Related posts:
AGW: The Death Knell
Not-So-Random Thoughts (XIV) (second item)
AGW in Austin?
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II)
Hurricane Hysteria
Much Ado about the Unknown and Unknowable
Hot Is Better Than Cold: A Small Case Study

Justice Thomas on “Masterpiece Cakeshop”

It is well known by now that cake maker Jack Phillips, proprietor of Masterpiece Cakeshop in Lakewood, Colorado, prevailed in an opinion written by Justice Kennedy. At issue were the Colorado Civil Rights Commission’s actions in assessing Phillips’s reasons for declining to make a cake for a same-sex couple’s wedding celebration. The commission’s actions violated the free exercise clause of the First Amendment. Specifically, in Kennedy’s words:

The Commission gave “every appearance,” of adjudicating [Phillips’s] religious objection based on a negative normative “evaluation of the particular justification” for his objection and the religious grounds for it, but government has no role in expressing or even suggesting whether the religious ground for Phillips’ conscience-based objection is legitimate or illegitimate. The inference here is thus that Phillips’ religious objection was not considered with the neutrality required by the Free Exercise Clause. The State’s interest could have been weighed against Phillips’ sincere religious objections in a way consistent with the requisite religious neutrality that must be strictly observed. But the official expressions of hostility to religion in some of the commissioners’ comments were inconsistent with that requirement, and the Commission’s disparate consideration of Phillips’ case compared to the cases of the other bakers suggests the same.

This is a narrow ruling, as many commentators have observed, in that it does not address the fundamental issue of the right of Phillips (or anyone similarly situated) to refuse to express views contrary to his beliefs — religious or not.

Justice Thomas, in a concurring opinion (joined by Justice Gorsuch), gets it right:

The First Amendment, applicable to the States through the Fourteenth Amendment, prohibits state laws that abridge the “freedom of speech.” When interpreting this command, this Court has distinguished between regulations of speech and regulations of conduct. The latter generally do not abridge the freedom of speech, even if they impose “incidental burdens” on expression….

Although public-accommodations laws generally regulate conduct, particular applications of them can burden protected speech. When a public-accommodations law “ha[s] the effect of declaring . . . speech itself to be the public accommodation,” the First Amendment applies with full force…. When [a Massachusetts] law required the sponsor of a St. Patrick’s Day parade to include a parade unit of gay, lesbian, and bisexual Irish-Americans, the Court unanimously held that the law violated the sponsor’s right to free speech. Parades are “a form of expression,” this Court explained, and the application of the public-accommodations law “alter[ed] the expressive content” of the parade by forcing the sponsor to add a new unit. The addition of that unit compelled the organizer to “bear witness to the fact that some Irish are gay, lesbian, or bisexual”; “suggest . . . that people of their sexual orientation have as much claim to unqualified social acceptance as heterosexuals”; and imply that their participation “merits celebration.” While this Court acknowledged that the unit’s exclusion might have been “misguided, or even hurtful,” ibid., it rejected the notion that governments can mandate “thoughts and statements acceptable to some groups or, indeed, all people” as the “antithesis” of free speech….

The parade . . . was an example of what this Court has termed “expressive conduct.” This Court has long held that “the Constitution looks beyond written or spoken words as mediums of expression,” and that “[s]ymbolism is a primitive but effective way of communicating ideas.” Thus, a person’s “conduct may be ‘sufficiently imbued with elements of communication to fall within the scope of the First and Fourteenth Amendments.’” Applying this principle, the Court has recognized a wide array of conduct that can qualify as expressive, including nude dancing, burning the American flag, flying an upside-down American flag with a taped-on peace sign, wearing a military uniform, wearing a black armband, conducting a silent sit-in, refusing to salute the American flag, and flying a plain red flag….

Phillips’ creation of custom wedding cakes is expressive. The use of his artistic talents to create a well-recognized symbol that celebrates the beginning of a marriage clearly communicates a message—certainly more so than nude dancing….

States cannot punish protected speech because some group finds it offensive, hurtful, stigmatic, unreasonable, or undignified. “If there is a bedrock principle underlying the First Amendment, it is that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.”… If the only reason a public-accommodations law regulates speech is “to produce a society free of . . . biases” against the protected groups, that purpose is “decidedly fatal” to the law’s constitutionality, “for it amounts to nothing less than a proposal to limit speech in the service of orthodox expression.”…

[T]he fact that this Court has now decided Obergefell v. Hodges [does not] somehow diminish Phillips’ right to free speech. [As CJ Roberts wrote in his dissenting opinion in Obergefell,] “It is one thing . . . to conclude that the Constitution protects a right to same-sex marriage; it is something else to portray everyone who does not share [that view] as bigoted” and unentitled to express a different view. This Court is not an authority on matters of conscience, and its decisions can (and often should) be criticized. The First Amendment gives individuals the right to disagree about the correctness of Obergefell and the morality of same-sex marriage. [The majority opinion in] Obergefell itself emphasized that the traditional understanding of marriage “long has been held—and continues to be held—in good faith by reasonable and sincere people here and throughout the world.” If Phillips’ continued adherence to that understanding makes him a minority after Obergefell, that is all the more reason to insist that his speech be protected….

In Obergefell, I warned that the Court’s decision would “inevitabl[y] . . . come into conflict” with religious liberty,“as individuals . . . are confronted with demands to participate in and endorse civil marriages between same-sex couples.” This case proves that the conflict has already emerged. Because the Court’s decision vindicates Phillips’ right to free exercise, it seems that religious liberty has lived to fight another day. But, in future cases, the freedom of speech could be essential to preventing Obergefell from being used [in Justice Alito’s words] to “stamp out every vestige of dissent” and “vilify Americans who are unwilling to assent to the new orthodoxy.”

That should have been the majority opinion.


Related posts:
The Writing on the Wall
How to Protect Property Rights and Freedom of Association and Expression
The Beginning of the End of Liberty in America
Marriage: Privatize It and Revitalize It
Equal Protection in Principle and Practice
Freedom of Speech and the Long War for Constitutional Governance
Freedom of Speech: Getting It Right

Revisiting the Laffer Curve

Among the claims made in favor of the Tax Cuts and Jobs Act of 2017 was that the resulting tax cuts would pay for themselves. Thus the Laffer curve returned briefly to prominence, after having been deployed to support the Reagan and Bush tax cuts of 1981 and 2001.

The idea behind the Laffer curve is straightforward. Taxes inhibit economic activity, that is, the generation of output and income. Tax-rate reductions therefore encourage work, which yields higher incomes. Higher incomes mean that there is more saving from which to finance growth-producing capital investment. Lower tax rates also make investment more attractive by increasing the expected return on capital investments. Lower tax rates therefore stimulate economic output by encouraging work and investment (supply-side economics). Under the right conditions, lower tax rates may generate enough additional income to yield an increase in tax revenue.

I believe that there are conditions under which the Laffer curve works as advertised. But so what? The Laffer curve focuses attention on the wrong economic variable: tax revenue. The economic variables that really matter — or that should matter — are the real rate of growth and the income available to Americans after taxes. More (real) economic growth means higher (real) income, across the board. More government spending means lower (real) income; the Keynesian multiplier is a cruel myth.

A new Laffer curve is in order, one that focuses on the effects of taxation on economic growth, and thus on the aggregate output of products and services available to consumers.

Let us begin at the beginning, with this depiction of the Laffer curve (via Forbes):

[Figure: a depiction of the Laffer curve, via Forbes]

This is an unusually sophisticated depiction of the curve, in that it shows a growth-maximizing tax rate which is lower than the revenue-maximizing rate. It also shows that the growth-maximizing rate is greater than zero, for a good reason.

With real taxes (i.e., government spending) at zero or close to it, the rule of law would break down and the economy would be a shambles. But government spending above that required to maintain the rule of law (i.e., adequate policing, administration of justice, and national defense) interferes with the efficient operation of markets, both directly (by pulling resources out of productive use) and indirectly (by burdensome regulation financed by taxes).

Thus a tax rate higher than that required to sustain the rule of law¹ leads to a reduction in the rate of (real) economic growth because of disincentives to work and invest. A reduction in the rate of growth pushes GDP below its potential level. Further, the effect is cumulative. A reduction in GDP means a reduction in investment, which means a reduction in future GDP, and on and on.
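
A toy computation makes the compounding concrete. The growth rates and horizon here are illustrative assumptions, not estimates from this chapter:

```python
# Toy illustration of the cumulative effect: a small, persistent reduction
# in the growth rate compounds into a large GDP shortfall over time.
# The rates and the horizon are assumed for illustration only.
potential_growth = 0.030  # hypothetical annual growth absent the tax burden
reduced_growth = 0.025    # hypothetical annual growth with the tax burden
years = 30

shortfall = 1 - ((1 + reduced_growth) / (1 + potential_growth)) ** years
print(f"GDP shortfall after {years} years: {shortfall:.0%}")  # ~14%
```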

I will quantify the Laffer curve in two steps. First, I will estimate the tax rate at which revenue is maximized, taking the simplistic view that changes in the tax rate do not change the rate of economic growth. I will draw on Christina D. Romer and David H. Romer’s “The Macroeconomic Effects of Tax Changes: Estimates Based on a New Measure of Fiscal Shocks” (American Economic Review, June 2010, pp. 763-801).

The Romers estimate the effects of exogenous changes in taxes on GDP. (“Exogenous” here means legislated tax changes made for reasons unrelated to the current or near-term state of the economy, such as changes aimed at long-run growth, as opposed to countercyclical tax changes or tax increases triggered by economic growth.) Here is their key finding:

Figure 4 summarizes the estimates by showing the implied effect of a tax increase of one percent of GDP on the path of real GDP (in logarithms), together with the one-standard-error bands. The effect is steadily down, first slowly and then more rapidly, finally leveling off after ten quarters. The estimated maximum impact is a fall in output of 3.0 percent. This estimate is overwhelmingly significant (t = –3.5). The two-standard-error confidence interval is (–4.7%, –1.3%). In short, tax increases appear to have a very large, sustained, and highly significant negative impact on output. Since most of our exogenous tax changes are in fact reductions, the more intuitive way to express this result is that tax cuts have very large and persistent positive output effects. [pp. 781-2]

The Romers assess the effects of tax cuts over a period of only 12 quarters (3 years). Some of the resulting growth in GDP during that period takes the form of greater spending on capital investments, the payoff from which usually takes more than 3 years to realize. So a tax cut of 1 percent of GDP yields more than a 3-percent rise in GDP over the longer run. But let’s keep it simple and use the relationship obtained by the Romers: a 1-percent tax cut (as a percentage of GDP) results in a 3-percent rise in GDP.

With that number in hand, and knowing the effective tax rate (33 percent of GDP in 2017²), it is then easy to compute the short-run effects of changes in the effective tax rate on GDP, after-tax GDP, and tax revenue:

[Figure 1: short-run effects of changes in the effective tax rate on GDP, after-tax GDP, and tax revenue]

Effective tax revenue represents the dollar amount extracted from the economy through government spending at the stated percentage of GDP. (Spending includes transfer payments, which take from those who produce and give to those who do not.) Effective tax rate represents that dollar amount divided by the GDP that obtains at the given rate. (GDP is based on the Romers’ estimate of the marginal effect of a change in the tax rate.)

It is a coincidence that tax revenue is maximized at the current (2017) effective tax rate of 33 percent. The coincidence occurs because, according to the Romers, every $1 change in tax revenue (or government spending that draws resources from the real economy) yields a $3 change in GDP, at the margin. If the marginal rate of return were lower than 3:1, the revenue-maximizing rate would be greater than 33 percent. If the marginal rate of return were higher than 3:1, the revenue-maximizing rate would be less than 33 percent.
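
The computation is simple enough to sketch in a few lines of Python. This is an assumed reconstruction that matches the numbers reported above, not the author’s actual worksheet; GDP is normalized to its 2017 value:

```python
# Short-run Laffer computation, per the Romers' estimate: each 1-point cut
# in the effective tax rate raises GDP by 3 percent of its baseline value.
BASE_RATE = 0.33   # 2017 effective tax rate (government spending / GDP)
MULTIPLIER = 3.0   # change in GDP per unit change in effective tax revenue

def short_run(rate):
    """GDP, after-tax GDP, and effective tax revenue at a given effective
    tax rate, all expressed as multiples of 2017 GDP."""
    gdp = 1.0 + MULTIPLIER * (BASE_RATE - rate)
    revenue = rate * gdp
    return gdp, gdp - revenue, revenue

for rate in (0.33, 0.28, 0.23, 0.18, 0.13):
    gdp, after_tax, revenue = short_run(rate)
    print(f"rate {rate:.0%}: GDP {gdp:.2f}, after-tax {after_tax:.2f}, "
          f"revenue {revenue:.2f}")

# Revenue peaks at the 33-percent rate. At 13 percent, GDP is 1.6 times its
# 2017 value, and after-tax GDP (1.39 here) is about 2.1 times the 2017
# after-tax value of 0.67, as in Figure 1.
```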

In any event, the focus on tax revenue is entirely misplaced. What really matters, given that the prosperity of Americans is (or should be) of paramount interest, is GDP and especially after-tax GDP. Both would rise markedly in response to marginal cuts in real taxes (i.e., government spending). Democrats don’t want to hear that, of course, because they want government to decide how Americans spend the money that they earn. The idea that a far richer America would need far less government — subsidies, nanny-state regulations, etc. — frightens them.

It gets better (or worse, if you’re a big-government fan) when looking at the long-run effects of lower government spending on the rate of growth. I am speaking of the Rahn curve, which I estimate here. Holding other things the same, every percentage-point reduction in the real tax rate (government spending as a fraction of GDP) means an increase of 0.35 percentage point in the rate of real GDP growth. This is a long-run relationship because it takes time to convert some of the tax reduction to investment, and then to reap the additional output generated by the additional investment. It also takes time for workers to respond to the incentive of lower taxes by adding to their skills, working harder, and working in more productive jobs.

This graph depicts the long-run effects of changes in the effective tax rate, taking into account changes in the real growth rate from a base of 2.8 percent (the year-over-year rate for the most recent fiscal quarter):

[Figure: long-run effects of changes in the effective tax rate, with growth-rate effects included]

Note that the same real tax revenue would be realized at an effective tax rate of 13 percent of GDP. At that rate, GDP would rise to 2.5 times its 2017 value (instead of 1.6 times as shown in Figure 1), and after-tax GDP would rise to 3.3 times its 2017 value (instead of 2.1 times as shown in Figure 1).
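
The arithmetic behind those multiples is worth a quick check. A minimal sketch, assuming (as above) that 2017 GDP is normalized to 1 and that the 2017 effective rate is 33 percent:

```python
# Long-run revenue-parity check: how much must GDP grow for a 13-percent
# effective rate to yield the same real revenue as 33 percent of 2017 GDP?
base_rate, low_rate = 0.33, 0.13

gdp_multiple = base_rate / low_rate  # 0.33 / 0.13
after_tax_multiple = gdp_multiple * (1 - low_rate) / (1 - base_rate)

print(f"GDP multiple: {gdp_multiple:.1f}")                  # 2.5
print(f"after-tax GDP multiple: {after_tax_multiple:.1f}")  # 3.3
```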

The real Laffer curve — the one that people ought to pay attention to — is the Rahn curve. Holding everything else constant, here is the relationship between the real growth rate and the effective tax rate:

[Figure: the real rate of growth as a function of the effective tax rate]
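
In functional form, the relationship is a straight line. This sketch assumes, per the estimates above, that the 0.35-point-per-point slope holds throughout the plotted range:

```python
def real_growth_rate(effective_tax_rate):
    """Annual real GDP growth (percent) implied by the Rahn-curve estimate:
    0.35 percentage point of additional growth for each 1-point cut in the
    effective tax rate, from a base of 2.8 percent growth at 33 percent."""
    return 2.8 + 0.35 * (33.0 - effective_tax_rate)

print(real_growth_rate(33.0))  # 2.8   -- growth at the 2017 effective rate
print(real_growth_rate(26.7))  # ~5.0  -- the "five percent" scenario below
print(real_growth_rate(10.0))  # ~10.9 -- at the rule-of-law minimum (note 1)
```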

At the current effective tax rate — 33 percent of GDP — the economy is limping along at about one-third of its potential growth. That is actually good news, inasmuch as the real growth rate dipped perilously close to 1 percent several times during the Obama administration, even after the official end of the Great Recession.

But it will take many years of spending cuts (relative to GDP, at least) and deregulation to push growth back to where it was in the decades immediately after World War II. Five percent isn’t out of the question.
__________

1. Total government spending, when transfer payments were negligible, amounted to between 5 and 10 percent of GDP between the Civil War and the Great Depression (Series F216-225, “Percent Distribution of National Income or Aggregate Payments, by Industry, in Current Prices: 1869-1968,” in Chapter F, National Income and Wealth, Historical Statistics of the United States, Colonial Times to 1970: Part 1). The cost of an adequate defense is a lot higher than it was in those relatively innocent times. Defense spending now accounts for about 3.5 percent of GDP. An increase to 5 percent wouldn’t render the U.S. invulnerable, but it would do a lot to deter potential adversaries. So at 10 percent of GDP, government spending on policing, the administration of justice, and defense — and nothing else — should be more than adequate to sustain the rule of law.

2. The effective tax rate on GDP in 2017 was 33.4 percent. That number represents total government expenditures (line 37 of BEA Table 3.1), divided by GDP (line 1 of BEA Table 1.1.5). The nominal tax rate on GDP was 30 percent; that is, government receipts (line 34 of BEA Table 3.1) accounted for 30 percent of GDP. (The BEA tables are accessible here.) I use the effective tax rate in this analysis because it truly represents the direct costs levied on the economy by government. (The indirect cost of regulatory activity adds about $2 trillion, bringing the total effective tax to 44 percent.)
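
For concreteness, here is a back-of-the-envelope version of that computation. The dollar magnitudes are approximations consistent with the percentages above, not values copied from the BEA tables:

```python
# Approximate 2017 magnitudes (assumed for illustration): GDP of roughly
# $19.5 trillion and government expenditures of roughly $6.5 trillion,
# plus the author's ~$2 trillion estimate of indirect regulatory costs.
gdp = 19.5e12
spending = 6.5e12
regulatory_cost = 2.0e12

effective_rate = spending / gdp                  # ~33 percent
total_rate = (spending + regulatory_cost) / gdp  # ~44 percent
print(f"effective tax rate: {effective_rate:.0%}; "
      f"with regulatory costs: {total_rate:.0%}")
```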

Selected Writings about Intelligence

I have treated intelligence many times; for example:

Positive Rights and Cosmic Justice: Part IV
Race and Reason: The Achievement Gap — Causes and Implications
“Wading” into Race, Culture, and IQ
The Harmful Myth of Inherent Equality
Bigger, Stronger, and Faster — But Not Quicker?
The IQ of Nations
Some Notes about Psychology and Intelligence
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
More about Intelligence
Not-So-Random Thoughts (XXI), fifth item
Intelligence and Intuition
Intelligence, Personality, Politics, and Happiness
Intelligence As a Dirty Word

The material below consists entirely of quotations from cited sources. The quotations are consistent with and confirm several points made in the earlier posts:

  • Intelligence has a strong genetic component; it is heritable.
  • Race is a real manifestation of genetic differences among subgroups of human beings. Those subgroups are not only racial but also ethnic in character.
  • Intelligence therefore varies by race and ethnicity, though it is influenced by environment.
  • Specifically, intelligence varies in the following way: There are highly intelligent persons of all races and ethnicities, but the proportion of highly intelligent persons is highest among Ashkenazi Jews, followed in order by East Asians, Northern Europeans, Hispanics (of European/Amerindian descent), and sub-Saharan Africans — and the American descendants of each group.
  • Males are disproportionately represented among highly intelligent persons, relative to females. Males have greater quantitative skills (including spatio-temporal aptitude), whereas females have greater verbal skills.
  • Intelligence is positively correlated with attractiveness, health, and longevity.
  • The Flynn effect (rising IQ) is a transitory effect brought about by environment (e.g., better nutrition) and practice (e.g., the learning and application of technical skills). The Woodley effect is (probably) a long-term dysgenic effect among people whose survival and reproduction depend more on technology (devised by a relatively small portion of the populace) than on the ability to cope with environmental threats (i.e., intelligence).

I have moved the supporting material to a new page: “Intelligence”.

Freedom of Speech: Getting It Right

Congress shall make no law … abridging the freedom of speech….

Constitution of the United States, Amendment I

* * *

[T]he sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others….

If all mankind minus one, were of one opinion, and only one person were of the contrary opinion, mankind would be no more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind.

John Stuart Mill, On Liberty (1869), Chapter I and Chapter II

* * *

[T]he character of every act depends upon the circumstances in which it is done. The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic. It does not even protect a man from an injunction against uttering words that may have all the effect of force. The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent. It is a question of proximity and degree.

Oliver Wendell Holmes Jr., Schenck v. United States (1919)

* * *

To justify suppression of free speech there must be reasonable ground to fear that serious evil will result if free speech is practiced. There must be reasonable ground to believe that the danger apprehended is imminent. There must be reasonable ground to believe that the evil to be prevented is a serious one.

Louis D. Brandeis, Whitney v. People of State of California (1927),
joined by Holmes

* * *

The First Amendment has been systematically misapplied for the past 100 years, thanks mainly to Holmes and Brandeis. Mill’s generalizations are fatuous nonsense. Here is a palate-cleanser:

[O]nly where advocacy of and organization for an overthrow of government is deemed to be a “clear and present danger” can such advocacy or organization be curbed. Which is somewhat like waiting to shoot at an enemy armed with a long-range rifle until you are able to see the whites of his eyes. Or, perhaps more aptly in the 21st century, waiting until a terrorist strikes before acting against him. Which is too late, of course, and impossible in the usual case of suicide-cum-terror.

And therein lies the dangerous folly of free-speech absolutism….

The First Amendment, in the hands of the Supreme Court, has become inimical to the civil and state institutions that enable liberty….

[Mill’s harm principle] is empty rhetoric….

Harm must be defined. And its definition must arise from voluntarily evolved social norms. Such norms evince and sustain the mutual trust, respect, forbearance, and voluntary aid that — taken together — foster willing, peaceful coexistence and beneficially cooperative behavior. And what is liberty but willing, peaceful coexistence and beneficially cooperative behavior?

Behavior is shaped by social norms. Those norms once were rooted in the Ten Commandments and time-tested codes of behavior. They had not yet been nullified willy-nilly in accordance with the wishes of “activists,” as amplified through the megaphone of the mass media, and made law by the Supreme Court. What were those norms? Here are some of the most important ones:

Marriage is a union of one man and one woman. Nothing else is marriage, despite legislative, executive, and judicial decrees that substitute brute force for the wisdom of the ages.

Marriage comes before children. This is not because people are pure at heart, but because it is the responsible way to start life together and to ensure that one’s children enjoy a stable, nurturing home life.

Marriage is until “death do us part.” Divorce is a recourse of last resort, not an easy way out of marital and familial responsibilities or the first recourse when one spouse disappoints or angers the other.

Children are disciplined — sometimes spanked — when they do wrong. They aren’t given long, boring, incomprehensible lectures about why they’re doing wrong. Why not? Because they usually know they’re doing wrong and are just trying to see what they can get away with.

Drugs are taken for the treatment of actual illnesses, not for recreational purposes.

Income is earned, not “distributed.” Persons who earn a lot of money are to be respected. If you envy them to the point of wanting to take their money, you’re a pinko-commie-socialist (no joke).

People should work, save, and pay for their own housing. The prospect of owning one’s own home, by dint of one’s own labor, is an incentive to work hard and to advance oneself through the acquisition of marketable skills.

Welfare is a gift that one accepts as a last resort; it is not a right or an entitlement, and it is not bestowed on persons with convenient disabilities….

A mother who devotes time and effort to the making of a good home and the proper rearing of her children is a pillar of civilized society. Her life is to be celebrated, not condemned as “a waste.”

Homosexuality is a rare, aberrant kind of behavior. (And that was before AIDS proved it to be aberrant.) It’s certainly not a “lifestyle” to be celebrated and shoved down the throats of all who object to it.

Privacy is a constrained right. It doesn’t trump moral obligations, among which are the obligations to refrain from spreading a deadly disease and to preserve innocent life.

Addiction isn’t a disease; it’s a surmountable failing….

Justice is a dish best served hot, so that would-be criminals can connect the dots between crime and punishment. Swift and sure punishment is the best deterrent of crime. Capital punishment is the ultimate deterrent because an executed killer can’t kill again.

Peace is the result of preparedness for war; lack of preparedness invites war.

The list isn’t exhaustive, but it’s certainly representative. The themes are few and simple: respect others, respect tradition, restrict government to the defense of society from predators foreign and domestic. The result is liberty: A regime of mutually beneficial coexistence based on mutual trust and respect. That’s all it takes — not big government bent on dictating new norms just because it can.

But by pecking away at social norms that underlie mutual trust and respect, “liberals” have sundered the fabric of civilization….

The right “peaceably to assemble, and to petition the Government for a redress of grievances” has become the right to assemble a mob, disrupt the lives of others, destroy the property of others, injure and kill others, and (usually) suffer no consequences for doing so — if you are a leftist or a member of one of the groups patronized by the left, that is.

But that’s not the end of it. There’s a reverse slippery-slope effect when it comes to ideas opposed by the left. There are, for example, speech codes at government-run universities; hate-crime laws, which effectively punish speech that offends a patronized group; and penalties in some States for opposing same-sex “marriage”….

In sum, there is no longer such a thing as the kind of freedom of speech intended by the Framers of the Constitution. There is on the one hand license for “speech” that subverts and flouts civilizing social norms — the norms that underlie liberty. There is on the other hand a growing tendency to suppress speech that supports civilizing social norms.

“Freedom of Speech and the Long War for Constitutional Governance”,
Politics and Prosperity

* * *

See also:
Rethinking the Constitution: “Freedom of Speech, and of the Press”
Abortion and the Fourteenth Amendment
Privacy Is Not Sacred
The Contemporary Meaning of the Bill of Rights: First Amendment
How to Protect Property Rights and Freedom of Association and Expression
The Beginning of the End of Liberty in America
There’s More to It Than Religious Liberty
Equal Protection in Principle and Practice
Academic Freedom, Freedom of Speech, and the Demise of Civility
Preemptive (Cold) Civil War
The Framers, Mob Rule, and a Fatal Error
The Constitution: Myths and Realities