On Self-Ownership and Desert

INTRODUCTION

Fernando Teson, one of the Bleeding Heart Libertarians, addresses self-ownership:

Self-ownership is the property right that a person has over her natural assets, that is, over her mind and body. As is well known (and nicely summarized in Matt [Zwolinski]’s post,) Lockeans think that this right can, under appropriate circumstances, justify ownership over external assets.  Most libertarians endorse the idea of self-ownership. Some progressives do too, but an important line of progressive thought rejects self-ownership.  According to John Rawls (in A Theory of Justice,) natural assets are collective property. That is, they belong to society, not to the person who possesses them. The reason for this, Rawls thinks, is that just as we do not deserve being born rich or poor, so we don’t deserve our natural talents. For this reason, societal arrangements that reward talented persons are only justified if they benefit the least talented.

I am exasperated by claims, like Teson’s and Rawls’s, that appeal to abstract principles which ascribe to human beings abstract, Platonic attributes. One such attribute is “natural rights” — a close kin of self-ownership. I am especially exasperated when such attributes are bestowed by third parties speaking from a position of judgmental omniscience. Desert is an excellent case in point.

The attribution to humans of ethereal characteristics (like self-ownership and desert) exemplifies the fallacy of reification: “the error of treating as a ‘real thing’ something which is not a real thing, but merely an idea.”

Self-ownership is in a class with “natural rights” as a condition that somehow inheres in a person by virtue of his status as a human being. I have dealt with “natural rights” at length (e.g., here, here, here, here, and here), and will not repeat myself. The rest of this post takes up self-ownership and desert.

SELF-OWNERSHIP

The argument for self-ownership, as formulated by Robert Nozick, goes like this (according to R.N. Johnson’s summary of the political philosophy of Robert Nozick):

The self-ownership argument is based on the idea that human beings are of unique value. It is one way of construing the fundamental idea that people must be treated as equals. People are “ends in themselves”. To say that a person is an end in herself is to say that she cannot be treated merely as a means to some other end. What makes a person an end is the fact that she has the capacity to choose rationally what she does. This makes people quite different from anything else, such as commodities or animals. The latter can be used by us as mere means to our ends without doing anything morally untoward, since they lack the ability to choose for themselves how they will act or be used. Human beings, having the ability to direct their own behavior by rational decision and choice, can only be used in a way that respects this capacity. And this means that people can’t be used by us unless they consent.

The paradigm of violating this requirement to treat people as ends in themselves is thus slavery. A slave is a person who is used as a mere means, that is, without her consent. That is, a slave is someone who is owned by another person. And quite obviously the reverse of slavery is self-ownership. If no one is a slave, then no one owns another person, and if no one owns another person, then each person is only owned by herself. Hence, we get the idea that treating people as ends in themselves is treating them as owning themselves.

In summary:

1. I own myself because I am capable of making rational choices for myself.

2. If someone else “uses” me without my consent (e.g., enslaves me or steals food from me), he is denying my self-ownership.

3. Therefore, when someone else “uses” me he is treating me as a means to an end; whereas, I am an end in myself because I own myself.

Oops. I went in a circle. I own myself; therefore, I cannot be used by someone else, because I own myself.

Nozick’s proposition amounts to nothing more than the assertion that everyone must act from the same principle. Immanuel Kant made essentially the same assertion in his categorical imperative:

Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end.

Well, what if the person making that statement believes that his end is to be a slave-owner — and that he has the power to make me a slave?

The fact is that people, all too often, do not act according to Nozick’s or Kant’s imperatives. As Dr. Johnson said, I refute it thus: Look around you. Rights are a social construct. They exist only to the extent that they are reciprocally recognized and enforced. There are very good reasons that rights should be only negative ones (here and here, for example). But those reasons do not trump the realities of human nature (follow the links in the final paragraph of the introduction).

The concept of self-ownership, as with many ideals, arises from the ideal world of “ought” instead of the real world of “is.”

DESERT

Desert is a more infuriating concept than self-ownership. Self-ownership, at least, is an attribute which supposedly inheres in me by virtue of my humanity. (That it does not inhere in me can be seen readily by looking at my 1040, my real-estate tax bill, and the myriad federal, State, and local regulations that govern my behavior and transactions with others.) Desert, on the other hand, is mine only if someone else says that it is.

The Wikipedia article about desert gives this illustration:

In ordinary usage, to deserve is to earn or merit a reward; in philosophy, the distinction is drawn in the term desert to include the case that that which one receives as one’s just deserts may well be unwelcome, or a reward. For example, if one scratches off a winning lottery ticket, one may be entitled to the money, but one does not necessarily deserve it in the same way one would deserve $5 for mowing a lawn, or a round of applause for performing a solo.

Whether or not one “deserves” one’s lottery winnings depends arbitrarily on who is making the judgment. The arbitrariness is readily seen in the opposing views of Rawls and Nozick (from the same article):

One of the most controversial rejections of the concept of desert was made by the political philosopher John Rawls. Rawls, writing in the mid to late twentieth century, claimed that a person cannot claim credit for being born with greater natural endowments (such as superior intelligence or athletic abilities), as it is purely the result of the ‘natural lottery’. Therefore, that person does not morally deserve the fruits of his or her talents and/or efforts, such as a good job or a high salary. However, Rawls was careful to explain that, even though he dismissed the concept of moral Desert, people can still legitimately expect to receive the benefits of their efforts and/or talents. The distinction here lies between Desert and, in Rawls’ own words, ‘Legitimate Expectations’.[1]

Rawls'[s] remarks about natural endowments provoked an often-referred response by Robert Nozick. Nozick claimed that to treat peoples’ natural talents as collective assets is to contradict the very basis of the deontological liberalism Rawls wishes to defend, i.e. respect for the individual and the distinction between persons.[2] Nozick argued that Rawls’ suggestion that not only natural talents but also virtues of character are undeserved aspects of ourselves for which we cannot take credit, “can succeed in blocking the introduction of a person’s autonomous choices and actions (and their results) only by attributing everything noteworthy about the person completely to certain sorts of ‘external’ factors. So denigrating a person’s autonomy and prime responsibility for his actions is a risky line to take for a theory that otherwise wishes to buttress the dignity and self-respect of autonomous beings.”[3]

Jonathan Pearce, writing at samizdata.net blog, sorts it out:

[T]he idea of “deserving” poor or “undeserving” rich is, in my view, loaded with ideological significance, depending on who is using the term. Clearly, people feel a lot more relaxed about handing out money – either from a charity or from a government department – to people who are down on their luck but of good character, than they are about handing it out to the feckless. Similarly, it follows that there is more support for taxing supposedly “undeserved” wealth than “earned” wealth. The trouble with such words, of course, as has been shown by FA Hayek in his famous demolition of payment-by-merit in The Constitution of Liberty, is who gets to decide whether our circumstances came about due to “desert” or not. Such a person would have to have the foresight of a god. It is, as Hayek argued, impossible to do this without some omnipotent authority being able to weigh up a person’s potential, and then being able to measure whether that person, in the face of a vast array of alternatives, made the most of that potential. (“Desert according to whom?“)

Rawls and his fellow travelers (who are usually found on the left) simply cannot stand the idea of individual differences, and so they attribute them to “luck.” The idea of luck, as I have said elsewhere, “is mainly an excuse and rarely an explanation. We prefer to apply ‘luck’ to outcomes when we don’t like the true explanations for them.” In the case of desert, the idea of luck is used as an excuse for redistribution, even though it is an inadequate explanation for variations in economic and social outcomes.

I am “lucky” because I was born with above-average intelligence. I did not earn it; it just happened to me. So what? I had to do something with it, right? And I did do something with it, but not as much as I could have, because I lacked the temperamental qualities required to pursue great wealth and political power. I chose, instead, to earn just enough to enable an early retirement, which is comfortable but far from lavish. I could just as easily have chosen to earn less than I did.

There are many, many, many individuals whose IQs are lower than mine but whose earnings far exceed mine, and whose abodes make mine look like a shack. Do I begrudge them their earnings and lavish living? Not a bit. Not even if they are dumb-as-doorknob Hollywood “liberals” whose idea of an intellectual conversation is to tell each other that Bush is a Nazi.

By the same token, there are a lot of individuals whose IQs are higher than mine, and I am willing to bet that some of them did not do as well financially as I did. So what? Should they have done better than I did just because they have higher IQs? Where is that rule written? I will wager that there’s not a Democrat to be found who would subscribe to it.

Everyone deserves what they earn as long as they earn it without resorting to fraud, theft, or coercion. Members of Congress, by the way, resort to coercion when it comes to paying themselves. Yes, there is a constitutional provision that a congressional pay raise can’t take effect until an election of Representatives has intervened, but incumbents are almost certain of re-election, and most incumbents run for re-election. The constitutional provision is mere window-dressing.

Back to the topic at hand. Tell me again why I am where I am because of luck. I had to do something with my genetic inheritance. I did what I wanted to do, which was not as much as I might have done. Others, less “lucky” than me, did more with their genetic inheritance. And others, more “lucky” than me, did less with their genetic inheritance.

Well, I could go on in the same vein about looks, athletic skills, skin color, parents’ wealth, family connections, and all the rest. But I think you get the picture. “Luck” is a starting point. Where we end up depends on what we do with our “luck”.

Not so fast, you say. What about family connections? Suppose Smedley Smythe’s father, who owns General Junk Foods Incorporated, makes Smedley the CEO of GJFI and pays him $1 million a year. If Smythe senior is the sole owner of the company, that is his prerogative. The million is coming out of his hide or, if consumers are willing to pay higher prices to defray the million, out of consumers’ pockets. But no one is forcing consumers to buy things from GJFI; if its prices are too high, consumers will turn elsewhere and Smythe senior will rue his nepotism. But what if GJFI is a publicly owned company? In the end, it amounts to the same thing; if the nepotism hurts the bottom line, its shareholders should rebel. If it doesn’t, well…

Now what about those who are born poor, who are not especially bright, good looking, or athletic, and who are, say, black rather than white? Do they deserve what they earn? The hard, cold answer is “yes” — if what they earn is earned without benefit of fraud, theft, or coercion. Why should I want to pay you more because of the circumstances of your birth, your IQ, your looks, your athleticism, or your skin color? What matters is what you can do for me and how much I am willing to pay for it.

But what about individuals who are poor because they have been unable to “rise above” their genetic inheritance and family circumstances? What about individuals who are poor because they have incurred serious illnesses or have been severely injured? What about individuals who didn’t save enough to support themselves in their old age? And on and on.

Those seem like hard questions, but there is a straightforward answer to them. Such individuals may be helped legitimately, by private parties. As I say here,

Every bad thing that happens to an individual is a bad thing for that individual. Whether it is a thing that calls for action by another individual is for that other individual (or a group of them acting in concert) to decide on the basis of love, empathy, conscience, specific obligation, or rational calculation about the potential consequences of the bad thing and of helping or not helping the person to whom it has happened….

There is no universal social-welfare function. Therefore, it is up to the potential alms-giver to give or not, based on his knowledge and preferences. No third party is in a moral position to make that choice or to prescribe the criteria for making it. Governments have the power to force a choice other than the one that the potential alms-giver would make, but power is not morality.

Charity is a voluntary act that one commits without a sense of obligation; one helps one’s family, friends, neighbors, etc., out of love, affection, empathy, or other social bond. The fact that charity may strengthen a social bond and heighten the benefits flowing from it is an incidental fact, not a consideration. Duty, on the other hand, arises from specific obligations, formal or informal. These include the obligations of parent to child, teacher to pupil, business partner to business partner, and the like. Charity can be mistaken for duty only in the mind of a philosopher for whom love, affection, and individuality are alien concepts.

What happens, instead, is that individuals — whether needy or not — are helped illegitimately through coercive government programs that draw on free-floating guilt and large measures of political opportunism and economic illiteracy.

Except for criminals and “public servants,” we deserve what we inherit (or do not), what we earn (or do not), what comes to us by chance (or does not), and what is given to us voluntarily (or is not).

By what divine right do John Rawls and his followers make judgments about who is deserving and who is not? The “veil of ignorance” is a smokescreen for redistribution under the pretext of omniscience.

CONCLUSION

Self-ownership and desert belong in the pantheon of empty concepts, along with altruism.

Evolution and the Golden Rule

Famed biologist E.O. Wilson has recanted the evolutionary theory of kin selection:

apparent strategies in evolution that favor the reproductive success of an organism’s relatives, even at a cost to their own survival and/or reproduction.

Here is an explanation of Wilson’s change of mind:

Wilson said he first gave voice to his doubts in 2004, by which point kin selection theory had been widely accepted as the explanation for the evolution of altruism. “I pointed out that there were a lot of problems with the kin selection hypothesis, with the original Hamilton formulation, and with the way it had been elaborated mathematically by a very visible group of enthusiasts,” Wilson said. “So I suggested an alternative theory.”

The alternative theory holds that the origins of altruism and teamwork have nothing to do with kinship or the degree of relatedness between individuals. The key, Wilson said, is the group: Under certain circumstances, groups of cooperators can out-compete groups of non-cooperators, thereby ensuring that their genes — including the ones that predispose them to cooperation — are handed down to future generations. This so-called group selection, Wilson insists, is what forms the evolutionary basis for a variety of advanced social behaviors linked to altruism, teamwork, and tribalism — a position that other scientists have taken over the years, but which historically has been considered, in Wilson’s own word, “heresy.” (“Where does good come from?” in The Boston Globe online, April 17, 2011)

I will concede a role for evolution in the development of human behavioral norms. But, as I say in “Evolution, Human Nature, and ‘Natural Rights’,”

The painful truth that vast numbers of human beings — past and present — have not acted and do not act as if there are “natural rights” suggests that the notion of “natural rights” is of little practical consequence….

Even if humans are wired to leave others alone as they are left alone, it is evident that they are not wired exclusively in that way.

Cooperative behavior is a loosely observed norm, at best. (For the benefit of “liberals,” I must point out that cooperation can only be voluntary; state-coerced “cooperation” is dictated by force.) Cooperation, such as it is, probably occurs for the reasons I give in “The Golden Rule and the State“:

I call the Golden Rule a natural law because it’s neither a logical construct … nor a state-imposed one. Its long history and widespread observance (if only vestigial) suggest that it embodies an understanding that arises from the similar experiences of human beings across time and place. The resulting behavioral convention, the ethic of reciprocity, arises from observations about the effects of one’s behavior on that of others and mutual agreement (tacit or otherwise) to reciprocate preferred behavior, in the service of self-interest and empathy. That is to say, the convention is a consequence of the observed and anticipated benefits of adhering to it.

I must qualify the term “convention,” to say that the Golden Rule will be widely observed within any group only if the members of that group are generally agreed about the definition of harm, value kindness and charity (in the main), and (perhaps most importantly) see that their acts have consequences. If those conditions are not met, the Golden Rule descends from convention to admonition.

Is the Golden Rule susceptible of varying interpretations across groups, and is it therefore a vehicle for moral relativism? I say “yes,” with qualifications. It’s true that groups vary in their conceptions of permissible behavior. For example, the idea of allowing, encouraging, or aiding the death of old persons is not everywhere condemned, and many recognize it as an inevitable consequence of a health-care “system” that is government-controlled (even indirectly) and treats the delivery of medical services as a matter of rationing…. Infanticide has a long history in many cultures; modern, “enlightened” cultures have simply replaced it with abortion. Slavery is still an acceptable practice in some places, though those enslaved (as in the past) usually are outsiders. Homosexuality has a long history of condemnation and occasional acceptance. To be pro-homosexual — and especially to favor homosexual “marriage” — has joined the litany of “causes” that signal leftist “enlightenment,” along with being for abortion and against the consumption of fossil fuels (except for one’s SUV, of course).

The foregoing recitation suggests a mixture of reasons for favoring or disfavoring certain behaviors. Those reasons range from purely utilitarian ones (agreeable or not) to status-signaling. In between, there are religious and consequentialist reasons, which are sometimes related. Consequentialist reasoning goes like this: Behavior X can be indulged responsibly and without harm to others, but there lurks the danger that it will not be, or that it will lead to behavior Y, which has repercussions for others. Therefore, it’s better to put X off-limits or to severely restrict and monitor it. Consequentialist reasoning applies to euthanasia (it’s easy to slide from voluntary to involuntary acts, especially when the state controls the delivery of medical care), infanticide and abortion (forms of involuntary euthanasia and signs of disdain for life), homosexuality (a depraved, risky practice that can ensnare impressionable young persons who see it as an “easy” way to satisfy sexual urges), alcohol and drugs (addiction carries a high cost, for the addict, the addict’s family, and sometimes for innocent bystanders). A taste or tolerance for destructive behavior identifies a person as an untrustworthy social partner.

It seems to me that the exceptions listed above are just that. There’s a mainstream interpretation of the Golden Rule — one that still holds in many places — which rules out certain kinds of behavior, except in extreme situations, and permits certain other kinds of behavior. There is, in other words, a “core” Golden Rule that comes down to this:

  • Murder is wrong, except in self-defense. (Capital punishment is just that: punishment. It’s also a deterrent to murder. It isn’t “murder,” muddle-headed defenders of baby-murder to the contrary notwithstanding.)
  • Various kinds of unauthorized “taking” are wrong, including theft (outright and through deception). (This explains popular resistance to government “taking,” especially when it’s done on behalf of private parties. The view that it’s all right to borrow money from a bank and not repay it arises from the mistaken beliefs that (a) it’s not tantamount to theft and (b) it harms no one because banks can “afford it.”)
  • Libel and slander are wrong because they are “takings” by word instead of deed.
  • It is wrong to turn spouse against spouse, child against parent, or friend against friend. (And yet, such things are commonly portrayed in books, films, and plays as if they are normal occurrences, often desirable ones. And it seems to me that reality increasingly mimics “art.”)
  • It is right to be pleasant and kind to others, even under provocation, because “a mild answer breaks wrath: but a harsh word stirs up fury” (Proverbs 15:1).
  • Charity is a virtue, but it should begin at home, where the need is most certain and the good deed is most likely to have its intended effect.

None of these observations would be surprising to a person raised in the Judeo-Christian tradition, or even in the less vengeful branches of Islam. The observations would be especially unsurprising to an American who was raised in a rural, small-town, or small-city setting, well removed from a major metropolis, or who was raised in an ethnic enclave in a major metropolis. For it is such persons and, to some extent, their offspring who are the principal heirs and keepers of the Golden Rule in America.

There is far more to human behavior than biological and evolutionary determinism. (Not that Wilson is guilty of that, but many others are.) It is especially simplistic to rely on biological and evolutionary explanations of the particular subset of behavioral rules known as “rights.” For the final word on that point, I return to “Evolution, Human Nature, and ‘Natural Rights'”:

[T]he Golden Rule represents a social compromise that reconciles the various natural imperatives of human behavior (envy, combativeness, meddlesomeness, etc.). Even though human beings have truly natural proclivities, those proclivities do not dictate the existence of “natural rights.” They certainly do not dictate “natural rights” that are solely the negative rights of libertarian doctrine. To the extent that negative rights prevail, it is as part and parcel of the “bargain” that is embedded in the Golden Rule; that is, they are honored not because of their innateness in humans but because of their beneficial consequences.

Is College for Everyone?

Of course not. But don’t tell that to Obamanauts and other purveyors of what is mistakenly taken for compassionate wisdom these days.

This is from my post, “The Higher Education Bubble“:

When I entered college [in 1958], I was among the 28 percent of high-school graduates then attending college. It was evident to me that about half of my college classmates didn’t belong in an institution of higher learning. Despite that, the college-enrollment rate among high-school graduates has since doubled.

Here is a recent view from the front lines of higher education in the United States:

America, ever-idealistic, seems wary of the vocational-education track. We are not comfortable limiting anyone’s options. Telling someone that college is not for him seems harsh and classist and British, as though we were sentencing him to a life in the coal mines. I sympathize with this stance; I subscribe to the American ideal. Unfortunately, it is with me and my red pen that that ideal crashes and burns.

Sending everyone under the sun to college is a noble initiative. Academia is all for it, naturally. Industry is all for it; some companies even help with tuition costs. Government is all for it; the truly needy have lots of opportunities for financial aid. The media applauds it—try to imagine someone speaking out against the idea. To oppose such a scheme of inclusion would be positively churlish. But one piece of the puzzle hasn’t been figured into the equation, to use the sort of phrase I encounter in the papers submitted by my English 101 students. The zeitgeist of academic possibility is a great inverted pyramid, and its rather sharp point is poking, uncomfortably, a spot just about midway between my shoulder blades.

For I, who teach these low-level, must-pass, no-multiple-choice-test classes, am the one who ultimately delivers the news to those unfit for college: that they lack the most-basic skills and have no sense of the volume of work required; that they are in some cases barely literate; that they are so bereft of schemata, so dispossessed of contexts in which to place newly acquired knowledge, that every bit of information simply raises more questions. They are not ready for high school, some of them, much less for college. (“In the Basement of the Ivory Tower,” The Atlantic, June 2008; h/t Maverick Philosopher)

Perhaps the higher-education bubble is about to burst. A serious effort to reduce government spending would surely lead to the reduction of tax subsidies to state colleges and universities. Or so one can hope.

The Evil That Is Done with Good Intentions

Social Security, Medicare, and Medicaid do several bad things at once:

  • They crowd out prospective providers of retirement funds, medical insurance, and medical care.
  • They create “moral hazard” by lulling people into the false belief that they will be well taken care of in their old age, thereby making it less likely that they will put aside money for their old age.
  • They therefore cause under-saving and, thus, under-investment in those things upon which economic growth depends: innovation and business creation.

If growth were not hobbled, there would be far fewer people in need of welfare programs and far more money available for voluntary assistance to those who truly cannot care for themselves.

Related posts:
Economic Growth since WWII
A Social Security Reader
The Price of Government
The Commandeered Economy
Rationing and Health Care
The Perils of Nannyism: The Case of Obamacare
The Price of Government Redux
More about the Perils of Obamacare
Health-Care Reform: The Short of It
The Mega-Depression
Presidential Chutzpah
As Goes Greece
The Real Burden of Government
Toward a Risk-Free Economy
The Rahn Curve at Work
The Illusion of Prosperity and Stability
The “Forthcoming Financial Collapse”
Estimating the Rahn Curve: Or, How Government Inhibits Economic Growth
The Deficit Commission’s Deficit of Understanding
Undermining the Free Society
The Bowles-Simpson Report
The Bowles-Simpson Band-Aid
Build It and They Will Pay
Government vs. Community
The Stagnation Thesis

Does World War II “Prove” Keynesianism?

In “How the Great Depression Ended,” I say that

World War II did bring about the end of the Great Depression, not directly by full employment during the war but because that full employment created a “glut” of saving. After the war that “glut” jump-started

  • capital spending by businesses, which — because of FDR’s demise — invested more than they otherwise would have; and
  • private consumption spending, which — because of the privations of the Great Depression and the war years — would have risen sharply regardless of the political climate.

That analysis is by no means an endorsement of simple-minded Keynesianism (as propounded by Paul Krugman, for example), which holds that the government can spend the economy out of a recession or depression, if only it spends “enough” (which is always more than it actually spends). But there is no point in pumping additional money into an economy unless the money elicits productive endeavors: business creation and expansion, leading to net capital formation and job creation.

Pumping additional money into government programs results in the misdirection of resources, at best, and in the discouragement of productive private activity, at worst. Discouragement takes two forms: crowding-out and active interference (usually through regulatory inhibitions).

The answer to the question of this post’s title is that World War II has nothing to do with Keynes or Keynesianism, as it is widely understood. Employment and output (measured in dollars) rose sharply during World War II, but most of the additional output was devoted to the war effort. Huge increases in government spending did not lead to huge increases in the material well-being of Americans, most of whom were working harder while being deprived of the fruits of their labors, through rationing.

If anything, the post-war recovery “proves” the folly and wastefulness of efforts to stimulate an economy through government spending. It was not government spending that re-started the U.S. economy after World War II, it was private spending on capital investments and consumer goods. Some of that private spending was encouraged by the end of regime uncertainty. That end was brought about by the curtailment of New Deal initiatives (until the 1960s) because of the war and FDR’s death. Private spending — which was boosted by wartime saving — would have been purely inflationary had businesses not been willing and able to create jobs and expand output.

Rating America’s Wars

In “Why We Should (and Should Not) Fight” I say that

American armed forces should be used only to preserve, protect, and defend the interests of Americans.

I ended that post with an assessment of the engagements in Iraq, Afghanistan, and Libya. But what about earlier American wars? Here are my thumbnail assessments of them (the dates indicate years in which U.S. forces were involved in combat):

Indian Wars (1637-1918). This long, episodic battle with Native Americans was justified when the purpose was to defend Americans and justly condemned when the purposes were genocide and theft  of Indian lands by force or fraud. There is probably much more to be ashamed of than to be proud of in the history of the Indian Wars.

Revolutionary War (1775-1783). The struggle for self-government deserves praise whether the motivation was liberty in general or the economic interests of colonial planters, merchants, and manufacturers. The latter is a subset of the former, and the outcome of the war served both ends. In that regard, many of the leaders of the armed struggle also became prominent figures in the establishment of the Articles of Confederation and Constitution. Both documents were aimed at preserving and extending the liberty for which the revolution was waged.

War of 1812 (1812-1815). A leading cause of this war was the imposition by Britain of restrictions intended to impede American commerce with France. That, alone, would have justified the war if Britain could not be dissuaded by peaceful means, which it could not be. The U.S. had other legitimate grievances: impressment of American sailors into the British navy and British support of Indian raids in the Northwest Territory. The War of 1812 was, in effect, a belated and creditable resumption of the Revolutionary War.

Mexican-American War (1846-1848). The proximate cause of the war was the attempt by Mexico to retake Texas, which had won independence from Mexico in 1836 and annexed itself to the United States in 1845. The resulting war enabled the U.S. to acquire from Mexico — for $18,250,000 — land that is now California, Nevada, Utah, New Mexico, most of Arizona and Colorado, and parts of Texas, Oklahoma, Kansas, and Wyoming. The U.S. was right to prosecute the war and entirely reasonable about the terms and conditions for resolving it.

Civil War (1861-1865). The war that is still being fought (with words) by many Americans pitted the morally reprehensible Southern defenders of slavery against Northerners, led by Abraham Lincoln, who hewed to the dubious proposition that secession is impermissible under the Constitution. The Civil War can be justified only in that it ended slavery in the United States, which was not Mr. Lincoln’s original aim in prosecuting it.

Spanish-American War (1898). This unnecessary war was fought on the excuse of Spanish atrocities in Cuba and the still-mysterious sinking of the USS Maine in Havana Harbor. It was in fact an exercise in imperialism through which the U.S. acquired the dubious honor of controlling Cuba, Puerto Rico, Guam, and the Philippines — altogether more trouble than they were worth. It is especially galling that Theodore Roosevelt rode the Spanish-American War to fame, and eventually to the imperial presidency.

World War I (1917-1918). The immediate cause of the entry of the United States into this war was German acts of belligerence — sabotage and the sinking of U.S. merchant ships. Those acts were aimed at preventing the U.S. from selling war supplies to Britain. Germany, in other words, was sorely provoked, and the U.S. government could not realistically claim to be a neutral party in what was really a European war, with Asian and African sideshows involving opportunistic attacks on German interests in those regions. Had the U.S. stayed neutral and avoided war, Germany might have won, though a stalemate was more likely. In either event, an exhausted Germany would hardly have been a threat to the U.S., and might even have welcomed trade with the U.S. as it rebuilt in the post-war years. All of this was lost in the anti-German hysteria of the time, which played well to the super-majority of Americans whose roots were in the British Isles. It is pure hindsight to say that a victorious or stalemated Germany probably would not have produced the Third Reich, but true nevertheless. America’s entry into World War I was a mistake, in any event, but it turned out to be a horrendously costly one.

World War II (1941-1945). While Anglo-American and French politicians pursued the illusion that peace could be maintained through diplomacy and treaties, Adolf Hitler and Japan’s military caste pursued dominion through conquest. The Third Reich and Empire of the Rising Sun failed to dominate the world only because of (a) Hitler’s fatal invasion of Russia, (b) Japan’s wrong-headed attack on Pearl Harbor, and (c) the fact that the United States of 1941 had time and space on its side. Had the latter not been true, Americans could well have found themselves cut off from the world — and much the poorer for it — if not enslaved. World War II clearly ranks just behind the War of 1812 as the most necessary war in America’s post-Revolutionary history.

Cold War (1947-1991). This necessary, long, and costly “war” of deterrence through preparedness enabled the U.S. to protect Americans’ legitimate economic interests around the world by limiting the expansion of the Soviet empire. The Cold War had some “hot” moments and points of high drama. Perhaps the most notable of them was the so-called Cuban Missile Crisis of 1962, which was not the great victory proclaimed by the Kennedy administration and its political and academic sycophants. (For more on this point, go here and scroll down to the section on Kennedy.) That the U.S. won the Cold War because the USSR’s essential bankruptcy was exposed by Ronald Reagan’s defense buildup is a fact that only left-wingers and dupes will deny. They continue to betray their doomed love of communism by praising the hapless Mikhail Gorbachev for doing the only thing he could do in the face of U.S. superiority: surrender and sunder the Soviet empire. America’s Cold War victory owes nothing to LBJ (who wasted blood and treasure in Vietnam), Richard Nixon (who would have sold his mother for 30 pieces of silver), or Jimmy Carter (whose love for anti-American regimes and rebels knows no bounds).

Korean War (1950-1953). The Korean War was unnecessary, in that it was invited by the Truman administration’s policies: exclusion of Korea from the Asian defense perimeter and massive cuts in the U.S. defense budget. But it was essential to defend South Korea so that the powers behind North Korea (Communist China and, by extension, the USSR) would grasp the willingness of the U.S. to maintain a forward defensive posture against aggression. That signal was blunted by Truman’s decision to sack MacArthur when the general persisted in his advocacy of attacking Chinese bases following the entry of the Chinese into the war. The end result was a stalemate, where a decisive victory might have broken the back of communistic adventurism around the globe. The Korean War, as it was fought by the U.S., became “a war to foment war.”

Vietnam War (1965-1973). Whereas the Korean War was a necessary war against communist expansionism, the Vietnam War was an unnecessary entanglement in a civil war in which one side happened to be communist. Nevertheless, the U.S., having made a costly commitment to the prosecution of the war, should have fought it to victory. Instead, unlike the case of Korea, U.S. forces were withdrawn and it took little time for North Vietnam to swallow South Vietnam. American resolve suffered a body blow, from which it rebounded only partially by winning the Cold War, thanks to Reagan’s defense buildup in the 1980s. When it came to actual warfare, however, Vietnam repeated and reinforced the pattern of compromise and retreat that had begun with the Korean War, and which eventuated in the 9/11 attacks.

Gulf War (1990-1991). This war began with Saddam Hussein’s invasion of oil-rich Kuwait. U.S. action to repel the invasion was fully justified by the potential economic effects of Saddam’s capture of Kuwait’s petroleum reserves and oil production. The proper response to Saddam’s aggression would have been not only to defeat the Iraqi army but also to depose Saddam. The failure to do so further reinforced the pattern of compromise and retreat that had begun in Korea, and necessitated the long, contentious Iraq War of the 2000s.

The quick victory in Iraq, coupled with the coincidental end of the Cold War, helped to foster a belief that the peace had been won. (That belief was given an academic imprimatur in Francis Fukuyama’s The End of History and the Last Man.) The stage was set for Clinton’s much-ballyhooed fiscal restraint, which was achieved by cutting the defense budget. Clinton’s lack of resolve in the face of terrorism underscored the evident unwillingness of American “leaders” to defend Americans’ interests, thus inviting 9/11.  (For more about Clinton’s foreign and defense policy, go here and scroll down to the section on Clinton.)

Which leads us back to the wars and skirmishes of the 21st century.

The Public-School Swindle

I have a relative by marriage who’s a retired public-school teacher. She loved to moan about her “low” pay. She wasn’t alone, of course. Her refrain has been heard throughout the land for decades. Truth be told, however, she and her ilk were and are overpaid, as several commentators have explained (e.g., here, here, and here). The following diagram illustrates the machinations that yield above-market compensation for public-school teachers and other “public servants”.

Here’s a step-by-step explanation:

1. The diagonal, solid-black lines represent the demand for teachers in the absence of tax-funded (public) schools (D-no pu) and the supply of teachers in the absence of tax-funded schools (S-no pu). The intersection of the S and D curves yields the level of teacher compensation (C-no pu) and employment (E-no pu) that would result were there nothing but private schools. (I am, for now, putting aside the question whether government should require school attendance through a certain age or grade, or dictate what is taught in schools.)

2. The picture changes dramatically with the introduction of tax-funded schools (indicated by the red lines). The supply of teachers for public schools (Spu) is to the left of S-no pu because (a) not all teachers are willing to work in public schools and (b) not all teachers are “qualified” to teach in public schools. The second condition arises when potential teachers have learned too much about the subjects they would teach, at the expense of taking too few (or none) of the “education” courses that enable fairly dim education majors to compile inflated grade-point averages.

3. The horizontal, solid red line indicates the inflated compensation (Cpu) that is offered by tax-funded school systems. This above-market rate of compensation is the product of an inter-scholastic “arms race”, in which school systems — goaded by administrators, teachers, parents, and (often) local businesspersons — seek to outdo the lavishness of other school systems, not only in the compensation of teachers and administrators but also in the number and kinds of non-essential courses and activities, and the lavishness and modernity of facilities and equipment. All of which is paid for (in the main) by taxpayers and consumers who have no say in the matter, but whose income and property can be seized for failure to pay the requisite taxes.

5. Not surprisingly, there are more teachers who are willing to work at public-school rates of compensation than public schools can hire (Epu), even with their inflated budgets. That is why some teachers turn to private schools, others accept substitute-teaching jobs, and some end up doing things like selling used cars. The green lines represent the supply of (Spr) and demand for (Dpr) private-school teachers, and the corresponding compensation of (Cpr) and number of teachers employed by (Epr) private schools.

6. The supply of teachers to private schools consists of (a) those teachers who cannot get jobs with public schools but are willing to teach in private schools and (b) those teachers who abhor the thought of teaching in public schools and are therefore willing to accept lower compensation for the privilege of teaching in private schools. The compensation of private-school teachers is lower than that of public-school teachers because

  • the compensation of public-school teachers is artificially inflated by the vast amounts of tax money extracted from persons who would not otherwise be in the market for education, let alone public-school education, and
  • the vastness of the tax burden limits the ability of persons who are in the market for education to pay for private schooling, that is, it artificially reduces the demand for private schooling.
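
To make the mechanics concrete, here is a minimal numerical sketch of the model described in the steps above, using linear supply and demand curves. Every parameter value is hypothetical (none is an estimate of actual teacher pay); the point is only to show how an administratively set, above-market public wage (Cpu) produces excess supply, depressed private-school pay (Cpr), and more total teachers than a private-only market would employ.

```python
# Minimal sketch of the public-school/private-school teacher market.
# Linear curves with hypothetical parameters; only the ordering of the
# results matters: Cpr < C-no pu < Cpu, and total employment exceeds
# what a purely private market would support.

def equilibrium(d_intercept, d_slope, s_intercept, s_slope):
    """Return (compensation, employment) where demand meets supply.

    Demand:  E = d_intercept - d_slope * C
    Supply:  E = s_intercept + s_slope * C
    """
    c = (d_intercept - s_intercept) / (d_slope + s_slope)
    e = d_intercept - d_slope * c
    return c, e

# Step 1: private-only market (D-no pu, S-no pu).
c_no_pu, e_no_pu = equilibrium(1000, 10, 200, 10)    # C = 40, E = 600

# Steps 2-3: public schools set an above-market wage (Cpu) and draw on a
# narrower supply curve (Spu), shifted left of S-no pu.
c_pu = 1.3 * c_no_pu                   # Cpu = 52, above the market-clearing wage
willing_at_cpu = 150 + 10 * c_pu       # teachers willing to work at Cpu (along Spu)
e_pu = 900 - 8 * c_pu                  # teachers actually hired (Epu), budget-limited
excess_supply = willing_at_cpu - e_pu  # teachers who must look elsewhere

# Steps 5-6: the spillover meets a smaller private demand (Dpr), because
# taxes absorb income that could otherwise have paid tuition, so Cpr < C-no pu.
c_pr, e_pr = equilibrium(400, 10, 100, 10)           # C = 15, E = 250

print(f"Private-only market: C = {c_no_pu:.0f}, E = {e_no_pu:.0f}")
print(f"Public schools:      C = {c_pu:.0f}, hired = {e_pu:.0f}, "
      f"excess supply = {excess_supply:.0f}")
print(f"Private schools:     C = {c_pr:.0f}, E = {e_pr:.0f}")
print(f"Total employed: {e_pu + e_pr:.0f} vs. {e_no_pu:.0f} in a private-only market")
```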

Because of the subsidization of public schools, there are far more teachers than would be the case in an entirely private system. Advocates of tax-funded education would count that as a plus, as they would the above-market wages of public-school teachers. In fact, it is a minus, because it means that resources are being diverted to less productive uses than they would be were education an entirely private matter. Moreover, mediocre teachers and administrators — often outfitted with lavish facilities and equipment — are being paid more than necessary to “educate” children in useless subjects, at the expense of taxpayers who could put that money to work providing better homes, relevant training, and more jobs for those same children.

This analysis undoubtedly applies to higher education as well as K-12 education. The presence of tax-funded colleges and universities unnecessarily drives up the cost of higher education and burdens many persons who derive no benefit from it.

In summary, public “education” — at all levels — is not just a rip-off of taxpayers, it is also an employment scheme for incompetents (especially at the K-12 level) and a paternalistic redirection of resources to second- and third-best uses. And, to top it off, public education has led to the creation of an army of left-wing zealots who, for many decades, have inculcated America’s children and young adults in the advantages of collective, non-market, anti-libertarian institutions, where paternalistic “empathy” supplants personal responsibility.


Related reading:
Mark J. Perry, “The Public-Sector Premium for School Teachers“, Carpe Diem, March 3, 2011
Ironman, “How Much Do Public-School Teachers Really Make Compared to Private-School Teachers?“, Political Calculations, March 30, 2017
Andrew J. Biggs, “No, Teachers Are Not Underpaid“, City Journal, April 26, 2018

Related posts:
School Vouchers and Teachers’ Unions
Whining about Teachers’ Pay: Another Lesson about the Evils of Public Education
I Used to Be Too Smart to Understand This
International Law vs. Homeschooling
GIGO
Religion in Public Schools: The Wrong and Right of It
The Home Schooler Threat?
The Real Burden of Government
The Higher Education Bubble
Our Miss Brooks
“Intellectuals and Society”: A Review

Why We Should (and Should Not) Fight

G.W. Bush’s decision to invade Iraq and overthrow Saddam Hussein — a decision that was approved by Congress — was justified on several grounds. One of those grounds was a humanitarian consideration: Saddam’s record as a brutally oppressive dictator.

But humanitarian acts have nothing to do with the interests of Americans, except for the mistaken belief that the “rest of the world” (presumably including our enemies and potential enemies) will think better of the United States for such acts. The belief, as I say, is mistaken. Our foreign enemies and potential enemies see such things as evidence of American softness, when they do not see them as ways of obtaining U.S. weapons for future use against American interests. Our foreign “friends” (the sneer is well-advised) see the humanitarian acts of the U.S. government as one, two, or all of the following: (a) substitutes for their own humanitarian acts, which may accordingly be curtailed or withheld, (b) evidence of America’s “imperial” aims, and (c) evidence of the willingness of Americans to expend lives and treasure, sometimes in vain, for elusive or illusory objectives.

From the point of view of American taxpayers, the commission of humanitarian acts by the U.S. government is almost always a waste of money. (I have elsewhere discussed and dismissed the proposition that such acts are morally superior to the alternative of letting taxpayers decide how best to use their money.) It follows that no military operation can or should be justified solely on the basis of humanitarianism. And yet, that is the essential justification of Obama’s adventure in Libya.

Were Obama to come right out and say that our military involvement in Libya is really aimed at ensuring a continuous flow of petroleum from that country’s wells, refineries, and ports, he would be accused of waging a campaign of “blood for oil.” That, of course, was a leftist rallying cry against Bush’s invasion of Iraq, and Obama — as a man of the left and opponent of the Iraq war — does not want to be painted with the same brush.

Bush, too, sought to avoid the taint of “blood for oil.” But, in reality, it was in the interest of the U.S. (and other nations) to restore the flow of Iraqi oil to (or above) the rate attained before the imposition of UN sanctions.

Nevertheless, political discourse has become so mealy-mouthed since the end of World War II that no American politician dare speak of an economic motivation for the use of military force. And so, American politicians must adopt the language of hypocrisy, cant, and political correctness to justify acts that are either (a) unjustifiable because they are purely humanitarian or (b) fully justifiable as being in the interest of Americans, period.

In sum, American armed forces should be used only to preserve, protect, and defend the interests of Americans. To that end, American armed forces certainly may be used preemptively as well as reactively. And as long as it remains economically advantageous for Americans to import oil from other countries, it will be a legitimate use of American armed forces to defend those imports — at the source and every step of the way to this country. I would say the same about any resource whose importation is vital to the well-being of Americans.

The decision whether to use force to protect Americans and their interests, in any given instance, requires a judgment as to the likely costs, benefits, and success of the venture. For practical purposes, it is the president who makes that judgment, but he is ill-advised to commit armed forces without the backing of Congress. When armed forces have been committed, they should remain committed until the objective has been met, unless it becomes clear — to the president and Congress, the media and protesters to the contrary — that the objective cannot be met without incurring unacceptable costs.

A reversal of course sends a very strong signal to our enemies and potential enemies that America’s leadership is unwilling to do what it takes to protect Americans and their interests. Such a signal, of course, makes all the more likely that someone will act against Americans and their interests.

All of that said, I come to the following conclusions about current military engagements involving American armed forces:

  • Iraq was worth the effort, assuming that a post-withdrawal Iraq remains a relatively stable, oil-producing nation in the midst of surrounding turmoil.
  • Afghanistan is worth only the effort required to destroy its usefulness as an al Qaeda base. If that cannot be achieved, the large-scale U.S. presence in Afghanistan should be scaled back to a special operations force dedicated solely to the detection and destruction of al Qaeda facilities and personnel.
  • Libya is worth only the effort required to ensure that it remains a major oil-exporting nation. Aiding the Libyan rebels is likely to backfire because of the strong possibility that al Qaeda or its ilk will emerge triumphant in a rebel-led post-Gaddafi regime (as seems to be the case in Egypt’s post-Mubarak regime). Given that possibility, the U.S. government should withdraw all support of the NATO operation, with the aim of (a) bringing about the end of that operation or (b) forcing a “willing coalition” of European nations to do what it takes to ensure that a post-Gaddafi regime is no worse than neutral toward the West.

Earlier wars are treated here.

Related posts:
Libertarian Nay-Saying on Foreign and Defense Policy
Libertarianism and Preemptive War: Part I
Right On! For Libertarian Hawks Only
Understanding Libertarian Hawks
More about Libertarian Hawks and Doves
Sorting Out the Libertarian Hawks and Doves
Libertarianism and Preemptive War: Part II
Give Me Liberty or Give Me Non-Aggression?
More Final(?) Words about Preemption and the Constitution
Thomas Woods and War
“Peace for Our Time”
How to View Defense Spending
More Stupidity from Cato
Anarchistic Balderdash
Cato’s Usual Casuistry on Matters of War and Peace
A Point of Agreement
The Unreality of Objectivism
A Grand Strategy for the United States
The Folly of Pacifism