Voluntary Taxation

Will Wilkinson, writing at The Economist, quotes Ayn Rand and begs to differ with her:

Ayn Rand’s position on government finance is unusual, to say the least. Rand was not an anarchist and believed in the possibility of a legitimate state, but did not believe in taxation. This left her in the odd and almost certainly untenable position of advocating a minimal state financed voluntarily. In her essay “Government Financing in a Free Society”, Rand wrote:

In a fully free society, taxation—or, to be exact, payment for governmental services—would be voluntary. Since the proper services of a government—the police, the armed forces, the law courts—are demonstrably needed by individual citizens and affect their interests directly, the citizens would (and should) be willing to pay for such services, as they pay for insurance.

This is faintly ridiculous. From one side, the libertarian anarchist will agree that people are willing to pay for these services, but that a government monopoly in their provision will lead only to inefficiency and abuse. From the other side, the liberal statist will defend the government provision of the public goods Rand mentions, but will quite rightly argue that Rand seems not to grasp perhaps the main reason government coercion is needed, especially if one believes, as Rand does, that individuals ought to act in their rational self-interest.

It’s true that we each benefit from the availability of genuinely public goods, but we benefit most if we are able to enjoy them without paying for them. A rationally self-interested individual will not voluntarily pay for public goods if she believes others will pay and she can get a free ride. But if we’re all rationally self-interested, and we know we’re all rationally self-interested, we know everyone else will also try to get a free ride, in which case it is doubly irrational to voluntarily pitch in. (from “Ayn Rand on Tax Day,” free registration required)

Wilkinson’s analysis is more than faintly wrong. A rationally self-interested individual will voluntarily pay for something if his expected benefit is worth (to him) the price he pays. The fact that a purchase might yield uncompensated benefits to third parties (i.e., positive externalities) is beside the point. Individuals do many things with their money that benefit others, without expecting to be repaid by those others. Individuals also do things that benefit others, in more than the ordinary way of voluntary exchange — sometimes for money, sometimes not, and sometimes at the risk of life and limb.

In addition to the obvious but significant case of philanthropy, there are subtle things like building an elegant house with beautifully landscaped grounds. Clusters of such houses on upscale streets yield satisfaction not only to their owners but also to drivers, joggers, and strollers who pass through the neighborhood — often with the main purpose of enjoying the elegance and beauty that surrounds them.

A similar case in point is the practice observed in many neighborhoods of creating elaborate displays of Christmas lights. Such displays not only please the homeowners who create them (or pay someone to create them) but also the flocks of sightseers who are drawn to such displays. Homeowners (for the most part) do this without compensation from sightseers. (Some homeowners in a less-affluent neighborhood in Austin, which is known for its over-the-top lighting concoctions, have been known to invite voluntary donations to help defray the cost of their displays.)

Finally, on this point, there are not-so-subtle examples of doing good for others as a habit and even a way of life. Many persons devote many hours a week to voluntary work in schools, hospitals, and the like. Then there are firefighters, police officers, and a goodly fraction of the members of the armed forces who perform jobs that put them in harm’s way, and do so not only for the money they earn but often because they feel a duty to make their towns, cities, and nation safer for the inhabitants thereof.

In any event, a rationally self-interested person who values national defense or the justice system would be a good candidate for making voluntary contributions to support those kinds of governmental functions. It would be a simple thing for influential and very wealthy individuals and major corporations to parlay their self-interest into the creation of organizations that raise money from like-minded individuals and corporations. Imagine a version of the American Heart Association called the American Defense Association; imagine a version of the Junior League called the Justice League. If anything, it should be easier to entice “voluntary taxes” in support of essential functions like defense and justice than it is to entice contributions to charitable organizations, which seldom yield more than “feel good” benefits to donors.

Not all fund-raising efforts for charities succeed in obtaining donations from everyone they solicit, but fund-raisers neither expect nor require 100-percent success. Similarly, an American Defense Association or Justice League would not require 100-percent success in its efforts to raise enough money to defray the costs of national defense and domestic justice. It is enough that the prospect of being “taxed voluntarily” to support such causes would appeal to a large number of affluent taxpayers.

Of particular interest to fund-raisers would be those individuals and couples with adjusted gross incomes in the top 50 percent of the AGI distribution. For tax year 2008, the top 50 percent paid 97 percent of the federal income taxes collected. Before the Great Recession and associated “stimulus” spending, when the federal budget was nearly in balance, spending on national defense and justice (at all levels of government) accounted for about 20 percent of all government spending. It seems to me that a rationally self-interested person or couple in the top 50 percent would leap at the chance to eliminate all of their income taxes if the alternative were to donate a smaller amount to the causes of defense and justice. There would be holdouts — especially among affluent leftists, of course — but there would also be the usual donors who give far more than their “fair share.”

Consider, for example, the persons in the top 1 percent of the AGI distribution, who paid 38 percent of the federal income taxes collected for 2008, or the persons in the top 10 percent, who paid 70 percent of the taxes. Members of those groups (as well as others in the top 50 percent) would have a strong incentive to ensure the provision of defense and justice, understanding (as most of them do) the importance of order and stability to their livelihoods.
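The back-of-envelope arithmetic behind this argument can be sketched as follows. The 20 percent share of spending for defense and justice is the figure cited above; the dollar amount is a hypothetical round number chosen purely for illustration.

```python
# Illustration of the argument above: if defense and justice account for
# roughly 20 percent of all government spending (the figure cited in the
# text), a taxpayer funding only those functions would contribute far less
# than his current tax bill. Dollar amounts are hypothetical.

def voluntary_share(current_tax: float, essential_fraction: float = 0.20) -> float:
    """Contribution needed to cover a taxpayer's proportional share of
    defense and justice, assumed to be ~20 percent of total spending."""
    return current_tax * essential_fraction

# A hypothetical household now paying $50,000 in income tax would need to
# contribute about $10,000 -- an 80 percent reduction in its payment.
current = 50_000.0
needed = voluntary_share(current)   # 10,000.0
savings = current - needed          # 40,000.0
```

The point of the sketch is only that the incentive gap is large: under the cited figures, the “voluntary tax” for essential functions would be a small fraction of the taxes now paid.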

Further, I expect that many top income-earners would lead by example (as they do for charities) with their contributions. Additionally, I would expect them to be leading contributors to advertising campaigns that explain the economic benefits of maintaining a robust defense and a vigilant system of justice while, at the same time, paying a lot less for government services. Chief among the benefits would be stronger economic growth — as money is saved and invested instead of being poured down so many rat-holes and into counterproductive regulatory agencies. In the end, there would be more jobs, higher incomes, less need for charity, and more money with which to dispense charity to truly needy individuals.

In summary, Wilkinson’s analysis seems rooted in a sterile conception of rational self-interest. It seems to assume that bright, hard-working, high-earning individuals cannot perceive the real benefits that would flow from “voluntary taxation” for certain purposes, namely, national defense and domestic justice.

Osama, Obama, and 2012

Obama did what any president should have done. However, because Osama was killed by U.S. forces on Obama’s watch, much of the glory will redound to Obama. But the glory really belongs to the team of Americans who conducted the raid on Osama’s lair, to the intelligence apparatus that led the team there, and to everyone directly involved in command and support of the operation.

The killing of Osama, at this late date, probably will have little or no effect on the operations of al-Qaeda and other terrorist organizations. The killing of Osama is a symbolic act of justice, and that’s about all it is. But that, in itself, is worth a lot to any American who abhors the 9/11 attacks for what they were: murderous attacks on innocent persons by cold-blooded fanatics. Anyone who is celebrating today but who said ten years ago that “we asked for it” is a hypocrite who should be wearing sackcloth instead of celebrating.

It remains to be seen whether the almost-certain surge in Obama’s popularity will last. There is much about the man and his policies that deserves deep unpopularity. Yesterday’s events will recede from view before long, and Americans will return to their struggles with unemployment, inflation, intrusive government, and mounting debt. It is those things that most likely will occupy Americans’ minds when they cast their votes in November 2012.

A case in point: Bush senior enjoyed a surge of popularity following the decisive (but incomplete) victory in the Gulf War of 1990-91, but he was nevertheless unable to win re-election in 1992. The third-party candidacy of Ross Perot had a lot to do with Bush’s unseating. But had the election taken place right after the defeat of Saddam’s forces, Bush probably would have won handily. Unfortunately for Bush — and the country — the election took place 20 months later, by which time Americans’ discontent with their economic lot led too many of them to vote for Perot and Clinton.

As Yogi says, “It ain’t over ’til it’s over.”

America’s Financial Crisis Is Now

Reissued here.

On Self-Ownership and Desert

INTRODUCTION

Fernando Teson, one of the Bleeding Heart Libertarians, addresses self-ownership:

Self-ownership is the property right that a person has over her natural assets, that is, over her mind and body. As is well known (and nicely summarized in Matt [Zwolinski]’s post,) Lockeans think that this right can, under appropriate circumstances, justify ownership over external assets.  Most libertarians endorse the idea of self-ownership. Some progressives do too, but an important line of progressive thought rejects self-ownership.  According to John Rawls (in A Theory of Justice,) natural assets are collective property. That is, they belong to society, not to the person who possesses them. The reason for this, Rawls thinks, is that just as we do not deserve being born rich or poor, so we don’t deserve our natural talents. For this reason, societal arrangements that reward talented persons are only justified if they benefit the least talented.

I am exasperated by claims, like Teson’s and Rawls’s, that appeal to abstract principles which ascribe to human beings abstract, Platonic attributes. One such attribute is “natural rights” — a close kin of self-ownership. I am especially exasperated when such attributes are bestowed by third parties speaking from a position of judgmental omniscience. Desert is an excellent case in point.

The attribution to humans of ethereal characteristics (like self-ownership and desert) exemplifies the fallacy of reification: “the error of treating as a ‘real thing’ something which is not a real thing, but merely an idea.”

Self-ownership is in a class with “natural rights” as a condition that somehow inheres in a person by virtue of his status as a human being. I have dealt with “natural rights” at length (e.g., here, here, here, here, and here), and will not repeat myself. The rest of this post takes up self-ownership and desert.

SELF-OWNERSHIP

The argument for self-ownership, as formulated by Robert Nozick, goes like this (according to R.N. Johnson’s summary of the political philosophy of Robert Nozick):

The self-ownership argument is based on the idea that human beings are of unique value. It is one way of construing the fundamental idea that people must be treated as equals. People are “ends in themselves”. To say that a person is an end in herself is to say that she cannot be treated merely as a means to some other end. What makes a person an end is the fact that she has the capacity to choose rationally what she does. This makes people quite different from anything else, such as commodities or animals. The latter can be used by us as mere means to our ends without doing anything morally untoward, since they lack the ability to choose for themselves how they will act or be used. Human beings, having the ability to direct their own behavior by rational decision and choice, can only be used in a way that respects this capacity. And this means that people can’t be used by us unless they consent.

The paradigm of violating this requirement to treat people as ends in themselves is thus slavery. A slave is a person who is used as a mere means, that is, without her consent. That is, a slave is someone who is owned by another person. And quite obviously the reverse of slavery is self-ownership. If no one is a slave, then no one owns another person, and if no one owns another person, then each person is only owned by herself. Hence, we get the idea that treating people as ends in themselves is treating them as owning themselves.

In summary:

1. I own myself because I am capable of making rational choices for myself.

2. If someone else “uses” me without my consent (e.g., enslaves me or steals food from me), he is denying my self-ownership.

3. Therefore, when someone else “uses” me he is treating me as a means to an end; whereas, I am an end in myself because I own myself.

Oops. I went in a circle. I own myself; therefore, I cannot be used by someone else, because I own myself.

Nozick’s proposition amounts to nothing more than the assertion that everyone must act from the same principle. Immanuel Kant made essentially the same assertion in his categorical imperative:

Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end.

Well, what if the person making that statement believes that his end is to be a slave-owner — and that he has the power to make me a slave?

The fact is that people, all too often, do not act according to Nozick’s or Kant’s imperatives. As Dr. Johnson said, I refute it thus: Look around you. Rights are a social construct. They exist only to the extent that they are reciprocally recognized and enforced. There are very good reasons that rights should be only negative ones (here and here, for example). But those reasons do not trump the realities of human nature (follow the links in the final paragraph of the introduction).

The concept of self-ownership, as with many ideals, arises from the ideal world of “ought” instead of the real world of “is.”

DESERT

Desert is a more infuriating concept than self-ownership. Self-ownership, at least, is an attribute which supposedly inheres in me by virtue of my humanity. (That it does not inhere in me can be seen readily by looking at my 1040, my real-estate tax bill, and the myriad federal, State, and local regulations that govern my behavior and transactions with others.) Desert, on the other hand, is mine only if someone else says that it is.

The Wikipedia article about desert gives this illustration:

In ordinary usage, to deserve is to earn or merit a reward; in philosophy, the distinction is drawn in the term desert to include the case that that which one receives as one’s just deserts may well be unwelcome, or a reward. For example, if one scratches off a winning lottery ticket, one may be entitled to the money, but one does not necessarily deserve it in the same way one would deserve $5 for mowing a lawn, or a round of applause for performing a solo.

Whether or not one “deserves” one’s lottery winnings depends arbitrarily on who is making the judgment. The arbitrariness is readily seen in the opposing views of Rawls and Nozick (from the same article):

One of the most controversial rejections of the concept of desert was made by the political philosopher John Rawls. Rawls, writing in the mid to late twentieth century, claimed that a person cannot claim credit for being born with greater natural endowments (such as superior intelligence or athletic abilities), as it is purely the result of the ‘natural lottery’. Therefore, that person does not morally deserve the fruits of his or her talents and/or efforts, such as a good job or a high salary. However, Rawls was careful to explain that, even though he dismissed the concept of moral Desert, people can still legitimately expect to receive the benefits of their efforts and/or talents. The distinction here lies between Desert and, in Rawls’ own words, ‘Legitimate Expectations’.[1]

Rawls'[s] remarks about natural endowments provoked an often-referred response by Robert Nozick. Nozick claimed that to treat peoples’ natural talents as collective assets is to contradict the very basis of the deontological liberalism Rawls wishes to defend, i.e. respect for the individual and the distinction between persons.[2] Nozick argued that Rawls’ suggestion that not only natural talents but also virtues of character are undeserved aspects of ourselves for which we cannot take credit, “can succeed in blocking the introduction of a person’s autonomous choices and actions (and their results) only by attributing everything noteworthy about the person completely to certain sorts of ‘external’ factors. So denigrating a person’s autonomy and prime responsibility for his actions is a risky line to take for a theory that otherwise wishes to buttress the dignity and self-respect of autonomous beings.”[3]

Jonathan Pearce, writing at samizdata.net blog, sorts it out:

[T]he idea of “deserving” poor or “undeserving” rich is, in my view, loaded with ideological significance, depending on who is using the term. Clearly, people feel a lot more relaxed about handing out money – either from a charity or from a government department – to people who are down on their luck but of good character, than they are about handing it out to the feckless. Similarly, it follows that there is more support for taxing supposedly “undeserved” wealth than “earned” wealth. The trouble with such words, of course, as has been shown by FA Hayek in his famous demolition of payment-by-merit in The Constitution of Liberty, is who gets to decide whether our circumstances came about due to “desert” or not. Such a person would have to have the foresight of a god. It is, as Hayek argued, impossible to do this without some omnipotent authority being able to weigh up a person’s potential, and then being able to measure whether that person, in the face of a vast array of alternatives, made the most of that potential. (“Desert according to whom?“)

Rawls and his fellow travelers (who are usually found on the left) simply cannot stand the idea of individual differences, and so they attribute them to “luck.” The idea of luck, as I have said elsewhere, “is mainly an excuse and rarely an explanation. We prefer to apply ‘luck’ to outcomes when we don’t like the true explanations for them.” In the case of desert, the idea of luck is used as an excuse for redistribution, even though it is an inadequate explanation for variations in economic and social outcomes.

I am “lucky” because I was born with above-average intelligence. I did not earn it, it just happened to me. So what? I had to do something with it, right? And I did do something with it, but not as much as I could have, because I lacked the temperamental qualities required to pursue great wealth and political power. I chose, instead, to earn just enough to enable an early retirement, which is comfortable but far from lavish. I could just as easily have chosen to earn less than I did.

There are many, many, many individuals whose IQs are lower than mine but whose earnings far exceed mine, and whose abodes make mine look like a shack. Do I begrudge them their earnings and lavish living? Not a bit. Not even if they are dumb-as-doorknob Hollywood “liberals” whose idea of an intellectual conversation is to tell each other that Bush is a Nazi.

By the same token, there are a lot of individuals whose IQs are higher than mine, and I am willing to bet that some of them did not do as well financially as I did. So what? Should they have done better than me just because they have higher IQs? Where is that rule written? I will wager that there’s not a Democrat to be found who would subscribe to it.

Everyone deserves what they earn as long as they earn it without resorting to fraud, theft, or coercion. Members of Congress, by the way, resort to coercion when it comes to paying themselves. Yes, there is a constitutional provision that congressional raises can’t take effect until an election of Representatives has intervened, but incumbents are almost certain of re-election, and most incumbents run for re-election. The constitutional provision is mere window-dressing.

Back to the topic at hand. Tell me again why I am where I am because of luck. I had to do something with my genetic inheritance. I did what I wanted to do, which was not as much as I might have done. Others, less “lucky” than me, did more with their genetic inheritance. And others, more “lucky” than me, did less with theirs.

Well, I could go on in the same vein about looks, athletic skills, skin color, parents’ wealth, family connections, and all the rest. But I think you get the picture. “Luck” is a starting point. Where we end up depends on what we do with our “luck”.

Not so fast, you say. What about family connections? Suppose Smedley Smythe’s father, who owns General Junk Foods Incorporated, makes Smedley the CEO of GJFI and pays him $1 million a year. If Smythe senior is the sole owner of the company, that is his prerogative. The million is coming out of his hide or, if consumers are willing to pay higher prices to defray the million, out of consumers’ pockets. But no one is forcing consumers to buy things from GJFI; if its prices are too high, consumers will turn elsewhere and Smythe senior will rue his nepotism. Suppose GJFI is a publicly owned company? In the end, it amounts to the same thing; if the nepotism hurts the bottom line, its shareholders should rebel. If it doesn’t, well…

Now what about those who are born poor, who are not especially bright, good looking, or athletic, and who are, say, black rather than white? Do they deserve what they earn? The hard, cold answer is “yes” — if what they earn is earned without benefit of fraud, theft, or coercion. Why should I want to pay you more because of the circumstances of your birth, your IQ, your looks, your athleticism, or your skin color? What matters is what you can do for me and how much I am willing to pay for it.

But what about individuals who are poor because they have been unable to “rise above” their genetic inheritance and family circumstances? What about individuals who are poor because they have incurred serious illnesses or have been severely injured? What about individuals who didn’t save enough to support themselves in their old age? And on and on.

Those seem like hard questions, but there is a straightforward answer to them. Such individuals may be helped legitimately, by private parties. As I say here,

Every bad thing that happens to an individual is a bad thing for that individual. Whether it is a thing that calls for action by another individual is for that other individual (or a group of them acting in concert) to decide on the basis of love, empathy, conscience, specific obligation, or rational calculation about the potential consequences of the bad thing and of helping or not helping the person to whom it has happened….

There is no universal social-welfare function. Therefore, it is up to the potential alms-giver to give or not, based on his knowledge and preferences. No third party is in a moral position to make that choice or to prescribe the criteria for making it. Governments have the power to force a choice other than the one that the potential alms-giver would make, but power is not morality.

Charity is a voluntary act that one commits without a sense of obligation; one helps one’s family, friends, neighbors, etc., out of love, affection, empathy, or other social bond. The fact that charity may strengthen a social bond and heighten the benefits flowing from it is an incidental fact, not a consideration. Duty, on the other hand, arises from specific obligations, formal or informal. These include the obligations of parent to child, teacher to pupil, business partner to business partner, and the like. Charity can be mistaken for duty only in the mind of a philosopher for whom love, affection, and individuality are alien concepts.

What happens, instead, is that individuals — whether needy or not — are helped illegitimately through coercive government programs that draw on free-floating guilt, political opportunism, and economic illiteracy.

Except for criminals and “public servants,” we deserve what we inherit (or do not), what we earn (or do not), what comes to us by chance (or does not), and what is given to us voluntarily (or is not).

By what divine right do John Rawls and his followers make judgments about who is deserving and who is not? The “veil of ignorance” is a smokescreen for redistribution under the pretext of omniscience.

CONCLUSION

Self-ownership and desert belong in the pantheon of empty concepts, along with altruism.

Evolution and the Golden Rule

Famed biologist E.O. Wilson has recanted the evolutionary theory of kin selection:

apparent strategies in evolution that favor the reproductive success of an organism’s relatives, even at a cost to their own survival and/or reproduction.

Here is an explanation of Wilson’s change of mind:

Wilson said he first gave voice to his doubts in 2004, by which point kin selection theory had been widely accepted as the explanation for the evolution of altruism. “I pointed out that there were a lot of problems with the kin selection hypothesis, with the original Hamilton formulation, and with the way it had been elaborated mathematically by a very visible group of enthusiasts,” Wilson said. “So I suggested an alternative theory.”

The alternative theory holds that the origins of altruism and teamwork have nothing to do with kinship or the degree of relatedness between individuals. The key, Wilson said, is the group: Under certain circumstances, groups of cooperators can out-compete groups of non-cooperators, thereby ensuring that their genes — including the ones that predispose them to cooperation — are handed down to future generations. This so-called group selection, Wilson insists, is what forms the evolutionary basis for a variety of advanced social behaviors linked to altruism, teamwork, and tribalism — a position that other scientists have taken over the years, but which historically has been considered, in Wilson’s own word, “heresy.” (“Where does good come from?” in The Boston Globe online, April 17, 2011)

I will concede a role for evolution in the development of human behavioral norms. But, as I say in “Evolution, Human Nature, and ‘Natural Rights’,”

The painful truth that vast numbers of human beings — past and present — have not acted and do not act as if there are “natural rights” suggests that the notion of “natural rights” is of little practical consequence….

Even if humans are wired to leave others alone as they are left alone, it is evident that they are not wired exclusively in that way.

Cooperative behavior is a loosely observed norm, at best. (For the benefit of “liberals,” I must point out that cooperation can only be voluntary; state-coerced “cooperation” is dictated by force.) Cooperation, such as it is, probably occurs for the reasons I give in “The Golden Rule and the State“:

I call the Golden Rule a natural law because it’s neither a logical construct … nor a state-imposed one. Its long history and widespread observance (if only vestigial) suggest that it embodies an understanding that arises from the similar experiences of human beings across time and place. The resulting behavioral convention, the ethic of reciprocity, arises from observations about the effects of one’s behavior on that of others and mutual agreement (tacit or otherwise) to reciprocate preferred behavior, in the service of self-interest and empathy. That is to say, the convention is a consequence of the observed and anticipated benefits of adhering to it.

I must qualify the term “convention,” to say that the Golden Rule will be widely observed within any group only if the members of that group are generally agreed about the definition of harm, value kindness and charity (in the main), and (perhaps most importantly) see that their acts have consequences. If those conditions are not met, the Golden Rule descends from convention to admonition.

Is the Golden Rule susceptible of varying interpretations across groups, and is it therefore a vehicle for moral relativism? I say “yes,” with qualifications. It’s true that groups vary in their conceptions of permissible behavior. For example, the idea of allowing, encouraging, or aiding the death of old persons is not everywhere condemned, and many recognize it as an inevitable consequence of a health-care “system” that is government-controlled (even indirectly) and treats the delivery of medical services as a matter of rationing…. Infanticide has a long history in many cultures; modern, “enlightened” cultures have simply replaced it with abortion. Slavery is still an acceptable practice in some places, though those enslaved (as in the past) usually are outsiders. Homosexuality has a long history of condemnation and occasional acceptance. To be pro-homosexual — and especially to favor homosexual “marriage” — has joined the litany of “causes” that signal leftist “enlightenment,” along with being for abortion and against the consumption of fossil fuels (except for one’s SUV, of course).

The foregoing recitation suggests a mixture of reasons for favoring or disfavoring certain behaviors. Those reasons range from purely utilitarian ones (agreeable or not) to status-signaling. In between, there are religious and consequentialist reasons, which are sometimes related. Consequentialist reasoning goes like this: Behavior X can be indulged responsibly and without harm to others, but there lurks the danger that it will not be, or that it will lead to behavior Y, which has repercussions for others. Therefore, it’s better to put X off-limits or to severely restrict and monitor it. Consequentialist reasoning applies to euthanasia (it’s easy to slide from voluntary to involuntary acts, especially when the state controls the delivery of medical care), infanticide and abortion (forms of involuntary euthanasia and signs of disdain for life), homosexuality (a depraved, risky practice that can ensnare impressionable young persons who see it as an “easy” way to satisfy sexual urges), alcohol and drugs (addiction carries a high cost, for the addict, the addict’s family, and sometimes for innocent bystanders). A taste or tolerance for destructive behavior identifies a person as an untrustworthy social partner.

It seems to me that the exceptions listed above are just that. There’s a mainstream interpretation of the Golden Rule — one that still holds in many places — which rules out certain kinds of behavior, except in extreme situations, and permits certain other kinds of behavior. There is, in other words, a “core” Golden Rule that comes down to this:

  • Murder is wrong, except in self-defense. (Capital punishment is just that: punishment. It’s also a deterrent to murder. It isn’t “murder,” muddle-headed defenders of baby-murder to the contrary notwithstanding.)
  • Various kinds of unauthorized “taking” are wrong, including theft (outright and through deception). (This explains popular resistance to government “taking,” especially when it’s done on behalf of private parties. The view that it’s all right to borrow money from a bank and not repay it arises from the mistaken beliefs that (a) it’s not tantamount to theft and (b) it harms no one because banks can “afford it.”)
  • Libel and slander are wrong because they are “takings” by word instead of deed.
  • It is wrong to turn spouse against spouse, child against parent, or friend against friend. (And yet, such things are commonly portrayed in books, films, and plays as if they are normal occurrences, often desirable ones. And it seems to me that reality increasingly mimics “art.”)
  • It is right to be pleasant and kind to others, even under provocation, because “a mild answer breaks wrath: but a harsh word stirs up fury” (Proverbs 15:1).
  • Charity is a virtue, but it should begin at home, where the need is most certain and the good deed is most likely to have its intended effect.

None of these observations would be surprising to a person raised in the Judeo-Christian tradition, or even in the less vengeful branches of Islam. The observations would be especially unsurprising to an American who was raised in a rural, small-town, or small-city setting, well removed from a major metropolis, or who was raised in an ethnic enclave in a major metropolis. For it is such persons and, to some extent, their offspring who are the principal heirs and keepers of the Golden Rule in America.

There is far more to human behavior than biological and evolutionary determinism. (Not that Wilson is guilty of that, but many others are.) It is especially simplistic to rely on biological and evolutionary explanations of the particular subset of behavioral rules known as “rights.” For the final word on that point, I return to “Evolution, Human Nature, and ‘Natural Rights'”:

[T]he Golden Rule represents a social compromise that reconciles the various natural imperatives of human behavior (envy, combativeness, meddlesomeness, etc.). Even though human beings have truly natural proclivities, those proclivities do not dictate the existence of “natural rights.” They certainly do not dictate “natural rights” that are solely the negative rights of libertarian doctrine. To the extent that negative rights prevail, it is as part and parcel of the “bargain” that is embedded in the Golden Rule; that is, they are honored not because of their innateness in humans but because of their beneficial consequences.

Is College for Everyone?

Of course not. But don’t tell that to Obamanauts and other purveyors of what is mistakenly taken for compassionate wisdom these days.

This is from my post, “The Higher Education Bubble”:

When I entered college [in 1958], I was among the 28 percent of high-school graduates then attending college. It was evident to me that about half of my college classmates didn’t belong in an institution of higher learning. Despite that, the college-enrollment rate among high-school graduates has since doubled.

Here is a recent view from the front lines of higher education in the United States:

America, ever-idealistic, seems wary of the vocational-education track. We are not comfortable limiting anyone’s options. Telling someone that college is not for him seems harsh and classist and British, as though we were sentencing him to a life in the coal mines. I sympathize with this stance; I subscribe to the American ideal. Unfortunately, it is with me and my red pen that that ideal crashes and burns.

Sending everyone under the sun to college is a noble initiative. Academia is all for it, naturally. Industry is all for it; some companies even help with tuition costs. Government is all for it; the truly needy have lots of opportunities for financial aid. The media applauds it—try to imagine someone speaking out against the idea. To oppose such a scheme of inclusion would be positively churlish. But one piece of the puzzle hasn’t been figured into the equation, to use the sort of phrase I encounter in the papers submitted by my English 101 students. The zeitgeist of academic possibility is a great inverted pyramid, and its rather sharp point is poking, uncomfortably, a spot just about midway between my shoulder blades.

For I, who teach these low-level, must-pass, no-multiple-choice-test classes, am the one who ultimately delivers the news to those unfit for college: that they lack the most-basic skills and have no sense of the volume of work required; that they are in some cases barely literate; that they are so bereft of schemata, so dispossessed of contexts in which to place newly acquired knowledge, that every bit of information simply raises more questions. They are not ready for high school, some of them, much less for college. (“In the Basement of the Ivory Tower,” The Atlantic, June 2008; h/t Maverick Philosopher)

Perhaps the higher-education bubble is about to burst. A serious effort to reduce government spending would surely lead to the reduction of tax subsidies to state colleges and universities. Or so one can hope.

The Evil That Is Done with Good Intentions

Social Security, Medicare, and Medicaid do several bad things at once:

  • They crowd out prospective providers of retirement funds, medical insurance, and medical care.
  • They create “moral hazard” by lulling people into the false belief that they will be well taken care of in their old age, thereby making it less likely that they will put aside money for their old age.
  • They therefore cause under-saving and, thus, under-investment in those things upon which economic growth depends: innovation and business creation.

If growth were not hobbled, there would be far fewer people in need of welfare programs and far more money available for voluntary assistance to those who truly cannot care for themselves.

Related posts:
Economic Growth since WWII
A Social Security Reader
The Price of Government
The Commandeered Economy
Rationing and Health Care
The Perils of Nannyism: The Case of Obamacare
The Price of Government Redux
More about the Perils of Obamacare
Health-Care Reform: The Short of It
The Mega-Depression
Presidential Chutzpah
As Goes Greece
The Real Burden of Government
Toward a Risk-Free Economy
The Rahn Curve at Work
The Illusion of Prosperity and Stability
The “Forthcoming Financial Collapse”
Estimating the Rahn Curve: Or, How Government Inhibits Economic Growth
The Deficit Commission’s Deficit of Understanding
Undermining the Free Society
The Bowles-Simpson Report
The Bowles-Simpson Band-Aid
Build It and They Will Pay
Government vs. Community
The Stagnation Thesis

Does World War II “Prove” Keynesianism?

In “How the Great Depression Ended,” I say that

World War II did bring about the end of the Great Depression, not directly by full employment during the war but because that full employment created a “glut” of saving. After the war that “glut” jump-started

  • capital spending by businesses, which — because of FDR’s demise — invested more than they otherwise would have; and
  • private consumption spending, which — because of the privations of the Great Depression and the war years — would have risen sharply regardless of the political climate.

That analysis is by no means an endorsement of simple-minded Keynesianism (as propounded by Paul Krugman, for example), which holds that the government can spend the economy out of a recession or depression, if only it spends “enough” (which is always more than it actually spends). But there is no point in pumping additional money into an economy unless the money elicits productive endeavors: business creation and expansion, leading to net capital formation and job creation.

Pumping additional money into government programs results in the misdirection of resources, at best, and in the discouragement of productive private activity, at worst. Discouragement takes two forms: crowding-out and active interference (usually through regulatory inhibitions).

The answer to the question of this post’s title is that World War II has nothing to do with Keynes or Keynesianism, as it is widely understood. Employment and output (measured in dollars) rose sharply during World War II, but most of the additional output was devoted to the war effort. Huge increases in government spending did not lead to huge increases in the material well-being of Americans, most of whom were working harder while being deprived of the fruits of their labors, through rationing.

If anything, the post-war recovery “proves” the folly and wastefulness of efforts to stimulate an economy through government spending. It was not government spending that re-started the U.S. economy after World War II, it was private spending on capital investments and consumer goods. Some of that private spending was encouraged by the end of regime uncertainty. That end was brought about by the curtailment of New Deal initiatives (until the 1960s) because of the war and FDR’s death. Private spending — which was boosted by wartime saving — would have been purely inflationary had businesses not been willing and able to create jobs and expand output.

Rating America’s Wars

In “Why We Should (and Should Not) Fight” I say that

American armed forces should be used only to preserve, protect, and defend the interests of Americans.

I ended that post with an assessment of the engagements in Iraq, Afghanistan, and Libya. But what about earlier American wars? Here are my thumbnail assessments of them (the dates indicate years in which U.S. forces were involved in combat):

Indian Wars (1637-1918). This long, episodic series of conflicts with Native Americans was justified when the purpose was to defend Americans and justly condemned when the purposes were genocide and theft of Indian lands by force or fraud. There is probably much more to be ashamed of than to be proud of in the history of the Indian Wars.

Revolutionary War (1775-1783). The struggle for self-government deserves praise whether the motivation was liberty in general or the economic interests of colonial planters, merchants, and manufacturers. The latter is a subset of the former, and the outcome of the war served both ends. In that regard, many of the leaders of the armed struggle also became prominent figures in the establishment of the Articles of Confederation and Constitution. Both documents were aimed at preserving and extending the liberty for which the revolution was waged.

War of 1812 (1812-1815). A leading cause of this war was the imposition by Britain of restrictions intended to impede American commerce with France. That, alone, would have justified the war if Britain could not be dissuaded by peaceful means, which it could not be. The U.S. had other legitimate grievances: impressment of American sailors into the British navy and British support of Indian raids in the Northwest Territory. The War of 1812 was, in effect, a belated and creditable resumption of the Revolutionary War.

Mexican-American War (1846-1848). The proximate cause of the war was the attempt by Mexico to retake Texas, which had won independence from Mexico in 1836 and annexed itself to the United States in 1845. The resulting war enabled the U.S. to acquire from Mexico — for $18,250,000 — land that is now California, Nevada, Utah, New Mexico, most of Arizona and Colorado, and parts of Texas, Oklahoma, Kansas, and Wyoming. The U.S. was right to prosecute the war and entirely reasonable about the terms and conditions for resolving it.

Civil War (1861-1865). The war that is still being fought (with words) by many Americans pitted the morally reprehensible Southern defenders of slavery against Northerners, led by Abraham Lincoln, who hewed to the dubious proposition that secession is impermissible under the Constitution. The Civil War can be justified only in that it ended slavery in the United States, which was not Mr. Lincoln’s original aim in prosecuting it.

Spanish-American War (1898). This unnecessary war was fought on the excuse of Spanish atrocities in Cuba and the still-mysterious sinking of the USS Maine in Havana Harbor. It was in fact an exercise in imperialism through which the U.S. acquired the dubious honor of controlling Cuba, Puerto Rico, Guam, and the Philippines — altogether more trouble than they were worth. It is especially galling that Theodore Roosevelt rode the Spanish-American War to fame, and eventually to the imperial presidency.

World War I (1917-1918). The immediate cause of the entry of the United States into this war was German acts of belligerence — sabotage and the sinking of U.S. merchant ships. Those acts were aimed at preventing the U.S. from selling war supplies to Britain. Germany, in other words, was sorely provoked, and the U.S. government could not realistically claim to be a neutral party in what was really a European war, with Asian and African sideshows involving opportunistic attacks on German interests in those regions. Had the U.S. stayed neutral and avoided war, Germany might have won, though a stalemate was more likely. In either event, an exhausted Germany would hardly have been a threat to the U.S., and might even have welcomed trade with the U.S. as it rebuilt in the post-war years. All of this was lost in the anti-German hysteria of the time, which played well to the super-majority of Americans whose roots were in the British Isles. It is pure hindsight to say that a victorious or stalemated Germany probably would not have produced the Third Reich, but true nevertheless. America’s entry into World War I was a mistake, in any event, but it turned out to be a horrendously costly one.

World War II (1941-1945). While Anglo-American and French politicians pursued the illusion that peace could be maintained through diplomacy and treaties, Adolf Hitler and Japan’s military caste pursued dominion through conquest. The Third Reich and Empire of the Rising Sun failed to dominate the world only because of (a) Hitler’s fatal invasion of Russia, (b) Japan’s wrong-headed attack on Pearl Harbor, and (c) the fact that the United States of 1941 had time and space on its side. Had the latter not been true, Americans could well have found themselves cut off from the world — and much the poorer for it — if not enslaved. World War II clearly ranks just behind the War of 1812 as the most necessary war in America’s post-Revolutionary history.

Cold War (1947-1991). This necessary, long, and costly “war” of deterrence through preparedness enabled the U.S. to protect Americans’ legitimate economic interests around the world by limiting the expansion of the Soviet empire. The Cold War had some “hot” moments and points of high drama. Perhaps the most notable of them was the so-called Cuban Missile Crisis of 1962, which was not the great victory proclaimed by the Kennedy administration and its political and academic sycophants. (For more on this point, go here and scroll down to the section on Kennedy.) That the U.S. won the Cold War because the USSR’s essential bankruptcy was exposed by Ronald Reagan’s defense buildup is a fact that only left-wingers and dupes will deny. They continue to betray their doomed love of communism by praising the hapless Mikhail Gorbachev for doing the only thing he could do in the face of U.S. superiority: surrender and sunder the Soviet empire. America’s Cold War victory owes nothing to LBJ (who wasted blood and treasure in Vietnam), Richard Nixon (who would have sold his mother for 30 pieces of silver), or Jimmy Carter (whose love for anti-American regimes and rebels knows no bounds).

Korean War (1950-1953). The Korean War was unnecessary, in that it was invited by the Truman administration’s policies: exclusion of Korea from the Asian defense perimeter and massive cuts in the U.S. defense budget. But it was essential to defend South Korea so that the powers behind North Korea (Communist China and, by extension, the USSR) would grasp the willingness of the U.S. to maintain a forward defensive posture against aggression. That signal was blunted by Truman’s decision to sack MacArthur when the general persisted in his advocacy of attacking Chinese bases following the entry of the Chinese into the war. The end result was a stalemate, where a decisive victory might have broken the back of communistic adventurism around the globe. The Korean War, as it was fought by the U.S., became “a war to foment war.”

Vietnam War (1965-1973). Whereas the Korean War was a necessary war against communist expansionism, the Vietnam War was an unnecessary entanglement in a civil war in which one side happened to be communist. Nevertheless, the U.S., having made a costly commitment to the prosecution of the war, should have fought it to victory. Instead, unlike the case of Korea, U.S. forces were withdrawn and it took little time for North Vietnam to swallow South Vietnam. American resolve suffered a body blow, from which it rebounded only partially by winning the Cold War, thanks to Reagan’s defense buildup in the 1980s. When it came to actual warfare, however, Vietnam repeated and reinforced the pattern of compromise and retreat that had begun with the Korean War, and which eventuated in the 9/11 attacks.

Gulf War (1990-1991). This war began with Saddam Hussein’s invasion of oil-rich Kuwait. U.S. action to repel the invasion was fully justified by the potential economic effects of Saddam’s capture of Kuwait’s petroleum reserves and oil production. The proper response to Saddam’s aggression would have been not only to defeat the Iraqi army but also to depose Saddam. The failure to do so further reinforced the pattern of compromise and retreat that had begun in Korea, and necessitated the long, contentious Iraq War of the 2000s.

The quick victory in Iraq, coupled with the coincidental end of the Cold War, helped to foster a belief that the peace had been won. (That belief was given an academic imprimatur in Francis Fukuyama’s The End of History and the Last Man.) The stage was set for Clinton’s much-ballyhooed fiscal restraint, which was achieved by cutting the defense budget. Clinton’s lack of resolve in the face of terrorism underscored the evident unwillingness of American “leaders” to defend Americans’ interests, thus inviting 9/11.  (For more about Clinton’s foreign and defense policy, go here and scroll down to the section on Clinton.)

Which leads us back to the wars and skirmishes of the 21st century.

Joe Stiglitz, Ig-Nobelist

This reminds me of these:

Joe Stiglitz, Ig-Nobelist

Taxing the Rich

More about Taxing the Rich

Social Justice

More Social Justice

The Public-School Swindle

I have a relative by marriage who’s a retired public-school teacher. She loved to moan about her “low” pay. She wasn’t alone, of course. Her refrain has been heard throughout the land for decades. Truth be told, however, she and her ilk were and are overpaid, as several commentators have explained (e.g., here, here, and here). The following diagram illustrates the machinations that yield above-market compensation for public-school teachers and other “public servants”.

Here’s a step-by-step explanation:

1. The diagonal, solid-black lines represent the demand for teachers in the absence of tax-funded (public) schools (D-no pu) and the supply of teachers in the absence of tax-funded schools (S-no pu). The intersection of the S and D curves yields the level of teacher compensation (C-no pu) and employment (E-no pu) that would result were there nothing but private schools. (I am, for now, putting aside the question whether government should require school attendance through a certain age or grade, or dictate what is taught in schools.)

2. The picture changes dramatically with the introduction of tax-funded schools (indicated by the red lines). The supply of teachers for public schools (Spu) is to the left of S-no pu because (a) not all teachers are willing to work in public schools and (b) not all teachers are “qualified” to teach in public schools. The second condition arises when potential teachers have learned too much about the subjects they would teach, at the expense of taking too few (or none) of the “education” courses that enable fairly dim education majors to compile inflated grade-point averages.

3. The horizontal, solid red line indicates the inflated compensation (Cpu) that is offered by tax-funded school systems. This above-market rate of compensation is the product of an inter-scholastic “arms race”, in which school systems — goaded by administrators, teachers, parents, and (often) local businesspersons — seek to outdo the lavishness of other school systems, not only in the compensation of teachers and administrators but also in the number and kinds of non-essential courses and activities, and the lavishness and modernity of facilities and equipment. All of which is paid for (in the main) by taxpayers and consumers who have no say in the matter, but whose income and property can be seized for failure to pay the requisite taxes.

4. Not surprisingly, there are more teachers who are willing to work at public-school rates of compensation than public schools can hire (Epu), even with their inflated budgets. That is why some teachers turn to private schools, others accept substitute-teaching jobs, and some end up doing things like selling used cars. The green lines represent the supply of (Spr) and demand for (Dpr) private-school teachers, and the corresponding compensation of (Cpr) and number of teachers employed by (Epr) private schools.

5. The supply of teachers to private schools consists of (a) those teachers who cannot get jobs with public schools but are willing to teach in private schools and (b) those teachers who abhor the thought of teaching in public schools and are therefore willing to accept lower compensation for the privilege of teaching in private schools. The compensation of private-school teachers is lower than that of public-school teachers because

  • the compensation of public-school teachers is artificially inflated by the vast amounts of tax money extracted from persons who would not otherwise be in the market for education, let alone public-school education, and
  • the vastness of the tax burden limits the ability of persons who are in the market for education to pay for private schooling; that is, it artificially reduces the demand for private schooling.

Because of the subsidization of public schools, there are far more teachers than would be the case in an entirely private system. Advocates of tax-funded education would count that as a plus, as they would the above-market wages of public-school teachers. In fact, it is a minus, because it means that resources are being diverted to less productive uses than they would be were education an entirely private matter. Moreover, mediocre teachers and administrators — often outfitted with lavish facilities and equipment — are being paid more than necessary to “educate” children in useless subjects, at the expense of taxpayers who could put that money to work providing better homes, relevant training, and more jobs for those same children.
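The supply-and-demand mechanics described in the numbered steps above can be sketched numerically. The following is a minimal illustration of my own devising, not from the original post: the linear supply and demand curves and all of the numbers are assumed purely for illustration. It shows how an above-market compensation level (Cpu in the diagram) produces an excess supply of teachers.

```python
# A minimal numerical sketch of the wage-floor effect described in the
# numbered steps above. The linear curves and all numbers are assumed
# for illustration only.

def demand(wage):
    """Teachers demanded (in thousands) at a given wage: D = 100 - wage."""
    return max(0.0, 100.0 - wage)

def supply(wage):
    """Teachers willing to work (in thousands) at a given wage: S = wage - 20."""
    return max(0.0, wage - 20.0)

# Free-market equilibrium: 100 - w = w - 20, so w = 60 and employment = 40.
w_eq = 60.0
assert demand(w_eq) == supply(w_eq) == 40.0

# An above-market public-school compensation level (Cpu in the diagram).
w_pub = 75.0
excess = supply(w_pub) - demand(w_pub)  # would-be teachers who can't be hired

print(f"market wage: {w_eq}, employment: {demand(w_eq)}")
print(f"at Cpu = {w_pub}: supply = {supply(w_pub)}, "
      f"hired = {demand(w_pub)}, excess = {excess}")
```

With these assumed numbers, the above-market wage of 75 attracts 55 (thousand) would-be teachers while only 25 (thousand) are hired: the excess of 30 spills into private schools, substitute teaching, and other work at lower pay, as steps 4 and 5 describe.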

This analysis undoubtedly applies to higher education as well as K-12 education. The presence of tax-funded colleges and universities unnecessarily drives up the cost of higher education and burdens many persons who derive no benefit from it.

In summary, public “education” — at all levels — is not just a rip-off of taxpayers, it is also an employment scheme for incompetents (especially at the K-12 level) and a paternalistic redirection of resources to second- and third-best uses. And, to top it off, public education has led to the creation of an army of left-wing zealots who, for many decades, have inculcated America’s children and young adults in the advantages of collective, non-market, anti-libertarian institutions, where paternalistic “empathy” supplants personal responsibility.


Related reading:
Mark J. Perry, “The Public-Sector Premium for School Teachers”, Carpe Diem, March 3, 2011
Ironman, “How Much Do Public-School Teachers Really Make Compared to Private-School Teachers?”, Political Calculations, March 30, 2017
Andrew J. Biggs, “No, Teachers Are Not Underpaid”, City Journal, April 26, 2018

Related posts:
School Vouchers and Teachers’ Unions
Whining about Teachers’ Pay: Another Lesson about the Evils of Public Education
I Used to Be Too Smart to Understand This
International Law vs. Homeschooling
GIGO
Religion in Public Schools: The Wrong and Right of It
The Home Schooler Threat?
The Real Burden of Government
The Higher Education Bubble
Our Miss Brooks
“Intellectuals and Society”: A Review

Why We Should (and Should Not) Fight

G.W. Bush’s decision to invade Iraq and overthrow Saddam Hussein — a decision that was approved by Congress — was justified on several grounds. One of those grounds was a humanitarian consideration: Saddam’s record as a brutally oppressive dictator.

But humanitarian acts have nothing to do with the interests of Americans, except for the mistaken belief that the “rest of the world” (presumably including our enemies and potential enemies) will think better of the United States for such acts. The belief, as I say, is mistaken. Our foreign enemies and potential enemies see such things as evidence of American softness, when they do not see them as ways of obtaining U.S. weapons for future use against American interests. Our foreign “friends” (the sneer is well-advised) see the humanitarian acts of the U.S. government as one, two, or all of the following: (a) substitutes for their own humanitarian acts, which may accordingly be curtailed or withheld, (b) evidence of America’s “imperial” aims, and (c) evidence of the willingness of Americans to expend lives and treasure, sometimes in vain, for elusive or illusory objectives.

From the point of view of American taxpayers, the commission of humanitarian acts by the U.S. government is almost always a waste of money. (I have elsewhere discussed and dismissed the proposition that such acts are morally superior to the alternative of letting taxpayers decide how best to use their money.) It follows that no military operation can or should be justified solely on the basis of humanitarianism. And yet, that is the essential justification of Obama’s adventure in Libya.

Were Obama to come right out and say that our military involvement in Libya is really aimed at ensuring a continuous flow of petroleum from that country’s wells, refineries, and ports, he would be accused of waging a campaign of “blood for oil.” That, of course, was a leftist rallying cry against Bush’s invasion of Iraq, and Obama — as a man of the left and opponent of the Iraq war — does not want to be painted with the same brush.

Bush, too, sought to avoid the taint of “blood for oil.” But, in reality, it was in the interest of the U.S. (and other nations) to restore the flow of Iraqi oil to (or above) the rate attained before the imposition of UN sanctions.

Nevertheless, political discourse has become so mealy-mouthed since the end of World War II that no American politician dare speak of an economic motivation for the use of military force. And so, American politicians must adopt the language of hypocrisy, cant, and political correctness to justify acts that are either (a) unjustifiable because they are purely humanitarian or (b) fully justifiable as being in the interest of Americans, period.

In sum, American armed forces should be used only to preserve, protect, and defend the interests of Americans. To that end, American armed forces certainly may be used preemptively as well as reactively. And as long as it remains economically advantageous for Americans to import oil from other countries, it will be a legitimate use of American armed forces to defend those imports — at the source and every step of the way to this country. I would say the same about any resource whose importation is vital to the well-being of Americans.

The decision whether to use force to protect Americans and their interests, in any given instance, requires a judgment as to the likely costs, benefits, and success of the venture. For practical purposes, it is the president who makes that judgment, but he is ill-advised to commit armed forces without the backing of Congress. When armed forces have been committed, they should remain committed until the objective has been met, unless it becomes clear — to the president and Congress, the media and protesters to the contrary — that the objective cannot be met without incurring unacceptable costs.

A reversal of course sends a very strong signal to our enemies and potential enemies that America’s leadership is unwilling to do what it takes to protect Americans and their interests. Such a signal, of course, makes all the more likely that someone will act against Americans and their interests.

All of that said, I come to the following conclusions about current military engagements involving American armed forces:

  • Iraq was worth the effort, assuming that a post-withdrawal Iraq remains a relatively stable, oil-producing nation in the midst of surrounding turmoil.
  • Afghanistan is worth only the effort required to destroy its usefulness as an al Qaeda base. If that cannot be achieved, the large-scale U.S. presence in Afghanistan should be scaled back to a special operations force dedicated solely to the detection and destruction of al Qaeda facilities and personnel.
  • Libya is worth only the effort required to ensure that it remains a major oil-exporting nation. Aiding the Libyan rebels is likely to backfire because of the strong possibility that al Qaeda or its ilk will emerge triumphant in a rebel-led post-Gaddafi regime (as seems to be the case in Egypt’s post-Mubarak regime). Given that possibility, the U.S. government should withdraw all support of the NATO operation, with the aim of (a) bringing about the end of that operation or (b) forcing a “willing coalition” of European nations to do what it takes to ensure that a post-Gaddafi regime is no worse than neutral toward the West.

Earlier wars are treated here.

Related posts:
Libertarian Nay-Saying on Foreign and Defense Policy
Libertarianism and Preemptive War: Part I
Right On! For Libertarian Hawks Only
Understanding Libertarian Hawks
More about Libertarian Hawks and Doves
Sorting Out the Libertarian Hawks and Doves
Libertarianism and Preemptive War: Part II
Give Me Liberty or Give Me Non-Aggression?
More Final(?) Words about Preemption and the Constitution
Thomas Woods and War
“Peace for Our Time”
How to View Defense Spending
More Stupidity from Cato
Anarchistic Balderdash
Cato’s Usual Casuistry on Matters of War and Peace
A Point of Agreement
The Unreality of Objectivism
A Grand Strategy for the United States
The Folly of Pacifism

More Social Justice

Matt Zwolinski has a post at Bleeding Heart Libertarians in which he asks “What Is Social Justice?” He offers a couple of specific answers and alludes to others. One of his offerings is something he calls prioritarianism.

Prioritarianism, as I understand it from Zwolinski’s explanation, assumes that (a) the welfare of an individual can be quantified, (b) the welfare of individuals can be summed, (c) the welfare-value of a marginal dollar is inversely proportional to the initial welfare state of the recipient, (d) the inverse relationship is stronger at lower initial welfare-values, and (e) most importantly, in accordance with (b), the welfare gained by the person to whom a marginal dollar is given somehow cancels the welfare lost by the person from whom that dollar is taken.
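Assumptions (a) through (e) can be made concrete with a toy calculation. The following sketch is mine, not Zwolinski’s; the logarithmic welfare function and the incomes are hypothetical, chosen only to exhibit the kind of arithmetic the prioritarian must believe in.

```python
import math

# Toy model of the prioritarian calculus: welfare is assumed to be
# measurable, summable, and concave in income, per assumptions (a)-(e)
# above. The welfare function and the incomes are hypothetical.

def welfare(income):
    """Assumed concave welfare function: each marginal dollar adds less."""
    return math.log(income)

poor, rich = 10_000.0, 100_000.0
total_before = welfare(poor) + welfare(rich)

# Transfer one marginal dollar from the rich person to the poor person.
total_after = welfare(poor + 1.0) + welfare(rich - 1.0)

# Under these assumptions, the poor person's gain exceeds the rich
# person's loss, so summed "welfare" rises -- which is the entire case
# for redistribution on this view.
print(total_after > total_before)  # prints: True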

If this is a valid prescription for “social justice,” it must be capable of implementation. Otherwise, it is no more useful than a map of the Kingdom of Oz.

And who should be in charge of measuring welfare, summing it, and weighing the gains and losses in order to arrive at a socially “just” distribution of income, whatever that is? Well, we know the answer to that question: It has to be the state — or more accurately — elected officials and bureaucrats: people not known for their perspicacity, objectivity, and even-handedness.

In the alternative, a just society could be one where individuals engage in voluntary, cooperative exchanges of goods and services for their mutual betterment, and from the fruits of which they voluntarily aid those whom they know to be in need of aid.

The alternative is inevitably attacked as “unjust.” But it should be noted that such attacks come from individuals (philosophers, politicians, do-gooders, etc.) who would impose their own views of “social justice” on everyone. How any such imposition can be considered more “just” than a regime of voluntary, cooperative, mutually beneficial behavior is beyond me.

I submit that what we now have in the United States is a statist, “prioritarian” regime, with all of the real-life arbitrariness, scheming, and graft that inexorably accompany statism. What we need badly is a reversion to the kind of constitutional order that would allow the alternative to flourish.

Related posts:
Economic Growth since WWII
The Price of Government
The Commandeered Economy
The Price of Government Redux
The Mega-Depression
The Real Burden of Government
Toward a Risk-Free Economy
The Rahn Curve at Work
The Illusion of Prosperity and Stability
Estimating the Rahn Curve: Or, How Government Inhibits Economic Growth

Soros the Bootlegger

In the preceding post I summarized Bruce Yandle’s theory of regulation, which Yandle calls “Baptists and Bootleggers.” The “Baptists” are well-meaning parties who want to protect the public from something that they, the “Baptists,” consider harmful. The “bootleggers” are parties (usually incumbent producers of a product or service) who stand to benefit from regulations that make it difficult or impossible for competition to arise.

The “bootleggers” side of the equation is known as regulatory capture, which “occurs when a … regulatory agency created to act in the public interest instead advances the commercial or special interests that dominate the industry or sector it is charged with regulating.” Regulatory capture is a common phenomenon, and it should be a telling argument for deregulation. It isn’t, of course, because of the all-too-human tendency to believe that with the right people or party in charge of things, capture would vanish. Good luck with that.

Anyway, it seems that George Soros, in addition to his other sins, is a “bootlegger” par excellence. Michael Knox Beran, writing in City Journal (“Exposing the Elites“), begins with this:

In 1997 George Soros, writing in The Atlantic, declared: “The main enemy of the open society, I believe, is no longer the communist but the capitalist threat.”

The words marked the beginning of a decade and a half of plutocratic progressivism. In July 2003, AFL-CIO political director Steven Rosenthal conferred with some of America’s richest tycoons at El Mirador, Soros’s estate in Southampton, to figure out how to defeat George W. Bush. In August 2004, the president of the Service Employees International Union, Andy Stern—the “most important labor boss in America”—traveled to Aspen to plot strategy in a moneyed conclave that included savings and loan moguls Herbert and Marion Sandler, Progressive Insurance founder Peter Lewis, and businessman John Sperling. Warren Buffett, de facto chairman of the country’s billionaires’ club, endorsed the candidacy of presidential aspirant Barack Obama, while the Democracy Alliance, which Matthew Vadum and James Dellinger dub “Billionaires for Big Government,” bankrolled progressive groups like ACORN and the Center for American Progress.

Beran then explains this odd alliance of plutocrats and “progressives”:

Is there something novel in these alliances which, Demos scholar David Callahan observes, have brought some of the nation’s most notable elites together during the last decade to make common cause with some of the country’s most progressive leaders? Hardly: pacts between munificent plutocrats and progressive reformers are one of the oldest tricks in oligarchy’s playbook….

[Henry] James’s and [Lionel] Trilling’s belief that social pity conceals an unacknowledged desire for power finds corroboration in the behavior of today’s elites, who in promoting the ostensibly virtuous cause of social reform are making a shrewd investment in their own continued dominance. Much of today’s big money was made during the extraordinary period of market liberalization that began around 1980 and came to an end with the crash of 2008. In pushing for a revival of the social state, tycoons who benefited from freer markets seek to limit market competition. If they succeed, they will forestall the emergence of a new generation of innovators, young Turks who would otherwise push the old Croesuses aside.

Classic “bootlegger” behavior. And Soros is a classic “bootlegger.” Ed Lasky, writing at American Thinker (“Soros Wins under Obama’s Energy Policies“), makes a good case that Soros is engaged in an act of massive “bootlegging”:

Are Barack Obama’s energy policies influenced by hedge fund billionaire and political patron, George Soros?

Abby Wisse Schacter, in the New York Post, notes that the Obama administration is clamping down on oil and gas development in America (both onshore and offshore) but is hell-bent on helping other nations tap their resources and points out that such help is being showered specifically in New Guinea, of all places.

It is starting to look obvious that the administration doesn’t want oil exploration and extraction at home while it is promoting the same exploration and extraction elsewhere — specifically Brazil and New Guinea….

Others have commented on Obama’s generosity regarding Brazil’s oil wealth and how those actions might help George Soros.

But focus should now turn towards the exotic land of New Guinea.

New Guinea? Why there? Why is he using our taxpayer dollars to help energy development in New Guinea? Hasn’t Secretary of the Interior Salazar bemoaned that his budget is just not large enough to process all the drilling permits submitted for tapping America’s oil and gas wealth? Why are he and the President devoting staff and money to help that undeveloped island nation?

Perhaps, he just wants to pay back George Soros, who was so instrumental in helping his election and the election of fellow Democrats across America. George Soros is the Patron Saint of the Democratic Party and was a very early and generous supporter of Barack Obama’s.  Soros even used a loophole in Federal campaign laws that allowed him and his family to give outsized donations to Barack Obama; he also fielded his army of so-called 527 groups (such as MoveOn.Org) to help Obama win the Oval Office.

Soros also stands to massively benefit if New Guinea becomes an energy power, especially if the American taxpayer subsidizes this development….

We won’t be the beneficiaries from the spending of tax dollars in New Guinea? We may actually be the losers from all that spending.

We have an abundance of natural gas (due to the tapping of our own shale gas reserves); we don’t need LNG. We have such vast amounts of natural gas that ports that were built to import LNG are being reconfigured to export LNG. Why is Obama spending our tax dollars to help a foreign competitor while increasing taxes exponentially on  American oil and gas companies? Why encourage New Guinea to develop its LNG capability to export to China, Japan, and other nations when we can and should export our own LNG to them?

But helping America’s oil and gas industry (and helping lower the energy bills for Americans) is not and never has been on the agenda of Barack Obama.

Obama’s rewarding his friends and donors, who no doubt will reciprocate by supporting him in 2012, is Cook County Politics writ large. That modus operandi has always guided him.

Does his agenda include helping further enrich George Soros, sugar daddy of the Democratic Party?

The “Baptists” in this case are environmentalists and their allies, who’d rather have Americans pay $10 for a gallon of gasoline than run the slightest risk of environmental damage. Well, that’s the excuse, anyway. The fact of the matter is that they’ve been duped into supporting a party that prizes power above all else, and multi-billionaires like George Soros, who profit from that power.

P.S. It’s also possible — and not unlikely — that Soros has a bigger objective than making himself richer: http://www.newsrealblog.com/2011/03/28/communism-loving-george-soros-wants-to-kill-capitalism/

Bootleggers, Baptists, and Pornography

Bruce Yandle’s “Bootleggers and Baptists–The Education of a Regulatory Economist” appeared 28 years ago in Cato Institute’s Regulation (vol 7, no. 3). Yandle explains how he came to the evocative phrase “Bootleggers and Baptists”:

I joined the Council on Wage and Price Stability in 1976. There my assignment was to review proposed regulations from the Environmental Protection Agency (EPA), the Federal Trade Commission (FTC), the Department of Transportation (DOT), and parts of the Department of Health, Education, and Welfare (HEW)…. I was ready to educate the regulators. But then I began to talk with some of them, and I began to hear from people in the industries affected by the rules. To my surprise, many regulators knew quite a bit about economics. Even more surprising was that industry representatives were not always opposed to the costly rules and occasionally were even fearful that we would succeed in getting rid of some of them. It was in considerable confusion that I returned later to my university post, still unable to explain what I had observed and square it with the economics I thought I understood.

That marked the beginning of a new approach to my research on regulation. First, instead of assuming that regulators really intended to minimize costs but somehow proceeded to make crazy mistakes, I began to assume that they were not trying to minimize costs at all — at least not the costs I had been concerned with. They were trying to minimize their costs, just as most sensible people do….

Second, I asked myself, what do industry and labor want from the regulators? They want protection from competition, from technological change, and from losses that threaten profits and jobs. A carefully constructed regulation can accomplish all kinds of anticompetitive goals of this sort, while giving the citizenry the impression that the only goal is to serve the public interest.

Indeed, the pages of history are full of episodes best explained by a theory of regulation I call “bootleggers and Baptists.” Bootleggers, you will remember, support Sunday closing laws that shut down all the local bars and liquor stores. Baptists support the same laws and lobby vigorously for them. Both parties gain, while the regulators are content because the law is easy to administer. Of course, this theory is not new. In a democratic society, economic forces will always play through the political mechanism in ways determined by the voting mechanism employed. Politicians need resources in order to get elected. Selected members of the public can gain resources through the political process, and highly organized groups can do that quite handily. The most successful ventures of this sort occur where there is an overarching public concern to be addressed (like the problem of alcohol) whose “solution” allows resources to be distributed from the public purse to particular groups or from one group to another (as from bartenders to bootleggers).

Where does pornography come in? For a long time, pornography was prohibited, just as alcoholic beverages were (for the most part) during Prohibition. That didn’t stop the production of pornography, of course, but it did reduce the flow of output, making pornography more lucrative — for those willing to buck the law — than it would have been in the absence of prohibition.

It should come as no surprise that — even in this day of government-approved licentiousness — there are members of the porn industry who are critical of the approval of the .xxx domain. According to NewsLime.com,

Internet Corp. for Assigned Names and Numbers (ICANN), the group that supervises the naming system of the Internet, approved .xxx domain for use in pornographic sites. This decision was made amid opposition from porn stars and other people in the industry who contended that the approval will just lead to censorship.

Religious groups also argued that web content of pornographic sites will be legitimized when they are given their own corner of the Internet….

Critics that [sic] include Vivid Entertainment, producer of adult video, and Free Speech Coalition contended that the triple x suffix of the domain would make a virtual section of the Internet that would undermine speech and would eventually lead to censorship.

What the “bootleggers” in the porn industry mean, of course, is that their commercial products will lose value because the .xxx domain will encourage entry into the porn market. Some of the entrants undoubtedly will provide “free samples” in the hope of getting viewers to pay for the more “tantalizing” material that is locked behind paywalls.

The “Baptists” are the religious groups, of course. And they are sincere in their opposition to .xxx, whereas the “bootleggers” are merely cynical in their opposition.

So, there you have it. Another case study in “Bootleggers and Baptists.” For more, read Yandle’s article in its entirety. Also, read Yandle’s “Bootleggers and Baptists in Retrospect” (Regulation, vol. 22, no. 3), which appeared 15 years later.

Positive Liberty vs. Liberty

There is a special kind of liberty known as “positive liberty,” which is inimical to “liberty,” as that term is properly understood. To show why, I begin by expanding on an earlier post, where I offer the following definition of liberty:

peaceful, willing coexistence and its concomitant: beneficially cooperative behavior

Liberty, thus defined, is liberty — full stop. It is neither negative nor positive. It is a modus vivendi that is accepted and practiced by a social group, in keeping with the group’s behavioral norms. There is no liberty if those norms do not include voice and exit, because willing coexistence then becomes problematic. (For a further elaboration, see “On Liberty” and scroll down to “What Liberty Is.”)

However, peaceful, willing coexistence is likely to be found (and perhaps can only be found) where a close-knit social group lives by the Golden Rule:

One should treat others as one would like others to treat oneself….

The Golden Rule can be expanded into two, complementary sub-rules:

  • Do no harm to others, lest they do harm to you.
  • Be kind and charitable to others, and they will be kind and charitable to you.

The first sub-rule — the negative one — is compatible with the idea of negative rights, but it doesn’t demand them. The second sub-rule — the positive one — doesn’t yield positive rights because it’s a counsel to kindness and charity, not a command.

I call the Golden Rule a natural law because it’s neither a logical construct … nor a state-imposed one. Its long history and widespread observance (if only vestigial) suggest that it embodies an understanding that arises from the similar experiences of human beings across time and place. The resulting behavioral convention, the ethic of reciprocity, arises from observations about the effects of one’s behavior on that of others and mutual agreement (tacit or otherwise) to reciprocate preferred behavior, in the service of self-interest and empathy. That is to say, the convention is a consequence of the observed and anticipated benefits of adhering to it.

I must qualify the term “convention,” to say that the Golden Rule will be widely observed within any group only if the members of that group are generally agreed about the definition of harm, value kindness and charity (in the main), and (perhaps most importantly) see that their acts have consequences. If those conditions are not met, the Golden Rule descends from convention to admonition.

However,

Self-governance by mutual consent and mutual restraint — by voluntary adherence to the Golden Rule — is possible only for a group of about 25 to 150 persons: the size of a hunter-gatherer band or Hutterite colony. It seems that self-governance breaks down when a group is larger than 150 persons. Why should that happen? Because mutual trust, mutual restraint, and mutual aid — the things implied in the Golden Rule — depend very much on personal connections. A person who is loath to say a harsh word to an acquaintance, friend, or family member — even when provoked — often waxes abusive toward strangers, especially in this era of e-mail and comment threads, where face-to-face encounters aren’t involved. More generally, it’s a human tendency to treat acquaintances differently than strangers; the former are accorded more trust, more cooperation, and more kindness than the latter. Why? Because there’s usually a difference between the consequences of behavior that’s directed toward strangers and the consequences of behavior that’s directed toward persons one knows, lives among, and depends upon for restraint, cooperation, and help. The allure of doing harm without penalty (“getting away with something”) or receiving without giving (“getting something for nothing”) becomes harder to resist as one’s social distance from others increases.

When self-governance breaks down, it becomes necessary to spin off a new group or to establish a central power (a state) to establish and enforce rules of behavior (negative and positive). The problem, of course, is that those vested with the power of the state quickly learn to use it to advance their own preferences and interests, and to perpetuate their power by granting favors to those who can keep them in office. It is a rare state that is created for the sole purpose of protecting its citizens from one another and from outsiders, and rarer still is the state that remains true to such purposes.

In sum, the Golden Rule — as a uniting way of life — is quite unlikely to survive the passage of a group from community to state. Nor does the Golden Rule as a uniting way of life have much chance of revival or survival where the state already dominates. The Golden Rule may have limited effect within well-defined groups (e.g., parishes, clubs, urban enclaves, rural communities), by regulating the interactions among the members of such groups. It may have a vestigial effect on face-to-face interactions between stranger and stranger, but that effect arises mainly from the fear that offense or harm will be met with the same, not from a communal bond.

In any event, the dominance of the state distorts behavior. For example, the state may enable and encourage acts (e.g., abortion, homosexuality) that had been discouraged as harmful by group norms; the ability of members of the group to bestow charity on one another may be diminished by the loss of income to taxes and discouraged by the establishment of state-run schemes that mimic the effects of charity (e.g., Social Security).

The attainment of something that all Americans would recognize as liberty is next to impossible. The United States does not comprise a single, close-knit social group, nor even a collection of close-knit social groups. It is a motley, shifting conglomeration of (mostly) loose-knit groups with widely varying social norms and conceptions of harm. It is only a slight exaggeration to say that America is a nation of strangers.

It follows that the only kind of state-sponsored liberty which is possible in America is so-called negative liberty, that is, a regime of negative rights:

  • freedom from force and fraud (including the right of self-defense against force)
  • property ownership (including the right of first possession)
  • freedom of contract (including contracting to employ/be employed)
  • freedom of association and movement.

But we are far from such a regime:

[M]ost government enactments deny negative rights; for example, they

  • compel the surrender of income to government agencies for non-protective purposes (violating freedom from force and property ownership)
  • compel the transfer of income to persons who did not earn the income (violating freedom from force and property ownership)
  • direct how business property may be used, through restrictions on the specifications to which goods must be manufactured (violating property ownership)
  • force the owners of businesses (in non-right-to-work-States) to recognize and bargain with labor unions (violating property rights and freedom of contract)
  • require private businesses to hire certain classes of persons (“protected groups”) and undertake additional expenses for the “accommodation” of handicapped persons (violating property rights and freedom of contract)
  • require private businesses to restrict or ban smoking (violating property rights and freedom of association)
  • mandate attendance at tax-funded schools and the subjects taught in those schools, even where those teachings run counter to the moral values that parents are trying to inculcate (violating freedom from force and freedom of association)
  • limit political speech through restrictions on political contributions and the publication of political advertisements (violating freedom from force and freedom of association).

On top of that,

[s]uch enactments also trample social norms. First, and fundamentally, they convey the message that government, not private social institutions, is the proper locus of moral instruction and interpersonal mediation. Persons who seek special treatment (privileges, a.k.a. positive rights) learn that they can resort to government for “solutions” to their “problems,” which encourages other persons to do the same thing, and so on. In the end — which we have not quite reached — social institutions lose their power to instruct and mediate, and become merely sources of solace and entertainment.

There is much more in the pages of this blog (e.g., here and here). The sum and substance of it all is that liberty is a dead letter in America. It has succumbed to a series of legislative, executive, and judicial acts that have, on the one hand, suppressed and distorted voluntary social and economic relationships and, on the other hand, bestowed positive rights on selected groups to the general detriment of liberty. Positive rights are grants of privilege that can come only at the expense of others, and which are therefore incompatible with the “willing” aspect of liberty.

The clamor for positive liberty ought to set off alarm bells in the minds of libertarians because positive liberty, wrongly understood, justifies positive rights. The last thing this nation needs is what passes for a philosophical justification of positive rights. The first thing this nation needs is a lot fewer positive rights.

Positive liberty is nevertheless on the agenda of the philosophers who blog at Bleeding Heart Libertarians. What is it? According to Wikipedia:

Positive liberty is defined as the power and resources to act to fulfill one’s own potential (this may include freedom from internal constraints); as opposed to negative liberty, which is freedom from external restraint….

…Specifically, … in order to be free, a person should be free from inhibitions of the social structure in carrying out their free will. Structurally speaking classism, sexism or racism can inhibit a person’s freedom….

In other words, it is not enough to have “peaceful, willing coexistence and its concomitant: beneficially cooperative behavior.” That kind of liberty — liberty in the fullest sense — encompasses the acts of love, affection, friendship, neighborliness, and voluntary obligation that help individuals acquire the “power and resources” with which they may strive to attain the fruits of liberty, insofar as they are willing and able to do so.

That should be enough to satisfy the proponents of positive liberty at Bleeding Heart Libertarians, but I suspect otherwise. I would be more sanguine were they proponents of a proper definition of liberty, but they are not. Thus, armed with an inchoate definition of liberty, they are prepared to do battle for positive liberty and, I fear, the positive rights that are easily claimed as necessary to it; to wit:

  • A lack of “power” entitles certain groups to be represented, as groups, in the councils of government (a right that is not extended to other groups).
  • A lack of “resources” becomes welfare entitlements of various kinds — for personal characteristics ranging from low intelligence to old age — which threaten to suck ever more resources out of the productive, growth-producing sectors of the economy.
  • The exercise of “free will” becomes the attainment of certain “willed” outcomes, regardless of one’s ability or effort, which then justifies such things as an affirmative-action job, admission to a university, a tax-subsidized house, etc.
  • “Classism,” “sexism,” “racism,” and now “beauty-ism” become excuses for discriminating against vast swaths of the populace who practice none of those things.

With respect to the final point, a certain degree of unpleasantness inevitably accompanies liberty. Legal attempts to stifle that unpleasantness simply spread injustice by fomenting resentment and covert resistance, while creating new, innocent victims who are deemed guilty until they can prove their innocence.

In sum, the line between positive liberty and positive rights is so fine that the advocacy of positive liberty, however well meant, easily becomes the basis for preserving and extending the burden of positive rights that Americans now carry.

Peter Presumes to Preach

Thanks (?) to one of the Bleeding Heart Libertarians (Jason Brennan, in “Class Experiment on Helping the Poor“), I was introduced to an essay by Peter Singer, “Famine, Affluence, and Morality.” Singer was writing in 1972, when there were thought to be nine million destitute refugees in Bangladesh as a result of the Bhola cyclone of 1970 and atrocities committed by the Pakistani Army during the Bangladesh Liberation War of 1971.

I hope that Brennan, who teaches philosophy at Brown University, is using Singer’s essay to illustrate fallacious reasoning about moral obligations. For that is the lesson to be drawn from Singer’s presumptuous sermon on moral duty and its fulfillment.

I begin the lesson by arranging pertinent excerpts of Singer’s essay to give the main points of his argument:

[1.] I begin with the assumption that suffering and death from lack of food, shelter, and medical care are bad….

[2.] My next point is this: if it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it….

[3.] The uncontroversial appearance of the principle just stated is deceptive. If it were acted upon, even in its qualified form, our lives, our society, and our world would be fundamentally changed. For the principle takes, firstly, no account of proximity or distance. It makes no moral difference whether the person I can help is a neighbor’s child ten yards from me or a Bengali whose name I shall never know, ten thousand miles away. Secondly, the principle makes no distinction between cases in which I am the only person who could possibly do anything and cases in which I am just one among millions in the same position….

[a.] The fact that a person is physically near to us, so that we have personal contact with him, may make it more likely that we shall assist him, but this does not show that we ought to help him rather than another who happens to be further away. If we accept any principle of impartiality, universalizability, equality, or whatever, we cannot discriminate against someone merely because he is far away from us (or we are far away from him)….

[b.] There may be a greater need to defend the second implication of my principle – that the fact that there are millions of other people in the same position, in respect to the [persons in need], as I am, does not make the situation significantly different from a situation in which I am the only person who can prevent something very bad from occurring. Again, of course, I admit that there is a psychological difference between the cases; one feels less guilty about doing nothing if one can point to others, similarly placed, who have also done nothing. Yet this can make no real difference to our moral obligations….

[4.] The outcome of this argument is that our traditional moral categories are upset. The traditional distinction between duty and charity cannot be drawn, or at least, not in the place we normally draw it….

[5.] It follows from some forms of utilitarian theory that we all ought, morally, to be working full time to increase the balance of happiness over misery…. Given the present conditions in many parts of the world, … it does follow from my argument that we ought, morally, to be working full time to relieve great suffering of the sort that occurs as a result of famine or other disasters…. [W]e ought to be preventing as much suffering as we can without sacrificing something else of comparable moral importance.

Singer continues:

I now want to consider a number of points, more practical than philosophical, which are relevant to the application of the moral conclusion we have reached….

This argument [against private giving] seems to assume that the more people there are who give to privately organized famine relief funds, the less likely it is that the government will take over full responsibility for such aid. This assumption is unsupported, and does not strike me as at all plausible. The opposite view – that if no one gives voluntarily, a government will assume that its citizens are uninterested in famine relief and would not wish to be forced into giving aid – seems more plausible….

I do not … dispute the contention that governments of affluent nations should be giving many times the amount of genuine, no-strings-attached aid that they are giving now….

[Another] point raised by the conclusion reached earlier relates to the question of just how much we all ought to be giving away…. [E]arlier I put forward both a strong and a moderate version of the principle of preventing bad occurrences. The strong version, which required us to prevent bad things from happening unless in doing so we would be sacrificing something of comparable moral significance, does seem to require reducing ourselves to the level of marginal utility [the level at which, by giving more, I would cause as much suffering to myself or my dependents as I would relieve by my gift]. I should also say that the strong version seems to me to be the correct one. I proposed the more moderate version – that we should prevent bad occurrences unless, to do so, we had to sacrifice something morally significant – only in order to show that, even on this surely undeniable principle, a great change in our way of life is required. On the more moderate principle, it may not follow that we ought to reduce ourselves to the level of marginal utility, for one might hold that to reduce oneself and one’s family to this level is to cause something significantly bad to happen…. Even if we accepted the principle only in its moderate form, however, it should be clear that we would have to give away enough to ensure that the consumer society, dependent as it is on people spending on trivia rather than giving to famine relief, would slow down and perhaps disappear entirely. There are several reasons why this would be desirable in itself. The value and necessity of economic growth are now being questioned not only by conservationists, but by economists as well. There is no doubt, too, that the consumer society has had a distorting effect on the goals and purposes of its members. 
Yet looking at the matter purely from the point of view of overseas aid, there must be a limit to the extent to which we should deliberately slow down our economy; for it might be the case that if we gave away, say, 40 percent of our Gross National Product, we would slow down the economy so much that in absolute terms we would be giving less than if we gave 25 percent of the much larger GNP that we would have if we limited our contribution to this smaller percentage.
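
Singer’s closing arithmetic is easy to make concrete. The GNP figures below are hypothetical, invented only to illustrate his point that a higher giving rate applied to a smaller economy can yield less aid in absolute terms than a lower rate applied to a larger one:

```python
# Hypothetical numbers, purely to illustrate Singer's trade-off.
gnp_if_give_25 = 120.0  # assumed GNP if the economy is slowed only mildly
gnp_if_give_40 = 70.0   # assumed GNP if heavy giving slows the economy sharply

aid_at_25 = 0.25 * gnp_if_give_25  # 30.0 in absolute terms
aid_at_40 = 0.40 * gnp_if_give_40  # about 28 in absolute terms

# On these numbers the lower giving rate delivers more total aid.
print(aid_at_25 > aid_at_40)  # True
```

Notice that Singer concedes the trade-off but treats it as a mere calibration problem for the planner, not as a reason to doubt that anyone can know where the optimum lies.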

Singer’s dicta make it evident that he not only is a strong utilitarian but also considers himself the keeper of the collective conscience of mankind. He knows how to measure the pain and pleasure of individuals, how to sum those quantities, and how to redistribute the world’s goods so as to arrive at a sustainable level of net pleasure.

The sustainable level, in Singer’s benighted view, is not the maximum that human beings could produce through their ingenuity, which is never a limited resource. No, the maximum, in Singer’s view, is much less than that because he is also a puritan who “knows” that there is entirely too much “consumerism,” and that its devotees ought to be made to scale it back to the “right” level — as defined by Singer.

In sum, nothing counts unless Singer says it counts. That rules out many values which compete or interfere with Singer’s view of what the world should be like. Those values include liberty, bonds of love and affection, the striving to better oneself and to leave something behind for one’s descendants, the cooperative spirit without which material progress and mutual acts of kindness and charity cannot flourish, and much more.

Singer’s world is a world in which governments apply a formula whereby persons having an “excess” of worldly goods — above some arbitrarily determined minimum — are required to forfeit that “excess” to those who have less than the minimum.

With this understanding of Singer’s mindset, the “logic” of his argument becomes apparent. I restate it more plainly below. Each restatement is accompanied by a libertarian alternative, in bold, italicized type.

1. I begin by appealing to the image of 9 million suffering human beings, as a way of lulling the unwary reader into believing that I am a caring human being, when in fact I have an authoritarian penchant for imposing my views on others.

Every bad thing that happens to an individual is a bad thing for that individual. Whether it is a thing that calls for action by another individual is for that other individual (or a group of them acting in concert) to decide on the basis of love, empathy, conscience, specific obligation, or rational calculation about the potential consequences of the bad thing and of helping or not helping the person to whom it has happened.

2. If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought to do it. However, it is morally wrong for anyone to have more in the way of material possessions than anyone else. The limit of sacrifice is therefore defined by whatever one has to give up in order to reduce himself and his dependents and descendants to the standard of living that would result through massive income redistribution.

There is no universal social-welfare function. Therefore, it is up to the potential alms-giver to give or not, based on his knowledge and preferences. No third party is in a moral position to make that choice or to prescribe the criteria for making it. Governments have the power to force a choice other than the one that the potential alms-giver would make, but power is not morality.

3a. It is wrong to favor persons nearer to oneself over persons who are farther away. I am able to say that because I believe that such things as family, religion, ethnicity, club, church, community, and nation have no moral relevance. It matters not that individuals may form bonds of mutual respect and affection that lead them to commit acts of kindness and charity toward one another, and to treat each other with restraint. Such things are beyond the ken of the cold rationalist that I am.

It is foolish to say that persons with whom one shares no connection are as important as persons to whom one is connected. It is equally foolish to ignore the positive value of social connections. The personal choice about helping others (or not) may properly take into account the effects of that choice on those connections, without which there would be far more anti-social acts and state interventions.

3b. One’s moral obligation to give aid is unaffected by failure of others to do so.

Moral obligations arise from individual circumstances and mutual understandings, not from philosophical abstractions. But if one is inclined to help others in need, it is reasonable to ask whether a certain amount of money will materially aid those others. If not, withholding the amount may be the wisest course because it will be available for use in a case where it can have a material effect. Giving for the sake of giving can be irrational if one is truly committed to helping others.

4. Charity is duty; therefore, it is not charity.

Charity is a voluntary act that one commits without a sense of obligation; one helps one’s family, friends, neighbors, etc., out of love, affection, empathy, or other social bond. The fact that charity may strengthen a social bond and heighten the benefits flowing from it is an incidental fact, not a consideration. Duty, on the other hand, arises from specific obligations, formal or informal. These include the obligations of parent to child, teacher to pupil, business partner to business partner, and the like. Charity can be mistaken for duty only in the mind of a philosopher for whom love, affection, and individuality are alien concepts.

5. There is a universal social welfare function, and everyone ought to be striving, at all times, to maximize it. Moreover, only I know how to maximize universal social welfare. Anyone who contravenes my edicts is acting anti-socially and ought to be brought into line by the state (as long as it acts according to my dictates, of course).

If there is a universal social welfare function, then reducing the level of consumption in an affluent society just for the sake of reducing it (as Singer would) makes no sense; the outcome would be a reduction of social welfare. Of course, it may be that Singer would be so gratified by the reduction of others’ welfare that his own would rise by enough to offset that reduction. The preceding (facetious) observation points to the emptiness of the concept of a social welfare function, which implies that A’s unhappiness at having money stolen by B (or taxed away for B’s benefit) is canceled by B’s happiness at acquiring the money that he has acquired from A (by theft or taxation).

Finally, at the risk of seeming cold-hearted, I must ask the following question: Given the scarcity of resources (at a given time), is it not better to put those resources to work where they will do the most good? I disagree with Singer’s arguments for abortion and euthanasia (including “death panels”) because, among other things, such practices put us on a slippery slope toward eugenics. But I can disagree with Singer and still say that, given a choice, I will (and do) give to those who have a chance of a better life (especially if I love them) before giving to those whose lives seem hopeless.

My first duty (as Singer would say) is to those whom I love. And by helping to secure a future for them, I am also increasing the possibility that one or more of them will invent, develop, or apply technologies that help to prevent the kinds of suffering for which Singer merely prescribes palliatives.

Other posts about Peter Singer:
Peter Singer’s Fallacy
Peter Singer’s Agenda
Singer Said It
Rationing and Health Care

Other related posts:
Greed, Cosmic Justice, and Social Welfare
Positive Rights and Cosmic Justice
Utilitarianism, “Liberalism,” and Omniscience
Utilitarianism vs. Liberty
The Mind of a Paternalist
Accountants of the Soul
Rawls Meets Bentham
Enough of “Social Welfare”
The Left
Social Justice
The Left’s Agenda

Substantive Due Process and the Limits of Privacy

TWO KINDS OF DUE PROCESS

David Bernstein of The Volokh Conspiracy discusses “The One and Only Substantive Due Process Clause” (120 Yale Law Journal 408), by Ryan C. Williams, who is not a law professor but a living, breathing, practicing attorney. Here is the abstract of the article:

The nature and scope of the rights protected by the Due Process Clauses of the Fifth and Fourteenth Amendments are among the most debated topics in all of constitutional law. At the core of this debate is the question of whether these clauses should be understood to protect only “procedural” rights, such as notice and the opportunity for a hearing, or whether the due process guarantee should be understood to encompass certain “substantive” protections as well. An important though little explored assumption shared by participants on both sides of this debate is that the answer to the substantive due process question must be the same for both provisions. This Article questions that assumption by separately examining the historical evidence regarding the original public meaning of the Due Process Clauses of both the Fifth and Fourteenth Amendments with a single question in mind: did the original meaning of each clause, at the time of its enactment, encompass a recognizable form of substantive due process? At the time of the Fifth Amendment’s ratification in 1791, the phrase “due process of law,” and the closely related phrase “law of the land,” were widely understood to refer primarily to matters relating to judicial procedure, with the second phrase having a somewhat broader connotation referring to existing positive law. Neither of these meanings was broad enough to encompass something that would today be recognized as “substantive due process.” Between 1791 and the Fourteenth Amendment’s enactment in 1868, due process concepts evolved dramatically, through judicial decisions at the state and federal levels and through the invocation of due process concepts by both proslavery and abolitionist forces in the course of constitutional arguments over the expansion of slavery. 
By 1868, a recognizable form of substantive due process had been embraced by courts in at least twenty of the thirty-seven then-existing states as well as by the United States Supreme Court and the authors of the leading treatises on constitutional law. As a result, this Article concludes that the original meaning of one, and only one, of the two Due Process Clauses—the Due Process Clause of the Fourteenth Amendment—was broad enough to encompass a recognizable form of substantive due process [emphasis added].

What is substantive due process? Ryan helpfully contrasts it with procedural due process:

[T]he distinction between adjudication-related conduct and nonadjudication-related conduct is sufficiently distinct to serve as a useful dividing line for distinguishing between substantive and procedural rights.

Under the dichotomy sketched above, an interpretation of the Due Process Clauses can be categorized as “procedural due process” if it imposes no constraints on governmental deprivations of “life, liberty, or property” that do not relate to the form of adjudication that must be provided in connection with such deprivations and the procedures that must be observed in connection with such adjudication. By contrast, an interpretation of the Due Process Clauses can be classified as “substantive due process” if, and only if, it would prohibit governmental actors, in at least some circumstances, from depriving individuals of life, liberty, or property even if those individuals receive an adjudication in which “even the fairest possible procedure[s]” are observed. (Id. at 419)

Governmental power, in other words, has limits, and those limits may not (or should not) be breached simply by observing the niceties of judicial or legislative procedure.

THE LOCHNER ERA

Of particular interest are what Ryan calls “Police Powers” Due Process and “Fundamental Rights” Due Process. The former most famously (or infamously) prevailed in the U.S. Supreme Court’s so-called Lochner era (roughly 1897-1937), when the Court

invalidated state and federal legislation that inhibited business or otherwise limited the free market, including laws on minimum wage, child labor, regulations of banking, insurance and transportation industries.

The era takes its name from Lochner v. New York (1905), in which the Supreme Court struck down a State statute that attempted to impose a maximum-hours limitation on bakers. (I discuss this case in “Substantive Due Process, Liberty of Contract, and the States’ Police Power.”) Ryan writes about the “police powers” emphasis of the Lochner era:

The Lochner-era Court’s application of the Due Process Clauses encompassed review of both the ends that the legislature sought to achieve and the means employed to achieve such ends; if the Court determined that either the ends or means chosen exceeded the legislature’s legitimate authority, the law was condemned as a violation of due process. This more flexible conception of due process allowed for legislation to be upheld even if it interfered with preexisting rights or affected identifiable interests in different ways, so long as the government could point to some legitimate justification for the legislature’s decision. Conversely, legislation that fell outside the scope of the state’s traditional police powers could be invalidated even if it did not deprive individuals of preexisting property rights and did not operate unequally. The Lochner-era police powers cases also differed from the earlier property-focused vested rights and general law interpretations by placing principal emphasis on the protection of individual “liberty” rather than “property.” (Id. at 426-7)

The Court’s embrace of substantive due process was broken by the exigencies of the Great Depression, in which a “chastened” and reshaped Court found adequate justification to repudiate the Constitution in favor of the New Deal.

THE REINVENTION OF SUBSTANTIVE DUE PROCESS

The Court nevertheless resumed its embrace of substantive due process, in a different guise, when various majorities discovered “fundamental rights” in the emanations and penumbrae of the Constitution:

[A] new paradigm of substantive due process decisionmaking began to emerge in cases such as Griswold v. Connecticut [1965, contraception], Shapiro v. Thompson [welfare as a newcomer to a State, regardless of residency requirements, 1969], and Roe v. Wade [1973, abortion]. This new approach, which is the Court’s currently prevailing framework for dealing with substantive due process claims, places principal emphasis on identifying a narrow category of liberty interests that are deemed sufficiently “fundamental” to warrant heightened scrutiny and “forbids the government to infringe . . . ‘fundamental’ liberty interests at all . . . unless the infringement is narrowly tailored to serve a compelling state interest.” (Id. at 427, links added)

Why substantive due process for individuals proclaiming “lifestyle” rights but not for individuals and business owners striving to better their economic lot?

It is likely no coincidence that … early twentieth-century critics of the Supreme Court’s Lochner-era substantive due process jurisprudence, who conducted the first detailed examinations of the pre-Fourteenth Amendment meaning of “due process of law,” failed to identify much support for substantive due process. Nor is it a coincidence that more recent critics of post-Lochner substantive due process decisions have tended to endorse the conclusions of the Lochner-era critics. (Id. at 509-10)

In other words, it all depends on the ideological complexion of the Court. Perhaps even a Court with a solid originalist majority (i.e., a Court with one less Kennedy and at least two more Thomases) would not roll back the precedents of Griswold v. Connecticut and Lawrence v. Texas (2003, homosexual sodomy), but I would be surprised if it did not roll back the precedent of Roe v. Wade et seq.

If there is a fundamental right to privacy, surely it does not encompass everything that flows from private acts. And yet through judicial sleight-of-hand, Roe v. Wade moved constitutional interpretation in that direction.

THE “PRIVACY RIGHT” AND ROE V. WADE

I have written elsewhere about Roe v. Wade:

Abortion was considered murder long before States began to legislate against it in the 19th century. The long-standing condemnation of abortion — even before quickening — is treated thoroughly in Marvin Olasky’s Abortion Rites: A Social History of Abortion in America. Olasky corrects the slanted version of American history upon which the U.S. Supreme Court relied in Roe v. Wade.

Because abortion was not a right at the time of the adoption of the Ninth Amendment, there is no unenumerated right to abortion in the Constitution. The majority in Roe v. Wade (1973) instead seized upon and broadened a previously manufactured “privacy right” in order to legalize abortion….

In effect, the Roe v. Wade majority acknowledged that abortion is not even an unenumerated right. It then manufactured a general right to privacy, drawing on specified procedural rights enumerated in the Bill of Rights (rights which are totally unrelated to abortion) and on strained precedents involving “penumbras” and “emanations,” in order to find a “privacy” right to abortion….

It is therefore unsurprising that the majority in Roe v. Wade could not decide whether the general privacy right is located in the Ninth Amendment or the Fourteenth Amendment. Neither amendment, of course, is the locus of a general privacy right because none is conferred by the Constitution, nor could the Constitution ever confer such a right, for it would interfere with such truly compelling state interests as the pursuit of justice. The majority simply chose to ignore that unspeakable consequence by conjuring a general right to privacy for the limited purpose of ratifying abortion.

The spuriousness of the majority’s conclusion is evident in its flinching from the logical end of its reasoning: abortion anywhere at anytime. Instead, the majority delivered this:

The privacy right involved, therefore, cannot be said to be absolute. . . . We, therefore, conclude that the right of personal privacy includes the abortion decision, but that this right is not unqualified and must be considered against important state interests in regulation.

That is, the majority simply drew an arbitrary line between life and death — but in the wrong place. It is as if the majority understood, but wished not to acknowledge, the full implications of a general right to privacy. Such a general right could be deployed by unprincipled judges to decriminalize a variety of heinous acts.

The Fourteenth Amendment may countenance a lot of things, but it should not be used to countenance murder.

More about Taxing the Rich

This is a sequel to “Taxing the Rich,” which reproduces my exchange with a correspondent who laments the unequal “distribution” of income and wealth. This installment is heavily edited, for the sake of brevity. And, for the sake of clarity, I have reorganized our exchange so that each of his points (in italics) is followed immediately by my response (in bold type).

My correspondent opens by referring to a link that I sent him about the distribution of wealth in the United States:

Granted 25% owning 87%  is a lot better than 2% owning 90% like in S. America, but satisfactory? Warren Buffett doesn’t think so. Nor George Soros. Nor do I. So certainly not revolution, but reformation seems in order.

It may be that George Soros and Warren Buffett don’t like the way things are, but their own wealth merely proves that they’re good at making money, not at setting economic policy for the country. Experts who venture outside their own fields of expertise remind me of the doctors who used to endorse Camel cigarettes.

I’m not sure why it’s “bad” that one-fourth of the people in this country own a relatively large fraction of the wealth in this country. The composition of the one-fourth has changed greatly over time, and will continue to change greatly. There’s no entrenched aristocracy that somehow controls the country or determines who gets how much income, except to the extent that money buys a certain amount of political influence. But you will have noticed that the left has been far more influential than the right for a long time, and that a lot of the very rich (perhaps most of them) tend to favor government welfare programs, highly progressive taxes, and other things associated with your party.

In any event, big fortunes are (usually) made by people who did something for their money — invented computer software, picked good businesses in which to invest — and so on. They don’t steal their money from anyone. (The same is true of John D. Rockefeller and the other so-called robber barons of the late 1800s and early 1900s, popular mythology to the contrary.) What they really do is make a lot of money from their investments while — and this is important — also creating better jobs and higher incomes for a lot of Americans. It’s a win-win thing. And it’s been going on for more than 200 years.

So, I can’t understand why it’s thought of as “bad” that some people earn large fortunes in the process of contributing to the growth of the country’s economy. The “concentration” of wealth in a fraction of the populace is just something that happens — it’s not part of a plot. And it means that the wealthy are doing something good, for their own benefit and the benefit of a lot of other people, not that they’ve stolen from others or are somehow oppressing them.

I do understand why this theme is popular now, in a time of economic stress and concerted efforts to reduce the size of government. But, as I’ve said before, I don’t think those efforts can be pinned on “the rich,” though some of them are sympathetic and supportive — just as some of them, like Soros, are unsympathetic and opposed. There are millions of taxpayers who are also feeling the pinch, and they are fed up with the inexorable rise of government spending. The push to cut the size and cost of government is about as “grass roots” as anything I’ve seen in my lifetime.

I do think experts can develop more than one field, especially when they can afford to, and weren’t the doctors endorsing camels paid to do so? Probably not your best example.

And it may not be so bad that 25% own 87%, may be the inevitable Darwinian sort out as you suggest, and it’s not ownership I find so objectionable if I felt they paid their fair share.

I agree that it’s possible for someone to be expert in more than one field, but I haven’t yet read any utterances by Buffett or Soros on economic policy that go beyond pushing their political views. Perhaps I’m not paying enough attention to them, but I doubt that they have anything to offer that I don’t get from reading a variety of “real” economists. It’s probably true that the doctors were paid for endorsing Camels, but the analogy holds true: doctors aren’t necessarily experts in all aspects of medicine. I wouldn’t ask a thoracic surgeon for advice about how to deal with allergies, for example. But that’s beside the main point, which is the question of economic policy and whether there’s something “wrong” with a skewed distribution of income and wealth, and whether high-income people are paying a “fair share” of taxes (given that they’re already paying the lion’s share). Bear with me to the end, because you’ll find out that my objective is to defend all taxpayers, and to promote growth that benefits all Americans. My defense of high rollers is merely incidental.

My thoughts about “fair share” and “Darwinism” are given below.

Is it not true that real wages/earnings of middle class or those less than or = $250,000/yr have shrunk or remained stagnant over last 30 years while income of top 2-3% (say, over $1,000,000/yr?) has grown exponentially? I’ve certainly seen studies re CEO pay.

The income of households in all quintiles of the income distribution has been rising, with some bumps along the way. This graph covers 1967-2003: http://en.wikipedia.org/wiki/File:United_States_Income_Distribution_1967-2003.svg. There’s been no significant change since 2003, as indicated by this Excel spreadsheet from the Census Bureau: http://www.census.gov/compendia/statab/2011/tables/11s0689.xls.
Also, it’s important to keep in mind that people aren’t “stuck” in a particular quintile; there’s a general tendency to move up as one ages, and then to drop down a bit after retiring. For more, see this: http://mjperry.blogspot.com/2008/02/rich-getting-richer-and-poor-are.html.

My thoughts about CEO compensation are given below.

Isn’t the Tea Party focus on cutting government budgets just months after another unneeded/unwarranted huge tax break for the wealthiest a bit disingenuous? There will always be tug between those of us who think wealthy should pay more because they can afford to as either a Judeo/Christian responsibility or just self protection from violence, but…

Although there’s nothing new about organized efforts to cut government spending (it was a main theme of the GOP from the 1920s to the early 1950s), the current Tea Party movement began in February 2009. It had been gaining ground as an anti-spending movement well before the extension of the Bush tax cuts late in 2010.

Anyway, I don’t think of the two things — spending cuts and tax cuts — as contradictory. They are really complementary. If you see them as contradictory because tax cuts can exacerbate deficits, that may be because you don’t want to see the spending cuts. (I don’t know that for sure, it’s just a guess.) I, on the other hand, see tax cuts as putting more pressure on the folks in Washington to make some spending cuts. The serious problem with the looming deficits isn’t so much the mounting pile of government debt and high interest payments (though those are problems). The serious problem is that government spending absorbs resources that could be put to work building the economy through the operation of the market mechanism — which is how this country’s economic progress has been achieved. (Economic “Darwinism” is a matter of offering things that people value in their daily lives and business operations. If you’re good at it, you’ll prosper; if you’re not, you won’t. I can’t think of a fairer system; any alternative rewards people for doing the wrong things or doing them badly. In that respect, corporate welfare is no better than the other kind.)

As you know from our earlier exchange, high-income people already are paying the lion’s share of taxes in this country. (And, surprisingly, more than their peers in the other industrialized nations: http://www.taxfoundation.org/blog/show/27134.html.) I don’t think it has much (or anything) to do with Judeo-Christian ethics or protecting themselves from violence. It’s the law, and governments have a lot of power when it comes to enforcing the law. Some would gladly pay more, which they can do by sending a donation to the U.S. Treasury. But it isn’t their place to speak for all high-income people, which is gross presumption. A mega-millionaire or billionaire who invests his money in new technologies is doing more for working people than a billionaire who sends the same amount of money to D.C.

The U.S. was prosperous resulting in surplus under Clinton, so we couldn’t go back to the higher tax rates of the Clinton years?

The prosperity under Clinton was a continuation of the growth that began with Reagan’s tax cuts and the success of the Fed’s anti-inflationary efforts in the early 1980s. There was a minor recession in 1990-91, but growth had resumed before Clinton took office. The surpluses that he eventually realized had a lot to do with the fact that his spending proclivities were reined in by the GOP-controlled Congress. So, no, I don’t see any magic in the higher tax rates of the Clinton years. The real magic is a combination of reduced government spending and more incentives for people to do things that create wealth for themselves and jobs and higher incomes for others — that is, lower tax rates across the board.

I know that you’d call this “trickle down economics,” but it’s not really. I’m not just trying to defend high-income earners from ill-advised taxation, I’m trying to defend everyone from it. Economic growth requires not only big investments by high rollers but also small investments by “little people.” Why? Because (a) most new jobs are created in smaller businesses (http://www.sba.gov/advocacy/847; http://www.sba.gov/advocacy/7495/8424), and (b) from acorns do mighty oaks grow (think Ford, Microsoft, and the like). And growth benefits working people generally, and everyone who has a spare dollar to invest in a mutual fund (stocks for the risk-takers, bonds for the risk-averse).

Mega millionaires & billionaires continue to make off with a disproportionate share have pitted us against each other.

I’m not sure what you mean when you refer to “disproportionate share.” Let’s take highly paid athletes. The top-100 single-season salaries in baseball (through 2010) range from A-Rod’s $33 million in 2009 to Richie Sexson’s $16 million in 2008. The average major-league player’s salary in 2010 was $3.3 million (http://baseball.about.com/od/newsrumors/a/2010baseballteampayrolls.htm). Are those multi-millionaires making off with a “disproportionate share,” relative to (say) a member of the grounds crew? Or are they simply being paid a requisite amount for their expected contributions to their teams’ bottom lines?

Is the case of CEOs vs. floor sweepers any different? If so, why? In any event, high CEO compensation is mainly a symbolic thing that’s a handy target for griping. Top CEOs don’t make any more than baseball players (http://www.theglobeopinion.com/section/business/executive-compensation), and they’re on the line for the performance of companies that are vastly larger than baseball clubs.

Related posts:
The Causes of Economic Growth
A Short Course in Economics
Addendum to a Short Course in Economics
Enough of “Social Welfare”
The Case of the Purblind Economist
Economic Growth since WWII
The Price of Government
Does the Minimum Wage Increase Unemployment?
The Price of Government Redux
The Mega-Depression
The Real Burden of Government
Toward a Risk-Free Economy
The Rahn Curve at Work
The Illusion of Prosperity and Stability
Society and the State
The “Forthcoming Financial Collapse”
Estimating the Rahn Curve: Or, How Government Inhibits Economic Growth
The Deficit Commission’s Deficit of Understanding
Undermining the Free Society
The Bowles-Simpson Report
The Bowles-Simpson Band-Aid
Build It and They Will Pay
Government vs. Community
The Stagnation Thesis
Government Failure: An Example

Taxing the Rich

UPDATED 03/20/11

The quotation below is the text of a private message I sent to a friend who laments the unequal “distribution” of income and wealth. (I use sneer quotes around “distribution” because the use of the word suggests that there is a pie to be divided, a fallacy that I address later.) The friend likens the United States to corrupt Latin American regimes of yore, where wealth and political power were concentrated in the hands of land owners. He then attributes this mythical situation to Reagan’s “trickle down economics.” Emotion trumps facts, as usual on the left.

I’m sending this for your private consideration, in view of your latest post. I’m not sure what prompted that post, or what it’s based on, but there are a lot of misconceptions about income distribution and tax burdens in the United States. First, income rises across the board; it isn’t just “the rich” who get richer (http://mjperry.blogspot.com/2008/02/rich-getting-richer-and-poor-are.html). Second, unlike Latin America, there is a lot of mobility across income groups; “the rich” are not the same set of people from year to year and decade to decade (http://mjperry.blogspot.com/2007/11/despite-mythology-income-mobility-is.html). Third, most of “the rich” in the U.S. got that way by earning their incomes (http://mjperry.blogspot.com/2008/04/inheritance-is-not-main-driver-of.html), unlike Latin American aristocrats. Fourth, “the rich” already pay the lion’s share of income taxes (http://www.taxfoundation.org/news/show/250.html#Data).

Although most people in the United States deserve what they earn, because they’re not stealing it from other people, I agree that there are a lot of high rollers who earn more than they would if they weren’t granted special privileges by government. The giveaways to the financial industry are a notorious example. But bankers are only the tip of the iceberg, which extends down through many income levels. Not far from the top are members of Congress, who have one of the best self-created pension rackets going. (A cushy, guaranteed, taxpayer-funded pension amounts to a substantial, untaxed bonus.) Further down the ladder, but still worthy of note, are government employees — whether unionized or not — whose low quit-rates attest to the fact that their compensation (which usually includes generous pension benefits) is above what they could earn in the private sector. It is only in the past few years that public-sector employees have begun to feel the effects of tight budgets. Which is to say that they’ve had a free ride for a long time, at taxpayers’ expense.

Bottom line: A high income isn’t necessarily a sign of political corruption or special privileges. Nor is it clear that high-income earners are paying less than their “share.” A good case can be made that they’re paying too much, because it’s high-income earners whose investments fund growth-producing, job-creating technology and business start-ups. What bothers me is people — at all income levels — who are given special privileges by government, which the rest of us pay for in taxes and higher prices for certain goods and services.

UPDATE:

My friend replies:

Thanks for offering your views, but I can’t see how 2% essentially owning 90% is good for the country, nor do I think it will stand.

My response:

I don’t know what you mean by 2% “owning 90%” of the country. It’s true that wealth is concentrated, but that has been true since the birth of the Republic, and it’s to be expected because wealth is strongly (but not perfectly) correlated with income. And except where government grants the kinds of privileges I mentioned earlier, one’s income depends on talent and effort. Further, as I pointed out earlier, income disparities aren’t permanent; there’s plenty of mobility in the U.S., up and down the ladder.

The U.S. isn’t a feudal aristocracy, ripe for revolution because of actual oppression. Some people make it, and some don’t; those who make it (excepting the beneficiaries of government privileges) are able to do so because, in this country, they’re free to reap the fruits of their effort and ability.

Moreover, there’s no fixed “pie” of wealth that’s jealously guarded by a ruling clique. There’s a (usually) growing pie, to which each able person adds as he or she is willing and able. And it belongs to those who produce it — not to “the country.”

If you’re interested in actual facts (as opposed to myths and slogans), you should read the links I sent previously (if you haven’t already) and also this one: http://en.wikipedia.org/wiki/Wealth_in_the_United_States.