The Kennedy Legacy

Luci Baines Johnson:

Senator Kennedy was our family’s cherished friend and the defender of the flame for social justice our parents believed in – decent health care, education, and civil rights for every American. He could have had a life of self-service; he chose a life of public service.

Jeff Jacoby:

Born into riches and influence, he could have lived a life of ease, indulging his appetites and paying scant attention to those less fortunate. He chose a different life, and became a towering advocate for the deprived, the disabled, and the dispossessed. I didn’t always like his answers, but I honor him for caring so greatly about the questions.

Don Boudreaux:

While Kennedy didn’t choose a life of ease, he did something much worse: he chose a life of power. That choice satisfied an appetite that is far grosser, baser, and more anti-social than are any of the more private appetites that many rich people often choose to satisfy.

Americans would have been much better off had Ted Kennedy spent his wealth exclusively, say, on the pursuit of sexual experiences and the building of palatial private homes in which to cavort, or to take drugs, or to engage in whatever private dissipations his wealth afforded him.

Instead, Mr. Kennedy spent much of his wealth and time pursuing power over others (and the garish ‘glory’ that accompanies such power). He did waste his life satisfying unsavory appetites; unfortunately, the appetites he satisfied were satisfied not only at his expense, but at the expense of the rest of us. Mr. Kennedy’s constant feeding of his appetite for power wasted away other people’s prosperity and liberties.

I am squarely with Boudreaux.

As for Kennedy’s “public service” and “caring,” which might be called altruistic, I say this:

There is no essential difference between altruism . . . and the pursuit of self-interest. . . . In fact, the common belief that there is a difference between altruism and the pursuit of self-interest is one cause of (excuse for) purportedly compassionate but actually destructive government intervention in human affairs.*

Americans — poor and rich alike — have paid, and paid dearly, for the assuagement of Edward Kennedy’s ego.

That notwithstanding, Kennedy will be remembered, for the most part, as a “compassionate” politician. But his “compassion” was purchased with our liberty and prosperity.

Related reading: “Red Ted” (at Classical Values), “The Dark Side of Ted Kennedy’s Legacy” (at Carpe Diem), and “Kennedy’s Big Government Paternalism” (at Real Clear Politics).

__________

* I do not mean to disparage acts that have beneficial consequences, merely the assumptions that (a) behavior labeled “altruistic” is unselfish and (b) motivation is more important than result. For more on the second point, see this post.

The Commandeered Economy

Revised and updated, here.

 

Civil War, Close Elections, and Voters’ Remorse

Apropos the proposition that a new (cold) civil war is emerging, I introduce the following graph into evidence:

[Graph: victor’s margin in U.S. presidential elections]
Source: Derived from data available at Dave Leip’s Atlas of U.S. Presidential Elections.

If the closeness of recent elections (as measured by the popular vote) means anything, it means that the United States is about as divided as it was in the last quarter of the 19th century, when the passions surrounding the Civil War still raged in the country. Not that there’s anything wrong with disunity, as long as you’re on the right side of it. Nor is unity desirable when it revolves around statists (like the Roosevelts and LBJ) or criminals (like Nixon and Clinton).

The apparent “healing” of 2008 — which merely reflected Bush’s unpopularity and McCain’s lameness as a candidate — has lasted about as long as a Hollywood marriage:

[Graph: Obama’s net approval rating]
Source: Derived from the presidential tracking poll at Rasmussen Reports.

On the subject of voters’ remorse — for that is what it is — I note that just before the election of 2008 Democrats led Republicans by 47 percent to 41 percent on Rasmussen’s “generic congressional ballot,” whereas the most recent poll (August 16) has Republicans ahead of Democrats by 43 percent to 38 percent.

Putting Risks in Perspective

According to the Centers for Disease Control, about eight-tenths of one percent of Americans died in 2005 (the most recent year for which CDC has published death rates). That’s about 800 persons (825.9 to be precise) out of every 100,000.

To put that number in perspective, imagine a dozen dozen eggs (i.e., a gross of eggs, for those who still know the numeric meaning of “gross”). Only about one of those eggs is broken in the span of a year, in spite of all of the hazards to which the eggs are exposed.
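The arithmetic behind the egg analogy can be sketched in a few lines of Python, using the CDC rate quoted above (the gross-of-eggs framing is only the analogy, not real egg-breakage data):

```python
# All-cause U.S. death rate for 2005, per the CDC figure quoted above.
death_rate = 825.9 / 100_000     # about 0.83 percent per year

# A gross: a dozen dozen eggs.
gross = 12 * 12

# Expected number of "broken eggs" per gross, per year, at that rate.
broken = death_rate * gross
print(round(broken, 2))          # 1.19 -- roughly one egg in 144

# Equivalently, the one-year "survival rate" exceeds 99 percent.
print(round(1 - death_rate, 4))  # 0.9917
```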

Remember that analogy the next time you read or hear about the “threats” posed by heart disease, cancer, Alzheimer’s, motor-vehicle accidents, firearms, etc., etc., etc. The combined effect of all such “threats” is close to nil; more than 99 percent of Americans survive every year, and more than 70 percent of those who don’t survive are old (age 65 and older). But that’s not the kind of “news” that sells advertising.

(For much more about mortality in the United States, go here.)

The Fed and Business Cycles

UPDATED 06/11/11

The following graphs depict the length of expansions and contractions (and the trends in both), before and since the creation of the Federal Reserve System in 1913.



[Graphs: length of expansions and contractions, with trend lines, before and after 1913]
Source: “Business Cycle Expansions and Contractions,” National Bureau of Economic Research.

The creation of the Fed might have had a hand in the lengthening of expansions and the shortening of contractions, but many other factors have been at work.

What the graphs don’t depict is the relative severity of the various contractions. It is worth noting that the worst of them all — the Great Depression — occurred after the creation of the Fed and, in part, because of actions taken by the Fed. (A note to the history-challenged: The Great Depression began in September 1929 and ended only because of America’s entry into World War II.) Moreover, the worst downturn since the Great Depression — the Great Recession — was clearly the work of the Fed, in unwitting(?) complicity with the politicians who insisted on expanding home ownership through subprime loans.

In any event, the long-run cost of economic stability has been high. (See this, this, and this, for example.)

*     *     *

Related reading: Scott Sumner, “In the 1930s It Seemed ‘Obvious’ That Financial Turmoil Had Caused the Great Depression,” EconLog, February 17, 2014

Related post: Mr. Greenspan Doth Protest Too Much

A New (Cold) Civil War or Secession?

Max Borders writes:

. . . Toleration and open discourse is, well, no longer tolerated. We are slowly becoming a place of mere de jure free speech. And even that is being eroded by the day. I have to blame so-called “progressives” the most for this. They are people for whom the end justifies the means. So any lip-service they occasionally pay to civil liberties is but a tool of convenience to be employed on the way to getting things their way. A new cold civil war is emerging.

There is much truth in that. But I would go further.

Thanks to the lethal combination of left-statist demagoguery and voter irrationality, Americans are, among other things, saddled with

  • the destruction of civilizing social norms, together with a high threshold of tolerance for criminals and a concomitant softness on matters of national defense
  • the continuing nationalization of medical care, which began in earnest with Medicare and continues in the guise of Obamacare (with all of its potential for bureaucratic abuses of power)
  • severe restrictions on economic output in order to satisfy the dictatorial urge to puritanism that lies behind the “warming” industry

As I have said,

[t]hese encroachments . . . are morally illegitimate because their piecemeal character has robbed Americans of voice and mooted the exit option. And so, we have discovered — too late — that we are impotent hostages in our own land.

I am reminded of these words:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.–That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, –That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.

What we need is not a civil war — cold or hot — but a serious movement toward secession:

The legal basis for the perpetuation of the United States disappears when the federal government abrogates the Constitution. Given that the federal government has long failed to honor Amendment X [among many other parts of the Constitution], there is a prima facie case that the United States no longer exists as a legal entity. Secession then becomes more than an option for the States: It becomes their duty, both as sovereign entities and as guardians of their citizens’ sovereignty.

See “The Constitution: Myths and Realities“.

Negative Rights, Social Norms, and the Constitution

In a recent post about negative rights, I quote Randy Barnett, who explains that such rights “define the space within which people are free to choose how to act.” Well, not quite.

Think about it. A libertarian regime would protect these negative rights:

  • freedom from force and fraud (including the right of self-defense against force)
  • property ownership (including the right of first possession)
  • freedom of contract (including contracting to employ/be employed)
  • freedom of association and movement.

But those rights enclose a cavernous “space,” within which human behavior can find many self-destructive outlets unless it is shaped by social norms — socially evolved rules (as opposed to government-dictated ones) which delineate morally and socially acceptable behavior. Think of the ways in which your present behavior is shaped by the moral lessons of your childhood and by your experiences as a child, student, spouse, parent, friend, co-worker, neighbor, church member, club member, team member, and the like.

In sum, negative rights are meaningless absent a framework of social norms that is consistent with negative rights and which directs behavior along constructive paths. Conversely, constructive social norms are undermined where government fails to protect negative rights or actively denies them. There is, for example, no right of freedom from force in a community where violence is the norm and government is unable to protect residents from violence; there is no right of property ownership in a community where government seizes property as it sees fit to do so; there is no right of freedom of movement for slaves; and so on. (Obviously, I am referring to rights as they are actually enjoyed, not “natural” rights.)

In the United States, the history of negative rights parallels the history of the Constitution:

The original Constitution protected the rights to life, liberty, and property against infringement by the federal government in two ways. First and foremost, Congress was not given a general legislative power but only those legislative powers “herein granted,” referring to those powers enumerated in Article I, section 8. It is striking how these powers avoid expressly restricting the rightful exercise of liberty. The power “to raise and support armies” does not include an express power of conscription, which would interfere with the property one has in one’s own person. The power to establish the post office does not expressly claim a power to make the government post office a monopoly, which would interfere with the freedom of contract of those who wish to contract with a private mail company of the sort founded by Lysander Spooner. (By contrast, the Articles of Confederation did accord the power in Congress to establish a postal monopoly.)…

Two years after its enactment, the Constitution was amended by the Bill of Rights. These ten amendments included several express guarantees of such liberties as the freedom of speech, press, assembly, and the right to keep and bear arms. The Bill of Rights barred takings for public use without just compensation. It provided additional procedural assurances that the laws would be applied accurately and fairly to particular individuals.

All of the rights enumerated in the Bill of Rights are consistent with modern libertarian political philosophy. And to this list of rights was added the Ninth Amendment that said, “The enumeration in the Constitution of certain rights shall not be construed to deny or disparage others retained by the people.” In this way, even liberty rights that were not listed were given express constitutional protection. Finally, the Tenth Amendment reaffirmed that Congress could exercise only those powers to which it was delegated “by this Constitution.”…

While the Thirteenth Amendment’s ban on involuntary servitude expanded the Constitution’s protection of individual liberty against abuses by states, it was the Fourteenth Amendment that radically altered the federalism of the original Constitution. For the first time, Congress and the courts could invalidate any state laws that “abridge[d] the privileges or immunities of citizens of the United States.” The original meaning of “privileges or immunities” included the same natural rights retained by the people to which the Ninth Amendment referred, but also the additional enumerated rights contained in the Bill of Rights. The Due Process Clause of the Fourteenth Amendment placed a federal check on how state laws are applied to particular persons, while the Equal Protection Clause imposed a duty on state executive branches to extend the protection of the law on all persons without discrimination. (Randy Barnett, “Is the Constitution Libertarian?,” pp. 14-17)

However,

the Supreme Court has upheld countless federal laws restricting liberty, primarily under the power of Congress “to regulate commerce . . . among the several states” combined with an open-ended reading of the Necessary and Proper Clause. Further it has upheld the power of Congress to spend tax revenue for purposes other than “for carrying into execution” its enumerated powers, thereby exceeding the scope of the Necessary and Proper Clause….

Beginning in the 1930s, the Supreme Court . . . adopted a presumption of constitutionality whenever a statute restricted unenumerated liberty rights. In the 1950s it made this presumption effectively irrebuttable. Now it will only protect those liberties that are listed, or a very few unenumerated rights such as the right of privacy. (op. cit., pp. 15, 17-18)

What the law giveth, the law taketh away. The power of the States, individually, to trample negative rights has been supplanted by the far greater power of the central government to trample negative rights.

Generally, negative rights are trampled by every government enactment that does more than protect negative rights, which is to say that most government enactments deny negative rights. For example, they

  • compel the surrender of income to government agencies for non-protective purposes (violating freedom from force and property ownership)
  • compel the transfer of income to persons who did not earn the income (violating freedom from force and property ownership)
  • direct how business property may be used, through restrictions on the specifications to which goods must be manufactured (violating property ownership)
  • force the owners of businesses (in non-right-to-work-States) to recognize and bargain with labor unions (violating property rights and freedom of contract)
  • require private businesses to hire certain classes of persons (“protected groups”) and undertake additional expenses for the “accommodation” of handicapped persons (violating property rights and freedom of contract)
  • require private businesses to restrict or ban smoking (violating property rights and freedom of association)
  • mandate attendance at tax-funded schools and the subjects taught in those schools, even where those teachings run counter to the moral values that parents are trying to inculcate (violating freedom from force and freedom of association)
  • limit political speech through restrictions on political contributions and the publication of political advertisements (violating freedom from force and freedom of association).

Such enactments also trample social norms. First, and fundamentally, they convey the message that government, not private social institutions, is the proper locus of moral instruction and interpersonal mediation. Persons who seek special treatment (privileges, a.k.a. positive rights) learn that they can resort to government for “solutions” to their “problems,” which encourages other persons to do the same thing, and so on. In the end — which we have not quite reached — social institutions lose their power to instruct and mediate, and become merely sources of solace and entertainment.

More specifically, government enactments have

  • engendered disrespect for life by authorizing abortion
  • legitimated lewd, lascivious, inconsiderate, and violent behavior in the name of “freedom of expression” and “freedom of speech,” even while distancing children from the moral lessons of religion by declaring freedom from religion where the Constitution only prohibits government establishment of religion
  • undermined the role of the family as a central civilizing force by encouraging the breakup of families (welfare rules, easy divorce)
  • usurped the authority of parents to instill moral values (as mentioned above)
  • encouraged the absence of mothers from the home through subsidized day-care and “affirmative action”
  • engendered disrespect for constructive economic behavior by rewarding shiftlessness (welfare) and penalizing success (progressive income tax, the “death tax,” etc.)
  • shifted the burden of care for the elderly from families to “society” (i.e., taxpayers) through Social Security, Medicare, and Medicaid, thus teaching the wrong lessons about the value of life and respect for old persons.

I could go on and on, but I hope that I have made my point: Politicians — in their zeal to pander to special interests — have damaged the general interest through their disregard of negative rights and the framework of civilizing norms that transforms negative rights into constructive behavior.

How could this have happened? Here is my explanation:

The Framers underestimated the will to power that animates office-holders. The Constitution‘s wonderful design — horizontal and vertical separation of powers — which worked rather well until the late 1800s, cracked under the strain of populism, as the central government began to impose national economic regulation at the behest of muckrakers and do-gooders. The Framers’ design then broke under the burden of the Great Depression, as the Supreme Court of the 1930s (and since) has enabled the central government to impose its will at will. The Framers’ fundamental error can be found in Madison’s Federalist No. 51. Madison was correct in this:

. . . It is of great importance in a republic not only to guard the society against the oppression of its rulers, but to guard one part of the society against the injustice of the other part. Different interests necessarily exist in different classes of citizens. If a majority be united by a common interest, the rights of the minority will be insecure. . . .

But Madison then made the error of assuming that, under a central government, liberty is guarded by a diversity of interests:

[One method] of providing against this evil [is] . . . by comprehending in the society so many separate descriptions of citizens as will render an unjust combination of a majority of the whole very improbable, if not impracticable. . . . [This] method will be exemplified in the federal republic of the United States. Whilst all authority in it will be derived from and dependent on the society, the society itself will be broken into so many parts, interests, and classes of citizens, that the rights of individuals, or of the minority, will be in little danger from interested combinations of the majority.

In a free government the security for civil rights must be the same as that for religious rights. It consists in the one case in the multiplicity of interests, and in the other in the multiplicity of sects. The degree of security in both cases will depend on the number of interests and sects; and this may be presumed to depend on the extent of country and number of people comprehended under the same government. This view of the subject must particularly recommend a proper federal system to all the sincere and considerate friends of republican government, since it shows that in exact proportion as the territory of the Union may be formed into more circumscribed Confederacies, or States oppressive combinations of a majority will be facilitated: the best security, under the republican forms, for the rights of every class of citizens, will be diminished: and consequently the stability and independence of some member of the government, the only other security, must be proportionately increased. . . .

Madison then went on to contradict what he said in Federalist No. 46 about the States being a bulwark of liberty:

It can be little doubted that if the State of Rhode Island was separated from the Confederacy and left to itself, the insecurity of rights under the popular form of government within such narrow limits would be displayed by such reiterated oppressions of factious majorities that some power altogether independent of the people would soon be called for by the voice of the very factions whose misrule had proved the necessity of it. In the extended republic of the United States, and among the great variety of interests, parties, and sects which it embraces, a coalition of a majority of the whole society could seldom take place on any other principles than those of justice and the general good; whilst there being thus less danger to a minor from the will of a major party, there must be less pretext, also, to provide for the security of the former, by introducing into the government a will not dependent on the latter, or, in other words, a will independent of the society itself. It is no less certain than it is important, notwithstanding the contrary opinions which have been entertained, that the larger the society, provided it lie within a practical sphere, the more duly capable it will be of self-government. And happily for the REPUBLICAN CAUSE, the practicable sphere may be carried to a very great extent, by a judicious modification and mixture of the FEDERAL PRINCIPLE.

Madison understood that a majority can tyrannize a minority. He understood that the States are better able to prevent the rise of tyranny if the powers of the central government are circumscribed. But he then assumed that the States themselves could not resist tyranny within their own borders. Madison overlooked the importance of exit as the ultimate check on tyranny. He assumed that, in creating a new central government with powers greatly exceeding those of the Confederacy, a majority of States would not tyrannize the minority and that minorities with overlapping interests would not concert to tyrannize the majority. Madison was so anxious to see the Constitution ratified that he oversold himself and the States’ ratifying conventions on the ability of the central government to hold itself in check. Thus the Constitution was lamentably silent on nullification and secession.

What has been done by presidents, Congresses, and courts will be very hard to undo. Too many interests are vested in the regulatory-welfare state that has usurped the Framers’ noble vision. Democracy (that is, vote-selling) and log-rolling are more powerful than words on paper. Even a Supreme Court majority of “strict constructionists” probably would decline to roll back the New Deal and most of what has come in its wake.

Explaining a Team’s W-L Record

According to Baseball-Reference.com:

The Pythagorean Theorem of Baseball is a creation of Bill James which relates the number of runs a team has scored and surrendered to its actual winning percentage, based on the idea that runs scored/runs allowed is a better indicator of a team’s (future) performance than a team’s actual winning percentage. This results in a formula which is referred to as Pythagorean Winning Percentage….

There are two ways of calculating Pythagorean Winning Percentage (W%). The more commonly used, and simpler version uses an exponent of 2 in the formula.

W%=[(Runs Scored)^2]/[(Runs Scored)^2 + (Runs Allowed)^2]

More accurate versions of the formula use 1.81 or 1.83 as the exponent.

W%=[(Runs Scored)^1.81]/[(Runs Scored)^1.81 + (Runs Allowed)^1.81]

Expected W-L can then be obtained by multiplying W% by the team’s total number of games played, then rounding off….

The rationale behind Pythagorean Winning Percentage is that, while winning as many games as possible is still the ultimate goal of a baseball team, a team’s run differential (once a sufficient number of games have been played) provides a better idea of how well a team is actually playing. Therefore, barring personnel issues (injuries, trades), a team’s actual W-L record will approach the Pythagorean Expected W-L record over time, not the other way around. Expected W-L is almost always within 3 games of actual W-L at the end of a season (although a recent exception is the 2005 and 2007 Arizona Diamondbacks, who both beat their expected W-L by 11 games). Deviations from expected W-L are often attributed to the quality of a team’s bullpen, or more dubiously, “clutch play”; many sabermetrics advocates believe the deviations are the result of luck and random chance.
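The quoted formula is easy to sketch in Python. (The 2008 Angels figures used in the example, roughly 765 runs scored and 697 allowed against 100 actual wins, are approximate recollections, not taken from the text above.)

```python
def pythagorean_pct(runs_scored, runs_allowed, exponent=2.0):
    """Bill James's Pythagorean winning percentage (simpler, exponent-2 version)."""
    rs = runs_scored ** exponent
    ra = runs_allowed ** exponent
    return rs / (rs + ra)

def expected_wins(runs_scored, runs_allowed, games=162, exponent=2.0):
    """Expected W total: Pythagorean percentage times games played, rounded off."""
    return round(pythagorean_pct(runs_scored, runs_allowed, exponent) * games)

# Approximate 2008 LA Angels figures: 765 runs scored, 697 allowed, 100 wins.
print(expected_wins(765, 697))   # 89, about 11 wins short of the actual 100
```

The more accurate 1.81 and 1.83 variants are just a different `exponent` argument.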

I agree with those who say that deviations reflect the quality of a team’s bullpen. A more precise formula can be obtained by regressing winning percentage on two explanatory variables: RFA (runs scored/[runs scored + runs allowed]) and saves recorded by a team’s bullpen. The result for the American League in 2008:

W-L percentage (expressed as a decimal fraction) = -0.44595 + 1.66556 x RFA + 0.002747 x saves

Adjusted R-squared: 0.899; standard error: 0.022 (i.e., 2.2 percentage points); t-statistics on the intercept and coefficients: -4.246, 7.319, 3.763 (all significant above the 0.99 level).

That is, the average American League team (RFA = .506, saves = 41) compiled a W-L percentage of .510. (The AL beat the NL in interleague play, thus enabling the AL as a whole to compile a better-than-.500 average.)
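As a sketch, the fitted equation can be wrapped in a one-line predictor (coefficients taken directly from the regression above); plugging in the league-average values reproduces the .510 figure to within rounding:

```python
def predicted_wl_pct(rfa, saves):
    """W-L percentage predicted by the 2008 AL regression quoted above.

    rfa   -- runs scored / (runs scored + runs allowed)
    saves -- bullpen saves recorded over the season
    """
    return -0.44595 + 1.66556 * rfa + 0.002747 * saves

# League-average 2008 AL team: RFA = .506, 41 saves.
print(round(predicted_wl_pct(0.506, 41), 3))  # 0.509, i.e. about .510
```

Note the size of the saves coefficient: each additional save is worth about 0.27 percentage points of W-L percentage, other things equal, or roughly 0.45 wins over a 162-game season.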

According to the Pythagorean formula, the LA Angels were the lucky recipients of 11 extra wins in 2008; that is, the formula underestimates the Angels’ 2008 wins by 11. The regression equation, on the other hand, underestimates the Angels’ 2008 wins by only 2. Generally, the regression equation (indicated by blue) gives much better results than the Pythagorean formula (indicated by black):

[Graph: actual W-L records of 2008 American League teams vs. regression (blue) and Pythagorean (black) predictions]
“Luck” is a catch-all term for unexplained variance. It shouldn’t be thrown around as if it has real meaning. In this case, the evidence suggests that a decisive factor in a team’s W-L record is the quality of its bullpen — especially the quality of its closers.

Taste and Art

Definitions:

  • “Taste,” as I use it here, is “the faculty of discerning what is aesthetically excellent” (from TheFreeDictionary, taste, n. 7. a).
  • “Art” consists of “the conscious production or arrangement of sounds, colors, forms, movements, or other elements in a manner that affects the sense of beauty” (from TheFreeDictionary, art, n. 2. a).
  • An “artifact” is “an object produced or shaped by human craft, especially a tool, weapon, or ornament of archaeological or historical interest” (from TheFreeDictionary, artifact, n. 1).

There is a good test of the faculty of taste at this site, where a visitor can view a series of ten images and designate each as “art” or “non-art.” The viewer’s choice, in each case, triggers a response of “correct” or “wrong.” For example, the designation of a piece by a recognized “artist” (e.g., Willem de Kooning) as “non-art” triggers a response of “wrong,” even if the piece looks much like some of the “non-art” images, which are called that because they happen to be the scribbles and dribbles of 4-year-olds. A discerning viewer will select “non-art” for all ten images, for none of them is aesthetically excellent. On the other hand, a viewer who is anxious to conform to élite opinion will try to identify and label as “art” the four pieces by de Kooning and his ilk. (For more in this vein, see “Modernism in the Arts and Politics.”)

My observation of the “arts” in the modern age leads me to the following conclusions:

  • Taste is not dictated by élite opinion, which is more about exclusiveness than excellence.
  • Therefore, that which élite opinion designates as “art” is not necessarily art — and is likely to be its opposite.
  • In fact, most of the works of modern “artists” are mere artifacts, having no more relation to beauty than rusty tools, derelict boxcars, and abandoned buildings.

The Best and Worst of Times

Bryan Caplan presents the following table:

[Table: youth mortality rates, 2005 vs. 1950, by age group and cause of death]

Caplan’s commentary:

Overall, today is much safer than 1950.  That’s probably no surprise to anyone who knows basic economic history.  What’s particularly interesting is that safety gains are especially large for younger kids.  The mortality rate for kids under 5 was almost five times greater in 1950, 3.7 times greater for kids 5-14, and 2.2 times greater for 15-24 year olds.

I suspect that many people will object, “Yes, but if you break the results down by cause of death, modernity is worse in both homicide and suicide – two out of the five categories.”  My reply: All modernity has done is roughly double two vivid near-zero risks.  In exchange, we are vastly safer from the formerly quantitatively fearsome risks of disease, accidents, and war.

Bottom line: Modernity delivers the children’s paradise that the fifties only promised.  Maybe the nation’s parents should try turning off their televisions for a minute of gratitude that they aren’t Ward and June Cleaver?

Caplan’s conclusions are foolish because he reifies “modernity” (an abstraction without causal or explanatory power) and aggregates mortality rates that, individually, stand for separate and distinct phenomena:

  • advances in medical science (thus lower mortality from disease)
  • safer household products, machinery, and automobiles (thus lower mortality from accidents)
  • vastly different conditions of war (intense, head-to-head combat along “front lines” in Korea vs. sporadic operations against/attacks from guerrillas in Afghanistan and Iraq)
  • greater lawlessness among teens and young adults (thus a higher death-from-homicide rate among 15-24 year olds)
  • greater anomie among teens and young adults (thus a higher death-from-suicide rate among 15-24 year olds)

War is a trendless phenomenon, and shouldn’t be included in Caplan’s statistics. For 15-24 year olds, then, the relevant mortality rate is 82 persons per 100,000 in 2005, as against 130 per 100,000 in 1950. But what we really see is a mix of change for the better and change for the worse, and the two can’t be combined to suggest overall change for the better.  There is progress of one kind — scientific and engineering advancement — and regress of another kind — greater alienation of young adults from traditional moral strictures against violence to others and oneself.
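The war-excluded comparison can be checked with a few lines of arithmetic. The 82 and 130 per-100,000 rates are the ones cited above; the 5-per-100,000 base in the second part is purely illustrative, not a figure from Caplan's table:

```python
# Deaths per 100,000 among 15-24 year-olds, war deaths excluded
# (the 82 and 130 figures are the ones cited in the text above).
rate_1950 = 130.0
rate_2005 = 82.0

decline = (rate_1950 - rate_2005) / rate_1950
print(f"Overall decline, war excluded: {decline:.0%}")  # 37%

# Caplan's "roughly double two vivid near-zero risks": doubling a small
# rate adds little in absolute terms. The 5-per-100,000 base is assumed.
small_risk_1950 = 5.0
small_risk_2005 = 2 * small_risk_1950
print(f"Absolute rise: {small_risk_2005 - small_risk_1950:.0f} per 100,000")
```

The aggregate decline, of course, is exactly the kind of number that conceals the offsetting per-cause trends discussed above.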

Why have young adults become less respectful of others and themselves in the past half-century? Consider these influences:

  • “Entertainment” has become more violent, and graphically so. Compare today’s films and TV shows with those of yesteryear, today’s rap “music” with the tepid tunes of the early 1950s, and today’s computer and video “games” with pinball.
  • Under the onslaught of social engineering by government (e.g., sex education in schools, welfare “rights,” easy divorce, and day-care subsidies) family life has become less coherent and the role of parents has become less central in the guidance of children. Increasingly, mothers are absent (at work), and fathers are absent (period).
  • Even in two-parent homes, parents have less time for their children because they (the parents) are caught up in the pursuit of material goods. Parents try to compensate for their physical and spiritual absence by spoiling their children with material goods, which merely signals to children the primacy of material things over humane values.

The predictable result of these influences is disregard for others and oneself. This disregard manifests itself not only in homicide and suicide but also in substance abuse, wanton sex, venereal disease, and abortion or — almost as bad — the bearing of “unwanted” children who then become targets of abuse.

Unlike Caplan, I see a less-than-half-full glass in the mortality-rate trends. Scientific and engineering advances are all very well, but they cannot prevent or offset the decline of America into hedonism and violence. As the descent becomes more obvious to politicians, they will seize the opportunity to “save” us through various draconian measures, just as they would save us from the pleasures of smoking, a natural cycle of “global warming,” the right to defend ourselves, and on and on.

Torture

Can torture be justified? I say yes, because where torture has a reasonable chance of saving innocent lives, it is a proper course of action.

I begin by stipulating that certain practices constitute torture by almost everyone’s standards. Given that there is such a thing as torture, is it ever justified? Let us consider the objections to torture, which are several:

1. It doesn’t work.

2. Even if it works (in rare circumstances), it is counterproductive because it creates enemies.

3. Torture is wrong — period. Therefore, it does not matter what good ends may be served by torture (e.g., gaining information to prevent terrorist attacks).

4. “We” (i.e., the American government, acting on behalf of Americans) must not sink to the level of those we would torture. Despite all threats and provocations, “we” must stand up for our ideals, which include respect for human life.

5. The practice of torture may be hard to contain, like a contagious disease or a slide down the slippery slope: enemies today, Americans tomorrow.

Here are my responses to the objections:

1. Torture doesn’t work. Never say never, as the saying goes. Of course torture works, sometimes. If you count waterboarding as torture, as do the opponents of torture, there is a strong case that torture prevented post-9/11 terrorist attacks in the U.S. Those who say that torture doesn’t work really mean something else, for example: it may not work in all cases, it will not work if not done skillfully, or there may be ways — short of torture — to achieve the same result. All such objections may be correct, but their possible correctness does not rule out torture on practical grounds.

2. Torture creates enemies. This may be true, but what is another enemy if his enmity is impotent — as is the enmity of most of the anti-American world? (It should not be forgotten that the most potent enemies of liberty are Americans, who — with increasing success — seek to dictate the terms and conditions of our daily lives.) We are not engaged in a popularity contest; we are engaged in a death struggle against predators. The real issue here is whether anti-American sentiment translates into reliable threats of action against Americans and their interests (as opposed to demonstrations), and — even if it does, to some degree — whether the threat is to be feared more than more imminent threats (e.g., terrorist attacks). Further, there is a strong case that signs of weakness (i.e., inaction, negotiation, the pursuit of “justice” rather than vengeance) do more to encourage our enemies than do signs of strength (of which a demonstrated willingness to torture surely is one). There is evidence, for example, that Osama bin Laden was emboldened to attack the U.S. on 9/11 because of the weakness of American responses to earlier attacks. In general, the historical record — notably in the history of the Third Reich and Soviet Russia — points to the folly of accommodation and appeasement.

3. Torture is wrong — period. How can torture be wrong — period — if it can be used to prevent harm to innocents? To assert that torture is always wrong is the same thing as asserting the wrongness of self-defense. It replaces a noble goal — protection of innocent persons from harm — with an ignoble goal — protection of miscreants from harm. If torture is always wrong, then attacks on innocents which could be prevented by torture are always right. That is the logical implication of an absolute injunction against torture. The proponents of an absolute injunction (e.g., this one) usually start from the premise that it is wrong to take human life. But I wonder how many of them would persist in that premise were they in a life-or-death struggle with an armed intruder. Would success in such a struggle mean that a proponent of the “life is precious” view had somehow “sunk” to the intruder’s level of moral depravity? Let us see.

4. “We” must not sink to the level of those “we” would torture. Even the most obdurate opponents of torture (excepting rigid pacifists) would agree that self-defense is a fundamental right. To oppose torture is, therefore, to oppose self-defense. Self-defense is not a matter of “sinking” to the level of those who would kill us; it is a matter of acting rightly against predators. Did we “sink” to the level of Japan and Germany when we warred against them? We did not; to the contrary, we rose to the occasion of defending humanity against brutality. Thus it is (or can be) with torture, in the right circumstances.

5. There is a slippery slope from torturing enemies to torturing Americans. This may be the most telling objection to torture. But, like the other objections, it fails to entertain the alternative: harm to Americans and their interests. It is the prevention of harm, after all, which justifies government. The question of “harm to whom?” was confronted in the creation of American armed forces and police forces. Both can be used (and in the case of police forces, sometimes are used) against innocent Americans. But Americans, by and large, are willing (often eager) for the protections afforded by organized defenses against foreign and domestic predators. It has long been understood that the defenders must be controlled to ensure that they do not run amok, but it also has long been understood that the risk of their running amok is worth the payoff, namely, protection from predators. The point is that adequate control of the timing and methods of torture can ensure against a slide down the slippery slope.

In sum, torture is moral — and therefore justified — when it becomes necessary for the purpose of eliciting information that could save innocent lives and the lives of those whose job it is to defend innocent lives. I do not mean that torture must be used, but that it may be used. I do not mean that torture will not have repulsive consequences for its targets, but that the thought of those consequences should not cause the American government to renounce torture as an option.

Related reading:

An example of the payoff: “Cracking KSM” (here and here, too)

The “torture memos” and why they shouldn’t have been released

The politicization of the “torture issue” (AG Holder cannot conduct an investigation on his own authority, despite disingenuous claims to the contrary)

The always insightful Megan McArdle: here, here, and here

The brilliantly incisive Thomas Sowell: here and here

Other commentary: here, here, here, here, here, and here

My earlier (and unchanged) views: here, here, here, here, and here

Negative Rights

Go here for a comprehensive treatment of negative rights, rights in general, and liberty.

Law and Liberty

Law comprises the rules which circumscribe human behavior. Law in the United States is mainly an amalgam of two things:

  • widely observed social norms that have not yet been undermined by government
  • governmental decrees that shape behavior because they (a) happen to reflect social norms or (b) are backed by a credible threat of enforcement.

Law — whether socially evolved or government-imposed — is morally legitimate only when it conduces to liberty; that is, when

  • it applies equally to all persons in a given social group or legal jurisdiction
  • an objector may freely try to influence law (voice)
  • an objector may freely leave a jurisdiction whose law offends him (exit).

Unequal treatment means the denial of negative rights on some arbitrary basis (e.g., color, gender, income). As long as negative rights are not denied, then a norm of voluntary discrimination (on whatever basis) is a legitimate exercise of the negative right to associate with persons of one’s choosing, whether as a matter of personal or commercial preference (the two cannot be separated). True liberty encompasses social distinctions, which are just as much the province of “minorities” and “protected groups” as they are of the beleaguered white male of European descent, whose main sin seems to have been the creation of liberty and prosperity in this country.

Law is not morally legitimate where equal treatment, voice, or exit are denied or suppressed by force or the threat of force. Nor is law morally legitimate where incremental actions of government (e.g., precedential judicial rulings) effectively deny voice and foreclose exit as a viable option.

If government-made law ever had moral legitimacy in the United States, the zenith of its legitimacy came in 1905:

[T]he majority opinion in [Lochner v. New York] came as close as the Supreme Court ever has to protecting a general right to liberty under the Fourteenth Amendment. In his opinion for the Court, Justice Rufus Peckham affirmed that the Constitution protected “the right of the individual to his personal liberty, or to enter into those contracts in relation to labor which may seem to him appropriate or necessary for the support of himself and his family.” (Randy Barnett,  “Is the Constitution Libertarian?,” p. 5)

But:

Beginning in the 1930s, the Supreme Court reversed its approach in Lochner and adopted a presumption of constitutionality whenever a statute restricted unenumerated liberty rights. [See O’Gorman & Young, Inc. v. Hartford Fire Ins. Co. (1931).] In the 1950s it made this presumption effectively irrebuttable. [See Williamson v. Lee Optical of Oklahoma (1955).] Now it will only protect those liberties that are listed, or a very few unenumerated rights such as the right of privacy. But such an approach violates the Ninth Amendment’s injunction against using the fact that some rights are enumerated to deny or disparage others because they are not. (Barnett, op. cit., pp. 17-18)

This bare outline summarizes the governmental acts and decrees that stealthily expanded and centralized government’s power and usurped social norms. The expansion and centralization of power occurred in spite of the specific limits placed on the central government by the original Constitution and the Tenth Amendment, and in spite of the Fourteenth Amendment. These encroachments on liberty are morally illegitimate because their piecemeal character has robbed Americans of voice and mooted the exit option. And so, we have discovered — too late — that we are impotent captives in our own land.

Voice is now so circumscribed by “settled law” that there is a null possibility of restoring Lochner and its ilk. Exit is now mainly an option for the extremely wealthy among us. (More power to them.) For the rest of us, there is no realistic escape from illegitimate government-made law, given that the rest of the world (with a few distant exceptions) is similarly corrupt.

As Thomas Jefferson observed in 1774,

Single acts of tyranny may be ascribed to the accidental opinion of a day; but a series of oppressions, begun at a distinguished period and pursued unalterably through every change of ministers, too plainly prove a deliberate, systematic plan of reducing [a people] to slavery.

Having been subjected to a superficially benign form of slavery by our central government, we must look to civil society and civil disobedience for morally legitimate law. Civil society, as I have written, consists of

the daily observance of person X’s negative rights by persons W, Y, and Z — and vice versa…. [Civil society is necessary to liberty] because it is impossible and — more importantly — undesirable for government to police everyone’s behavior. Liberty depends, therefore, on the institutions of society — family, church, club, and the like — through which individuals learn to treat one another with respect, through which individuals often come to the aid of one another, and through which instances of disrespect can be noted, publicized, and even punished (e.g., by criticism and ostracism).

That is civil society. And it is civil society which … government ought to protect instead of usurping and destroying as it establishes its own agencies (e.g., public schools, welfare), gives them primary and even sole jurisdiction in many matters, and funds them with tax money that could have gone to private institutions.

When government fails to protect civil society — and especially when government destroys it — civil disobedience is in order. If civil disobedience fails, more drastic measures are called for:

When I see the worsening degeneracy in our politicians, our media, our educators, and our intelligentsia, I can’t help wondering if the day may yet come when the only thing that can save this country is a military coup. (Thomas Sowell, writing at National Review Online, May 1, 2007)

In Jefferson’s version,

when wrongs are pressed because it is believed they will be borne, resistance becomes morality.

Rationing and Health Care

Peter Singer — utilitarian extraordinaire, spokesman for involuntary euthanasia, and advocate of infanticide — recently shared with millions of rapt readers his opinions about why and how health care must be rationed: “Why We Must Ration Health Care,” The New York Times Magazine, July 15, 2009. Given Singer’s penchant for playing God, the “we” of his title could be an imperial one, but — in this instance — it is an authoritarian one.

Singer is among the many “public intellectuals” (some of them Nobelists) who believe in an omniscient, infallible government, provided — of course — that it does things unto the rest of us the way that they (the “intellectuals”) would have them done. And, like most of those “intellectuals,” Singer is dead wrong in his assertions about how to “solve” the “health care problem,” because his underlying premises and “logic” are dead wrong.

I begin with Singer’s central thesis:

Health care is a scarce resource, and all scarce resources are rationed in one way or another. In the United States, most health care is privately financed, and so most rationing is by price: you get what you, or your employer, can afford to insure you for.

Those two sentences are replete with inaccuracy and error:

  • There is no such thing as “health care”; the term is a catch-all for a wide variety of goods and services, ranging from the self-administration of generic aspirin to complex, delicate neurological surgery.
  • In any event, “health care” is not a “resource,” its various forms are economic goods (i.e., products and services), the production of which requires the use of resources (e.g., the time of trained nurses and doctors, the raw materials and production facilities used in drug manufacture).
  • Economic goods are not rationed by price; price facilitates voluntary transactions between willing buyers and sellers in free markets. Rationing is what happens when a powerful authority (usually a government) steps in to dictate the organization of markets, the specifications of goods, and — more extremely — who may buy what goods and at what prices (though dictated prices are essentially meaningless because they do not perform the signaling function that they do in free markets).
  • Much “health care” in the United States is privately financed, to the extent that most Americans buy and self-administer products like aspirin, antihistamines, cough medicine, band-aids, etc., and some (though relatively few) Americans buy medical products and services without the benefit of insurance. But much “health care” is not privately financed, because — as Singer soon notes — there is a substantial taxpayer subsidy for employer-sponsored insurance programs. There are various other taxpayer subsidies and government restraints  (e.g., Medicare, Medicaid, government-funded research of diseases and medicines, FDA approval of most kinds of medications and personal-care products).

The slipperiest of Singer’s facile statements is his characterization of what happens in free markets as “rationing,” thus lending back-handed legitimacy to true rationing, which is brute-force interference by government in what is really a personal responsibility: caring for one’s health. For it has somehow come to be common currency that “health care” is a “right,” something that government ought to do for us, instead of something that we ought to do for ourselves. (After all, we do live in an age of “positive rights,” which come at a high cost to everyone, including those who seek them.)

Most Americans are, however, enmeshed in a Catch-22 situation. They have less money to provide for themselves because it has been taken from them by government, to provide for others. But Singer deems the provision inadequate:

In the public sector, primarily Medicare, Medicaid and hospital emergency rooms, health care is rationed by long waits, high patient copayment requirements, low payments to doctors that discourage some from serving public patients and limits on payments to hospitals.

Singer’s “solution” is to make things worse:

The case for explicit health care rationing in the United States starts with the difficulty of thinking of any other way in which we can continue to provide adequate health care to people on Medicaid and Medicare, let alone extend coverage to those who do not now have it.

How will outright rationing entice doctors and hospitals to provide services that they are now unwilling to provide? If doctors leave the medical profession, and new doctors enter at reduced rates, what would Singer do? Begin drafting students into medical schools? What about hospitals that refuse to conform? Would they be nationalized, along with their nurses, orderlies, etc.?

What a pretty picture: Soviet-style medicine here in the U.S. of A. Yet that is precisely where outright rationing will lead if the politburo in Washington sees a shrinking supply of doctors, hospitals, and other medical providers — as it will. Most politicians do not know how to do less. When they create a mess, their natural inclination is to do more of what they did to cause the mess in the first place.

Singer, naturally, appeals to the authority of just such a politician:

President Obama has said plainly that America’s health care system is broken. It is, he has said, by far the most significant driver of America’s long-term debt and deficits. It is hard to see how the nation as a whole can remain competitive if in 26 years we are spending nearly a third of what we earn on health care, while other industrialized nations are spending far less but achieving health outcomes as good as, or better than, ours.

Well, if BO says it, it must be true, n’est-ce pas? The “system” is broken because government established Medicare and Medicaid, back in the days of LBJ’s “Great Society.”  Those two programs have “only” four fatal flaws:

  • They take money from taxpayers, who therefore are less able to provide for themselves.
  • They grant beneficiaries “free” or low-cost access to medical services, thus bloating the demand for those services and causing their prices to rise. (The subsidy of employer-sponsored health insurances has the same effect.)
  • They involve promises of access to medical services that cannot be redeemed by the paltry Medicare tax rate — thus the prospect of ballooning deficits, leading to (a) higher interest rates and/or (b) higher taxes on (you guessed it) “the rich.”
  • “The rich,” who finance economic growth, will flee these shores (or their money will), and the deficits will grow larger as tax revenues fall.

Another natural inclination of politicians is to deplore the messes caused by other politicians, and then to do something to make the messes worse. In this instance, BO itches to trump LBJ.

But, of course, this time will be different — it will be done right:

Rationing health care means getting value for the billions we are spending by setting limits on which treatments should be paid for from the public purse. If we ration we won’t be writing blank checks to pharmaceutical companies for their patented drugs, nor paying for whatever procedures doctors choose to recommend. When public funds subsidize health care or provide it directly, it is crazy not to try to get value for money. The debate over health care reform in the United States should start from the premise that some form of health care rationing is both inescapable and desirable. Then we can ask, What is the best way to do it?

So, instead of insurance companies — which at least compete with each other to offer subscribers affordable and attractive lineups of providers and drug formularies — our choices will be dictated by all-wise bureaucrats. Lovely!

Singer defends the bureaucrats, as long as they do it his way, of course. He begins with NICE:

…Britain’s National Institute for Health and Clinical Excellence…. generally known as NICE, is a government-financed but independently run organization set up to provide national guidance on promoting good health and treating illness…. NICE had set a general limit of £30,000, or about $49,000, on the cost of extending life for a year….

There’s no doubt that it’s tough — politically, emotionally and ethically — to make a decision that means that someone will die sooner than they would have if the decision had gone the other way….

Governments implicitly place a dollar value on a human life when they decide how much is to be spent on health care programs and how much on other public goods that are not directed toward saving lives. The task of health care bureaucrats is then to get the best value for the resources they have been allocated….

As a first take, we might say that the good achieved by health care is the number of lives saved. But that is too crude. The death of a teenager is a greater tragedy than the death of an 85-year-old, and this should be reflected in our priorities. We can accommodate that difference by calculating the number of life-years saved, rather than simply the number of lives saved. If a teenager can be expected to live another 70 years, saving her life counts as a gain of 70 life-years, whereas if a person of 85 can be expected to live another 5 years, then saving the 85-year-old will count as a gain of only 5 life-years. That suggests that saving one teenager is equivalent to saving 14 85-year-olds. These are, of course, generic teenagers and generic 85-year-olds….

Health care does more than save lives: it also reduces pain and suffering. How can we compare saving a person’s life with, say, making it possible for someone who was confined to bed to return to an active life? We can elicit people’s values on that too. One common method is to describe medical conditions to people — let’s say being a quadriplegic — and tell them that they can choose between 10 years in that condition or some smaller number of years without it. If most would prefer, say, 10 years as a quadriplegic to 4 years of nondisabled life, but would choose 6 years of nondisabled life over 10 with quadriplegia, but have difficulty deciding between 5 years of nondisabled life or 10 years with quadriplegia, then they are, in effect, assessing life with quadriplegia as half as good as nondisabled life. (These are hypothetical figures, chosen to keep the math simple, and not based on any actual surveys.) If that judgment represents a rough average across the population, we might conclude that restoring to nondisabled life two people who would otherwise be quadriplegics is equivalent in value to saving the life of one person, provided the life expectancies of all involved are similar.

This is the basis of the quality-adjusted life-year, or QALY, a unit designed to enable us to compare the benefits achieved by different forms of health care. The QALY has been used by economists working in health care for more than 30 years to compare the cost-effectiveness of a wide variety of medical procedures and, in some countries, as part of the process of deciding which medical treatments will be paid for with public money. If a reformed U.S. health care system explicitly accepted rationing, as I have argued it should, QALYs could play a similar role in the U.S.
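The life-year and QALY arithmetic that Singer describes can be sketched in a few lines. All numbers are his own hypotheticals (the generic 70- and 5-year life expectancies, the 0.5 quadriplegia weight), not survey data:

```python
def qalys(remaining_years: float, quality_weight: float = 1.0) -> float:
    """Quality-adjusted life-years: remaining years scaled by a 0-1
    quality weight (1.0 = full health), as in Singer's description."""
    return remaining_years * quality_weight

# Generic teenager (70 years left) vs. generic 85-year-old (5 years left):
print(qalys(70) / qalys(5))  # 14.0 -- one teenager "equals" fourteen 85-year-olds

# Quadriplegia example: indifference between roughly 5 nondisabled years
# and 10 years with quadriplegia implies a quality weight of about 0.5,
# so 10 such years count as 5 QALYs.
print(qalys(10, quality_weight=0.5))  # 5.0
```

The mechanics are trivial; the controversy lies entirely in who sets the weights, which is the point taken up next.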

Here we have utilitarianism rampant on a field of fascism. Given that (in Singer’s mind) “we” must nationalize medicine (i.e., ration “health care”), “we” must do it right. To do it right, “we” must weigh human life on a scale of Singer’s devising, and not on the scale of our individual preferences. For Singer knows all! And government knows all, as long as it operates according to Singer’s calculus of deservingness.

And why must “we” ration health care? Singer invokes a familiar statistic:

In the U.S., some 45 million do not [have health insurance], and nor are they entitled to any health care at all, unless they can get themselves to an emergency room.

Who are those legendary 45 (or 47) million persons? Are they entirely bereft of medical attention? Here are some answers to those questions, from June and Dave O’Neill’s “Who are the Uninsured? An Analysis of America’s Uninsured Population, Their Characteristics and Their Health”:

Each year the Census Bureau reports its estimate of the total number of adults and children in the U.S. who lacked health insurance coverage during the previous calendar year. The number of Americans reported as uninsured in 2006 was 47 million, which was close to 16 percent of the U.S. population… This number has come to have a large impact on the debate over healthcare reform in the United States. However, there is a great deal of confusion about the significance of the uninsured numbers.

Many people believe that the number of uninsured signifies that almost 50 million Americans are without healthcare simply because they cannot afford a health insurance policy and as a consequence, suffer from poor health, and premature death. However this line of reasoning is based on a distorted characterization of the facts….

More careful analysis of the statistics on the uninsured shows that many uninsured individuals and families appear to have enough disposable income to purchase health insurance, yet choose not to do so, and instead self-insure. We call this group the “voluntarily uninsured” and find that they account for 43 percent of the uninsured population. The remaining group—the “involuntarily uninsured”—makes up only 57 percent of the Census count of the uninsured. A second important point is that while the uninsured receive fewer medical services than those with private insurance, they nonetheless receive significant amounts of healthcare from a variety of sources—government programs, private charitable groups, care donated by physicians and hospitals, and care paid for by out-of-pocket expenditures. Third, although the involuntarily uninsured by some estimates appear to have a significantly shorter life expectancy than those who are privately insured or voluntarily uninsured, it is difficult to establish cause and effect. We find that differences in mortality according to insurance status are to a large extent explained by factors other than health insurance coverage—such as education, socioeconomic status, and health-related habits like smoking…..

The results [of a regression analysis] vividly show the importance of controlling for characteristics that are strongly related to health status and health outcomes and are also strongly related to insurance status. The unadjusted gross difference in mortality risk between those with private insurance and the involuntarily uninsured was -0.113 or 11 percentage points. After adding to the model all characteristics, including the variable indicating fair/poor health status (M3), we find that the differential in the mortality risk between those with private insurance and those who are involuntarily uninsured is reduced to -0.029, a 2.9 percentage point difference.

The unadjusted differential between the privately insured and the voluntarily uninsured … was small—only 3.3 percentage points—because the characteristics of the two groups are fairly similar. That differential becomes even smaller after controlling for measurable differences in characteristics. Thus … the mortality rate of the voluntarily uninsured is only 1.7 percentage points below that of the privately insured….

In summary, we find as have others, that lack of health insurance is not likely to be the major factor causing higher mortality rates among the uninsured. The uninsured—particularly the involuntarily uninsured—have multiple disadvantages that in themselves are associated with poor health.
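The O’Neills’ shares, applied to the Census count quoted above, work out as follows. This is a back-of-the-envelope restatement of their quoted figures, nothing more:

```python
# Census count of the uninsured (2006) and the O'Neills' 43 percent
# "voluntarily uninsured" share, both quoted above.
total_uninsured_m = 47.0
voluntary_share = 0.43

voluntary_m = total_uninsured_m * voluntary_share  # ~20.2 million
involuntary_m = total_uninsured_m - voluntary_m    # ~26.8 million
print(f"Voluntary: {voluntary_m:.1f}M, involuntary: {involuntary_m:.1f}M")

# Mortality gap vs. the privately insured: 11.3 points unadjusted,
# 2.9 points after controls -- i.e., most of the raw gap reflects
# factors other than insurance status.
explained = 1 - 2.9 / 11.3
print(f"Share of raw gap explained by other factors: {explained:.0%}")
```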

(See also The Henry J. Kaiser Family Foundation’s The Uninsured: A Primer, Supplemental Data Tables, October 2008.)

In summary, the so-called crisis in “health care” is a figment of fevered imaginations. To the extent that medical care and medications are more costly than they “should” be, it is because of government interference: restrictions on the entry of doctors and other providers (thanks to the lobbying efforts of the AMA — the doctors’ “union” — and similar organizations); long and often deadly FDA approval procedures for new drugs; subsidies for employer-provided health insurance; and the establishment of Medicare and Medicaid.

The obvious solution to the “crisis” — obvious to anyone who isn’t wedded to the religion of big government — is to get government out of medicine. But that won’t happen because the “crisis” is yet another excuse for politicians and pundits (like Singer, and worse) to dictate the terms and conditions of our lives. Unfortunately, too many voters are susceptible to the siren call of government action. Such voters are more than ready to elect politicians who promise to “do something” about trumped-up crises — be they crises of “health care,” “global warming,” or whatever comes next.

What will happen with the current “crisis”? The result will be something less destructive than BO’s preferred result, which would effectively nationalize medicine in the United States by making all providers and drug companies beholden to a single payer (i.e., government) and leveling the quality of medical care to a mediocre standard through mandatory participation in the nationalized scheme. But the result, whatever it is, will be destructive:

  • Costs will rise.
  • Many providers will quit providing, and fewer new providers will replace them, unless they are enticed by tax-funded subsidies.
  • Drug companies will develop fewer new drugs, unless they are co-opted by tax-funded subsidies.
  • “The rich” will be forced to bear a disproportionate share of the cost of making things worse. And so, “the rich” will have less wherewithal with which to stimulate economic growth, and less inclination to do so (in the United States, at least).

Politicians being politicians, the resulting mess will have only one obvious solution: outright nationalization of medicine in the U.S. (The politburo, of course, will enjoy a separate and distinctly superior brand of taxpayer-funded medical care.)

And then we will have become thoroughly European.

Does the Minimum Wage Increase Unemployment?

Yes!

I have not a shred of doubt that the minimum wage increases unemployment, especially among the most vulnerable group of workers: males aged 16 to 19.

Anyone who claims that the minimum wage does not affect unemployment among that vulnerable group is guilty of (a) ingesting a controlled substance, (b) wishing upon a star, or — most likely — (c) indulging in a mindless display of vicarious “compassion.”

Economists have waged a spirited mini-war over the minimum-wage issue, to no conclusive end. But anyone who tells you that a wage increase that is forced on businesses by government will not lead to a rise in unemployment is one of three things: an economist with an agenda, a politician with an agenda, or a person who has never run a business. There is considerable overlap among the three categories.

I have run a business, and I have worked for the minimum wage (and less). On behalf of business owners and young male workers, I am here to protest further increases in the minimum wage. My protest is entirely evidence-based — no marching, shouting, or singing for me. Facts are my friends, even if they are inimical to Left-wing economists, politicians, and other members of the reality-challenged camp.

I begin with time series on unemployment among males — ages 16 to 19 and 20 and older — for the period January 1948 through June 2009. (These time series are available via this page on the BLS website.) If it is true that the minimum wage targets younger males, the unemployment rate for 16 to 19 year-old males (16-19 YO) will rise faster or decrease less quickly than the unemployment rate for 20+ year-old males (20+ YO) whenever the minimum wage is increased. The precise change will depend on such factors as the propensity of young males to attend college — which has risen over time — and the value of the minimum wage in relation to prevailing wage rates for the industries which typically employ low-skilled workers. But those factors should have little influence on observed month-to-month changes in unemployment rates.

I use two methods to estimate the effects of minimum wage on the unemployment rate of 16-19 YO: graphical analysis and linear regression.

I begin by finding the long-term relationship between the unemployment rates for 16-19 YO and 20+ YO. As it turns out, there is a statistical artifact in the unemployment data, an artifact that is unexplained by this BLS document, which outlines changes in methods of data collection and analysis over the years. The relationship between the two time series is stable through March 1959, when it shifts abruptly. The abruptness of the shift can be seen in the contrast between figure 1, which covers the entire period, and figures 2 and 3, which subdivide the entire period into two sub-periods.

[Figure 1: unemployment rates for 16-19 YO and 20+ YO males, January 1948 – June 2009]

[Figure 2: the same relationship, January 1948 – March 1959]

[Figure 3: the same relationship, April 1959 – June 2009]

For the graphical analysis, I use the equations shown in figures 2 and 3 to determine a baseline relationship between the unemployment rate for 20+ YO (“x”) and the unemployment rate for 16-19 YO (“y”). The equation in figure 2 yields a baseline unemployment rate for 16-19 YO for each month from January 1948 through March 1959; the equation in figure 3, a baseline unemployment rate for 16-19 YO for each month from April 1959 through June 2009. Combining the results, I obtain a baseline estimate for the entire period, January 1948 through June 2009.

I then find, for each month, a residual value for unemployment among 16-19 YO. The residual (actual value minus baseline estimate) is positive when unemployment among 16-19 YO is higher than expected, and negative when 16-19 YO unemployment is lower than expected. Again, this is unemployment of 16-19 YO relative to 20+ YO. Given the stable baseline relationships between the two unemployment rates (when the time series are subdivided as described above), the values of the residuals (month-to-month deviations from the baseline) can reasonably be attributed to changes in the minimum wage.
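The baseline-and-residuals procedure can be reduced to a few lines of Python. This is a sketch only: the monthly rates below are made up for illustration, not the actual BLS series, and numpy's polyfit stands in for the straight-line fits shown in figures 2 and 3.

```python
import numpy as np

def baseline_residuals(u20, u16_19, split):
    """Fit a separate linear baseline (16-19 YO rate as a function of the
    20+ YO rate) to each sub-period, then return actual-minus-baseline
    residuals for every month."""
    resid = np.empty(len(u16_19))
    for seg in (slice(0, split), slice(split, len(u20))):
        slope, intercept = np.polyfit(u20[seg], u16_19[seg], 1)
        resid[seg] = u16_19[seg] - (intercept + slope * u20[seg])
    return resid

# Made-up monthly rates (percent), with the sub-period break at index 3:
u20 = np.array([4.0, 5.0, 6.0, 4.0, 5.0, 6.0])
u16 = np.array([11.3, 13.1, 15.0, 15.0, 16.5, 18.1])
r = baseline_residuals(u20, u16, split=3)
```

A positive entry in `r` marks a month in which 16-19 YO unemployment ran above its baseline relative to 20+ YO unemployment; a negative entry, below.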

For purposes of my analysis, I adopt the following conventions:

  • A change in the minimum wage begins to affect unemployment among 16-19 YO in the month it becomes law, when the legally effective date falls near the start of the month. A change becomes effective in the month following its legally effective date when that date falls near the end of the month. (All of the effective dates have thus far been on the 1st, 3rd, 24th, and 25th of a month.)
  • In either event, the change in the minimum wage affects unemployment among 16-19 YO for 6 months, including the month in which it becomes effective, as reckoned above.

In other words, I assume that employers (by and large) do not anticipate the minimum wage and begin to fire employees before the effective date of an increase. I assume, rather, that employers (by and large) respond to the minimum wage by failing to hire 16-19 YO who are new to the labor force. Finally, I assume that the non-hiring effect lasts about 6 months — in which time prevailing wage rates for 16-19 YO move toward (and perhaps exceed) the minimum wage, thus eventually blunting the effect of the minimum wage on unemployment.

I relax the 6-month rule during eras when the minimum wage rises annually, or nearly so. I assume that during such eras employers anticipate scheduled increases in the minimum wage by continuously suppressing their demand for 16-19 YO labor. (There are four such eras: the first runs from September 1963 through July 1971; the second, from May 1974 through June 1981; the third, from May 1996 through February 1998; the fourth, from July 2007 to the present, and presumably beyond.)
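The timing conventions above can be written as a small routine. This is my sketch, not the author's code; the day-15 cutoff for "near the end of the month" is an assumption, chosen because it cleanly separates the 1st/3rd effective dates from the 24th/25th dates mentioned above.

```python
from datetime import date

def effective_months(effective_date, duration=6):
    """Return the (year, month) pairs during which a minimum-wage increase
    is assumed to affect 16-19 YO unemployment: count from the effective
    date's own month if the date falls early in the month, from the next
    month if it falls late, and run for `duration` months in all."""
    y, m = effective_date.year, effective_date.month
    if effective_date.day > 15:  # assumed cutoff for "near the end of the month"
        m += 1
    return [(y + (m + k - 1) // 12, (m + k - 1) % 12 + 1)
            for k in range(duration)]

# Illustrative date: an increase effective on the 24th counts from the
# following month, for six months in all.
months = effective_months(date(1956, 3, 24))
```

A month is then coded 1 (minimum wage "effective") if it appears in any increase's list, 0 otherwise; the annual-increase eras would simply be coded 1 throughout.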

With that prelude, I present the following graph of the relationship between residual unemployment among 16-19 YO and the effective periods of minimum wage increases.

[Figure 4: residual unemployment among 16-19 YO and the effective periods of minimum-wage increases]

The jagged, green and red line represents the residual unemployment rate for 16-19 YO. The green portions of the line denote periods in which the minimum wage is ineffective; the red portions of the line denote periods in which the minimum wage is effective. The horizontal gray bands at +1 and -1 denote the normal range of the residuals, one standard deviation above and below the mean, which is zero.

It is obvious that higher residuals (greater unemployment) are generally associated with periods in which the minimum wage is effective; that is, most portions of the line that lie above the normal range are red. Conversely, lower residuals (less unemployment) are generally associated with periods in which the minimum wage is ineffective; that is, most portions of the line that lie below the normal range are green. (Similar results obtain for variations in which employers anticipate the minimum wage increase, for example, by firing or reduced hiring in the preceding 3 months, while the increase affects employment for only 3 months after it becomes law.)

Having shown that there is an obvious relationship between 16-19 YO unemployment and the minimum wage, I now quantify it. Because of the distinctly different relationships between 16-19 YO unemployment and 20+ YO unemployment in the two sub-periods (January 1948 – March 1959, April 1959 – June 2009), I estimate a separate regression equation for each sub-period.

For the first sub-period, I find the following relationship:

Unemployment rate for 16-19 YO (in percentage points) = 3.913 + 1.828 x unemployment rate for 20+ YO + 0.501 x dummy variable for minimum wage (1 if in effect, 0 if not)

Adjusted R-squared: 0.858; standard error of the estimate: 9 percent of the mean value of 16-19 YO unemployment rate; t-statistics on the intercept and coefficients: 14.663, 28.222, 1.635.

Here is the result for the second sub-period:

Unemployment rate for 16-19 YO (in percentage points) = 8.940 + 1.528 x unemployment rate for 20+ YO + 0.610 x dummy variable for minimum wage (1 if in effect, 0 if not)

Adjusted R-squared: 0.855; standard error of the estimate: 6 percent of the mean value of 16-19 YO unemployment rate; t-statistics on the intercept and coefficients: 62.592, 59.289, 7.495.

On the basis of the robust results for the second sub-period, which is much longer and more recent, I draw the following conclusions:

  • The baseline unemployment rate for 16-19 YO is about 9 percent.
  • Unemployment around the baseline changes by about 1.5 percentage points for every percentage-point change in the unemployment rate for 20+ YO.
  • The minimum wage, when effective, raises the unemployment rate for 16-19 YO by 0.6 percentage points.

Therefore, given the current number of 16-to-19-year-old males in the labor force (about 3.3 million), some 20,000 will lose or fail to find jobs because of yesterday’s boost in the minimum wage. Yes, 20,000 is a small fraction of 3.3 million (0.6 percent), but it is a real, heartbreaking number — 20,000 young men for whom almost any hourly wage would be a blessing.
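The second-sub-period equation and the 20,000 figure are easy to verify. A quick check, using only the coefficients reported above and the 3.3 million labor-force figure:

```python
# Estimated equation for April 1959 - June 2009 (percentage points):
def rate_16_19(rate_20_plus, min_wage_effective):
    return 8.940 + 1.528 * rate_20_plus + 0.610 * (1 if min_wage_effective else 0)

# The minimum-wage effect is the difference the dummy makes: 0.610 points.
effect = rate_16_19(8.0, True) - rate_16_19(8.0, False)

# Applied to roughly 3.3 million 16-19 YO males in the labor force,
# 0.610 percent comes to about 20,000 young men.
jobs_lost = effect / 100 * 3_300_000
```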

But the “bleeding hearts” who insist on setting a minimum wage, and raising it periodically, don’t care about those 20,000 young men — they only care about their cheaply won reputation for “compassion.”

UPDATE (09/08/09):

A relevant post by Don Boudreaux:

Here’s a second letter that I sent today to the New York Times:

Gary Chaison misses the real, if unintended, lesson of the Russell Sage Foundation study that finds that low-skilled workers routinely keep working for employers who violate statutory employment regulations such as the minimum-wage (Letters, September 8).  This real lesson is that economists’ conventional wisdom about the negative consequences of the minimum-wage likely is true after all.

Fifteen years ago, David Card and Alan Krueger made headlines by purporting to show that a higher minimum-wage, contrary to economists’ conventional wisdom, doesn’t reduce employment of low-skilled workers.  The RSF study casts significant doubt on Card-Krueger.  First, because the minimum-wage itself is circumvented in practice, its negative effect on employment is muted, perhaps to the point of becoming statistically imperceptible.  Second, employers’ and employees’ success at evading other employment regulations – such as mandatory overtime pay – counteracts the minimum-wage’s effect of pricing many low-skilled workers out of the job market.

Sincerely,
Donald J. Boudreaux

The Court in Retrospect and Prospect

SCOTUSblog has published its final tally of the frequency with which the nine justices of the U.S. Supreme Court disagreed with each other in the 53 non-unanimous cases that were decided in the recently ended term. The tally indicates that Kennedy, the so-called swing justice, generally aligns with the Court’s “conservative” wing, so I placed him there, in company with Alito, Roberts, Thomas, and Scalia. The Court’s “liberal” wing, of course, comprises Breyer, Souter, Ginsburg, and Stevens.*

I then ranked the members of the Court’s two wings according to a measure of their net agreement with the other members of their respective wings. Thus:

[Figure: net-agreement ranking of the “conservative” wing]

[Figure: net-agreement ranking of the “liberal” wing]

Alito, for example, was in disagreement with his four “allies” (in non-unanimous cases) a total of 72 percent of the time (see graph below), for an average of 18 percent per ally. Alito was in disagreement with his four “opponents” a total of 272 percent of the time, for an average of 68 percent per opponent. By subtracting Alito’s average anti-“conservative” score (18 percent) from his average anti-“liberal” score (68 percent), I obtained his net average anti-“liberal” score (50 percent). Doing the same for the other four “conservatives,” I found Alito the most anti-“liberal” of the “conservatives.” He was followed closely by Roberts, Thomas, and Scalia, in that order. Kennedy finished fifth by several lengths.

I applied the same method to the “liberals,” and found Breyer the least anti-“conservative” of the lot, with Souter and Ginsburg close to each other in second and third places, and Stevens a strong fourth (or first, if you root for the “liberal” camp). (The apparent arithmetic discrepancies for Thomas, Breyer, and Stevens are due to rounding.)
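The net-agreement arithmetic can be written out explicitly. In this sketch only Alito's totals (72 percent across four allies, 272 percent across four opponents) come from the text; the even four-way splits are assumed purely for illustration.

```python
def net_score(disagreement_with_allies, disagreement_with_opponents):
    """Average disagreement with the opposing wing minus average
    disagreement with one's own wing (both in percent)."""
    return (sum(disagreement_with_opponents) / len(disagreement_with_opponents)
            - sum(disagreement_with_allies) / len(disagreement_with_allies))

# Alito: 72 percent total vs. four allies (average 18), 272 percent total
# vs. four opponents (average 68); net anti-"liberal" score = 68 - 18 = 50.
alito_net = net_score([18, 18, 18, 18], [68, 68, 68, 68])
```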

Thus, if you are a “conservative,” you are likely to rank the nine justices as follows: Alito, Roberts, Thomas, Scalia, Kennedy, Breyer, Souter, Ginsburg, and Stevens. (However, I would place Thomas first, because he comes closest to being a libertarian originalist.) I carried this ranking over to the following graphic, which gives a visual representation of the jurisprudential alignments in the Court’s recently completed term:

[Figure: jurisprudential alignments in the Court’s recently completed term]

It is hard to see how Sotomayor’s elevation to the Court will change outcomes. She may be more assertive than Souter, but I would expect that to work against her in dealings with Alito, Roberts, Thomas, and Scalia. Nor would I expect Kennedy — who seems to pride himself on being the Court’s “moderate conservative” — to respond well to Sotomayor’s reputedly “sharp elbows.” Even Kennedy found himself at odds with Stevens 60 percent of the time. And it seems likely that Sotomayor will vote with Stevens far more often than not — in spite of her convenient conversion to judicial restraint during her recent testimony before the Senate Judiciary Committee.

__________
* It would be more accurate to call Alito, Roberts, and Scalia right-statists, with minarchistic tendencies; Thomas, a right-minarchist; and the rest, left-statists with varying degrees of preference for slavery at home and surrender abroad. (See this post for an explanation of the labels used in the preceding sentence.)

The Cell-Phone Scourge

Today’s edition of The New York Times carries an article by Matt Richtel, “U.S. Withheld Data on Risks of Distracted Driving.” Richtel writes (in part):

In 2003, researchers at a federal agency proposed a long-term study of 10,000 drivers to assess the safety risk posed by cellphone use behind the wheel.

They sought the study based on evidence that such multitasking was a serious and growing threat on America’s roadways.

But such an ambitious study never happened. And the researchers’ agency, the National Highway Traffic Safety Administration, decided not to make public hundreds of pages of research and warnings about the use of phones by drivers — in part, officials say, because of concerns about angering Congress.

On Tuesday, the full body of research is being made public for the first time by two consumer advocacy groups, which filed a Freedom of Information Act lawsuit for the documents. The Center for Auto Safety and Public Citizen provided a copy to The New York Times, which is publishing the documents on its Web site….

“We’re looking at a problem that could be as bad as drunk driving, and the government has covered it up,” said Clarence Ditlow, director of the Center for Auto Safety….

The highway safety researchers estimated that cellphone use by drivers caused around 955 fatalities and 240,000 accidents over all in 2002.

The researchers also shelved a draft letter they had prepared for Transportation Secretary Norman Y. Mineta to send, warning states that hands-free laws might not solve the problem.

That letter said that hands-free headsets did not eliminate the serious accident risk. The reason: a cellphone conversation itself, not just holding the phone, takes drivers’ focus off the road, studies showed.

The research mirrors other studies about the dangers of multitasking behind the wheel. Research shows that motorists talking on a phone are four times as likely to crash as other drivers, and are as likely to cause an accident as someone with a .08 blood alcohol content.

The three-person research team based the fatality and accident estimates on studies that quantified the risks of distracted driving, and an assumption that 6 percent of drivers were talking on the phone at a given time. That figure is roughly half what the Transportation Department assumes to be the case now.

More precise data does not exist because most police forces have not collected long-term data connecting cellphones to accidents. That is why the researchers called for the broader study with 10,000 or more drivers.

“We nevertheless have concluded that the use of cellphones while driving has contributed to an increasing number of crashes, injuries and fatalities,” according to a “talking points” memo the researchers compiled in July 2003.

It added: “We therefore recommend that the drivers not use wireless communication devices, including text messaging systems, when driving, except in an emergency.”

It comes as no news to any observant person that using a cell-phone of any kind can be a dangerous distraction to a driver. Richtel cites some of the previous work on the subject, work that I have cited in earlier posts about the cell-phone scourge. (See, especially, “Cell Phones and Driving, Once More” and its addendum.)

Richtel’s piece underscores the dangers of driving while using a cell phone. It also — perhaps unwittingly — underscores the misfeasance and malfeasance that are typical of government. I say unwittingly because TNYT has a (selective) bias toward government: nanny-ism = good; defense and justice = bad. In this case, the Times finds government in the wrong because it hasn’t been nanny-ish enough.

The Times to the contrary, government has but one legitimate role: to protect citizens from harm. I have no objection to laws banning cell-phone use by drivers, even though — at first blush — such laws might seem anti-libertarian. So-called libertarians who defend driving-while-cell-phoning are merely indulging in the kind of posturing that I have come to expect from the cosseted solipsists who, unfortunately, have come to dominate — and represent — what now passes for libertarianism.

Such “libertarians” to the contrary, liberty comes with obligations. One of those obligations is the responsibility to act in ways that do not harm others. (Another obligation is to defend liberty, or to pay taxes so that others can defend it on your behalf.) The “right” to drive does not include the “right” to drive while drunk or distracted. In sum, a ban on cell-phone use by drivers is entirely libertarian. As I have said,

for the vast majority of drivers there is no alternative to the use of public streets and highways. Relatively few persons can afford private jets and helicopters for commuting and shopping. And as far as I know there are no private, drunk-drivers-and-cell-phones-banned highways. Yes, there might be a market for those drunk-drivers-and-cell-phones-banned highways, but that’s not the reality of here-and-now.

So, I can avoid the (remote) risk of death by second-hand smoke by avoiding places where people smoke. But I cannot avoid the (less-than-remote) risk of death at the hands of a drunk or cell-phone yakker. Therefore, I say, arrest the drunks, cell-phone users, nail-polishers, newspaper-readers, and others of their ilk on sight; slap them with heavy fines; add jail terms for repeat offenders; and penalize them even more harshly if they take life, cause injury, or inflict property damage.

A Long Row to Hoe

In “A Welcome Trend,” I point to Obama’s declining popularity and note that

the trend — if it continues — offers hope for GOP gains in the mid-term elections, if not a one-term Obama-cy.

Of course, it is early days yet. Popularity lost can be regained. Clinton succeeded in making himself so unpopular during his first two years in office that the GOP was able to seize control of Congress in the 1994 mid-term elections. But Clinton was able to regroup, win re-election in 1996, and leave the presidency riding high in the polls, despite (or perhaps because of) his impeachment.

Focusing on 2012, and assuming that Obama runs for re-election, what must the GOP do to unseat him? In “There Is Hope in Mudville,” I offer this:

What about 2012? Can the GOP beat Obama? Why not? A 9-State swing would do the job, and Bush managed a 10-State swing in winning the 2000 election. If Bush can do it, almost anyone can do it — well, anyone but another ersatz conservative like Bob Dolt or John McLame.

Not so fast. A closer look at the results of the 2008 election is in order:

  • Based on the results of the 2004-2008 elections, I had pegged Iowa, New Hampshire, and New Mexico as tossup States: McCain lost them by 9.5, 9.6, and 15.1 percentage points, respectively.
  • I had designated Florida, Ohio, and Nevada as swing-Red States — close, but generally leaning toward the GOP. McCain lost the swing-Red States by 2.8, 4.6, and 12.5 percentage points, respectively.
  • Of the seven States I had designated as leaning-Red, McCain lost Virginia (6.3 percentage points) and Colorado (9.0 percentage points). (He held onto Missouri by only 0.1 percentage point.)
  • McCain also managed to lose two firm-Red States: North Carolina (0.3) and Indiana (1.0).

The tossups are no longer tossups. It will take a strong GOP candidate to reclaim them in 2012. The same goes for Nevada, Virginia, and Colorado. Only Florida, Ohio, North Carolina, and Indiana are within easy reach for the GOP’s next nominee.

McCain did better than Bush in the following States: Oklahoma, Alabama, Louisiana, West Virginia, Tennessee, and Massachusetts. The first five were already firm- and leaning-Red, so McCain’s showing there was meaningless. His small gain in Massachusetts (1.7 percentage points) is likewise meaningless; Obama won the Bay State by 25.8 percentage points. In sum, there is no solace to be found in McCain’s showing.

The GOP can win in 2012 only if

  • Obama descends into Bush-like unpopularity, and stays there; or
  • Obama remains a divisive figure (which he is, all posturing to the contrary) and the GOP somehow nominates a candidate who is a crowd-pleaser and a principled, articulate spokesman for limited government.

The GOP must not offer a candidate who promises to do what Obama would do to this country, only to do it more effectively and efficiently. The “base” will stay home in droves, and Obama will coast to victory — regardless of his unpopularity and divisiveness.

I hereby temper the optimistic tone of my earlier posts. The GOP has a long row to hoe.

A Welcome Trend

[Graph: Obama’s net approval.] Derived from Rasmussen Reports, Daily Presidential Tracking Poll, Obama Approval Index History. I use Rasmussen’s polling results because Rasmussen has a good track record with respect to presidential-election polling.

Obama’s approval rating may have dropped for the wrong reasons; that is, voters expect him to “do something” about jobs, health care, etc. But voters have come to expect presidents to “do something” about various matters which are none of government’s business. So, even if voters have become less approving of Obama for the wrong reasons, the trend — if it continues — offers hope for GOP gains in the mid-term elections, if not a one-term Obama-cy.

Randomness Is Over-Rated

In the preceding post (“Fooled by Non-Randomness“), I had much to say about Nassim Nicholas Taleb’s Fooled by Randomness. The short of it is this: Taleb over-rates the role of randomness in financial markets. In fact, his understanding of randomness seems murky.

My aim here is to offer a clearer picture of randomness (or the lack of it), especially as it relates to human behavior. Randomness, as explained in the preceding post, has almost nothing to do with human behavior, which is dominated by intention. Taleb’s misapprehension of randomness leads him to overstate the importance of a thing called survivor(ship) bias, to which I will turn after dealing with randomness.

WHERE IS RANDOMNESS FOUND?

Randomness — true randomness — is to be found mainly in the operation of fair dice, fair roulette wheels, cryptographic pinwheels, and other devices designed expressly for the generation of random values. But what about randomness in human affairs?

What we often call random events in human affairs really are non-random events whose causes we do not and, in some cases, cannot know. Such events are unpredictable, but they are not random. So it is with rolls (throws) of fair dice, which are commonly considered random events. Dice-rolls are “random” only because it is impossible to perceive the precise conditions of each roll in “real time.” Knowledge of those conditions would enable a sharp-eyed observer to forecast the outcome of each throw with some accuracy, if the observer were armed with — and had instant access to — analyses of the results of myriad throws whose precise conditions had been captured by various recording devices.

An observer who lacks such information, and who considers the throws of fair dice to be random events, will see that the total number of pips showing on both dice converges on the following frequency distribution:

Rolled Freq.
2 0.028
3 0.056
4 0.083
5 0.111
6 0.139
7 0.167
8 0.139
9 0.111
10 0.083
11 0.056
12 0.028

This frequency distribution is really a shorthand way of writing 28 times out of 1,000; 56 times out of 1,000; etc.
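The table's values can be reproduced exactly by enumerating the 36 equally likely outcomes of two fair dice. A quick Python check:

```python
from collections import Counter
from fractions import Fraction

# Enumerate the 36 equally likely outcomes of two fair dice and tally
# the exact probability of each total.
totals = Counter(a + b for a in range(1, 7) for b in range(1, 7))
dist = {t: Fraction(n, 36) for t, n in sorted(totals.items())}

# Spot checks against the table: P(2) = 1/36 (about 0.028),
# P(7) = 6/36 (about 0.167).
p_seven = dist[7]

# Totals 5 through 9 together cover 24/36 = 2/3 of all rolls.
p_middle = sum(dist[t] for t in range(5, 10))
```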

Stable frequency distributions, such as the one given above, have useful purposes. In the case of craps, for example, a bettor can minimize his losses to the house (over a long period of time) if he takes the frequency distribution into account in his betting. Even more usefully, perhaps, an observed divergence from the normal frequency distribution (over many rolls of the dice) would indicate bias caused by (a) an unusual and possibly fraudulent condition (e.g., loaded dice) or (b) a player’s special skill in manipulating dice to skew the frequency distribution in a certain direction.

Randomness, then, is found in (a) the results of non-intentional actions, where (b) we lack sufficient knowledge to understand the link between actions and results.

THE REAL WORLD OF HUMAN AFFAIRS: IT IS WHAT IT IS

You will have noticed the beautiful symmetry of the frequency distribution for dice-rolling. Two-thirds of a large number of dice-rolls will have values of 5 through 9. Values 3 and 4 together will comprise about 14 percent of the rolls, as will values 10 and 11 together. Values 2 and 12 each will comprise less than 3 percent of the rolls.

In other words, the frequency distribution for dice-rolls closely resembles a normal distribution (bell curve). The virtue of this regularity is that it makes predictable the outcome of a large number of dice-rolls; and it makes obvious (over many dice-rolls) a rigged game involving dice. A statistically unexpected distribution of dice-rolls would be considered non-random or, more plainly, rigged — that is, intended by the rigging party.

To state the underlying point explicitly: It is unreasonable to reduce intentional human behavior to probabilistic formulas. Humans don’t behave like dice, roulette balls, or similar “random” devices. But that is what Taleb (and others) do when they ascribe unusual success in financial markets to “luck.” For example, here is what Taleb says on page 136:

I do not deny that if someone performed better than the crowd in the past, there is a presumption of his ability to do better in the future. But the presumption might be weak, very weak, to the point of being useless in decision making. Why? Because it all depends on two factors: The randomness-content of his profession and the number of [persons in the profession].

What Taleb means is this:

  • Success in a profession where randomness dominates outcomes is likely to have the same kind of distribution as that of an event that is considered random, like rolling dice.
  • That being the case, a certain percentage of the members of the profession will, by chance, seem to have great success.
  • If a profession has relatively few members, then a successful person in that profession is more of a standout than a successful person in a profession with, say, thousands of members.

Let me count the assumptions embedded in Taleb’s argument:

  1. Randomness actually dominates some professions. (In particular, he is thinking of the profession of trading financial instruments: stocks, bonds, derivatives, etc.)
  2. Success in a randomness-dominated profession therefore has almost nothing to do with the relevant skills of a member of that profession, nor with the member’s perspicacity in applying those skills.
  3. It follows that a very successful member of a randomness-dominated profession is probably very successful because of luck.
  4. The probability of stumbling across a very successful member of a randomness-dominated profession depends on the total number of members of the profession, given that the probability of success in the profession is distributed in a non-random way (as with dice-rolls).
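Taleb's survivorship point can be illustrated with a toy simulation, which is entirely mine, not Taleb's: give 10,000 "traders" a fair coin flip each year and count how many compile a perfect ten-year record by luck alone.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def perfect_records(n_traders=10_000, n_years=10):
    """Count 'traders' whose yearly result is a fair coin flip and who
    nonetheless 'beat the market' in every one of n_years years."""
    return sum(
        all(random.random() < 0.5 for _ in range(n_years))
        for _ in range(n_traders)
    )

# Expectation: 10,000 / 2**10, i.e., roughly ten flawless ten-year
# records produced by zero skill.
lucky = perfect_records()
```

This is Taleb's argument in miniature; the author's rebuttal, which follows, is that real trading outcomes are not coin flips.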

One of the ways in which Taleb illustrates his thesis is to point to the mutual-fund industry, where far fewer than half of the industry’s actively managed funds manage to match the performance of benchmark indices (e.g., S&P 500) over periods of 5 years and longer. But broad, long-term movements in financial markets are not random — as I show in the preceding post.

Nor is trading in financial instruments random; traders do not roll dice or flip coins when they make trades. (Well, the vast majority don’t.) That a majority (or even a super-majority) of actively managed funds does less well than an index fund has nothing to do with randomness and everything to do with the distribution of stock-picking skills. The research required to make informed decisions about financial instruments is arduous and expensive — and not every fool can do it well. Moreover, decision-making — even when based on thorough research — is clouded by uncertainty about the future and the variety of events that can affect the prices of financial instruments.

It is therefore unsurprising that the distribution of skills in the financial industry is skewed; there are relatively few professionals who have what it takes to succeed over the long run, and relatively many professionals (or would-be professionals) who compile mediocre-to-awful records.

I say it again: The most successful professionals are not successful because of luck; they are successful because of skill. There is no statistically predetermined percentage of skillful traders; the actual percentage depends on the skills of entrants and their willingness (if skillful) to make a career of it. A relevant analogy is found in the distribution of incomes:

In 2007, all households in the United States earned roughly $7.896 trillion. One half, 49.98%, of all income in the US was earned by households with an income over $100,000, the top twenty percent. Over one quarter, 28.5%, of all income was earned by the top 8%, those households earning more than $150,000 a year. The top 3.65%, with incomes over $200,000, earned 17.5%. Households with annual incomes from $50,000 to $75,000, 18.2% of households, earned 16.5% of all income. Households with annual incomes from $50,000 to $95,000, 28.1% of households, earned 28.8% of all income. The bottom 10.3% earned 1.06% of all income.

The outcomes of human endeavor are skewed because the distribution of human talents is skewed. It would be surprising to find as many as one-half of traders beating the long-run average performance of the various markets in which they operate.

BACK TO BASEBALL

To drive the point home, I return to the example of baseball, which I treated at length in the preceding post. Baseball, like most games, has many “random” elements, which is to say that baseball players cannot always predict accurately such things as the flight of a thrown or batted ball, the course a ball will take when it bounces off grass or an outfield fence, the distance and direction of a throw from the outfield, and so on. But despite the many unpredictable elements of the game, skill dominates outcomes over the course of seasons and careers. Moreover, skill is not distributed in a neat way, say, along a bell curve. A good case in point is the distribution of home runs:

  • There have been 16,884 players and 253,498 home runs in major-league history (1876 – present), an average of 15 home runs per player. About 2,700 players have more than 15 home runs; about 14,000 players have fewer than 15 home runs; and about 100 players have exactly 15 home runs. Of the 2,700 players with more than 15 home runs, there are (as of yesterday) 1,006 with 74 or more home runs, and 25 with 500 or more home runs. (I obtained data about the frequency of career home runs with this search tool at Baseball-Reference.com.)
  • The career home-run statistic, in other words, is concentrated heavily at the low end: about five of every six men who have played in the major leagues finished with 15 or fewer home runs. From that fat hump the distribution trails off in a long, thin “tail,” which narrows until, at the far end, it comprises the 0.15 percent of players with 500 or more home runs.
  • There may be a standard statistical distribution that seems to describe the incidence of career home runs. But to say that the career home-run statistic matches some distribution is merely to posit an after-the-fact “explanation” of a phenomenon that has one essential explanation: Some hitters are better at hitting home runs than others, and those better home-run hitters are more likely to stay in the major leagues long enough to compile a lot of home runs. (Even 74 home runs is a lot, relative to the mean of 15.)
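The arithmetic behind the figures above can be checked in a few lines (a sketch in Python; the counts are the approximate ones quoted from Baseball-Reference.com):

```python
# Approximate figures quoted above from Baseball-Reference.com.
total_players = 16_884
total_home_runs = 253_498
players_above_mean = 2_700   # more than 15 career home runs
players_with_500 = 25

mean_hr = total_home_runs / total_players
print(round(mean_hr))  # 15

# The skew: only about one player in six exceeds the mean,
# while a tiny sliver (500+ HR) sits far out in the tail.
print(f"{players_above_mean / total_players:.0%} exceed the mean")
print(f"{players_with_500 / total_players:.2%} have 500 or more")
```

The mean of 15 sits well above the typical player’s total precisely because a few enormous careers pull it upward, which is the hallmark of a right-skewed distribution.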

And so it is with traders and other active “players” in financial markets. They differ in skill, and their skill differences cannot be arrayed neatly along a bell curve or any other mathematically neat frequency distribution. To adapt a current coinage, they are what they are — nothing more, nothing less.

TALEB’S A PRIORI WORLDVIEW, WITH A BIAS

Taleb, of course, views the situation the other way around. He sees an a priori distribution of “winners” and “losers,” where “winners” are determined mainly by luck, not skill. Moreover, we — the civilians on the sidelines — labor under a false impression of the relative number of “winners” because

it is natural for those who failed to vanish completely. Accordingly, one sees the survivors, and only the survivors, which imparts such a mistaken perception of the odds [favoring success]. (p. 137)

Here, Taleb is playing a variation on a favorite theme: survivor(ship) bias. What is it? Here are three quotations that may help you understand it:

Survivor bias is a prominent form of ex-post selection bias. It exists in data sets that exclude a disproportionate share of non-surviving firms…. (“Accounting Information Free of Selection Bias: A New UK Database 1953-1999”)

Survivorship bias causes performance results to be overstated because accounts that have been terminated, which may have underperformed, are no longer in the database. This is the most documented and best understood source of peer group bias. For example, an unsuccessful management product that was terminated in the past is excluded from current peer groups. This screening out of losers results in an overstatement of past performance. A good illustration of how survivor bias can skew things is the “marathon analogy,” which asks: If only 100 runners out of a 1,000-contestant marathon actually finish, is the 100th the last? Or in the top ten percent? (“Warning! Peer Groups Are Hazardous to Our Wealth”)

It is true that a number of famous successful people have spent 10,000 hours practising. However, it is also true that many people we have never heard of because they weren’t successful also practised for 10,000 hours. And that there are successful people who were very good without practising for 10,000 hours before their breakthrough (the Rolling Stones, say). And Gordon Brown isn’t very good at being Prime Minister despite preparing for 10,000 hours. (“Better Services without Reform? It’s Just a Con“)
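Survivorship bias of the kind these quotations describe is easy to reproduce. In this sketch (all parameters hypothetical), every “trader” is a pure coin-flipper with zero skill by construction, yet a database that quietly drops the money-losers reports a handsomely positive average:

```python
import random
import statistics as st

random.seed(0)

# Hypothetical setup: 1,000 "traders" whose yearly returns are
# pure coin flips (+10% or -10%), i.e., zero skill by construction.
n_traders, n_years = 1000, 10
records = [[random.choice([0.10, -0.10]) for _ in range(n_years)]
           for _ in range(n_traders)]

all_avg = st.mean(st.mean(r) for r in records)

# A database that drops everyone with a losing cumulative record
# keeps only the "survivors," whose average return looks impressive.
survivors = [r for r in records if sum(r) > 0]
surv_avg = st.mean(st.mean(r) for r in survivors)

print(f"all traders: {all_avg:+.3f}")  # near zero
print(f"survivors:   {surv_avg:+.3f}")  # clearly positive
```

The full population averages essentially zero, as it must; the surviving subset averages well above zero, purely because the filter discarded the losers. That is the bias the quotations describe, and it is the phenomenon I address next.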

First of all, there are no “odds” favoring success — even in financial markets. Financial “players” do what they can do, and most of them — like baseball players — simply don’t have what it takes for great success. Outcomes are skewed, not because of (fictitious) odds but because talent is distributed unevenly.

CONCLUSION

The real lesson for us spectators is not to assume that the “winners” are merely lucky. No, the real lesson is to seek out those “winners” who have proven their skills over a long period of time, through boom and bust and boom and bust.

Those who do well, over the long run, do not do so merely because they have survived. They have survived because they do well.