Month: July 2009

Law and Liberty

Law comprises the rules which circumscribe human behavior. Law in the United States is mainly an amalgam of two things:

  • widely observed social norms that have not yet been undermined by government
  • governmental decrees that shape behavior because they (a) happen to reflect social norms or (b) are backed by a credible threat of enforcement.

Law — whether socially evolved or government-imposed — is morally legitimate only when it conduces to liberty; that is, when

  • it applies equally to all persons in a given social group or legal jurisdiction
  • an objector may freely try to influence law (voice)
  • an objector may freely leave a jurisdiction whose law offends him (exit).

Unequal treatment means the denial of negative rights on some arbitrary basis (e.g., color, gender, income). As long as negative rights are not denied, then a norm of voluntary discrimination (on whatever basis) is a legitimate exercise of the negative right to associate with persons of one’s choosing, whether as a matter of personal or commercial preference (the two cannot be separated). True liberty encompasses social distinctions, which are just as much the province of “minorities” and “protected groups” as they are of the beleaguered white male of European descent, whose main sin seems to have been the creation of liberty and prosperity in this country.

Law is not morally legitimate where equal treatment, voice, or exit are denied or suppressed by force or the threat of force. Nor is law morally legitimate where incremental actions of government (e.g., precedential judicial rulings) effectively deny voice and foreclose exit as a viable option.

If government-made law ever had moral legitimacy in the United States, the zenith of its legitimacy came in 1905:

[T]he majority opinion in [Lochner v. New York] came as close as the Supreme Court ever has to protecting a general right to liberty under the Fourteenth Amendment. In his opinion for the Court, Justice Rufus Peckham affirmed that the Constitution protected “the right of the individual to his personal liberty, or to enter into those contracts in relation to labor which may seem to him appropriate or necessary for the support of himself and his family.” (Randy Barnett,  “Is the Constitution Libertarian?,” p. 5)

But:

Beginning in the 1930s, the Supreme Court reversed its approach in Lochner and adopted a presumption of constitutionality whenever a statute restricted unenumerated liberty rights. [See O’Gorman & Young, Inc. v. Hartford Fire Ins. Co. (1931).] In the 1950s it made this presumption effectively irrebuttable. [See Williamson v. Lee Optical of Oklahoma (1955).] Now it will only protect those liberties that are listed, or a very few unenumerated rights such as the right of privacy. But such an approach violates the Ninth Amendment’s injunction against using the fact that some rights are enumerated to deny or disparage others because they are not. (Barnett, op. cit., pp. 17-18)

This bare outline summarizes the governmental acts and decrees that stealthily expanded and centralized government’s power and usurped social norms. The expansion and centralization of power occurred in spite of the specific limits placed on the central government by the original Constitution and the Tenth Amendment, and in spite of the Fourteenth Amendment. These encroachments on liberty are morally illegitimate because their piecemeal character has robbed Americans of voice and mooted the exit option. And so, we have discovered — too late — that we are impotent captives in our own land.

Voice is now so circumscribed by “settled law” that there is virtually no possibility of restoring Lochner and its ilk. Exit is now mainly an option for the extremely wealthy among us. (More power to them.) For the rest of us, there is no realistic escape from illegitimate government-made law, given that the rest of the world (with a few distant exceptions) is similarly corrupt.

As Thomas Jefferson observed in 1774,

Single acts of tyranny may be ascribed to the accidental opinion of a day; but a series of oppressions, begun at a distinguished period and pursued unalterably through every change of ministers, too plainly prove a deliberate, systematic plan of reducing [a people] to slavery.

Having been subjected to a superficially benign form of slavery by our central government, we must look to civil society and civil disobedience for morally legitimate law. Civil society, as I have written, consists of

the daily observance of person X’s negative rights by persons W, Y, and Z — and vice versa…. [Civil society is necessary to liberty] because it is impossible and — more importantly — undesirable for government to police everyone’s behavior. Liberty depends, therefore, on the institutions of society — family, church, club, and the like — through which individuals learn to treat one another with respect, through which individuals often come to the aid of one another, and through which instances of disrespect can be noted, publicized, and even punished (e.g., by criticism and ostracism).

That is civil society. And it is civil society which … government ought to protect instead of usurping and destroying as it establishes its own agencies (e.g., public schools, welfare), gives them primary and even sole jurisdiction in many matters, and funds them with tax money that could have gone to private institutions.

When government fails to protect civil society — and especially when government destroys it — civil disobedience is in order. If civil disobedience fails, more drastic measures are called for:

When I see the worsening degeneracy in our politicians, our media, our educators, and our intelligentsia, I can’t help wondering if the day may yet come when the only thing that can save this country is a military coup. (Thomas Sowell, writing at National Review Online, May 1, 2007)

In Jefferson’s version,

when wrongs are pressed because it is believed they will be borne, resistance becomes morality.

Rationing and Health Care

Peter Singer — utilitarian extraordinaire, spokesman for involuntary euthanasia, and advocate of infanticide — recently shared with millions of rapt readers his opinions about why and how health care must be rationed: “Why We Must Ration Health Care,” The New York Times Magazine, July 15, 2009. Given Singer’s penchant for playing God, the “we” of his title could be an imperial one, but — in this instance — it is an authoritarian one.

Singer is among the many “public intellectuals” (some of them Nobelists) who believe in an omniscient, infallible government, provided — of course — that it does things unto the rest of us the way that they (the “intellectuals”) would have them done. And, like most of those “intellectuals,” Singer is dead wrong in his assertions about how to “solve” the “health care problem,” because his underlying premises and “logic” are dead wrong.

I begin with Singer’s central thesis:

Health care is a scarce resource, and all scarce resources are rationed in one way or another. In the United States, most health care is privately financed, and so most rationing is by price: you get what you, or your employer, can afford to insure you for.

Those two sentences are replete with inaccuracy and error:

  • There is no such thing as “health care”; the term is a catch-all for a wide variety of goods and services, ranging from the self-administration of generic aspirin to complex, delicate neurological surgery.
  • In any event, “health care” is not a “resource”; its various forms are economic goods (i.e., products and services), the production of which requires the use of resources (e.g., the time of trained nurses and doctors, the raw materials and production facilities used in drug manufacture).
  • Economic goods are not rationed by price; price facilitates voluntary transactions between willing buyers and sellers in free markets. Rationing is what happens when a powerful authority (usually a government) steps in to dictate the organization of markets, the specifications of goods, and — more extremely — who may buy what goods and at what prices (though dictated prices are essentially meaningless because they do not perform the signaling function that they do in free markets).
  • Much “health care” in the United States is privately financed, to the extent that most Americans buy and self-administer products like aspirin, antihistamines, cough medicine, band-aids, etc., and some (though relatively few) Americans buy medical products and services without the benefit of insurance. But much “health care” is not privately financed, because — as Singer soon notes — there is a substantial taxpayer subsidy for employer-sponsored insurance programs. There are various other taxpayer subsidies and government restraints (e.g., Medicare, Medicaid, government-funded research of diseases and medicines, FDA approval of most kinds of medications and personal-care products).

The slipperiest of Singer’s facile statements is his characterization of what happens in free markets as “rationing,” thus lending back-handed legitimacy to true rationing, which is brute-force interference by government in what is really a personal responsibility: caring for one’s health. For it has somehow come to be common currency that “health care” is a “right,” something that government ought to do for us, instead of something that we ought to do for ourselves. (After all, we do live in an age of “positive rights,” which come at a high cost to everyone, including those who seek them.)

Most Americans are, however, enmeshed in a Catch-22 situation. They have less money to provide for themselves because it has been taken from them by government, to provide for others. But Singer deems the provision inadequate:

In the public sector, primarily Medicare, Medicaid and hospital emergency rooms, health care is rationed by long waits, high patient copayment requirements, low payments to doctors that discourage some from serving public patients and limits on payments to hospitals.

Singer’s “solution” is to make things worse:

The case for explicit health care rationing in the United States starts with the difficulty of thinking of any other way in which we can continue to provide adequate health care to people on Medicaid and Medicare, let alone extend coverage to those who do not now have it.

How will outright rationing entice doctors and hospitals to provide services that they are now unwilling to provide? If doctors leave the medical profession, and new doctors enter at reduced rates, what would Singer do? Begin drafting students into medical schools? What about hospitals that refuse to conform? Would they be nationalized, along with their nurses, orderlies, etc.?

What a pretty picture: Soviet-style medicine here in the U.S. of A. Yet that is precisely where outright rationing will lead if the politburo in Washington sees a shrinking supply of doctors, hospitals, and other medical providers — as it will. Most politicians do not know how to do less. When they create a mess, their natural inclination is to do more of what they did to cause the mess in the first place.

Singer, naturally, appeals to the authority of just such a politician:

President Obama has said plainly that America’s health care system is broken. It is, he has said, by far the most significant driver of America’s long-term debt and deficits. It is hard to see how the nation as a whole can remain competitive if in 26 years we are spending nearly a third of what we earn on health care, while other industrialized nations are spending far less but achieving health outcomes as good as, or better than, ours.

Well, if BO says it, it must be true, n’est-ce pas? The “system” is broken because government established Medicare and Medicaid, back in the days of LBJ’s “Great Society.”  Those two programs have “only” four fatal flaws:

  • They take money from taxpayers, who therefore are less able to provide for themselves.
  • They grant beneficiaries “free” or low-cost access to medical services, thus bloating the demand for those services and causing their prices to rise. (The subsidy of employer-sponsored health insurance has the same effect.)
  • They involve promises of access to medical services that cannot be redeemed by the paltry Medicare tax rate — thus the prospect of ballooning deficits, leading to (a) higher interest rates and/or (b) higher taxes on (you guessed it) “the rich.”
  • “The rich,” who finance economic growth, will flee these shores (or their money will), and the deficits will grow larger as tax revenues fall.

Another natural inclination of politicians is to deplore the messes caused by other politicians, and then to do something to make the messes worse. In this instance, BO itches to trump LBJ.

But, of course, this time will be different — it will be done right:

Rationing health care means getting value for the billions we are spending by setting limits on which treatments should be paid for from the public purse. If we ration we won’t be writing blank checks to pharmaceutical companies for their patented drugs, nor paying for whatever procedures doctors choose to recommend. When public funds subsidize health care or provide it directly, it is crazy not to try to get value for money. The debate over health care reform in the United States should start from the premise that some form of health care rationing is both inescapable and desirable. Then we can ask, What is the best way to do it?

So, instead of insurance companies — which at least compete with each other to offer subscribers affordable and attractive lineups of providers and drug formularies — our choices will be dictated by all-wise bureaucrats. Lovely!

Singer defends the bureaucrats, as long as they do it his way, of course. He begins with NICE:

…Britain’s National Institute for Health and Clinical Excellence…. generally known as NICE, is a government-financed but independently run organization set up to provide national guidance on promoting good health and treating illness…. NICE had set a general limit of £30,000, or about $49,000, on the cost of extending life for a year….

There’s no doubt that it’s tough — politically, emotionally and ethically — to make a decision that means that someone will die sooner than they would have if the decision had gone the other way….

Governments implicitly place a dollar value on a human life when they decide how much is to be spent on health care programs and how much on other public goods that are not directed toward saving lives. The task of health care bureaucrats is then to get the best value for the resources they have been allocated….

As a first take, we might say that the good achieved by health care is the number of lives saved. But that is too crude. The death of a teenager is a greater tragedy than the death of an 85-year-old, and this should be reflected in our priorities. We can accommodate that difference by calculating the number of life-years saved, rather than simply the number of lives saved. If a teenager can be expected to live another 70 years, saving her life counts as a gain of 70 life-years, whereas if a person of 85 can be expected to live another 5 years, then saving the 85-year-old will count as a gain of only 5 life-years. That suggests that saving one teenager is equivalent to saving 14 85-year-olds. These are, of course, generic teenagers and generic 85-year-olds….

Health care does more than save lives: it also reduces pain and suffering. How can we compare saving a person’s life with, say, making it possible for someone who was confined to bed to return to an active life? We can elicit people’s values on that too. One common method is to describe medical conditions to people — let’s say being a quadriplegic — and tell them that they can choose between 10 years in that condition or some smaller number of years without it. If most would prefer, say, 10 years as a quadriplegic to 4 years of nondisabled life, but would choose 6 years of nondisabled life over 10 with quadriplegia, but have difficulty deciding between 5 years of nondisabled life or 10 years with quadriplegia, then they are, in effect, assessing life with quadriplegia as half as good as nondisabled life. (These are hypothetical figures, chosen to keep the math simple, and not based on any actual surveys.) If that judgment represents a rough average across the population, we might conclude that restoring to nondisabled life two people who would otherwise be quadriplegics is equivalent in value to saving the life of one person, provided the life expectancies of all involved are similar.

This is the basis of the quality-adjusted life-year, or QALY, a unit designed to enable us to compare the benefits achieved by different forms of health care. The QALY has been used by economists working in health care for more than 30 years to compare the cost-effectiveness of a wide variety of medical procedures and, in some countries, as part of the process of deciding which medical treatments will be paid for with public money. If a reformed U.S. health care system explicitly accepted rationing, as I have argued it should, QALYs could play a similar role in the U.S.

Here we have utilitarianism rampant on a field of fascism. Given that (in Singer’s mind) “we” must nationalize medicine (i.e., ration “health care”), “we” must do it right. To do it right, “we” must weigh human life on a scale of Singer’s devising, and not on the scale of our individual preferences. For Singer knows all! And government knows all, as long as it operates according to Singer’s calculus of deservingness.
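To make the arithmetic concrete, here is a minimal sketch (in Python) of the QALY bookkeeping Singer describes. The 0.5 quality weight and the generic life expectancies are the illustrative numbers from the passage above, not figures of mine:

    # A minimal sketch of the QALY arithmetic described in the quoted passage.
    def qalys(life_years, quality_weight=1.0):
        """Quality-adjusted life-years: years gained times a quality weight (1.0 = nondisabled life)."""
        return life_years * quality_weight

    # Singer's generic examples: a teenager with 70 years ahead vs. an 85-year-old with 5.
    print(qalys(70) / qalys(5))   # 14.0 -- "saving one teenager is equivalent to saving 14 85-year-olds"

    # The quadriplegia example: a 0.5 weight implies that restoring two quadriplegics
    # to nondisabled life counts the same as saving one life outright (similar life expectancies assumed).
    gain_per_restoration = qalys(10, 1.0) - qalys(10, 0.5)   # 5 QALYs per person restored
    print(2 * gain_per_restoration == qalys(10, 1.0))        # True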

And why must “we” ration health care? Singer invokes a familiar statistic:

In the U.S., some 45 million do not [have health insurance], and nor are they entitled to any health care at all, unless they can get themselves to an emergency room.

Who are those legendary 45 (or 47) million persons? Are they entirely bereft of medical attention? Here are some answers to those questions, from June and Dave O’Neill’s “Who Are the Uninsured? An Analysis of America’s Uninsured Population, Their Characteristics and Their Health”:

Each year the Census Bureau reports its estimate of the total number of adults and children in the U.S. who lacked health insurance coverage during the previous calendar year. The number of Americans reported as uninsured in 2006 was 47 million, which was close to 16 percent of the U.S. population… This number has come to have a large impact on the debate over healthcare reform in the United States. However, there is a great deal of confusion about the significance of the uninsured numbers.

Many people believe that the number of uninsured signifies that almost 50 million Americans are without healthcare simply because they cannot afford a health insurance policy and as a consequence, suffer from poor health, and premature death. However this line of reasoning is based on a distorted characterization of the facts….

More careful analysis of the statistics on the uninsured shows that many uninsured individuals and families appear to have enough disposable income to purchase health insurance, yet choose not to do so, and instead self-insure. We call this group the “voluntarily uninsured” and find that they account for 43 percent of the uninsured population. The remaining group—the “involuntarily uninsured”—makes up only 57 percent of the Census count of the uninsured. A second important point is that while the uninsured receive fewer medical services than those with private insurance, they nonetheless receive significant amounts of healthcare from a variety of sources—government programs, private charitable groups, care donated by physicians and hospitals, and care paid for by out-of-pocket expenditures. Third, although the involuntarily uninsured by some estimates appear to have a significantly shorter life expectancy than those who are privately insured or voluntarily uninsured, it is difficult to establish cause and effect. We find that differences in mortality according to insurance status are to a large extent explained by factors other than health insurance coverage—such as education, socioeconomic status, and health-related habits like smoking…..

The results [of a regression analysis] vividly show the importance of controlling for characteristics that are strongly related to health status and health outcomes and are also strongly related to insurance status. The unadjusted gross difference in mortality risk between those with private insurance and the involuntarily uninsured was -0.113 or 11 percentage points. After adding to the model all characteristics, including the variable indicating fair/poor health status (M3), we find that the differential in the mortality risk between those with private insurance and those who are involuntarily uninsured is reduced to -0.029, a 2.9 percentage point difference.

The unadjusted differential between the privately insured and the voluntarily uninsured … was small—only 3.3 percentage points—because the characteristics of the two groups are fairly similar. That differential becomes even smaller after controlling for measurable differences in characteristics. Thus … the mortality rate of the voluntarily uninsured is only 1.7 percentage points below that of the privately insured….

In summary, we find as have others, that lack of health insurance is not likely to be the major factor causing higher mortality rates among the uninsured. The uninsured—particularly the involuntarily uninsured—have multiple disadvantages that in themselves are associated with poor health.

(See also The Henry J. Kaiser Family Foundation’s The Uninsured: A Primer, Supplemental Data Tables, October 2008.)

In summary, the so-called crisis in “health care” is a figment of fevered imaginations. To the extent that medical care and medications are more costly than they “should” be, it is because of government interference: restrictions on the entry of doctors and other providers (thanks to the lobbying efforts of the AMA — the doctors’ “union” — and similar organizations); long and often deadly FDA approval procedures for new drugs; subsidies for employer-provided health insurance; and the establishment of Medicare and Medicaid.

The obvious solution to the “crisis” — obvious to anyone who isn’t wedded to the religion of big government — is to get government out of medicine. But that won’t happen because the “crisis” is yet another excuse for politicians and pundits (like Singer, and worse) to dictate the terms and conditions of our lives. Unfortunately, too many voters are susceptible to the siren call of government action. Such voters are more than ready to elect politicians who promise to “do something” about trumped-up crises — be they crises of “health care,” “global warming,” and the like.

What will happen with the current “crisis”? The result will be something less destructive than BO’s preferred outcome, which would effectively nationalize medicine in the United States by making all providers and drug companies beholden to a single payer (i.e., government) and leveling the quality of medical care to a mediocre standard through mandatory participation in the nationalized scheme. But the result, whatever it is, will be destructive:

  • Costs will rise.
  • Many providers will quit providing, and fewer new providers will replace them, unless they are enticed by tax-funded subsidies.
  • Drug companies will develop fewer new drugs, unless they are co-opted by tax-funded subsidies.
  • “The rich” will be forced to bear a disproportionate share of the cost of making things worse. And so, “the rich” will have less wherewithal with which to stimulate economic growth, and less inclination to do so (in the United States, at least).

Politicians being politicians, the resulting mess will have only one obvious solution: outright nationalization of medicine in the U.S. (The politburo, of course, will enjoy a separate and distinctly superior brand of taxpayer-funded medical care.)

And then we will have become thoroughly European.

Does the Minimum Wage Increase Unemployment?

Yes!

I have not a shred of doubt that the minimum wage increases unemployment, especially among the most vulnerable group of workers: males aged 16 to 19.

Anyone who claims that the minimum wage does not affect unemployment among that vulnerable group is guilty of (a) ingesting a controlled substance, (b) wishing upon a star, or — most likely — (c) indulging in a mindless display of vicarious “compassion.”

Economists have waged a spirited mini-war over the minimum-wage issue, to no conclusive end. But anyone who tells you that a wage increase that is forced on businesses by government will not lead to a rise in unemployment is one of three things: an economist with an agenda, a politician with an agenda, or a person who has never run a business. There is considerable overlap among the three categories.

I have run a business, and I have worked for the minimum wage (and less). On behalf of business owners and young male workers, I am here to protest further increases in the minimum wage. My protest is entirely evidence-based — no marching, shouting, or singing for me. Facts are my friends, even if they are inimical to Left-wing economists, politicians, and other members of the reality-challenged camp.

I begin with time series on unemployment among males — ages 16 to 19 and 20 and older — for the period January 1948 through June 2009. (These time series are available via this page on the BLS website.) If it is true that the minimum wage targets younger males, the unemployment rate for 16 to 19 year-old males (16-19 YO) will rise faster or decrease less quickly than the unemployment rate for 20+ year-old males (20+ YO) whenever the minimum wage is increased. The precise change will depend on such factors as the propensity of young males to attend college — which has risen over time — and the value of the minimum wage in relation to prevailing wage rates for the industries which typically employ low-skilled workers. But those factors should have little influence on observed month-to-month changes in unemployment rates.

I use two methods to estimate the effects of minimum wage on the unemployment rate of 16-19 YO: graphical analysis and linear regression.

I begin by finding the long-term relationship between the unemployment rates for 16-19 YO and 20+ YO. As it turns out, there is a statistical artifact in the unemployment data, an artifact that is unexplained by this BLS document, which outlines changes in methods of data collection and analysis over the years. The relationship between the two time series is stable through March 1959, when it shifts abruptly. The markedness of the shift can be seen in the contrast between figure 1, which covers the entire period, and figures 2 and 3, which subdivide the entire period into two sub-periods.

[Figure 1: Unemployment rates for 16-19 YO and 20+ YO males, January 1948 through June 2009]

[Figure 2: Relationship between 16-19 YO and 20+ YO unemployment rates, January 1948 through March 1959]

[Figure 3: Relationship between 16-19 YO and 20+ YO unemployment rates, April 1959 through June 2009]

For the graphical analysis, I use the equations shown in figures 2 and 3 to determine a baseline relationship between the unemployment rate for 20+ YO (“x”) and the unemployment rate for 16-19 YO (“y”). The equation in figure 2 yields a baseline unemployment rate for 16-19 YO for each month from January 1948 through March 1959; the equation in figure 3, a baseline unemployment rate for 16-19 YO for each month from April 1959 through June 2009. Combining the results, I obtain a baseline estimate for the entire period, January 1948 through June 2009.

I then find, for each month, a residual value for unemployment among 16-19 YO. The residual (actual value minus baseline estimate) is positive when unemployment among 16-19 YO is higher than expected, and negative when 16-19 YO unemployment is lower than expected. Again, this is unemployment of 16-19 YO relative to 20+ YO. Given the stable baseline relationships between the two unemployment rates (when the time series are subdivided as described above), the values of the residuals (month-to-month deviations from the baseline) can reasonably be attributed to changes in the minimum wage.
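A minimal sketch of that calculation, in Python, follows; the arrays below are illustrative stand-ins for the two BLS monthly series, one sub-period at a time:

    import numpy as np

    def baseline_residuals(u_teen, u_adult):
        """Fit u_teen = a + b * u_adult by least squares; return actual minus fitted (the residuals)."""
        b, a = np.polyfit(u_adult, u_teen, 1)   # slope, intercept
        return u_teen - (a + b * u_adult)

    # Illustrative numbers only -- substitute the BLS series for each sub-period.
    u_adult = np.array([3.1, 3.4, 4.0, 5.2, 4.4, 3.8])        # 20+ YO unemployment rate (percent)
    u_teen  = np.array([13.0, 14.1, 15.4, 18.0, 16.0, 14.2])  # 16-19 YO unemployment rate (percent)

    print(np.round(baseline_residuals(u_teen, u_adult), 2))
    # Positive residual: 16-19 YO unemployment higher than the baseline relationship predicts.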

For purposes of my analysis, I adopt the following conventions:

  • A change in the minimum wage begins to affect unemployment among 16-19 YO in the month it becomes law, when the legally effective date falls near the start of the month. A change becomes effective in the month following its legally effective date when that date falls near the end of the month. (All of the effective dates have thus far been on the 1st, 3rd, 24th, and 25th of a month.)
  • In either event, the change in the minimum wage affects unemployment among 16-19 YO for 6 months, including the month in which it becomes effective, as reckoned above.

In other words, I assume that employers (by and large) do not anticipate the minimum wage and begin to fire employees before the effective date of an increase. I assume, rather, that employers (by and large) respond to the minimum wage by failing to hire 16-19 YO who are new to the labor force. Finally, I assume that the non-hiring effect lasts about 6 months — in which time prevailing wage rates for 16-19 YO move toward (and perhaps exceed) the minimum wage, thus eventually blunting the effect of the minimum wage on unemployment.

I relax the 6-month rule during eras when the minimum wage rises annually, or nearly so. I assume that during such eras employers anticipate scheduled increases in the minimum wage by continuously suppressing their demand for 16-19 YO labor. (There are four such eras: the first runs from September 1963 through July 1971; the second, from May 1974 through June 1981; the third, from May 1996 through February 1998; the fourth, from July 2007 to the present, and presumably beyond.)
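Mechanically, these conventions amount to a dummy variable marking the months in which an increase is assumed to be effective. Here is a rough sketch; the dates shown are only a sample, for illustration, and the full list of federal effective dates would come from Department of Labor records:

    import pandas as pd

    months = pd.period_range("1948-01", "2009-06", freq="M")
    effective = pd.Series(0, index=months)

    sample_dates = [pd.Timestamp("2007-07-24"), pd.Timestamp("1997-09-01")]  # illustrative only
    for d in sample_dates:
        # Dates near the end of a month take effect the following month; otherwise the same month.
        start = d.to_period("M") + 1 if d.day > 15 else d.to_period("M")
        effective.loc[start:start + 5] = 1   # the effect is assumed to last 6 months, inclusive

    # Eras of (roughly) annual increases are marked continuously instead; for example:
    effective.loc["2007-07":"2009-06"] = 1   # illustrative era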

With that prelude, I present the following graph of the relationship between residual unemployment among 16-19 YO and the effective periods of minimum wage increases.

[Figure 4: Residual unemployment rate for 16-19 YO and the effective periods of minimum wage increases]

The jagged, green and red line represents the residual unemployment rate for 16-19 YO. The green portions of the line denote periods in which the minimum wage is ineffective; the red portions of the line denote periods in which the minimum wage is effective. The horizontal gray bands at +1 and -1 denote the normal range of the residuals, one standard deviation above and below the mean, which is zero.

It is obvious that higher residuals (greater unemployment) are generally associated with periods in which the minimum wage is effective; that is, most portions of the line that lie above the normal range are red. Conversely, lower residuals (less unemployment) are generally associated with periods in which the minimum wage is ineffective; that is, most portions of the line that lie below the normal range are green. (Similar results obtain for variations in which employers anticipate the minimum wage increase, for example, by firing or reduced hiring in the preceding 3 months, while the increase affects employment for only 3 months after it becomes law.)

Having shown that there is an obvious relationship between 16-19 YO unemployment and the minimum wage, I now quantify it. Because of the distinctly different relationships between 16-19 YO unemployment and 20+ YO unemployment in the two sub-periods (January 1948 – March 1959, April 1959 – June 2009), I estimate a separate regression equation for each sub-period.

For the first sub-period, I find the following relationship:

Unemployment rate for 16-19 YO (in percentage points) = 3.913 + 1.828 x unemployment rate for 20+ YO + 0.501 x dummy variable for minimum wage (1 if in effect, 0 if not)

Adjusted R-squared: 0.858; standard error of the estimate: 9 percent of the mean value of 16-19 YO unemployment rate; t-statistics on the intercept and coefficients: 14.663, 28.222, 1.635.

Here is the result for the second sub-period:

Unemployment rate for 16-19 YO (in percentage points) = 8.940 + 1.528 x unemployment rate for 20+ YO + 0.610 x dummy variable for minimum wage (1 if in effect, 0 if not)

Adjusted R-squared: 0.855; standard error of the estimate: 6 percent of the mean value of 16-19 YO unemployment rate; t-statistics on the intercept and coefficients: 62.592, 59.289, 7.495.
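For readers who want to replicate the estimates, here is a sketch of the specification, using Python and statsmodels; the arrays are placeholders for the monthly BLS series and the minimum-wage dummy constructed as described above:

    import numpy as np
    import statsmodels.api as sm

    # Placeholders for the monthly series (one sub-period at a time).
    u_adult  = np.array([3.1, 3.4, 4.0, 5.2, 4.4, 3.8, 4.9, 5.5])   # 20+ YO unemployment rate
    mw_dummy = np.array([0,   0,   1,   1,   1,   0,   1,   1])     # 1 if minimum wage "effective"
    u_teen   = np.array([13.0, 14.1, 16.2, 18.5, 17.0, 14.2, 17.4, 18.3])  # 16-19 YO rate

    X = sm.add_constant(np.column_stack([u_adult, mw_dummy]))
    fit = sm.OLS(u_teen, X).fit()
    print(fit.params)         # intercept, coefficient on 20+ YO rate, coefficient on the dummy
    print(fit.rsquared_adj)
    print(fit.tvalues)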

On the basis of the robust results for the second sub-period, which is much longer and more current, I draw the following conclusions:

  • The baseline unemployment rate for 16-19 YO is about 9 percent.
  • Unemployment around the baseline changes by about 1.5 percentage points for every percentage-point change in the unemployment rate for 20+ YO.
  • The minimum wage, when effective, raises the unemployment rate for 16-19 YO by 0.6 percentage points.

Therefore, given the current number of 16 to 19 year old males in the labor force (about 3.3 million), some 20,000 will lose or fail to find jobs because of yesterday’s boost in the minimum wage. Yes, 20,000 is a small fraction of 3.3 million (0.6 percent), but it is a real, heartbreaking number — 20,000 young men for whom almost any hourly wage would be a blessing.
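The translation from the regression coefficient to that estimate is simple arithmetic:

    labor_force_16_19 = 3_300_000   # approximate number of 16-19 year-old males in the labor force
    effect_in_pct_points = 0.6      # coefficient on the minimum-wage dummy
    print(round(labor_force_16_19 * effect_in_pct_points / 100))   # about 19,800, i.e., roughly 20,000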

But the “bleeding hearts” who insist on setting a minimum wage, and raising it periodically, don’t care about those 20,000 young men — they only care about their cheaply won reputation for “compassion.”

UPDATE (09/08/09):

A relevant post by Don Boudreaux:

Here’s a second letter that I sent today to the New York Times:

Gary Chaison misses the real, if unintended, lesson of the Russell Sage Foundation study that finds that low-skilled workers routinely keep working for employers who violate statutory employment regulations such as the minimum-wage (Letters, September 8).  This real lesson is that economists’ conventional wisdom about the negative consequences of the minimum-wage likely is true after all.

Fifteen years ago, David Card and Alan Krueger made headlines by purporting to show that a higher minimum-wage, contrary to economists’ conventional wisdom, doesn’t reduce employment of low-skilled workers.  The RSF study casts significant doubt on Card-Krueger.  First, because the minimum-wage itself is circumvented in practice, its negative effect on employment is muted, perhaps to the point of becoming statistically imperceptible.  Second, employers’ and employees’ success at evading other employment regulations – such as mandatory overtime pay – counteracts the minimum-wage’s effect of pricing many low-skilled workers out of the job market.

Sincerely,
Donald J. Boudreaux

The Court in Retrospect and Prospect

SCOTUSblog has published its final tally of the frequency with which the nine justices of the U.S. Supreme Court disagreed with each other in the 53 non-unanimous cases that were decided in the recently ended term. The tally indicates that Kennedy, the so-called swing justice, generally aligns with the Court’s “conservative” wing, so I placed him there, in company with Alito, Roberts, Thomas, and Scalia. The Court’s “liberal” wing, of course, comprises Breyer, Souter, Ginsburg, and Stevens.*

I then ranked the members of the Court’s two wings according to a measure of their net agreement with the other members of their respective wings. Thus:

[Figure: Net agreement rankings for the Court’s “conservative” wing]

[Figure: Net agreement rankings for the Court’s “liberal” wing]

Alito, for example, was in disagreement with his four “allies” (in non-unanimous cases) a total of 72 percent of the time (see graph below), for an average of 18 percent per ally. Alito was in disagreement with his four “opponents” a total of 272 percent of the time, for an average of 68 percent per opponent. By subtracting Alito’s average anti-“conservative” score (18 percent) from his average anti-“liberal” score (68 percent), I obtained his net average anti-“liberal” score (50 percent). Doing the same for the other four “conservatives,” I found Alito the most anti-“liberal” of the “conservatives.” He was followed closely by Roberts, Thomas, and Scalia, in that order. Kennedy finished fifth by several lengths.

I applied the same method to the “liberals,” and found Breyer the least anti-“conservative” of the lot, with Souter and Ginsburg close to each other in second and third places, and Stevens a strong fourth (or first, if you root for the “liberal” camp). (The apparent arithmetic discrepancies for Thomas, Breyer, and Stevens are due to rounding.)
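The scoring takes only a few lines to express. The per-justice disagreement rates below are hypothetical, chosen only so that the ally and opponent totals match the Alito example (72 percent and 272 percent):

    def net_score(vs_allies, vs_opponents):
        """Average disagreement with opponents minus average disagreement with allies (percentage points)."""
        return sum(vs_opponents) / len(vs_opponents) - sum(vs_allies) / len(vs_allies)

    alito_vs_allies    = [15, 17, 19, 21]   # hypothetical values; they total 72
    alito_vs_opponents = [62, 66, 70, 74]   # hypothetical values; they total 272
    print(net_score(alito_vs_allies, alito_vs_opponents))   # 50.0, the net anti-"liberal" score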

Thus, if you are a “conservative,” you are likely to rank the nine justices as follows: Alito, Roberts, Thomas, Scalia, Kennedy, Breyer, Souter, Ginsburg, and Stevens. (However, I would place Thomas first, because he comes closest to being a libertarian originalist.) I carried this ranking over to the following graphic, which gives a visual representation of the jurisprudential alignments in the Court’s recently completed term:

[Figure: Jurisprudential alignments among the nine justices, based on rates of disagreement in non-unanimous cases]

It is hard to see how Sotomayor’s ascendancy to the Court will change outcomes. She may be more assertive than Souter, but I would expect that to work against her in dealings with Alito, Roberts, Thomas, and Scalia. Nor would I expect Kennedy — who seems to pride himself on being the court’s “moderate conservative” — to respond well to Sotomayor’s reputedly “sharp elbows.” Even Kennedy found himself at odds with Stevens 60 percent of the time. And it seems likely that Sotomayor will vote with Stevens far more often than not — in spite of her convenient conversion to judicial restraint during her recent testimony before the Senate Judiciary Committee.

__________
* It would be more accurate to call Alito, Roberts, and Scalia right-statists, with minarchistic tendencies; Thomas, a right-minarchist; and the rest, left-statists with varying degrees of preference for slavery at home and surrender abroad. (See this post for an explanation of the labels used in the preceding sentence.)

The Cell-Phone Scourge

Today’s edition of The New York Times carries an article by Matt Richtel, “U.S. Withheld Data on Risks of Distracted Driving.” Richtel writes (in part):

In 2003, researchers at a federal agency proposed a long-term study of 10,000 drivers to assess the safety risk posed by cellphone use behind the wheel.

They sought the study based on evidence that such multitasking was a serious and growing threat on America’s roadways.

But such an ambitious study never happened. And the researchers’ agency, the National Highway Traffic Safety Administration, decided not to make public hundreds of pages of research and warnings about the use of phones by drivers — in part, officials say, because of concerns about angering Congress.

On Tuesday, the full body of research is being made public for the first time by two consumer advocacy groups, which filed a Freedom of Information Act lawsuit for the documents. The Center for Auto Safety and Public Citizen provided a copy to The New York Times, which is publishing the documents on its Web site….

“We’re looking at a problem that could be as bad as drunk driving, and the government has covered it up,” said Clarence Ditlow, director of the Center for Auto Safety…..

The highway safety researchers estimated that cellphone use by drivers caused around 955 fatalities and 240,000 accidents over all in 2002.

The researchers also shelved a draft letter they had prepared for Transportation Secretary Norman Y. Mineta to send, warning states that hands-free laws might not solve the problem.

That letter said that hands-free headsets did not eliminate the serious accident risk. The reason: a cellphone conversation itself, not just holding the phone, takes drivers’ focus off the road, studies showed.

The research mirrors other studies about the dangers of multitasking behind the wheel. Research shows that motorists talking on a phone are four times as likely to crash as other drivers, and are as likely to cause an accident as someone with a .08 blood alcohol content.

The three-person research team based the fatality and accident estimates on studies that quantified the risks of distracted driving, and an assumption that 6 percent of drivers were talking on the phone at a given time. That figure is roughly half what the Transportation Department assumes to be the case now.

More precise data does not exist because most police forces have not collected long-term data connecting cellphones to accidents. That is why the researchers called for the broader study with 10,000 or more drivers.

“We nevertheless have concluded that the use of cellphones while driving has contributed to an increasing number of crashes, injuries and fatalities,” according to a “talking points” memo the researchers compiled in July 2003.

It added: “We therefore recommend that the drivers not use wireless communication devices, including text messaging systems, when driving, except in an emergency.”

It comes as no news to any observant person that using a cell-phone of any kind can be a dangerous distraction to a driver. Richtel cites some of the previous work on the subject, work that I have cited in earlier posts about the cell-phone scourge. (See, especially, “Cell Phones and Driving, Once More” and its addendum.)

Richtel’s piece underscores the dangers of driving while using a cell phone. It also — perhaps unwittingly — underscores the misfeasance and malfeasance that are typical of government. I say unwittingly because TNYT has a (selective) bias toward government: nanny-ism = good; defense and justice = bad. In this case, the Times finds government in the wrong because it hasn’t been nanny-ish enough.

The Times to the contrary, government has but one legitimate role: to protect citizens from harm. I have no objection to laws banning cell-phone use by drivers, even though — at first blush — such laws might seem anti-libertarian. So-called libertarians who defend driving-while-cell-phoning are merely indulging in the kind of posturing that I have come to expect from the cosseted solipsists who, unfortunately, have come to dominate — and represent — what now passes for libertarianism.

Such “libertarians” to the contrary, liberty comes with obligations. One of those obligations is the responsibility to act in ways that do not harm others. (Another obligation is to defend liberty, or to pay taxes so that others can defend it on your behalf.) The “right” to drive does not include the “right” to drive while drunk or distracted. In sum, a ban on cell-phone use by drivers is entirely libertarian. As I have said,

for the vast majority of drivers there is no alternative to the use of public streets and highways. Relatively few persons can afford private jets and helicopters for commuting and shopping. And as far as I know there are no private, drunk-drivers-and-cell-phones-banned highways. Yes, there might be a market for those drunk-drivers-and-cell-phones-banned highways, but that’s not the reality of here-and-now.

So, I can avoid the (remote) risk of death by second-hand smoke by avoiding places where people smoke. But I cannot avoid the (less-than-remote) risk of death at the hands of a drunk or cell-phone yakker. Therefore, I say, arrest the drunks, cell-phone users, nail-polishers, newspaper-readers, and others of their ilk on sight; slap them with heavy fines; add jail terms for repeat offenders; and penalize them even more harshly if they take life, cause injury, or inflict property damage.

A Long Row to Hoe

In “A Welcome Trend,” I point to Obama’s declining popularity and note that

the trend — if it continues — offers hope for GOP gains in the mid-term elections, if not a one-term Obama-cy.

Of course, it is early days yet. Popularity lost can be regained. Clinton succeeded in making himself so unpopular during his first two years in office that the GOP was able to seize control of Congress in the 1994 mid-term elections. But Clinton was able to regroup, win re-election in 1996, and leave the presidency riding high in the polls, despite (or perhaps because of) his impeachment.

Focusing on 2012, and assuming that Obama runs for re-election, what must the GOP do to unseat him? In “There Is Hope in Mudville,” I offer this:

What about 2012? Can the GOP beat Obama? Why not? A 9-State swing would do the job, and Bush managed a 10-State swing in winning the 2000 election. If Bush can do it, almost anyone can do it — well, anyone but another ersatz conservative like Bob Dolt or John McLame.

Not so fast. A closer look at the results of the 2008 election is in order:

  • Based on the results of the 2004-2008 elections, I had pegged Iowa, New Hampshire, and New Mexico as tossup States: McCain lost them by 9.5, 9.6, and 15.1 percentage points, respectively.
  • I had designated Florida, Ohio, and Nevada as swing-Red States — close, but generally leaning toward the GOP. McCain lost the swing-Red States by 2.8, 4.6, and 12.5 percentage points, respectively.
  • Of the seven States I had designated as leaning-Red, McCain lost Virginia (6.3 percentage points) and Colorado (9.0 percentage points). (He held onto Missouri by only 0.1 percentage point.)
  • McCain also managed to lose two firm-Red States: North Carolina (0.3) and Indiana (1.0).

The tossups are no longer tossups. It will take a strong GOP candidate to reclaim them in 2012. The same goes for Nevada, Virginia, and Colorado. Only Florida, Ohio, North Carolina, and Indiana are within easy reach for the GOP’s next nominee.

McCain did better than Bush in the following States: Oklahoma, Alabama, Louisiana, West Virginia, Tennessee, and Massachusetts. The first five were already firm- and leaning-Red, so McCain’s showing there was meaningless. His small gain in Massachusetts (1.7 percentage points) is likewise meaningless; Obama won the Bay State by 25.8 percentage points. In sum, there is no solace to be found in McCain’s showing.

The GOP can win in 2012 only if

  • Obama descends into Bush-like unpopularity, and stays there; or
  • Obama remains a divisive figure (which he is, all posturing to the contrary) and the GOP somehow nominates a candidate who is a crowd-pleaser and a principled, articulate spokesman for limited government.

The GOP must not offer a candidate who promises to do what Obama would do to this country, only to do it more effectively and efficiently. The “base” will stay home in droves, and Obama will coast to victory — regardless of his unpopularity and divisiveness.

I hereby temper the optimistic tone of my earlier posts. The GOP has a long row to hoe.

A Welcome Trend

[Chart: Obama’s net approval rating.] Derived from Rasmussen Reports, Daily Presidential Tracking Poll, Obama Approval Index History. I use Rasmussen’s polling results because Rasmussen has a good track record with respect to presidential-election polling.

Obama’s approval rating may have dropped for the wrong reasons; that is, voters expect him to “do something” about jobs, health care, etc. But voters have come to expect presidents to “do something” about various matters which are none of government’s business. So, even if voters have become less approving of Obama for the wrong reasons, the trend — if it continues — offers hope for GOP gains in the mid-term elections, if not a one-term Obama-cy.

Randomness Is Over-Rated

In the preceding post (“Fooled by Non-Randomness”), I had much to say about Nassim Nicholas Taleb’s Fooled by Randomness. The short of it is this: Taleb over-rates the role of randomness in financial markets. In fact, his understanding of randomness seems murky.

My aim here is to offer a clearer picture of randomness (or the lack of it), especially as it relates to human behavior. Randomness, as explained in the preceding post, has almost nothing to do with human behavior, which is dominated by intention. Taleb’s misapprehension of randomness leads him to overstate the importance of a thing called survivor(ship) bias, to which I will turn after dealing with randomness.

WHERE IS RANDOMNESS FOUND?

Randomness — true randomness — is to be found mainly in the operation of fair dice, fair roulette wheels, cryptographic pinwheels, and other devices designed expressly for the generation of random values. But what about randomness in human affairs?

What we often call random events in human affairs really are non-random events whose causes we do not and, in some cases, cannot know. Such events are unpredictable, but they are not random. Such is the case with rolls (throws) of fair dice, which are commonly considered random events. Dice-rolls are “random” only because it is impossible to perceive the precise conditions of each roll in “real time,” even though knowledge of those conditions would enable a sharp-eyed observer to forecast the outcome of each throw with some accuracy, if the observer were armed with — and had instant access to — analyses of the results of myriad throws whose precise conditions had been captured by various recording devices.

An observer who lacks such information, and who considers the throws of fair dice to be random events, will see that the total number of pips showing on both dice converges on the following frequency distribution:

Rolled Freq.
2 0.028
3 0.056
4 0.083
5 0.111
6 0.139
7 0.167
8 0.139
9 0.111
10 0.083
11 0.056
12 0.028

This frequency distribution is really a shorthand way of writing 28 times out of 1,000; 56 times out of 1,000; etc.
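The distribution is easy to reproduce by enumerating the 36 equally likely outcomes; here is a short Python sketch:

    from collections import Counter
    from itertools import product

    counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))  # 36 equally likely rolls
    for total in range(2, 13):
        print(f"{total:>2}  {counts[total] / 36:.3f}")
    # 2 -> 0.028, 7 -> 0.167, 12 -> 0.028, matching the table above.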

Stable frequency distributions, such as the one given above, have useful purposes. In the case of craps, for example, a bettor can minimize his losses to the house (over a long period of time) if he takes the frequency distribution into account in his betting. Even more usefully, perhaps, an observed divergence from the normal frequency distribution (over many rolls of the dice) would indicate bias caused by (a) an unusual and possibly fraudulent condition (e.g., loaded dice) or (b) a player’s special skill in manipulating dice to skew the frequency distribution in a certain direction.

Randomness, then, is found in (a) the results of non-intentional actions, where (b) we lack sufficient knowledge to understand the link between actions and results.

THE REAL WORLD OF HUMAN AFFAIRS: IT IS WHAT IT IS

You will have noticed the beautiful symmetry of the frequency distribution for dice-rolling. Two-thirds of a large number of dice-rolls will have values of 5 through 9. Values 3 and 4 together will comprise about 14 percent of the rolls, as will values 10 and 11 together. Values 2 and 12 each will comprise less than 3 percent of the rolls.

In other words, the frequency distribution for dice-rolls closely resembles a normal distribution (bell curve). The virtue of this regularity is that it makes predictable the outcome of a large number of dice-rolls; and it makes obvious (over many dice-rolls) a rigged game involving dice. A statistically unexpected distribution of dice-rolls would be considered non-random or, more plainly, rigged — that is, intended by the rigging party.

To state the underlying point explicitly: It is unreasonable to reduce intentional human behavior to probabilistic formulas. Humans don’t behave like dice, roulette balls, or similar “random” devices. But that is what Taleb (and others) do when they ascribe unusual success in financial markets to “luck.” For example, here is what Taleb says on page 136:

I do not deny that if someone performed better than the crowd in the past, there is a presumption of his ability to do better in the future. But the presumption might be weak, very weak, to the point of being useless in decision making. Why? Because it all depends on two factors: The randomness-content of his profession and the number of [persons in the profession].

What Taleb means is this:

  • Success in a profession where randomness dominates outcomes is likely to have the same kind of distribution as that of an event that is considered random, like rolling dice.
  • That being the case, a certain percentage of the members of the profession will, by chance, seem to have great success.
  • If a profession has relatively few members, then a successful person in that profession is more of a standout than a successful person in a profession with, say, thousands of members.

Let me count the assumptions embedded in Taleb’s argument:

  1. Randomness actually dominates some professions. (In particular, he is thinking of the profession of trading financial instruments: stocks, bonds, derivatives, etc.)
  2. Success in a randomness-dominated profession therefore has almost nothing to do with the relevant skills of a member of that profession, nor with the member’s perspicacity in applying those skills.
  3. It follows that a very successful member of a randomness-dominated profession is probably very successful because of luck.
  4. The probability of stumbling across a very successful member of a randomness-dominated profession depends on the total number of members of the profession, given that the probability of success in the profession is distributed in a non-random way (as with dice-rolls).

One of the ways in which Taleb illustrates his thesis is to point to the mutual-fund industry, where far fewer than half of the industry’s actively managed funds match the performance of benchmark indices (e.g., S&P 500) over periods of 5 years and longer. But broad, long-term movements in financial markets are not random — as I show in the preceding post.

Nor is trading in financial instruments random; traders do not roll dice or flip coins when they make trades. (Well, the vast majority don’t.) That a majority (or even a super-majority) of actively managed funds does less well than an index fund has nothing to do with randomness and everything to do with the distribution of stock-picking skills. The research required to make informed decisions about financial instruments is arduous and expensive — and not every fool can do it well. Moreover, decision-making — even when based on thorough research — is clouded by uncertainty about the future and the variety of events that can affect the prices of financial instruments.

It is therefore unsurprising that the distribution of skills in the financial industry is skewed; there are relatively few professionals who have what it takes to succeed over the long run, and relatively many professionals (or would-be professionals) who compile mediocre-to-awful records.

I say it again: The most successful professionals are not successful because of luck, they are successful because of skill. There is no statistically predetermined percentage of skillful traders; the actual percentage depends on the skills of entrants and their willingness (if skillful) to make a career of it. A relevant analogy is found in the distribution of incomes:

In 2007, all households in the United States earned roughly $7.896 trillion. One half, 49.98%, of all income in the US was earned by households with an income over $100,000, the top twenty percent. Over one quarter, 28.5%, of all income was earned by the top 8%, those households earning more than $150,000 a year. The top 3.65%, with incomes over $200,000, earned 17.5%. Households with annual incomes from $50,000 to $75,000, 18.2% of households, earned 16.5% of all income. Households with annual incomes from $50,000 to $95,000, 28.1% of households, earned 28.8% of all income. The bottom 10.3% earned 1.06% of all income.

The outcomes of human endeavor are skewed because the distribution of human talents is skewed. It would be surprising to find as many as one-half of traders beating the long-run average performance of the various markets in which they operate.

BACK TO BASEBALL

To drive the point home, I return to the example of baseball, which I treated at length in the preceding post. Baseball, like most games, has many “random” elements, which is to say that baseball players cannot always predict accurately such things as the flight of a thrown or batted ball, the course a ball will take when it bounces off grass or an outfield fence, the distance and direction of a throw from the outfield, and so on. But despite the many unpredictable elements of the game, skill dominates outcomes over the course of seasons and careers. Moreover, skill is not distributed in a neat way, say, along a bell curve. A good case in point is the distribution of home runs:

  • There have been 16,884 players and 253,498 home runs in major-league history (1876 – present), an average of 15 home runs per person who has played in the major leagues since 1876. About 2,700 players have more than 15 home runs; about 14,000 players have fewer than 15 home runs; and about 100 players have exactly 15 home runs. Of the 2,700 players with more than 15 home runs, there are (as of yesterday) 1,006 with 74 or more home runs, and 25 with 500 or more home runs. (I obtained data about the frequency of career home runs with this search tool at Baseball-Reference.com.)
  • The career home-run statistic, in other words, has an extremely long, thin “tail” that, at first, rises gradually from 0 to 15. This tail represents the home-run records of about 89 percent of all the men who have played in the major leagues. The tail continues to broaden until, at the other end, it becomes a very short, very fat hump, which represents the 0.15 percent of players with 500 or more home runs.
  • There may be a standard statistical distribution which seems to describe the incidence of career home runs. But to say that the career home-run statistic matches any kind of distribution is merely to posit an after-the-fact “explanation” of a phenomenon that has one essential explanation: Some hitters are better at hitting home runs than other players; those better home-run hitters are more likely to stay in the major leagues long enough to compile a lot of home runs. (Even 74 home runs is a lot, relative to the mean of 15.)

And so it is with traders and other active “players” in financial markets. They differ in skill, and their skill differences cannot be arrayed neatly along a bell curve or any other mathematically neat frequency distribution. To adapt a current coinage, they are what they are — nothing more, nothing less.

TALEB’S A PRIORI WORLDVIEW, WITH A BIAS

Taleb, of course, views the situation the other way around. He sees an a priori distribution of “winners” and “losers,” where “winners” are determined mainly by luck, not skill. Moreover, we — the civilians on the sidelines — labor under a false impression about the relative number of “winners” because

it is natural for those who failed to vanish completely. Accordingly, one sees the survivors, and only the survivors, which imparts such a mistaken perception of the odds [favoring success]. (p. 137)

Here, Taleb is playing a variation on a favorite theme: survivor(ship) bias. What is it? Here are three quotations that may help you understand it:

Survivor bias is a prominent form of ex-post selection bias. It exists in data sets that exclude a disproportionate share of non-surviving firms…. (“Accounting Information Free of Selection Bias: A New UK Database 1953-1999“)

Survivorship bias causes performance results to be overstated because accounts that have been terminated, which may have underperformed, are no longer in the database. This is the most documented and best understood source of peer group bias. For example, an unsuccessful management product that was terminated in the past is excluded from current peer groups. This screening out of losers results in an overstatement of past performance. A good illustration of how survivor bias can skew things is the “marathon analogy”, which asks: If only 100 runners out of a 1,000-contestant marathon actually finish, is the 100th the last? Or in the top ten percent? (“Warning! Peer Groups Are Hazardous to Our Wealth“)

It is true that a number of famous successful people have spent 10,000 hours practising. However, it is also true that many people we have never heard of because they weren’t successful also practised for 10,000 hours. And that there are successful people who were very good without practising for 10,000 hours before their breakthrough (the Rolling Stones, say). And Gordon Brown isn’t very good at being Prime Minister despite preparing for 10,000 hours. (“Better Services without Reform? It’s Just a Con“)
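To make the mechanics of survivorship bias concrete, here is a minimal simulation sketch (invented numbers, not drawn from any of the sources quoted above):

    # Averaging only the funds still in the database overstates the performance
    # of the whole original cohort -- the essence of survivorship bias.
    import random

    random.seed(2)

    funds = [random.gauss(0.05, 0.10) for _ in range(1000)]   # hypothetical ten-year annualized returns

    # Suppose losing funds are closed or merged away and so drop out of the data.
    survivors = [r for r in funds if r > 0.0]

    print(f"Average return, all funds started: {sum(funds) / len(funds):.1%}")
    print(f"Average return, survivors only:    {sum(survivors) / len(survivors):.1%}")
    # The survivor-only figure is markedly higher, purely because the losers
    # are no longer there to be counted.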

First of all, there are no “odds” favoring success — even in financial markets. Financial “players” do what they can do, and most of them — like baseball players — simply don’t have what it takes for great success. Outcomes are skewed, not because of (fictitious) odds but because talent is distributed unevenly.

CONCLUSION

The real lesson for us spectators is not to assume that the “winners” are merely lucky. No, the real lesson is to seek out those “winners” who have proven their skills over a long period of time, through boom and bust and boom and bust.

Those who do well, over the long run, do not do so merely because they have survived. They have survived because they do well.

Fooled by Non-Randomness

Nassim Nicholas Taleb, in his best-selling Fooled by Randomness, charges human beings with the commission of many perceptual and logical errors. One reviewer captures the point of the book, which is to

explore luck “disguised and perceived as non-luck (that is, skills).” So many of the successful among us, he argues, are successful due to luck rather than reason. This is true in areas beyond business (e.g. Science, Politics), though it is more obvious in business.

Our inability to recognize the randomness and luck that had to do with making successful people successful is a direct result of our search for pattern. Taleb points to the importance of symbolism in our lives as an example of our unwillingness to accept randomness. We cling to biographies of great people in order to learn how to achieve greatness, and we relentlessly interpret the past in hopes of shaping our future.

Only recently has science produced probability theory, which helps embrace randomness. Though the use of probability theory in practice is almost nonexistent.

Taleb says the confusion between luck and skill is our inability to think critically. We enjoy presenting conjectures as truth and are not equipped to handle probabilities, so we attribute our success to skill rather than luck.

Taleb writes in a style found all too often on best-seller lists: pseudo-academic theorizing “supported” by selective (often anecdotal) evidence. I sometimes enjoy such writing, but only for its entertainment value. Fooled by Randomness leaves me unfooled, for several reasons.

THE FUNDAMENTAL FLAW

The first reason that I am unfooled by Fooled… might be called a meta-reason. Standing back from the book, I am able to perceive its essential defect: According to Taleb, human affairs — especially economic affairs, and particularly the operations of financial markets — are dominated by randomness. But if that is so, only a delusional person can truly claim to understand the conduct of human affairs. Taleb claims to understand the conduct of human affairs. Taleb is therefore either delusional or omniscient.

Given Taleb’s humanity, it is more likely that he is delusional — or simply fooled, but not by randomness. He is fooled because he proceeds from the assumption of randomness instead of exploring the ways and means by which humans are actually capable of shaping events. Taleb gives no more than scant attention to those traits which, in combination, set humans apart from other animals: self-awareness, empathy, forward thinking, imagination, abstraction, intentionality, adaptability, complex communication skills, and sheer brain power. Given those traits (in combination) the world of human affairs cannot be random. Yes, human plans can fail of realization for many reasons, including those attributable to human flaws (conflict, imperfect knowledge, the triumph of hope over experience, etc.). But the failure of human plans is due to those flaws — not to the randomness of human behavior.

What Taleb sees as randomness is something else entirely. The trajectory of human affairs often is unpredictable, but it is not random. For it is possible to find patterns in the conduct of human affairs, as Taleb admits (implicitly) when he discusses such phenomena as survivorship bias, skewness, anchoring, and regression to the mean.

A DISCOURSE ON RANDOMNESS

What Is It?

Taleb, having bloviated for dozens of pages about the failure of humans to recognize randomness, finally gets around to (sort of) defining randomness on pages 168 and 169 (of the 2005 paperback edition):

…Professor Karl Pearson … devised the first test of nonrandomness (it was in reality a test of deviation from normality, which for all intents and purposes, was the same thing). He examined millions of runs of [a roulette wheel] during the month of July 1902. He discovered that, with high degree of statistical significance … the runs were not purely random…. Philosophers of statistics call this the reference case problem to explain that there is no true attainable randomness in practice, only in theory….

…Even the fathers of statistical science forgot that a random series of runs need not exhibit a pattern to look random…. A single random run is bound to exhibit some pattern — if one looks hard enough…. [R]eal randomness does not look random.

The quoted passage illustrates nicely the superficiality of Fooled by Randomness, and (I must assume) the muddledness of Taleb’s thinking:

  • He accepts a definition of randomness which describes the observation of outcomes of mechanical processes (e.g., the turning of a roulette wheel, the throwing of dice) that are designed to yield random outcomes. That is, randomness of the kind cited by Taleb is in fact the result of human intentions.
  • If “there is no true attainable randomness,” why has Taleb written a 200-plus page book about randomness?
  • What can he mean when he says “a random series of runs need not exhibit a pattern to look random”? The only sensible interpretation of that bit of nonsense would be this: It is possible for a random series of runs to contain what looks like a pattern. But remember that the random series of runs to which Taleb refers is random only because humans intended its randomness.
  • It is true enough that “A single random run is bound to exhibit some pattern — if one looks hard enough.” Sure it will. But it remains a single random run of a process that is intended to produce randomness, which is utterly unlike such events as transactions in financial markets.
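A small sketch of my own (a simulation, not anything from Taleb) illustrates that last point: a process designed for randomness will, in a single long run, throw up streaks that look like patterns to anyone hunting for them.

    # Count the longest streak of identical outcomes in 1,000 simulated fair coin tosses.
    import random

    random.seed(3)

    tosses = [random.choice("HT") for _ in range(1000)]

    longest = current = 1
    for prev, cur in zip(tosses, tosses[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)

    print("Longest streak of identical tosses in 1,000 flips:", longest)
    # A streak of roughly ten in a row is typical -- striking to the eye,
    # yet entirely consistent with a process intended to be random.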

One of the “fathers of statistical science” mentioned by Taleb (deep in the book’s appendix) is Richard von Mises, who in Probability, Statistics and Truth defines randomness as follows:

First, the relative frequencies of the attributes [e.g. heads and tails] must possess limiting values [i.e., converge on 0.5, in the case of coin tosses]. Second, these limiting values must remain the same in all partial sequences which may be selected from the original one in an arbitrary way. Of course, only such partial sequences can be taken into consideration as can be extended indefinitely, in the same way as the original sequence itself. Examples of this kind are, for instance, the partial sequences formed by all odd members of the original sequence, or by all members for which the place number in the sequence is the square of an integer, or a prime number, or a number selected according to some other rule, whatever it may be. (pp. 24-25 of the 1981 Dover edition, which is based on the author’s 1951 edition)
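As a quick, simulated check of the two conditions just quoted (a sketch of my own, not from the book): the relative frequency of heads settles near one half, and it stays near one half in partial sequences picked out by an arbitrary place rule.

    # Simulate a long run of coin tosses and compare relative frequencies of
    # heads in the full sequence and in two arbitrarily selected subsequences.
    import random

    random.seed(5)

    n = 1_000_000
    tosses = [random.randint(0, 1) for _ in range(n)]             # 1 = heads

    def freq(seq):
        return sum(seq) / len(seq)

    odd_members = tosses[::2]                                     # 1st, 3rd, 5th, ... members
    square_members = [tosses[i * i - 1] for i in range(1, 1001)]  # places 1, 4, 9, ..., 1,000,000

    print(f"all tosses:              {freq(tosses):.4f}")
    print(f"odd-numbered places:     {freq(odd_members):.4f}")
    print(f"square-numbered places:  {freq(square_members):.4f}")
    # Each figure hovers near 0.5 -- the limiting value survives the arbitrary
    # place selection, which is what makes the sequence "random" in this sense.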

Gregory J. Chaitin, writing in Scientific American (“Randomness and Mathematical Proof,” vol. 232, no. 5 (May 1975), pp. 47-52), offers this:

We are now able to describe more precisely the differences between the[se] two series of digits … :

01010101010101010101
01101100110111100010

The first could be specified to a computer by a very simple algorithm, such as “Print 01 ten times.” If the series were extended according to the same rule, the algorithm would have to be only slightly larger; it might be made to read, for example, “Print 01 a million times.” The number of bits in such an algorithm is a small fraction of the number of bits in the series it specifies, and as the series grows larger the size of the program increases at a much slower rate.

For the second series of digits there is no corresponding shortcut. The most economical way to express the series is to write it out in full, and the shortest algorithm for introducing the series into a computer would be “Print 01101100110111100010.” If the series were much larger (but still apparently patternless), the algorithm would have to be expanded to the corresponding size. This “incompressibility” is a property of all random numbers; indeed, we can proceed directly to define randomness in terms of incompressibility: A series of numbers is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the series itself [emphasis added].

This is another way of saying that if you toss a balanced coin 1,000 times the only way to describe the outcome of the tosses is to list the 1,000 outcomes of those tosses. But, again, the thing that is random is the outcome of a process designed for randomness.
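To give the flavor of that definition, here is a rough sketch that uses zlib compression as a crude stand-in for Chaitin’s “smallest algorithm” (an illustration of the idea, not a measurement of true algorithmic complexity):

    # A highly repetitive series collapses under compression; a patternless one
    # cannot be specified much more briefly than by writing it out.
    import random
    import zlib

    random.seed(4)

    patterned = bytes([0, 1]) * 500_000                                    # the "print 01 ..." series, as bytes
    patternless = bytes(random.getrandbits(8) for _ in range(1_000_000))   # no generating rule to exploit

    for name, data in (("patterned", patterned), ("patternless", patternless)):
        print(f"{name}: {len(data):,} bytes -> {len(zlib.compress(data)):,} bytes compressed")
    # The repetitive series shrinks to a few thousand bytes; the patternless one
    # barely shrinks at all, which is Chaitin's criterion for randomness.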

Taking Mises and Chaitin’s definitions together, we can define random events as events which are repeatable, convergent on a limiting value, and truly patternless over a large number of repetitions. Evolving economic events (e.g., stock-market trades, economic growth) are not alike (in the way that dice are, for example), they do not converge on limiting values, and they are not patternless, as I will show.

In short, Taleb fails to demonstrate that human affairs in general or financial markets in particular exhibit randomness, properly understood.

Randomness and the Physical World

Nor are we trapped in a random universe. Returning to Mises, I quote from the final chapter of Probability, Statistics and Truth:

We can only sketch here the consequences of these new concepts [e.g., quantum mechanics and Heisenberg's principle of uncertainty] for our general scientific outlook. First of all, we have no cause to doubt the usefulness of the deterministic theories in large domains of physics. These theories, built on a solid body of experience, lead to results that are well confirmed by observation. By allowing us to predict future physical events, these physical theories have fundamentally changed the conditions of human life. The main part of modern technology, using this word in its broadest sense, is still based on the predictions of classical mechanics and physics. (p. 217)

Even now, almost 60 years on, the field of nanotechnology is beginning to harness quantum-mechanical effects in the service of a long list of useful purposes.

The physical world, in other words, is not dominated by randomness, even though its underlying structures must be described probabilistically rather than deterministically.

Summation and Preview

A bit of unpredictability (or “luck”) here and there does not make for a random universe, random lives, or random markets. If a bit of unpredictability here and there dominated our actions, we wouldn’t be here to talk about randomness — and Taleb wouldn’t have been able to marshal his thoughts into a published, marketed, and well-sold book.

Human beings are not “designed” for randomness. Human endeavors can yield unpredictable results, but those results do not arise from random processes; they derive from skill or the lack thereof, knowledge or the lack thereof (including the kinds of self-delusions about which Taleb writes), and conflicting objectives.

An Illustration from Life

To illustrate my position on randomness, I offer the following digression about the game of baseball.

At the professional level, the game’s poorest players seldom rise above the low minor leagues. But even those poorest players are paragons of excellence when compared with the vast majority of American males of about the same age. Did those poorest players get where they were because of luck? Perhaps some of them were in the right place at the right time, and so were signed to minor league contracts. But their luck runs out when they are called upon to perform in more than a few games. What about those players who weren’t in the right place at the right time, and so were overlooked in spite of skills that would have advanced them beyond the rookie leagues? I have no doubt that there have been many such players. But, in the main, professional baseball is stocked with skilled players who are there because they intend to be there, and because baseball clubs intend for them to be there.

Now, most minor leaguers fail to advance to the major leagues, even for the proverbial “cup of coffee” (appearing in a few games at the end of the major-league season, when teams are allowed to expand their rosters following the end of the minor-league season). Does “luck” prevent some minor leaguers from advancement to “the show” (the major leagues)? Of course. Does “luck” result in the advancement of some minor leaguers to “the show”? Of course. But “luck,” in this context, means injury, illness, a slump, a “hot” streak, and the other kinds of unpredictable events that ballplayers are subject to. Are the events random? Yes, in the sense that they are unpredictable, but I daresay that most baseball players do not succumb to bad luck or advance very far or for very long because of good luck. In fact, ballplayers who advance to the major leagues, and then stay there for more than a few seasons, do so because they possess (and apply) greater skill than their minor-league counterparts. And make no mistake, each player’s actions are so closely watched and so extensively quantified that it isn’t hard to tell when a player is ready to be replaced.

It is true that a player may experience “luck” for a while during a season, and sometimes for a whole season. But a player will not be consistently “lucky” for several seasons. The length of his career (barring illness, injury, or voluntary retirement), and his accomplishments during that career, will depend mainly on his inherent skills and his assiduousness in applying those skills.

No one believes that Ty Cobb, Babe Ruth, Ted Williams, Christy Mathewson, Warren Spahn, and the dozens of other baseball players who rank among the truly great were lucky. No one believes that the vast majority of the tens of thousands of minor leaguers who never enjoyed more than the proverbial cup of coffee were unlucky. No one believes that the vast majority of the millions of American males who never made it to the minor leagues were unlucky. Most of them never sought a career in baseball; those who did simply lacked the requisite skills.

In baseball, as in life, “luck” is mainly an excuse and rarely an explanation. We prefer to apply “luck” to outcomes when we don’t like the true explanations for them. In the realm of economic activity and financial markets, one such explanation (to which I will come) is the exogenous imposition of governmental power.

ARE ECONOMIC AND FINANCIAL OUTCOMES TRULY RANDOM?

They Cannot Be, Given Competition

Returning to Taleb’s main theme — the randomness of economic and financial events — I quote this key passage (my comments are in brackets and boldface):

…Most of [Bill] Gates’[s] rivals have an obsessive jealousy of his success. They are maddened by the fact that he managed to win so big while many of them are struggling to make their companies survive. [These are unsupported claims that I include only because they set the stage for what follows.]

Such ideas go against classical economic models, in which results either come from a precise reason (there is no account for uncertainty) or the good guy wins (the good guy is the one who is most skilled and has some technical superiority). [The "good guy" theory would come as a great surprise to "classical" economists, who quite well understood imperfect competition based on product differentiation and monopoly based on (among other things) early entry into a market.] Economists discovered path-dependent effects late in their game [There is no "late" in a "game" that had no distinct beginning and has no pre-ordained end.], then tried to publish wholesale on a topic that would otherwise be bland and obvious. For instance, Brian Arthur, an economist concerned with nonlinearities at the Santa Fe Institute [What kinds of nonlinearities are found at the Santa Fe Institute?], wrote that chance events coupled with positive feedback other than technological superiority will determine economic superiority — not some abstrusely defined edge in a given area of expertise. [It would come as no surprise to economists -- even "classical" ones -- that many factors aside from technical superiority determine market outcomes.] While early economic models excluded randomness, Arthur explained how “unexpected orders, chance meetings with lawyers, managerial whims … would help determine which ones achieved early sales and, over time, which firms dominated.”

Regarding the final sentence of the quoted passage, I refer back to the example of baseball. A person or a firm may gain an opportunity to succeed because of the kinds of “luck” cited by Brian Arthur, but “good luck” cannot sustain an incompetent performer for very long. And when “bad luck” happens to competent individuals and firms, they are often (perhaps usually) able to overcome it.

While overplaying the role of luck in human affairs, Taleb underplays the role of competition when he denigrates “classical economic models,” in which competition plays a central role. “Luck” cannot forever outrun competition, unless the game is rigged by governmental intervention, namely, the writing of regulations that tend to favor certain competitors (usually market incumbents) over others (usually would-be entrants). The propensity to regulate at the behest of incumbents (who plead “public interest,” of course) is a proof of the power of competition to shape economic outcomes. Competition is loathed and feared, and yet it leads us in the direction to which classical economic theory points: greater output and lower prices.

Competition is what ensures that (for the most part) the best ballplayers advance to the major leagues. It’s what keeps “monopolists” like Microsoft hopping (unless they have a government-guaranteed monopoly), because even a monopolist (or oligopolist) can face competition, and eventually lose to it — witness the former “Big Three” auto makers, many formerly thriving chain stores (from Kresge’s to Montgomery Ward’s), and numerous other brand names of days gone by. If Microsoft is someday displaced, it will be because some other firm actually offers consumers more value for their money, either in the way of products similar to those marketed by Microsoft or in entirely new products that supplant those offered by Microsoft.

Monopolists and oligopolists cannot survive without constant innovation and attention to their customers’ needs. Why? Because they must compete with the offerors of all the other goods and services upon which consumers might spend their money. There is nothing — not even water — which cannot be produced or delivered in competitive ways. (For more, see this.)

The names of the particular firms that survive the competitive struggle may be unpredictable, but what is predictable is the tendency of competitive forces toward economic efficiency. In other words, the specific outcomes of economic competition may be unpredictable (which is not a bad thing), but the general result — efficiency — is neither unpredictable nor a manifestation of randomness or “luck.”

Taleb, had he broached the subject of competition, would (with his hero George Soros) denigrate it, on the ground that there is no such thing as perfect competition. But the failure of competitive forces to mimic the model of perfect competition does not negate the power of competition, as I have summarized it here. Indeed, the failure of competitive forces to mimic the model of perfect competition is not a failure, for perfect competition is unattainable in practice, and to hold it up as a measure of the effectiveness of market forces is to indulge in the Nirvana fallacy.

In any event, Taleb’s myopia with respect to competition is so complete that he fails to mention it, let alone address its beneficial effects (even when it is less than perfect). And yet Taleb dares to dismiss as a utopist Milton Friedman (p. 272) — the same Milton Friedman who was among the twentieth century’s foremost advocates of the benefits of competition.

Are Financial Markets Random?

Given what I have said thus far, I find it almost incredible that anyone believes in the randomness of financial markets. It is unclear where Taleb stands on the random-walk hypothesis, but it is clear that he believes financial markets to be driven by randomness. Yet, contradictorily, he seems to attack the efficient-markets hypothesis (see pp. 61-62), which is the foundation of the random-walk hypothesis.

What is the random-walk hypothesis? In brief, it is this: Financial markets are so efficient that they instantaneously reflect all information bearing on the prices of financial instruments that is then available to persons buying and selling those instruments. (The qualifier “then available to persons buying and selling those instruments” leaves the door open for [a] insider trading and [b] arbitrage, due to imperfect knowledge on the part of some buyers and/or sellers.) Because information can change rapidly and in unpredictable ways, the prices of financial instruments move randomly. But the random movement is of a very special kind:

If a stock goes up one day, no stock market participant can accurately predict that it will rise again the next. Just as a basketball player with the “hot hand” can miss the next shot, the stock that seems to be on the rise can fall at any time, making it completely random.

And, therefore, changes in stock prices cannot be predicted.

Note, however, the focus on changes. It is that focus which creates the illusion of randomness and unpredictability. It is like hoping to understand the movements of the planets around the sun by looking at the random movements of a particle in a cloud chamber.

When we step back from day-to-day price changes, we are able to see the underlying reality: prices (instead of changes) and price trends (which are the opposite of randomness). This (correct) perspective enables us to see that stock prices (on the whole) are not random, and to identify the factors that influence the broad movements of the stock market.
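Here is a minimal sketch (simulated prices with assumed growth and noise, not market data) of that distinction: the day-to-day changes look unpredictable, while the level plainly follows its long-run trend.

    # Simulate a trending price series; then compare the predictability of the
    # daily changes with the trend in the level.
    import math
    import random

    random.seed(6)

    days = 250 * 40                      # roughly forty years of trading days
    drift = 0.07 / 250                   # assumed 7% annual growth in the level
    vol = 0.005                          # assumed day-to-day noise

    price = [100.0]
    for _ in range(days):
        price.append(price[-1] * (1 + drift + random.gauss(0, vol)))
    changes = [b / a - 1 for a, b in zip(price, price[1:])]

    def corr(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        return cov / math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))

    print(f"corr of today's change with tomorrow's: {corr(changes[:-1], changes[1:]):+.3f}")
    print(f"corr of log price level with time:      {corr([math.log(p) for p in price], list(range(len(price)))):+.3f}")
    # The first figure sits near zero (the "random walk" in changes); the second
    # sits near one (the trend in the level) -- prices, not price changes.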
For one thing, if you look at stock prices correctly, you can see that they vary cyclically. Here is a telling graphic (from “Efficient-market hypothesis” at Wikipedia):

[Figure: Returns on stocks vs. P/E ratio.] Price-earnings ratios as a predictor of twenty-year returns, based upon the plot by Robert Shiller (Figure 10.1). The horizontal axis shows the real price-earnings ratio of the S&P Composite Stock Price Index as computed in Irrational Exuberance (inflation adjusted price divided by the prior ten-year mean of inflation-adjusted earnings). The vertical axis shows the geometric average real annual return on investing in the S&P Composite Stock Price Index, reinvesting dividends, and selling twenty years later. Data from different twenty-year periods is color-coded as shown in the figure’s key. Shiller states that this plot “confirms that long-term investors—investors who commit their money to an investment for ten full years—did do well when prices were low relative to earnings at the beginning of the ten years. Long-term investors would be well advised, individually, to lower their exposure to the stock market when it is high, as it has been recently, and get into the market when it is low.” This correlation between price to earnings ratios and long-term returns is not explained by the efficient-market hypothesis.

Why should stock prices tend to vary cyclically? Because stock prices generally are driven by economic growth (i.e., changes in GDP), and economic growth is strongly cyclical. (See this post.)

More fundamentally, the economic outcomes reflected in stock prices aren’t random, for they depend mainly on intentional behavior along well-rehearsed lines (i.e., the production and consumption of goods and services in ways that evolve over time). Variations in economic behavior, even when they are unpredictable, have explanations; for example:

  • Innovation and capital investment spur the growth of economic output.
  • Natural disasters slow the growth of economic output (at least temporarily) because they absorb resources that could have gone to investment  (as well as consumption).
  • Governmental interventions (taxation and regulation), if not reversed, dampen growth permanently.

There is nothing in those three statements that hasn’t been understood since the days of Adam Smith. Regarding the third statement, the general slowing of America’s economic growth since the advent of the Progressive Era around 1900 is certainly not due to randomness; it is due to the ever-increasing burden of taxation and regulation imposed on the economy — an entirely predictable result, and certainly not a random one.

In fact, the long-term trend of the stock market (as measured by the S&P 500) is strongly correlated with GDP. And broad swings around that trend can be traced to governmental intervention in the economy. The following graph shows how the S&P 500, reconstructed to 1870, parallels constant-dollar GDP:

[Figure: Real S&P 500 vs. real GDP, 1870 to the present]

The next graph shows the relationship more clearly.

[Figure: Real S&P 500 vs. real GDP, second view]

The wild swings around the trend line began in the uncertain aftermath of World War I, which saw the imposition of production and price controls. The swings continued with the onset of the Great Depression (which can be traced to governmental action), the advent of the anti-business New Deal, and the imposition of production and price controls on a grand scale during World War II. The next downswing was occasioned by the culmination of the Great Society, the “oil shocks” of the early 1970s, and the raging inflation that was touched off by — you guessed it — government policy. The latest downswing is owed mainly to the financial crisis born of yet more government policy: loose money and easy loans to low-income borrowers.

And so it goes, wildly but predictably enough if you have the faintest sense of history. The moral of the story: Keep your eye on government and a hand on your wallet.

CONCLUSION

There is randomness in economic affairs, but they are not dominated by randomness. They are dominated by intentions, including especially the intentions of the politicians and bureaucrats who run governments. Yet, Taleb has no space in his book for the influence of their deeds on economic activity and financial markets.

Taleb is right to disparage those traders (professional and amateur) who are lucky enough to catch upswings, but are unprepared for downswings. And he is right to scoff at their readiness to believe that the current upswing (uniquely) will not be followed by a downswing (“this time it’s different”).

But Taleb is wrong to suggest that traders are fooled by randomness. They are fooled to some extent by false hope, but more profoundly by their inability to perceive the economic damage wrought by government. They are not alone, of course; most of the rest of humanity shares their perceptual failings.

Taleb, in that respect, is only somewhat different than most of the rest of humanity. He is not fooled by false hope, but he is fooled by non-randomness — the non-randomness of government’s decisive influence on economic activity and financial markets. In overlooking that influence he overlooks the single most powerful explanation for the behavior of markets in the past 90 years.

Beware of Libertarian Paternalists

I have written extensively about paternalism of the so-called libertarian variety. (See this post and the posts linked therein.) Glen Whitman, in two recent posts at Agoraphilia, renews his attack on “libertarian paternalism,” the main proponents of which are Cass Sunstein and Richard Thaler (S&T). In the first of the two posts, Whitman writes:

[Thaler] continues to disregard the distinction between public and private action.

Some critics contend that behavioral economists have neglected the obvious fact that bureaucrats make errors, too. But this misses the point. After all, wouldn’t you prefer to have a qualified, albeit human, technician inspect your aircraft’s engines rather than do it yourself?

The owners of ski resorts hire experts who have previously skied the runs, under various conditions, to decide which trails should be designated for advanced skiers. These experts know more than a newcomer to the mountain. Bureaucrats are human, too, but they can also hire experts and conduct research.

Here we see two of Thaler’s favorite stratagems deployed at once. First, he relies on a deceptively innocuous, private, and non-coercive example to illustrate his brand of paternalism. Before it was cafeteria dessert placement; now it’s ski-slope markings. Second, he subtly equates private and public decision makers without even mentioning their different incentives. In this case, he uses “bureaucrats” to refer to all managers, regardless of whether they manage private or public enterprises.

The distinction matters. The case of ski-slope markings is the market principle at work. Skiers want to know the difficulty of slopes, and so the owners of ski resorts provide it. They have a profit incentive to do so. This is not at all coercive, and it is no more “paternalist” than a restaurant identifying the vegetarian dishes.

Public bureaucrats don’t have the same incentives at all. They don’t get punished by consumers for failing to provide information, or for providing the wrong information. They don’t suffer if they listen to the wrong experts. They face no competition from alternative providers of their service. They get to set their own standards for “success,” and if they fail, they can use that to justify a larger budget.

And Thaler knows this, because these are precisely the arguments made by the “critics” to whom he is responding. His response is just a dodge, enabled by his facile use of language and his continuing indifference – dare I say hostility? – to the distinction between public and private.

In the second of the two posts, Whitman says:

The advocates of libertarian paternalism have taken great pains to present their position as one that does not foreclose choice, and indeed even adds choice. But this is entirely a matter of presentation. They always begin with non-coercive and privately adopted measures, such as the ski-slope markings in Thaler’s NY Times article. And when challenged, they resolutely stick to these innocuous examples (see this debate between Thaler and Mario Rizzo, for example). But if you read Sunstein & Thaler’s actual publications carefully, you will find that they go far beyond non-coercive and private measures. They consciously construct a spectrum of “libertarian paternalist” policies, and at one end of this spectrum lies an absolute ban on certain activities, such as motorcycling without a helmet. I’m not making this up!…

[A]s Sunstein & Thaler’s published work clearly indicates, this kind of policy [requiring banks to offer "plain vanilla" mortgages] is the thin end of the wedge. The next step, as outlined in their articles, is to raise the cost of choosing other options. In this case, the government could impose more and more onerous requirements for opting out of the “plain vanilla” mortgage: you must fill out extra paperwork, you must get an outside accountant, you must have a lawyer present, you must endure a waiting period, etc., etc. Again, this is not my paranoid imagination at work. S&T have said explicitly that restrictions like these would count as “libertarian paternalism” by their definition….

The problem is that S&T’s “libertarian paternalism” is used almost exclusively to advocate greater intervention, not less. I have never, for instance, seen S&T push for privatization of Social Security or vouchers in education. I have never seen them advocate repealing a blanket smoking ban and replacing it with a special licensing system for restaurants that want to allow their customers to smoke. If they have, I would love to see it.

In their articles, S&T pay lip service to the idea that libertarian paternalism lies between hard paternalism and laissez faire, and thus that it could in principle be used to expand choice. But look at the actual list of policies they’ve advocated on libertarian paternalist grounds, and see where their real priorities lie.

S&T are typical “intellectuals,” in that they presume to know how others should lead their lives — a distinctly non-libertarian attitude. It is, in fact, a hallmark of “liberalism.” In an earlier post I had this to say about the founders of “liberalism” — John Stuart Mill, Thomas Hill Green, and Leonard Trelawney Hobhouse:

[W]e are met with (presumably) intelligent persons who believe that their intelligence enables them to peer into the souls of others, and to raise them up through the blunt instrument that is the state.

And that is precisely the mistake that lies at the heart of what we now call “liberalism” or “progressivism.” It is the three-fold habit of setting oneself up as an omniscient arbiter of economic and social outcomes, then castigating the motives and accomplishments of the financially successful and socially “well placed,” and finally penalizing financial and social success through taxation and other regulatory mechanisms (e.g., affirmative action, admission quotas, speech codes, “hate crime” legislation). It is a habit that has harmed the intended beneficiaries of government intervention, not just economically but in other ways, as well….

The other ways, of course, include the diminution of social liberty, which is indivisible from economic liberty.

Just how dangerous to liberty are S&T? Thaler is an influential back-room operator, with close ties to the Obama camp. Sunstein is a long-time crony and adviser who now heads the White House’s Office of Information and Regulatory Affairs, where he has an opportunity to enforce “libertarian paternalism”:

…Sunstein would like to control the content of the internet — for our own good, of course. I refer specifically to Sunstein’s “The Future of Free Speech,” in which he advances several policy proposals, including these:

4. . . . [T]he government might impose “must carry” rules on the most popular Websites, designed to ensure more exposure to substantive questions. Under such a program, viewers of especially popular sites would see an icon for sites that deal with substantive issues in a serious way. They would not be required to click on them. But it is reasonable to expect that many viewers would do so, if only to satisfy their curiosity. The result would be to create a kind of Internet sidewalk, promoting some of the purposes of the public forum doctrine. Ideally, those who create Websites might move in this direction on their own. If they do not, government should explore possibilities of imposing requirements of this kind, making sure that no program draws invidious lines in selecting the sites whose icons will be favoured. Perhaps a lottery system of some kind could be used to reduce this risk.

5. The government might impose “must carry” rules on highly partisan Websites, designed to ensure that viewers learn about sites containing opposing views. This policy would be designed to make it less likely for people to simply hear echoes of their own voices. Of course, many people would not click on the icons of sites whose views seem objectionable; but some people would, and in that sense the system would not operate so differently from general interest intermediaries and public forums. Here too the ideal situation would be voluntary action. But if this proves impossible, it is worth considering regulatory alternatives. [Emphasis added.]

A Left-libertarian defends Sunstein’s foray into thought control, concluding that

Sunstein once thought some profoundly dumb policies might be worth considering, but realized years ago he was wrong about that… The idea was a tentative, speculative suggestion he now condemns in pretty strong terms.

Alternatively, in the face of severe criticism of his immodest proposal, Sunstein merely went underground, to await an opportunity to revive it. I somehow doubt that Sunstein, as a confirmed paternalist, truly abandoned it. The proposal certainly was not off-the-cuff, running to 11 longish web pages. Now, judging by the bulleted list above, the time is right for a revival of Sunstein’s proposal. And there he is, heading the Office of Information and Regulatory Affairs. The powers of that office supposedly are constrained by the executive order that established it. But it is evident that the Obama administration isn’t bothered by legal niceties when it comes to the exercise of power. Only a few pen strokes stand between Obama and a new, sweeping executive order, the unconstitutionality of which would be of no import to our latter-day FDR.

It’s just another step beyond McCain-Feingold, isn’t it?

Thus is the tyranny of “libertarian paternalism.” And thus does the death-spiral of liberty proceed.

There Is Hope in Mudville

Barack Obama achieved his electoral “landslide” in 2008 by grabbing only 9 of the 31 States won by G.W. Bush in 2004. Obama managed his less-than-impressive feat by running against the weakest candidate fielded by the GOP since 1996. (I don’t mean to suggest that G.W. Bush was a world-beater.)

How weak was John McCain? He beat Obama in his (McCain’s) home State of Arizona by only 8.5 percentage points. In 2004, Bush beat John Kerry in Arizona by 10.5 percentage points.

How weak is Obama at this moment? His net approval rating has dropped to -5, the lowest since his inauguration. And he’s less than 6 months into his presidency.

Hint to the GOP: Stop playing nice and start attacking Obama in earnest: on his foreign policy, his defense policy, his profligate spending, his plans to socialize health care, his Supreme Court nominee, etc., etc., etc. Don’t attack Obama emotionally; attack him on the merits, with facts and figures. Do it hard and do it often, until the message sinks into the minds of all those swing voters out there.

What about 2012? Can the GOP beat Obama? Why not? A 9-State swing would do the job, and Bush managed a 10-State swing in winning the 2000 election. If Bush can do it, almost anyone can do it — well, anyone but another ersatz conservative like Bob Dolt or John McLame.

Why Is Entrepreneurship Declining?

Jonathan Adler of The Volokh Conspiracy addresses evidence that entrepreneurial activity is declining in the United States, noting that

The number of employer firms created annually has declined significantly since 1990, and the numbers of businesses created and those claiming to be self-employed have declined as well.

Adler continues:

What accounts for this trend? [The author of the cited analysis] thinks one reason is “the Wal-Mart effect.”

Large, efficient companies are able to out-compete small start-ups, replacing the independent businesses in many markets. Multiply across the entire economy the effect of a Wal-Mart replacing the independent restaurant, grocery store, clothing store, florist, etc., in a town, and you can see how we end up with a downward trend in entrepreneurship over time.

That may be true. It seems to me that another likely contributor is the increased regulatory burden. It is well documented that regulation can increase industry concentration. Smaller firms typically bear significantly greater regulatory costs per employee than larger firms (see, e.g., this study), and regulatory costs can also increase start-up costs and serve as a barrier to entry. While the rate at which new regulations were adopted slowed somewhat in recent years at the federal level (see here), so long as the cumulative regulatory burden increases, I would expect it to depress small business creation and growth.

Going further than Adler, I attribute the whole sorry mess to the growth of government over the past century. And I fully expect the increased regulatory and tax burdens of Obamanomics to depress innovation, business expansion, business creation, job creation, and the rate of economic growth. As I say here,

Had the economy of the U.S. not been deflected from its post-Civil War course [by the advent of the regulatory-welfare state around 1900], GDP would now be more than three times its present level…. If that seems unbelievable to you, it shouldn’t: $100 compounded for 100 years at 4.4 percent amounts to $7,400; $100 compounded for 100 years at 3.1 percent amounts to $2,100. Nothing other than government intervention (or a catastrophe greater than any we have known) could have kept the economy from growing at more than 4 percent.

What’s next? Unless Obama’s megalomaniac plans are aborted by a reversal of the Republican Party’s fortunes, the U.S. will enter a new phase of economic growth — something close to stagnation. We will look back on the period from 1970 to 2008 [when GDP rose at an annual rate of 3.1 percent] with longing, as we plod along at a growth rate similar to that of 1908-1940, that is, about 2.2 percent. Thus:

  • If GDP grows at 2.2 percent through 2108, it will be 58 percent lower than if we plod on at 3.1 percent.
  • If GDP grows at 2.2 percent through 2108, it will be only 4 percent of what it would have been had it continued to grow at 4.4 percent after 1907.

The latter disparity may seem incredible, but scan the lists here and you will find even greater cross-national disparities in per capita GDP. Go here and you will find that real, per capita GDP in 1790 was only 3.3 percent of the value it had attained 201 years later. Our present level of output seems incredible to citizens of impoverished nations, and it would seem no less incredible to an American of 201 years ago. But vast disparities can and do exist, across nations and time. We have every reason to believe in a sustained growth rate of 4.4 percent, as against one of 2.2 percent, because we have experienced both.
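For readers who want to check the compounding arithmetic, here is a minimal sketch using only the growth rates quoted above:

    # Simple compound-growth checks of the figures cited in this post.
    def grow(start, rate, years):
        return start * (1 + rate) ** years

    print(round(grow(100, 0.044, 100)))            # about 7,400: $100 at 4.4% for 100 years
    print(round(grow(100, 0.031, 100)))            # about 2,100: $100 at 3.1% for 100 years

    # Shortfall of a 2.2% path relative to a 3.1% path over roughly a century:
    shortfall = 1 - grow(1, 0.022, 100) / grow(1, 0.031, 100)
    print(f"{shortfall:.0%} lower")                # about 58% lower, as in the first bullet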

Selection Bias and the Road to Serfdom

Office-seeking is about one thing: power. (Money is sometimes a motivator, but power is the common denominator of politics.) Selection bias, as I argue here, deters office-seeking and voting by those (relatively rare) individuals who oppose the accrual of governmental power. The inevitable result — as we have seen for decades and are seeing today — is the accrual of governmental power on a fascistic scale.

Selection bias

most often refers to the distortion of a statistical analysis, due to the method of collecting samples. If the selection bias is not taken into account then any conclusions drawn may be wrong.

Selection bias can occur in studies that are based on the behavior of participants. For example, one form of selection bias is

self-selection bias, which is possible whenever the group of people being studied has any form of control over whether to participate. Participants’ decision to participate may be correlated with traits that affect the study, making the participants a non-representative sample. For example, people who have strong opinions or substantial knowledge may be more willing to spend time answering a survey than those who do not.
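A small simulation sketch (invented numbers, not taken from the quoted source) shows how a self-selected sample misleads:

    # If willingness to answer a survey rises with strength of opinion, the
    # respondents overstate how strongly the whole population feels.
    import random

    random.seed(7)

    population = [random.random() for _ in range(100_000)]      # 0 = indifferent, 1 = strongly opinionated

    # Assume the probability of responding grows with strength of opinion.
    respondents = [x for x in population if random.random() < 0.1 + 0.8 * x]

    print(f"True average opinion strength:    {sum(population) / len(population):.2f}")
    print(f"Average among survey respondents: {sum(respondents) / len(respondents):.2f}")
    # The respondent average runs well above the population average -- the
    # sample selects itself, and conclusions drawn from it go astray.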

I submit that the path of politics in America (and elsewhere) reflects a kind of self-selection bias: On the one hand, most politicians run for office in order to exert power. On the other hand, most voters — believing that government can “solve problems” of one kind or another — prefer politicians who promise to use their power to “solve problems.” In other words, power-seekers and their enablers select themselves into the control of government and the receipt of its (illusory) benefits.

Who is self-selected “out”? First, there are libertarian* office-seekers — a rare breed — who must first attain power in order to curb it. Self-selection, in this case, means that individuals who eschew power are unlikely to seek it in the first place, understanding the likely futility of their attempts to curb the power of the offices to which they might be elected. Thus the relative rarity of libertarian candidates.

Second, there are libertarian voters, who — when faced with an overwhelming array of power-seeking Democrats and Republicans — tend not to vote. Their non-voting enables non-libertarian voters to elect non-libertarian candidates, who then accrue more power, thus further discouraging libertarian candidacies and driving more libertarian voters away from the polls.

As the futility of libertarianism becomes increasingly evident, more voters — fearing that they won’t get their “share” of (illusory) benefits — choose to join the scramble for said benefits, further empowering anti-libertarian candidates for office. And thus we spiral into serfdom.

HAPPY INDEPENDENCE DAY!

__________
* I use “libertarian” in this post to denote office-seekers and voters who prefer a government (at all levels) whose powers are (in the main) limited to those necessary for the protection of the people from predators, foreign and domestic.

The Indivisibility of Economic and Social Liberty

John Stuart Mill, whose harm principle I have found wanting, had this right:

If the roads, the railways, the banks, the insurance offices, the great joint-stock companies, the universities, and the public charities, were all of them branches of government; if in addition, the municipal corporations and local boards, with all that now devolves on them, became departments of the central administration; if the employees of all these different enterprises were appointed and paid by the government, and looked to the government for every rise in life; not all the freedom of the press and popular constitution of the legislature would make this or any other country free otherwise than in name.

From On Liberty, Chapter 5

Friedrich A. Hayek put it this way:

There is, however, yet another reason why freedom of action, especially in the economic field that is so often represented as being of minor importance, is in fact as important as the freedom of the mind. If it is the mind which chooses the ends of human action, their realization depends on the availability of the required means, and any economic control which gives power over the means also gives power over the ends. There can be no freedom of the press if the instruments of printing are under the control of government, no freedom of assembly if the needed rooms are so controlled, no freedom of movement if the means of transport are a government monopoly, etc. This is the reason why governmental direction of all economic activity, often undertaken in the vain hope of providing more ample means for all purposes, has invariably brought severe restrictions of the ends which the individuals can pursue. It is probably the most significant lesson of the political developments of the twentieth century that control of the material part of life has given government, in what we have learnt to call totalitarian systems, far-reaching powers over the intellectual life. It is the multiplicity of different and independent agencies prepared to supply the means which enables us to choose the ends which we will pursue.

From part 16 of Liberalism
(go here and scroll down)

Secession Redux

In “Secession,” I wrote:

The original Constitution contemplates that the government of the United States might have to suppress insurrections and rebellions (see Article I, Section 8), but it nowhere addresses secession. Secession, in and of itself, is not an act of insurrection or rebellion, both of which imply the use of force. Force is not a requirement of secession, which can be accomplished peacefully.

Therefore, given that the Constitution does not require a subscribing State to pledge perpetual membership in the Union, and given that the Constitution does not delegate to the central government a power to suppress secession, the question of secession is one for each State, or the people thereof, to determine, in accordance with the Tenth Amendment. The grounds for secession could be … the abridgment by the United States of the “rights, privileges and immunities” of its citizens.

What about Texas v. White (U.S. Supreme Court, 1869), in which a 5-3 majority anticipated … arguments for a mystical bond of Union; for example:

When … Texas became one of the United States, she entered into an indissoluble relation. All the obligations of perpetual union, and all the guaranties of republican government in the Union, attached at once to the State. The act which consummated her admission into the Union was something more than a compact; it was the incorporation of a new member into the political body. And it was final. The union between Texas and the other States was as complete, as perpetual, and as indissoluble as the union between the original States. There was no place for reconsideration, or revocation, except through revolution, or through consent of the States.

Considered therefore as transactions under the Constitution, the ordinance of secession, adopted by the convention and ratified by a majority of the citizens of Texas, and all the acts of her legislature intended to give effect to that ordinance, were absolutely null. They were utterly without operation in law. The obligations of the State, as a member of the Union, and of every citizen of the State, as a citizen of the United States, remained perfect and unimpaired. It certainly follows that the State did not cease to be a State, nor her citizens to be citizens of the Union.

It would have been bad — bad for slaves, bad for the defense of a diminished Union — had the South prevailed in its effort to withdraw from the Union. But the failure of the South’s effort, in the end, was owed to the superior armed forces of the United States, not to the intentions of the Framers of the Constitution.

In any event, the real jurisprudential issue in Texas v. White was not the constitutionality of secession; it was the right of the post-Civil War government of Texas to recover bonds sold by the secessionist government of Texas. Moreover, as Justice Grier noted in his dissent,

Whether [Texas is] a State de facto or de jure, she is estopped from denying her identity in disputes with her own citizens. If they have not fulfilled their contract, she can have her legal remedy for the breach of it in her own courts.

The majority’s ruling about the constitutionality of secession can be read as obiter dictum and, therefore, not precedential.

Clifford P. Thies makes a similar case in “Secession Is in Our Future“:

The US law of secession is thought to have been decided by the US Supreme Court in White v. Texas, following the Civil War. The actual matter to be decided was relatively insignificant. The Court used the occasion to issue a very broad decision. Chief Justice Chase, speaking for the Court, said,

The union between Texas and the other States was as complete, as perpetual, and as indissoluble as the union between the original States. There was no place for reconsideration or revocation, except through revolution or through consent of the States.

The first sentence I just quoted invokes words such as “perpetual,” and in so doing may create the impression that the Supreme Court decreed that no [S]tate could ever secede from the Union. But, on careful reading, the relationship between Texas and the other [S]tates of the Union is merely “as indissoluble as the union between the original States.” In other words, Texas, having been a nonoriginal [S]tate, has no greater right of secession than do the original [S]tates. As to how [S]tates might secede, the second sentence says, “through revolution or through consent of the States.”

As to why a [S]tate might secede, … Chief Justice Chase presciently discusses the … 10th Amendment[] to the US Constitution, which reserve[s] to the [S]tates and to the people thereof all powers not expressly granted to the federal government, and that the design of the Union, implicit in the very name “United States,” is the preservation of the [S]tates as well as of the Union:

the preservation of the States, and the maintenance of their governments, are as much within the design and care of the Constitution as the preservation of the Union and the maintenance of the National government.

In other words, the federal government abrogates the Constitution when it fails to honor Amendment X:

The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.

Thies puts it starkly:

The so-called United States of America ceases to exist when the political majority of the country attempts to rule the entire country as a nation instead of as a federal government. In such a circumstance, the “indestructible union of indestructible [S]tates” of which the Court speaks is already dissolved.

I would put it this way: The legal basis for the perpetuation of the United States disappears when the federal government abrogates the Constitution. Given that the federal government has long failed to honor Amendment X, there is a prima facie case that the United States no longer exists as a legal entity. Secession then becomes more than an option for the States: It becomes their duty, both as sovereign entities and as guardians of their citizens’ sovereignty.