The Problem with Voluntary Personal Accounts

A “senior administration official,” speaking on background before President Bush’s State of the Union speech on February 2, said this:

For individuals who were born in 1950 or later, they would have the opportunity — the voluntary opportunity — to participate in personal accounts. If they wished, they could not choose a personal account and they could stay entirely within the current system. The President has said we want to make sure that system is reformed to be fiscally sustainable. Certainly, though, individuals have the option of not taking a personal account and [receiving] the benefits that the traditional system would be able to pay.

What will happen, of course, is that those who choose to participate in personal accounts will draw larger benefits than those who choose to rely solely on the traditional system. Why? The “return” on “contributions” to the traditional system will continue to shrink as the worker:retiree ratio shrinks. That “return” will be far less than the return earned on personal accounts, even those personal accounts that are invested solely in government bonds.

I predict that those who are foolish enough to remain in the traditional system will complain about the “unfairness” of it all when they reach retirement age and realize that they made a mistake. Congress, bowing inevitably to pressure from the powerful and ever-growing ranks of the elderly, will raise benefits payable on traditional accounts to a level consistent with the benefits payable on personal accounts invested in government bonds. That will necessitate higher payroll taxes, thus negating two important objectives of privatization: (1) to diminish, if not eliminate, the growing burden on workers, and (2) to stimulate economic growth by injecting more money into capital markets.

Privatization will fail to meet its objectives unless everyone who is eligible to open a personal account is required to do so, even if the account is invested only in government bonds.

I’d rather see Social Security abolished, of course, after paying off those who are collecting benefits or have “contributed” to the system. But abolition seems to be out of the question, so the next best thing is to convert Social Security from a transfer-payment Ponzi scheme to something resembling a real retirement plan.

Philosophical Obtuseness

Paul McLeary reviews Benjamin Barber’s Fear’s Empire: War, Terrorism, and Democracy for the San Francisco Chronicle. According to McLeary, Barber

is a political philosopher of the highest order, he is also an astute student of practical politics, having advised a host of governments and politicians over his career and acted as an unofficial adviser and part-time speech doctor to Bill Clinton.

Now that we know where Barber is coming from, let’s examine the quality of his philosophy:

One possibly unintended consequence of the “preventative war” doctrine advocated by the Bush administration, Barber points out, is the potential for the erosion of long-held international norms. If we claim the right to attack a nation that at some point may be a threat to us, what will stop India from invading Pakistan using the same logic, or any other nation using the same rationale to launch a strike against a rival state?

Barber must have flunked or forgotten elementary logic. The idea that one nation (India in Barber’s example) might preemptively invade another nation because the U.S. invaded Iraq is a glowing example of the post hoc ergo propter hoc fallacy, that is, “the logical fallacy of believing that temporal succession implies a causal relation.” Many nations have invaded — and will yet invade — other nations preemptively. The example of the United States and Iraq is neither here nor there.

In fact, there is no other way for a war to begin than through preemption. One side starts the shooting either because it wants something the other side has (e.g., territory or valuable resources), because it fears that the other side will shoot first, because the other side has done something provocative or heinous, or because it wants to keep the other side from becoming a formidable foe. Barber is simply trying to draw a line where no line can be drawn.

The relevant criterion — which Barber and others on the left don’t want to acknowledge — is whether the invasion of Iraq serves the interests of U.S. citizens in the long run. Leftists just don’t seem to care about that. Instead, Barber emits the usual leftist drivel about the outmoded use of brute force, the roots of terrorism in economic and political alienation, and worse:

More often than not, Barber says, the United States is more interested in setting up free markets than it is in promoting democracy, worsening social inequality and disenfranchising the local population.

Right. We invaded Iraq so that Iraqis could have McJobs, not to free Iraq from the grip of Saddam’s notably anti-Shiite regime in which wealth and power flowed to a small fraction of the populace. Does Saddam’s demise make the Middle East and the world safer for Americans and American interests? You bet, but that’s a win-win situation. And there’s nothing wrong with that — unless you’re a hair-shirt leftist philosopher who doesn’t understand economics, much less logic.

Does the example of Saddam’s demise strike fear in the hearts of other despots, whose nuclear saber-rattling is on a par with whistling in a graveyard? You bet. Their only hope for survival is that leftist logic will somehow prevail. Not for four more years, fellows, if then.

Understanding Economic Growth

In “More about Social Security,” I wrote:

As we know well from long experience, the course of the economy isn’t expressed by a smooth, upward rising curve of progress. Aggregate economic output can be thought of as a quantum phenomenon, in that it has many potential values at each point in time. Shocks and stimuli determine which of those potential values becomes reality. Shocks (e.g., the collapse of the American stock market in 1929) can lead to sharp and prolonged downturns that can be reversed only by strong stimuli (e.g., the mobilization for World War II). Despair feeds on itself, as does hope. And hope fuels the kind of creativity that we saw, for example, in the aftermath of the Civil War, when the rapid invention and adoption of new technologies and production processes took us to new heights of prosperity in the 1920s.

The same kind of creativity resurfaced in the late twentieth century — spurred by the stimulus of an inflation-busting recession and significant cuts in marginal tax rates. Will it last? Will it take us to ever-higher levels of economic output? It might, but not as a matter of historical inevitability, as some suggest. Historical inevitability is what we see in the rear-view mirror of experience. Something must happen to spur the creation and adoption of new technologies that will take us to new economic heights.

Arnold Kling points to and quotes from a relevant article by Meir Kohn:

Growth does not mean movement along an equilibrium path but rather the unfolding of a complex process. At any moment the potential of the economy is not completely realized: unexploited opportunities for mutually advantageous exchange abound. Indeed the “potential” of the economy is not defined; it depends on the initiative and ingenuity of individuals. Individuals engaging in trading, innovation, and institutional change generate the process of growth, not only discovering potential but also creating it.

The secret ingredient, of course, is purposeful human behavior. As I have shown elsewhere, deterministic models of aggregate behavior are woefully inadequate to the understanding of economic and social phenomena.

How’s Your Latin?

Or, do you know what you’re saying when you say such things as “in vino veritas”? Find out by taking a 10-question quiz at BBC News. I got nine out of 10, and I chalk up my wrong answer to a teacher who, long ago, implanted it in my mind.

(Thanks to my son for the tip.)

My Views on “Classical” Music, Vindicated

Last August I wrote this:

What happened around 1900 is that classical music became — and still is, for the most part — an “inside game” for composers and music critics. So-called serious composers (barring Gershwin and a few other holdouts) began treating music as a pure exercise in notational innovation, as a technical challenge to performers, and as a way of “daring” audiences to be “open minded” (i.e., to tolerate nonsense). But the result isn’t music, it’s self-indulgent crap (there’s no other word for it).

Then, this:

My litany of off-putting things about most “classical” music written after 1900 should have included dissonance, atonality, and downright dreariness. Music can be serious, but it needn’t be boring or depressing or just plain unlistenable. But a trip through the list of 20th century composers turns up relatively few who wrote much music that’s endurable. Among the many 20th century specialists in sheer boredom or cacophony are John Adams, Béla Bartók, Alban Berg, Pierre Boulez, John Cage, George Crumb, György Ligeti, Olivier Messiaen, Arnold Schoenberg, Igor Stravinsky, and Anton Webern.

Martin Kettle, writing in today’s Guardian, draws on Peter Van der Merwe’s Roots of the Classical: the Popular Origins of Western Music to make the following points:

[Van der Merwe] reckons that by 1939, the year of Rodrigo’s Concierto de Aranjuez, the flow of music that is both genuinely modern and popular had all but dried up. Van der Merwe nods towards Khachaturian, late Strauss and the Britten of Peter Grimes – and, er, that’s it. For the general public, he argues, classical music ceased to exist by 1950.

There will be an interesting argument about when and where the line can be drawn. That it can be drawn somewhere (1940, 1950 or 1960 hardly matters) is, however, beyond serious dispute. At some point in the past half-century, classical music lost touch with its public.

At the start of the 21st century, we can see what went wrong more clearly. What went wrong was western European modernism. Modernism is a huge, varied and complex phenomenon, and it took on different qualities in different national cultures. But an essential feature, especially as Van der Merwe argues it, was to turn music decisively towards theory – often political theory – and away from its popular roots.

The pioneer figure was Arnold Schoenberg, with his theory of the emancipation of dissonance (which, as Van der Merwe cleverly points out, also implied the suppression of consonance). But it was after Schoenberg’s death, in the period 1955-80, that his ideas achieved the status of holy writ.

The upshot was a deliberate renunciation of popularity. The audience that mattered to modernists (even the many who saw themselves as socialists) ceased to be the general public and increasingly became other composers and the intellectual, often university-based, establishment that claimed to validate the new music, not least through its influence over state patronage. Any failure of the music to become popular was ascribed not to the composer’s lack of communication but the public’s lack of understanding.

Not surprisingly, the public looked elsewhere, to what we are right to call, and right to admire for being, popular music. This embrace started in the early 20th century with ragtime and jazz and reached its apex with rock’n’roll, whose great years belong to that same period, 1955-80, when modernism ruled in the academy….

Classical music survived, after a fashion. But it has less to say about today. It endures overwhelmingly on the strength of its back catalogue and performance tradition, not of any new creativity. Having failed to persuade the public to embrace modern music, it has sustained itself only by rediscovering the music of earlier epochs and – though this is arguable – by learning the lessons of the modernist deviation.

I would draw the line much earlier than 1950, and I would certainly exclude the ponderous pair of Richard Strauss and Benjamin Britten from the list of “serious” composers who wrote in a popular style. Antonín Dvořák (1841-1904) was the last “serious” composer to do that consistently. After Dvořák, only George Gershwin succeeded very often in writing music that was both new and popular.

Can “classical” music make a popular comeback? Kettle has this to say:

[I]t is no longer anathema for composers to embrace popularity. The influence of American composers, for whom popularity is not a dirty word, and of composers from national traditions that survived the modernist onslaught (the Argentinian school, for instance) is perhaps a way forward. Van der Merwe, for one, believes that it is.

Classical music’s second coming, if it is to have one, could hardly be better timed. The popular music that once filled the place…vacated [by classical music] seems in turn to have largely burned itself out. Here, too, creativity is at its lowest ebb since the early 50s. The space awaiting good new music of any kind is immense.

But at least classical music has come up for air, and is asking the right questions. This is more than can be said of some of the visual arts, where the dislike of the public remains as striking and juvenile as ever. Even this, though, will not last. The need to create something beautiful that excites the public and goes beyond its experience is too strong to be frustrated indefinitely. It would just be nice to think it might resume in our lifetime.

It would be nice, but I’m not counting on it. “Serious” music and art are dominated by the academy. And the academy — for all its socialist cant — scorns “the masses.” Academicians (and their fellow travelers) would find it hard to maintain their air of mysterious superiority if they were to produce works that “the masses” could actually comprehend.

Getting It Almost Right about Iraq

Mark Brown, a Chicago Sun-Times columnist, asks “What if Bush has been right about Iraq all along?” Brown, who “opposed the Iraq war since before the shooting started,” is now beginning to think the unthinkable: “What if it turns out Bush was right, and we [knee-jerk opponents of the war] were wrong?”

But he still has many qualms:

Going to war still sent so many terrible messages to the world. [Well, it certainly sent this message: Don’t mess with US.]

Most of the obstacles to success in Iraq are all still there, the ones that have always led me to believe that we would eventually be forced to leave the country with our tail tucked between our legs. (I’ve maintained from the start that if you were impressed by the demonstrations in the streets of Baghdad when we arrived, wait until you see how they celebrate our departure, no matter the circumstances.) [What are those obstacles to success, other than the defeatist rhetoric spewed by contemporary versions of Axis Sally and Tokyo Rose? We faced tougher odds in World War II, and its aftermath, and overcame them because we were determined to do so.]

In and of itself, the voting did nothing to end the violence. The forces trying to regain the power they have lost — and the outside elements supporting them — will be no less determined to disrupt our efforts and to drive us out. [So what? Does that mean we should let them drive us out? I guess it does if you’re one of the Mark Browns of this world whose mind cannot comprehend the strategic value of the Middle East. Perhaps his SUV runs on ethanol.]

Somebody still has to find a way to bring the Sunnis into the political process before the next round of elections at year’s end. The Iraqi government still must develop the capacity to protect its people. [The Sunnis can get on the train or get run over by it. I suspect they’ll get on board. With adequate help from the U.S., the Iraqi government will develop the capacity to protect its people. That’s another reason not to pull out just to satisfy the “peace at any price” crowd.]

And there seems every possibility that this could yet end in civil war the day we leave or with Iraq becoming an Islamic state every bit as hostile to our national interests as was Saddam. [The only way to ensure a civil war and the creation of an Islamic state is to pull out prematurely and allow civil war to happen.]

Brown — like his brethren on the left — simply can’t understand what the war in Iraq is all about because his mind is cluttered with Orwellian slogans: “America evil – war always bad.” Thus it has been since the U.S. broke with Stalin in the aftermath of World War II. But, unlike Stalin, “intellectual” lefties like Brown have no conception of the uses of power, for good as well as evil.

In the “intellectual” version of the world, things will happen simply because of inexorable historical forces, that is, because lefties wish they would happen. Marx (the intellectual) wished for “dictatorship of [by] the proletariat,” and Lenin (the man of action) gave Russia dictatorship — period.

Yes, ideas are important, but they’re nothing without action. And — thank God — we have a president who knows it. When you want good things to happen you have to be in command of events. That’s where we seem to be at the moment. And that’s where we’ll stay, as long as defeatists like Brown, Kennedy, and Kerry don’t have their way.

Social Security Privatization and the Stock Market

Paul Krugman’s latest wrong-headed argument against the privatization of Social Security appears in today’s New York Times. Krugman says:

Schemes for Social Security privatization, like the one described in the 2004 Economic Report of the President, invariably assume that investing in stocks will yield a high annual rate of return, 6.5 or 7 percent after inflation, for at least the next 75 years. Without that assumption, these schemes can’t deliver on their promises. Yet a rate of return that high is mathematically impossible unless the economy grows much faster than anyone is now expecting.

Krugman can make all the economic assumptions he wants. And you know that he’ll make assumptions which “prove” his case. But he can’t argue with history.

The S&P 500 — reconstructed back to 1870, with dividends reinvested — grew at an annualized rate of 6.8 percent, after inflation. To repeat, that’s a real rate of return of almost 7 percent. What about all those ups and downs between 1870 and 2004? Well, an exponential fit of the growth curve for 1870-2004 gives an annualized return of precisely 6.5 percent — just what the professor ordered. If 135 years is too long a run, how about the 59 years since the end of World War II? The rate of return for 1946 through 2004 was 7.4 percent. An exponential fit of the growth curve for 1946-2004 gives an annualized return of 7.1 percent. *
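The annualized-return arithmetic behind figures like these is just compound growth between two index levels. A minimal sketch (the growth factor below is illustrative, derived from the stated 6.8 percent rate, not from the actual S&P data):

```python
def annualized_return(start_value, end_value, years):
    """Compound annual growth rate implied by two index levels."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Illustrative check: a real total-return index compounding at 6.8
# percent per year for 134 years (1870-2004) grows by a factor of
# roughly 6,700; running the calculation backward recovers the rate.
growth_factor = 1.068 ** 134
print(f"{annualized_return(1.0, growth_factor, 134):.1%}")  # 6.8%
```

The “exponential fit” mentioned in the text is the same idea applied to every year of the series at once: a least-squares line through the logarithm of the index, whose slope is the annualized growth rate.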

Krugman wants so badly to denigrate the benefits of privatizing Social Security that he’ll stoop to any sort of slippery argument. In this case, he makes the following key assertions (my comments are bracketed in boldface):

In the long run, profits grow at the same rate as the economy. So to get that 6.5 percent rate of return, stock prices would have to keep rising faster than profits, decade after decade. [In fact, the total return on stocks — including reinvested dividends — has been rising faster than the economy for at least 13 decades. And it has been rising at an annualized rate of 6.5 percent.]

The price-earnings ratio – the value of a company’s stock, divided by its profits – is widely used to assess whether a stock is overvalued or undervalued. Historically, that ratio averaged about 14. Today it’s about 20. Where would it have to go to yield a 6.5 percent rate of return? I asked Dean Baker, of the Center for Economic and Policy Research, to help me out with that calculation (there are some technical details I won’t get into). Here’s what we found: by 2050, the price-earnings ratio would have to rise to about 70. By 2060, it would have to be more than 100. [The PE ratio on the S&P 500 (as reconstructed back to 1871) has been drifting generally upward. There have been sharp rises and dips in the PE ratio, because of market bubbles and crashes, but it is clear that the perception of risk has diminished with time and that stocks can command higher PE ratios than they did in the past. A statistical analysis of the relationship between PE and total stock-market returns (including dividends), based on 103 30-year periods ending in 1901 through 2004, produces absolutely no relationship between PE and future returns. In fact, for the 134 years (1870-2004) in which the total real return on the S&P 500 averaged 6.5 percent a year, the year-end PE ratio on the S&P 500 ranged from 5 to 33 — nowhere near the PE ratio of 70 or 100 demanded by Krugman.]

Krugman is simply determined to preserve a wasteful, socialistic relic of the Great Depression. He gives away his game in the first sentence of his article: “The fight over Social Security is, above all, about what kind of society we want to have.” The rest is a rather clumsy attempt at economic sleight-of-hand. Well, as usual, Krugman has fumbled his trick.

__________

* I’ll have more to say about the significance of stock-market returns, and their relation to GDP, in Part V of “Practical Libertarianism for Americans.” For now, I’ll just tantalize you with some of the numbers that I’ll document and discuss in that future post.

The Hockey Stick Is Broken

I wrote twice in October about the impending demise of the hockey-stick model of global warming. The hockey-stick model purports to show that temperatures on Earth have risen sharply in the past century, ostensibly because of human activity. That model is, of course, a favorite of Luddite leftists, many of whom are bandwagon scientists who give blind allegiance to the model.

The article that promises to drive a stake into the heart of the hockey-stick model is about to be published. The bottom line can be seen in this graph (available here):

“MBH98” (the light line) is the infamous hockey-stick depiction of Earth’s temperature trend. “Recalculated” (the dark line) is the correct depiction.

As I have written before — here, here, and here, for instance — there’s evidence that global warming has little to do with human activity and a lot to do with extraterrestrial events.

(Thanks to FuturePundit for the tip.)

Fire or Ice?

From an article at Prospect by Michio Kaku:

The universe is out of control, in a runaway acceleration. Eventually all intelligent life will face the final doom—the big freeze. An advanced civilisation must embark on the ultimate journey: fleeing to a parallel universe….

But since the big freeze is probably billions to trillions of years away, there is time for [an advanced] civilisation to plot the only strategy consistent with the laws of physics: leaving this universe. To do this, an advanced civilisation will first have to discover the laws of quantum gravity, which may or may not turn out to be string theory. These laws will be crucial in calculating several unknown factors, such as the stability of wormholes connecting us to a parallel universe, and how we will know what these parallel worlds will look like. Before leaping into the unknown, we have to know what is on the other side. But how do we make the leap? Here are some of the ways.

Find a naturally occurring wormhole….

Send a probe through a black hole….

Create negative energy….

Create a baby universe ….

Build a laser implosion machine….

Send a nanobot to recreate civilisation
If the wormholes created in the previous steps are too small, too unstable, or the radiation effects too intense, then perhaps we could send only atom-sized particles through a wormhole. In this case, this civilisation may embark upon the ultimate solution: passing an atomic-sized “seed” through the wormhole capable of regenerating the civilisation on the other side. This process is commonly found in nature. The seed of an oak tree, for example, is compact, rugged and designed to survive a long journey and live off the land. It also contains all the genetic information needed to regenerate the tree.

An advanced civilisation might want to send enough information through the wormhole to create a “nanobot,” a self-replicating atomic-sized machine, built with nanotechnology. It would be able to travel at near the speed of light because it would be only the size of a molecule. It would land on a barren moon, and then use the raw materials to create a chemical factory which could create millions of copies of itself. A horde of these robots would then travel to other moons in other solar systems and create new chemical factories. This whole process would be repeated over and over again, making millions upon millions of copies of the original robot. Starting from a single robot, there will be a sphere of trillions of such robot probes expanding at near the speed of light, colonising the entire galaxy….

Next, these robot probes would create huge biotechnology laboratories. The DNA sequences of the probes’ creators would have been carefully recorded, and the robots would have been designed to inject this information into incubators, which would then clone the entire species. An advanced civilisation may also code the personalities and memories of its inhabitants and inject this into the clones, enabling the entire race to be reincarnated.

Although seemingly fantastic, this scenario is consistent with the known laws of physics and biology, and is within the capabilities of a Type III civilisation. There is nothing in the rules of science to prevent the regeneration of an advanced civilisation from the molecular level. For a dying civilisation trapped in a freezing universe, this may be the last hope.

But why would we want to recreate civilization if it wouldn’t include “us”? Why not use all that technological know-how to build a humongous space heater to fend off the impending chill? (Space heater…get it?)

There’s only one proper way to end this post:

Fire and Ice

Some say the world will end in fire,
Some say in ice.
From what I’ve tasted of desire
I hold with those who favor fire.
But if it had to perish twice,
I think I know enough of hate
To say that for destruction ice
Is also great
And would suffice.

Robert Frost

A Century of Progress?

Many, many horrific things happened in the twentieth century, but — despite it all — America made tremendous economic progress. Consider real GDP per capita, which (in year 2000 dollars) was about $4,300 in 1900 and $34,900 in 2000. That increase represents an annualized rate of growth of 2.1 percent.

Before we throw a party to celebrate that great accomplishment, let’s look behind the numbers.

Monetary measures of GDP exclude a lot of things that might be captured in the term “quality of life”; for example:

[F]ailing to account for the output produced within households may lead to misleading comparisons of economy-wide production, as conventionally measured. The female labor force participation rate in the United States has grown enormously since the early part of the 20th century. To the extent that the entry of women into paid employment has reduced the effort women devote to household production, the long-term trend in output, as measured by gross domestic product (GDP), may exaggerate the true growth in national output. [Committee on National Statistics (CNSTAT), Designing Nonmarket Accounts for the United States: Interim Report (2003), p. 9 in HTML version]

The “effort that women devote to household production” involves a lot more than shopping, cooking, cleaning, and all of the other activities usually associated with the term “housewife.” Not the least among those activities is the raising of children. Child-rearing (a quaint but still meaningful phrase) includes more than feeding, bathing, and toilet training. Parents — and especially mothers — impart lessons about civility — lessons that are neglected when children are left on their own to disport with friends, watch TV, and imbibe the nihilistic lyrics that pervade popular music.

Yet, the apparently robust growth of real GDP per capita between 1900 and 2000 owes much to the huge increase in the proportion of women seeking work outside the home. The labor-force participation rate for women of “working age” (14 and older in 1900, 16 and older in 2000) grew from 19 percent in 1900 to 60 percent in 2000, while the rate for men dropped only slightly, from 80 percent to 75 percent. Who knows how much damage society has suffered — and will yet suffer — because of the exodus into the workforce of women with children at home? These figures suggest the extent of that exodus in the latter half of the twentieth century.

Because estimates of GDP don’t capture the value of child-rearing and other aspects of “household production” by stay-at-home mothers, the best way to put 1900 and 2000 on the same footing is to estimate GDP for 2000 at the labor-force participation rates of 1900. The picture then looks quite different: real GDP per capita of $4,300 in 1900, real GDP per capita of $25,300 in 2000 (a reduction of 28 percent), and an annualized growth rate of 1.8 percent, rather than 2.1 percent.

The adjusted rate of growth in GDP per capita still overstates the expansion of prosperity in the twentieth century because it includes government spending, which is demonstrably counterproductive. A further adjustment for the cost of government — which grew at an annualized rate of 7.5 percent during the century (excluding social transfer payments) — yields these estimates: real GDP per capita of $3,900 in 1900, real GDP per capita of $19,800 in 2000, and an annualized growth rate of 1.6 percent. (In Part V of “Practical Libertarianism for Americans,” I will estimate how much greater growth we would have enjoyed in the absence of government intervention.)
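The three growth rates quoted above follow directly from the per-capita endpoints given in the text; a quick check of the arithmetic:

```python
def cagr(start, end, years):
    """Annualized growth rate implied by start and end values over a span of years."""
    return (end / start) ** (1.0 / years) - 1.0

# Real GDP per capita in year-2000 dollars, 1900 vs. 2000,
# under the three scenarios discussed in the text.
scenarios = {
    "unadjusted":                     (4_300, 34_900),
    "at 1900 participation rates":    (4_300, 25_300),
    "less the cost of government":    (3_900, 19_800),
}

for label, (v1900, v2000) in scenarios.items():
    print(f"{label}: {cagr(v1900, v2000, 100):.1%}")
# unadjusted: 2.1%
# at 1900 participation rates: 1.8%
# less the cost of government: 1.6%
```

The 28 percent reduction cited above is the same comparison stated differently: $25,300 is about 28 percent below the unadjusted $34,900.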

The twentieth century was a time of great material progress. And we know that there would have been significantly greater progress had the hand of government not been laid so heavily on the economy. But what we don’t know is the immeasurable price we have paid — and will pay — for the exodus of mothers from the home. We can only name that price: greater incivility, mistrust, fear, property loss, injury, and death.

Most “liberal” programs have unintended negative consequences. The “liberal” effort to encourage mothers to work outside the home has vastly negative consequences. Unintended? Perhaps. But I doubt that many “liberals” would change their agenda, even if they were confronted with the consequences.

The Limits of Self-Defense

Although I am an advocate of preemptive warfare (see here, for example), I am firmly opposed to the notion of preemptive criminalization, as in the movie Minority Report. What I didn’t know is that preemptive criminalization (like involuntary euthanasia) has already arrived in Europe, according to Stephen Sedley, writing in the London Review of Books:

What is the rationale of objection to a comprehensive national DNA database?….

There remains the concern about possible abuse, that the police might in future use the data not merely for detection but for personality profiling – especially since one of the purposes already sanctioned by law is crime prevention. I think this concern is real. A number of states – and there are indications that England and Wales may join them – have begun to allow the indefinite detention of sexual offenders on the basis of predicted behaviour.

Frightening, especially because it might be picked up as a “liberal” cause in the United States.

Atheism, Religion, and Science, Revised

A reader’s comments — in two thoughtful e-mails — have led me to revise “Atheism, Religion, and Science.” The revisions may not wholly satisfy the reader, but I am very grateful to him for his comments, which led me to make my arguments clearer and more complete.

The Politician: The Pathological Pursuit of Power

Joel Bakan’s The Corporation: The Pathological Pursuit of Profit and Power is creating a bit of a stir. And no wonder, given its premise (from the jacket):

Bakan contends that the corporation is created by law to function much like a psychopathic personality whose destructive behavior, if left unchecked, leads to scandal and ruin.

In the most revolutionary assessment of the corporation as a legal and economic institution since Peter Drucker’s early works, Bakan backs his premise with the following claims:

The corporation’s legally defined mandate is to pursue relentlessly and without exception its own economic self-interest, regardless of the harmful consequences it might cause to others — a concept endorsed by no less a luminary than the Nobel Prize-winning economist Milton Friedman.

The corporation’s unbridled self-interest victimizes individuals, society, and, when it goes awry, even shareholders and can cause corporations to self-destruct, as recent Wall Street scandals reveal.

While corporate social responsibility in some instances does much good, it is often merely a token gesture, serving to mask the corporation’s true character.

Governments have abdicated much of their control over the corporation, despite its flawed character, by freeing it from legal constraints through deregulation and by granting it ever greater authority over society through privatization.

Despite the structural failings found in the corporation, Bakan believes change is possible and outlines a far-reaching program of concrete, pragmatic, and realistic reforms through legal regulation and democratic control.

Bakan would be on the right track if, instead, he were to make these claims:

The politician’s license — granted by the “living” Constitution — is to pursue relentlessly and without exception his power to control our peaceful pursuit of happiness, regardless of the harmful consequences it might cause — a concept endorsed by no less than three dozen Congresses, a dozen presidents, and dozens of Supreme Court justices.

The politician’s unbridled self-interest victimizes individuals, society, and, when it goes awry, even the purported beneficiaries of his insatiable thirst for control.

While the acts of government in some instances are necessary to the security of life, liberty, and property, most politicians — especially those of the left — do not even pretend that the scope of government power should be restricted to those necessary functions.

Elected officials and judges, sworn to uphold the Constitution, have violated their oaths of office innumerable times, by freeing government from its constitutional constraints and by granting it almost dictatorial authority over society through legislation, regulation, and adjudication.

(Thanks to Verity at Southern Appeal for the tip.)

The Pointy-Haired Boss

Today’s Dilbert reminds me of a former boss, who insisted — without qualification — that change is good.

What’s truly eerie is that the same former boss also scored 10 out of 10 on my test of (poor) management skills:

Are you a CEO or senior manager in a corporate bureaucracy? Want to know how you stack up against your peers? Select your personal management traits from the following list, then tally your score and check it against the scale at the end of the list.

1. Flaunt the privileges of rank: Spend on frills and perks even as you’re down-sizing.

2. Flout the rules you expect others to obey.

3. Put off hard decisions as long as possible so that rumors can grow wildly on the grapevine.

4. Pepper your staff with meaningless projects and pointless questions — hire consultants to give you the “straight scoop.”

5. Hire outsiders for senior management positions and create make-work jobs for your cronies.

6. Keep your door open to whiners and let them second-guess your managers’ decisions.

7. Promise vision but deliver pap.

8. Talk teamwork but don’t let anyone in on your game plan — keep ’em all guessing.

9. Talk empowerment but micro-manage.

10. Keep your board in the dark, except when you turn on the rosy spotlights.

Score of 0: You lie to yourself all the time; see a psychiatrist.

Score of 1-3: You sleep a lot during the day; see a physician.

Score of 4-6: You’re a normal boss, which isn’t necessarily good news.

Score of 7-9: You could give Donald Trump a run for his money.

Score of 10: So you’re the model for Dilbert’s pointy-haired boss!

Beware of Irrational Atheism

Thanks to Chris Lehmann, writing at reasononline in “The tedium of dogmatic atheism,” I learned of Sam Harris’s book, The End of Faith: Religion, Terror, and the Future of Reason. The publisher’s blurb for The End of Faith says this:

This important and timely book delivers a startling analysis of the clash of faith and reason in today’s world. Harris offers a vivid historical tour of mankind’s willingness to suspend reason in favor of religious beliefs, even when those beliefs are used to justify harmful behavior and sometimes heinous crimes. He asserts that in the shadow of weapons of mass destruction, we can no longer tolerate views that pit one true god against another. Most controversially, he argues that we cannot afford moderate lip service to religion—an accommodation that only blinds us to the real perils of fundamentalism. While warning against the encroachment of organized religion into world politics, Harris also draws on new evidence from neuroscience and insights from philosophy to explore spirituality as a biological, brain-based need. He calls on us to invoke that need in taking a secular humanistic approach to solving the problems of this world.

And so, we are to substitute secular humanism for religion. (Or else?) In order to defend liberty we must deprive you of it — if you are religious, that is. A reading of Lehmann’s review reveals the underlying flaw of Harris’s hysterical anti-religionism. It seems that Harris, wittingly or stupidly, has adopted the following syllogism:

1. Heinous acts are committed.

2. Some of those heinous acts are committed in the name of religion.

3. Therefore, all religion is evil.

Why not this, instead?

1. Heinous acts are committed.

2. Some of those heinous acts are committed in the name of irreligious philosophies (e.g., Nazism, fascism, and communism).

3. Therefore, all irreligious philosophies are evil. (That includes secular humanism.)

Harris, on the evidence of Lehmann’s review, strikes me as a knuckle-dragging, atheist ignoramus. And he has plenty of company at sites like The Panda’s Thumb. There’s Matt Young, for instance. Young is an atheist whose revealed attitude of superiority to religionists had already caught my attention. Now he’s back, with more “profound” thoughts about religion (mentioned here and posted here). Young quotes an earlier essay of his, in which he wrote this:

The philosopher Antony Flew, now an emeritus professor at Reading University, recounts a parable about two people who chance upon a clearing in the forest. Both flowers and weeds grow in the clearing. One of the people, the Believer, says that some gardener must be tending the plot, whereas the Skeptic disagrees. They set up camp and watch, but no gardener appears. The Believer suggests that the gardener is invisible, so they patrol with bloodhounds, then set up an electric fence, but there is still no evidence of a gardener.

The Believer insists, however, that there must be a gardener, even if that gardener is invisible, silent, odorless, and impervious to electric shocks. The Skeptic asks how that differs from an imaginary gardener or no gardener at all. Flew uses his parable as a jumping-off point to discuss whether religion is falsifiable. Specifically, referring to the problems of evil and suffering, he asks what would have to happen to falsify a belief in God or in God’s love. Flew’s question is rhetorical; he clearly implies that nothing will falsify a firm religious belief. An Oxford philosopher, Basil Mitchell, agrees or, more accurately, admits that nothing can count decisively against the belief of the true believer; by definition, the believer is committed to a belief in God and is not a detached observer. That is, to Mitchell, the concept of falsifiability is not appropriately applied to a religious belief, whereas, to Flew, religion’s lack of falsifiability evidently counts against it.

Mitchell is right, because Flew’s parable is incomplete. Flew fails to suggest the possibility that the instruments being used to detect the invisible gardener are inadequate (or irrelevant) to the task.

Young continues:

Another Oxford philosopher, R. M. Hare, responded to Flew with a parable of his own: A lunatic (Hare’s word) believes that the dons want to kill him. A friend believes otherwise and tries to convince the lunatic by introducing him to the dons and showing him that they are friendly, gentle people and mean him no harm. The lunatic responds that the dons are duplicitous and are really plotting against him, all the while pretending to be friendly.

Hare calls the lunatic’s belief a blik. This is a term that Hare has coined to describe a belief that is neither verifiable nor falsifiable. Hare notes that the friend also has a blik: The friend’s blik is that the dons are not planning to kill the lunatic. Hare considers this belief a blik just as much as the lunatic’s belief is a blik. That is, the friend does not have no blik at all, but rather has the blik that the dons are harmless. Precisely like the lunatic, the friend cannot prove his blik, because the lunatic can always find an ad hoc hypothesis to refute the friend’s arguments.

Hare’s article was influential, but it seems to me that it contains within it the seeds of its own destruction. First, the issue is not whether a sane person can convince a lunatic that the lunatic’s blik is wrong; he cannot. The issue is, rather, what arguments could both the friend and the lunatic use to convince a detached observer which one is right. In this case, it is clear that the detached observer would rule in favor of the friend, not the lunatic, because the friend would present more-convincing evidence.

And just what is that more-convincing evidence? Is it that the dons “are friendly, gentle people and mean…no harm”? How is that any more convincing than the lunatic’s assertion that “the dons are duplicitous and are really plotting against him, all the while pretending to be friendly”? A truly detached observer, given no more information, would necessarily adopt the agnostic position that the lunatic’s blik is indeed a blik: a belief that is neither verifiable nor falsifiable. Young seems incapable of logic when he discusses religion, even by inference.

But Young plunges on:

Later in the debate, Hare notes his own blik that the steering column of his car will not fail when he goes for a drive. This blik gives him confidence, without which he might be paralyzed into inaction. Hare’s confidence might be based on a blik, but I have no such blik. Whenever I drive my car, I am perfectly aware that the steering column might fail. I am equally aware, however, that the vast majority of steering columns do not fail during normal use, so I drive my car in the uncertain knowledge that the steering column will probably not fail. This belief is not a blik; it is a statistical statement based on evidence, which I see all around me, that other cars have sound steering columns. Not all firmly held beliefs are bliks.

Hare’s position is that a religious belief need not be defended because it is a blik and can neither be proved nor disproved. Hare himself, however, distinguishes between bliks that are right and bliks that are wrong. Indeed, he seems to intend his lunatic to be analogous to the religious believer who supports his belief with ad hoc hypotheses. The issue, then, is not whether people have bliks but rather whether their bliks are right or wrong. How do we decide whether bliks are right or wrong? We look for evidence. Far from refuting Flew’s argument, Hare has strengthened it.

Young is right to criticize Hare for giving a bad example of a blik, in the case of the steering column. But Young then resorts to a false syllogism, which goes like this:

1. Humans have many firmly held beliefs.

2. Some firmly held beliefs are not bliks.

3. Therefore, there are no bliks; every assertion can be verified or falsified.

Here’s the correct syllogism:

1. Humans have many firmly held beliefs.

2. Some firmly held beliefs are not bliks.

3. Therefore, some firmly held beliefs may be bliks; not every assertion can be verified or falsified.

In sum, Young persists in his (unreasonable) belief that religious belief* is falsifiable. He fails to see the incompleteness of Flew’s parable about the gardener; he posits a (falsely) detached observer in the case of the lunatic; and he adopts a false syllogism about unfalsifiable beliefs.

Atheism is, at bottom, simply a dogmatic position. It is a form of religion, in which the believer hews to the unfalsifiable belief that there is no God.

In case you’re wondering, I take the only scientifically valid position on the question whether there is a God: agnosticism. See here and here.

__________

* Oddly, Young seems to be a practicing Jew who is also an atheist.

Affirmative Action: Two Views from the Academy

First comes Michael Bérubé, a professional academic who is evidently bereft of experience in the real world. His qualifications for writing about affirmative action? He teaches undergraduate courses in American and African-American literature, and graduate courses in literature and cultural studies. He is also co-director of the Disability Studies Program, housed in the Rock Ethics Institute at Penn State.

Writing from the ivory tower for the like-minded readers of The Nation (“And Justice for All“), Bérubé waxes enthusiastic about the benefits of affirmative action, which — to his mind — “is a matter of distributive justice.” Bérubé, in other words, subscribes to “the doctrine that a decision is just or right if all parties receive what they need or deserve.” Who should decide what we need or deserve? Why, unqualified academics like Bérubé, of course. Fie on economic freedom! Fie on academic excellence! If Bérubé and his ilk think that a certain class of people deserve special treatment, regardless of their qualifications as workers or students, far be it from the mere consumers of the goods and services of those present and future workers to object. Let consumers eat inferior cake.

Bérubé opines that “advocates of affirmative action have three arguments at their disposal.” One of those arguments is that

diversity in the classroom or the workplace is not only a positive good in itself but conducive to greater social goods (a more capable global workforce and a more cosmopolitan environment in which people engage with others of different backgrounds and beliefs).

Perhaps Bérubé knows the meaning of “capable global workforce.” If he does, he might have shared it with his readers. As for a workplace that offers a “cosmopolitan environment” and engagement “with others of different backgrounds and beliefs” I say: where’s the beef? As a consumer, I want value for my money. What in the hell does diversity — as defined by Bérubé — have to do with delivering value? Perhaps that’s one reason U.S. jobs are outsourced. (I have nothing against that, but it shouldn’t happen because of inefficiency brought about by affirmative action.) Those who seek a cosmopolitan environment and engagement with others of different backgrounds and beliefs can have all of it they want — on their own time — just by hanging out in the right (or wrong) places.

Although Bérubé seems blind to the economic cost of affirmative action, he is willing to admit that the practice has some shortcomings:

Affirmative action in college admissions has been problematic, sometimes rewarding well-to-do immigrants over poor African-American applicants–except that all the other alternatives, like offering admission to the top 10 or 20 percent of high school graduates in a state, seem to be even worse, admitting badly underprepared kids from the top tiers of impoverished urban and rural schools while keeping out talented students who don’t make their school’s talented tenth. In the workplace, affirmative action has been checkered by fraud and confounded by the indeterminacy of racial identities–and yet it’s so popular as to constitute business as usual for American big business, as evidenced by the sixty-eight Fortune 500 corporations, twenty-nine former high-ranking military leaders and twenty-eight broadcast media companies and organizations that filed amicus briefs in support of the University of Michigan’s affirmative action programs in the recent Supreme Court cases of Gratz v. Bollinger and Grutter v. Bollinger (2003).

Stop right there, professor. Affirmative action is “popular” because it’s the law and it’s also a politically correct position that boards of directors, senior corporate managers, government officials, and military leaders can take at no obvious cost to themselves. Further, those so-called leaders are sheltered from the adverse consequences of affirmative action on the profitability and effectiveness of their institutions by imperfect competition in the private sector and bureaucratic imperatives in the government sector.

As I wrote in “Race, Intelligence, and Affirmative Action,” here’s how affirmative action really operates in the workplace:

If a black person seems to have something like the minimum qualifications for a job, and if the black person’s work record and interviews aren’t off-putting, the black person is likely to be hired or promoted ahead of equally or better-qualified whites. Why?

  • Pressure from government affirmative-action offices, which focus on percentages of minorities hired and promoted, not on the qualifications of applicants for hiring and promotion.
  • The ability of those affirmative-action offices to put government agencies and private employers through the pain and expense of extensive audits, backed by the threat of adverse reports to higher-ups (in the case of government agencies) and fines and the loss of contracts (in the case of private employers).
  • The ever-present threat of complaints to the EEOC (or its local counterpart) by rejected minority candidates for hiring and promotion. Those complaints can then be followed by costly litigation, settlements, and court judgments.
  • Boards of directors and senior managers who (a) fear the adverse publicity that can accompany employment-related litigation and (b) push for special treatment of minorities because they think it’s “the right thing to do.”
  • Managers down the line learn to go along and practice just enough reverse discrimination to keep affirmative-action offices and upper management happy.

As if in answer to Bérubé’s reflexive defense of affirmative action, now comes Richard Sander, another academic, but one who actually looks at the numbers. Sander, a professor of law at UCLA who has published “A Systematic Analysis of Affirmative Action in American Law Schools,” is without a doubt a liberal of the modern persuasion and a proponent of diversity. He is nevertheless critical of affirmative action as it is practiced at law schools. Here’s the gist of his analysis, as reported at FindLaw:

The Heavy Weight Placed on Race in Admissions in Virtually All Schools – the Cascade Effect
Professor Sander lays the foundation for his critique by describing the kind of race-based affirmative action that law schools use today. Under the Bakke and Grutter Supreme Court precedents, public (as well as private) law schools are prohibited from making use of quotas, two-track admissions schemes, or fixed points added to the numerical indices of minorities….

Professor Sander argues that, in fact, the Michigan law school program, despite its seeming flexibility and inscrutability, employs race in just as ambitious (critics would say aggressive) a way as did the Michigan undergraduate plan [which the U.S. Supreme Court found unconstitutional in Gratz]….

Moreover, and more important, Sander argues, the way race is used at the Michigan law school is the same way race is used in many if not most law school affirmative action programs. Indeed, Sander says that he has “been unable to find a single law school in the United States whose admissions operate the way Justice O’Connor describes in Grutter” – that is, where race is used as a flexible plus factor that does not effectively dominate over all other diversity criteria. The system of aggressive racial preferences is not, Sander says, confined to the “elite” law schools. Rather, “it is a characteristic of legal education as a whole.”

According to Sander, law school affirmative action across law schools is characterized by a “cascade” effect. As the elite schools “snap up” the blacks who otherwise would have been admitted to and have attended the next tier of schools, that next tier of schools snaps up the blacks who would have otherwise attended the tier below. And so forth.

The Mismatch Effect

This systematic cascade phenomenon is important, because when race is being used so weightily in schools all the way down the ladder, the result is that the African Americans who are admitted to each school under an affirmative action program are significantly less numerically qualified than are their white competitor students at that school, who were admitted outside the affirmative action plan. Sander calls this phenomenon the “mismatch” effect – black beneficiaries of affirmative action are “mismatched” at schools whose non-affirmative action students possess better credentials and skills.

Because of the pronounced mismatch effect that extends down the law school hierarchy, blacks tend to suffer poor grades in law school. According to the data Sander adduces, the median black law student’s GPA at the end of the first year of law school places him at the 7th or 8th percentile of his class. Put another way, more than 50% of black law students are in the bottom one-tenth of their law school class (in terms of grades) at the end of the first year.

The Long-Term Costs of the Mismatch Effect – Bar Passage and Job Placement

This poor academic performance in law school, in turn, creates two distinct costs for African Americans. First, Sander argues, the poor grades lead to a very poor bar passage rate. As he points out, “only 45% of black law students in the 1991 cohort completed law school and passed the bar on their first attempt.” That number is far worse than the comparable number for whites.

Sander goes on to argue that many of these blacks with poor grades would have had better grades – and have ended up with a higher chance of passing the bar – if they had been at law schools more commensurate with their academic skills. Sander’s data suggests to him that black students at any law school who have the same law school grades as white students at that school pass the bar in the same percentages. In other words, blacks with good law school grades don’t fail the bar any more than whites with the same grades.

The problem, Sander suggests, is that law schools have “mismatched” blacks in schools where they are unlikely to get good grades. By placing black students in environments where their grades will be higher – less competitive law schools – the system could improve their overall bar pass rate….

From all this, Sander argues that if race-based law school affirmative action were eliminated or reduced, the black bar passage rate would actually go up. According to his calculations, in the absence of preferential admissions, this rate would rise to 74% from the 45% he observed….

If affirmative action were eliminated, most black law students wouldn’t be ousted from law school entirely – they would simply attend law schools that “match” their numerical credentials more tightly. In other words, elimination of affirmative action would simply eliminate the mismatch effect – blacks would simply be attending less competitive and less prestigious schools than they are currently attending. And of those blacks who would be displaced from the bottom of the legal academic system altogether (i.e., those who need affirmative action simply to get into the least competitive schools), many of them today do not end up passing the bar and entering the legal profession in any event….

Sander says that blacks at better schools, but with poor grades, get worse jobs than they would if they were at lesser schools and had better grades. In other words, Sander argues, at all but the most elite schools, grades matter more than the school from which one graduates for black law job applicants. The upside of attending a better school is more than outweighed – in terms of employment options – by the downside of getting weak grades at that school, compared to the better grades that could have been obtained at a less competitive school….

So whether one focuses on passing the bar, or getting a good job, Sander says, there is a case that race-based affirmative action hurts, rather than helps, black law students.

Sander’s article has drawn howls of outrage from politically correct academicians, not to mention a long critique, to which Sander has responded at length. But Sander’s fact-based arguments make eminent sense, not only for the effects of reverse discrimination at law schools but also for the effects of reverse discrimination generally, in the academy and in the workplace.

As is often the case, a government policy meant to help a particular group of people actually harms that group of people — and many others, as well. The effects of affirmative action illustrate the truth of the adage that there’s no such thing as a free lunch. Instead of forcing universities and employers to accept and hire unqualified blacks, it would be better — for everyone — simply to give education vouchers to blacks. Such a program would eliminate the costly effects of affirmative action, make blacks more productive, and lift them economically.

Favorite Posts: Affirmative Action and Race

Affirmative Action: Two Views from the Academy

First comes Michael Bérubé, a professional academic who is evidently bereft of experience in the real world. His qualifications for writing about affirmative action? He teaches undergraduate courses in American and African-American literature, and graduate courses in literature and cultural studies. He is also co-director of the Disability Studies Program, housed in the Rock Ethics Institute at Penn State.

Writing from the ivory tower for the like-minded readers of The Nation (“And Justice for All“), Bérubé waxes enthusiastic about the benefits of affirmative action, which — to his mind — “is a matter of distributive justice.” Bérubé, in other words, subscribes to “the doctrine that a decision is just or right if all parties receive what they need or deserve.” Who should decide what we need or deserve? Why, unqualified academics like Bérubé, of course. Fie on economic freedom! Fie on academic excellence! If Bérubé and his ilk think that a certain class of people deserve special treatment, regardless of their qualifications as workers or students, far be it from the mere consumers of the goods and services of those present and future workers to object. Let consumers eat inferior cake….

Click here to read the full post.

Practical Libertarianism for Americans: Part III

III. THE ORIGIN AND ESSENCE OF RIGHTS

This is a brief excerpt of Part III of a nine-part work in progress. I welcome constructive criticisms and suggestions. Please send an e-mail to: libertycorner-at-sbcglobal-dot-net.

This is where I enter a debate that splits libertarianism into two camps: fundamentalists and consequentialists. Fundamentalist (or “natural rights”) libertarians say that humans inherently possess the right of liberty. Consequentialists say that humans ought to enjoy liberty because, through liberty, humans are happier and more prosperous than they would be in its absence. In spite of this rather fundamental split, all libertarians agree that it is better to live in liberty than not….

I would like to be able to say, with fundamentalist libertarians, that liberty is an innate human right — and the only innate right. But that would be nothing more than an assertion, however cleverly I might clothe it in the language of philosophy.

I would like to be able to say that liberty is a paramount human instinct, honed through eons of human existence and experience. But we are surrounded by too much evidence to the contrary, both in recorded and natural history. The social and intellectual evolution of humankind has led us to a mixed bag of rights, acquired politically through cooperation and conflict resolution, often predating the creation of governments and the empowerment of states. The notion that we ought to enjoy the negative right of liberty is there among our instincts, of course, but it is at war with the positive right of privilege — the notion that we are “owed something” beyond what we earn (through voluntary exchange) for the use of our land, labor, or capital. Liberty is also at war with our instincts for control, aggression, and instant gratification.

I do not mean that the social and intellectual evolution of humankind is right — merely that it is what it is. Libertarians must accept this and learn to work with the grain of humanity, rather than against it. There is no profit in simply asserting the inherent wrongness of laws and government actions that undermine liberty. Nor is there much profit in arguing the unconstitutionality of illiberal laws and government actions; it is obvious that appeals to the Constitution will be of little avail unless and until we have a Supreme Court that abides wholeheartedly by the Constitution.

There can be much profit in demonstrating, logically and factually, how illiberal laws and government actions make people worse off — often the same people who are supposed to benefit from those laws — and in offering superior alternatives. In other words, consequentialist libertarianism can make real gains for liberty by appealing successfully to self-interest. But self-interest must be seduced by reason (Part IV) and bribed by the promise of greater rewards (Part V).

Click here for the full text of Part III.

Three Perspectives on Life: A Parable

The parable:

Imagine that 100 randomly selected humans are locked in a large room without food or water. During a panicky struggle to break down the door, 50 of the humans are trampled to death. The other 50 humans then agree to cooperate in an effort to escape. Because that cooperative effort doesn’t quickly succeed, however, it breaks down in acrimony; cliques develop and fights break out. Before long, there are only a few dozen humans left, and they have split into three camps.

One camp (the scientists) believes that it can find a way to escape the room, given adequate observation and analysis. The second camp (the stoics) believes that there’s no way out and prepares for death by calmly meditating. The third camp (the pragmatists) is agnostic about escape, but isn’t ready to die, so its members begin to kill and eat the stoics.

As they are being killed by the pragmatists, the stoics console themselves by saying that death is inevitable.

What happens next? After eating the stoics, do the pragmatists turn on the scientists and eat them as well? Or do the pragmatists keep (some of) the scientists alive, in case the scientists can find a way out of the room?

If the pragmatists eat all the scientists, the pragmatists will then begin to eat each other. The last pragmatist to die (of starvation) will say that he did the best he could with the hand he was dealt.

If the pragmatists spare the scientists (or some of them), and the scientists don’t find a way out of the room, the pragmatists will begin to finish off the scientists, then each other. The last pragmatist to die (of starvation) will say that he did the best he could with the hand he was dealt. The last scientist to die of cannibalism will say that death was only one possible outcome — an outcome that seems inevitable only in retrospect.

If the pragmatists spare the scientists (or some of them), and the scientists find a way out of the room, the pragmatists will claim that their decision not to kill all the scientists made escape possible. The surviving scientists will say that escape was only one possible outcome — an outcome that seems inevitable only in retrospect.

And the surviving scientists will say this about the stoics: If they had survived, they would have claimed that survival was inevitable.

Happiness requires a judicious blend of stoicism, action, and reason:

  • Stoicism makes it possible to accept that over which we have no control. But unblinking stoicism can lead to premature acceptance of a bad outcome.
  • Action is essential to progress, but it must be harnessed to reason. Action for action’s sake is indulgence.
  • Reason is essential to progress, but it must be harnessed to action. Pure reason wields no more power than pure stoicism.

Practical Libertarianism for Americans: Addendum to Part II

This is a brief excerpt of the addendum to Part II of a nine-part work in progress. I welcome constructive criticisms and suggestions. Please send an e-mail to: libertycorner-at-sbcglobal-dot-net.



NOTES ON THE STATE OF LIBERTY IN AMERICAN LAW

As noted in Part II, I am using “liberty” to encompass the full spectrum of liberty rights, which the Founders captured in the phrase “life, liberty, and the pursuit of happiness.” This fragmentary addendum is a provocative gloss on that evocative phrase….

The great forest of American law — which imperfectly sheltered life, liberty, and the pursuit of happiness until the 1930s — has since been laid waste in the pursuit of various Devils, among them: self-defense (at home and abroad), personal responsibility (the main antidote to poverty, illiteracy, and crime), lower-class vices (smoking), (white) racism, (male) sexism, “offensive” (non-leftish) speech, “excessive” political spending and speech (especially by non-incumbents), all forms of pollution (except those necessary to finance a yuppie’s lifestyle and to propel his SUV), and life’s uncertainties in general. Now we are in the open, practically defenseless against the biggest Devil of all — the state — which dictates how much of life, liberty, and happiness we may enjoy….

Click here for the full text of the addendum to Part II.