Month: January 2008


It seems likely that, come November, voters will face a choice between John McCain and Hillary Clinton or Barack Obama (or suicide).

The last time voters faced such an abysmal choice in a presidential election was in 1996: Clinton (the other one) vs. Bob (tax collector for the welfare state) Dole. Before that, in 1992, there was G.H.W. (read my lips) Bush vs. the other Clinton and H. Ross Pee-rot (as it’s pronounced in Texas). Suicide was very appealing in those days.

How about 1972? Nixon vs. McGovern: sleaze vs. socialism. Don’t like that choice? Try 1968, with Nixon vs. Humphrey vs. Wallace: sleaze vs. socialism vs. state-enforced segregation.

That’s as far back as I care to go on this trip down memory lane. If I go back too far, I’ll remember that I voted for LBJ in 1964. Argh!


David N. Mayer (MayerBlog) parses “fascism.” He uses the term

in its broadest sense, as a political philosophy holding among its essential precepts the claims that individuals have no inherent rights, and that their interests are subordinate to, and therefore may be sacrificed for the sake of, the presumed collective good, whatever it’s called – “society,” “the race,” “the state,” the “Volk,” “the nation,” “the people,” “the proletariat,” “the common good,” or “the public interest.” Purists may object that what I’m really calling “fascism” would be more properly termed collectivism, and that my use of the term fascism is not only historically incorrect but also deliberately provocative – and to a great extent, they’d be right. In defending my use of the term, however, I’d note that as originally coined by Benito Mussolini, the fascist dictator of 1930s Italy, the term referred to the fasces, the bundle of rods wrapped around an axe carried by the lictors who guarded government officials in ancient Rome, where it symbolized the sovereign authority of the state. In this original sense of the term, fascism thus is roughly the equivalent of “statism,” the form of collectivism in which the entity known as “the state” holds the highest political authority in society…. I have an additional justification for using the term fascism. Notwithstanding the arguments of political scientists – who would distinguish fascism from other collectivist –isms such as communism, socialism, or national socialism (Nazism) – these distinctions are really irrelevant because all these forms of collectivism are equally pernicious to, and destructive of, individual rights and freedom. 
Leftists like to use the terms fascism or fascist as pejoratives because they naively believe that socialism is somehow less evil than collectivism of “the right” – that the murder of millions of people killed by Lenin and Stalin in the Soviet Union, by Mao in Red China, or by Pol Pot in communist Cambodia somehow was less evil than the murder of millions of people killed by Hitler’s regime in Nazi Germany or Mussolini’s regime in fascist Italy. Leftists have no legitimate claim on the truth, and neither do they have any monopoly on use of the terms fascism or fascist as pejoratives.

Mayer, in a typically long post at his excellent blog, goes on to tackle the

“Four Fascisms” of 2008 … : (1) Eco-Fascism, the tyranny of radical environmentalists, including the global-warming hoax and other myths propagated by “green” activists as a rationale for imposing their agenda on us by force; (2) Nanny-State Fascism, the tyranny of the health police, who seek to turn everyone into wards of the state, including the movement pushing for “universal” health care – that is, government monopolization of the health care industry (what used to be called, and still is, socialized medicine); (3) Demopublican/ Replicrat Fascism, the tyranny of the two-party political system in the United States, particularly dangerous in 2008 as an election year; and last, (4) Islamo-Fascism, the danger of militant, fundamentalist Islam to the United States and the rest of the civilized world.

Go there and read. All of it. You may not agree with Mayer in every detail (I don’t), but he aims at the right targets and hits them hard.

Related posts:
“FDR and Fascism” (20 Sep 2007)
“A Political Compass: Locating the United States” (13 Nov 2007)
“The Modern Presidency: A Tour of American History since 1900” (01 Dec 2007)

The People’s Romance

The preceding post is about the “libertarian” Left (LL) and its flirtation with state-imposed political correctness. The LL, while claiming to be anti-statist, wants all of us to behave in certain ways — ways that the LL deems acceptable, of course. The LL’s attitude reminds me of Daniel Klein’s essay, “The People’s Romance: Why People Love Government (as Much as They Do).” Here are some relevant excerpts of that essay:

Government creates common, effectively permanent institutions, such as the streets and roads, utility grids, the postal service, and the school system. In doing so, it determines and enforces the setting for an encompassing shared experience—or at least the myth of such experience. The business of politics creates an unfolding series of battles and dramas whose outcomes few can dismiss as unimportant. National and international news media invite citizens to envision themselves as part of an encompassing coordination of sentiments—whether the focal point is election-day results, the latest effort in the war on drugs, or emergency relief to hurricane victims — and encourage a corresponding regard for the state as a romantic force. I call the yearning for encompassing coordination of sentiment The People’s Romance (henceforth TPR)….

TPR helps us to understand how authoritarians and totalitarians think. If TPR is a principal value, with each person’s well-being thought to depend on everyone else’s proper participation, then it authorizes a kind of joint, though not necessarily absolute, ownership of everyone by everyone, which means, of course, by the government. One person’s conspicuous opting out of the romance really does damage the others’ interests….

TPR lives off coercion—which not only serves as a means of clamping down on discoordination, but also gives context for the sentiment coordination to be achieved….

[N]ested within the conventional view that government is not a mammoth apparatus of coercion is the tenet that society is an organization to which we belong. Either on the view that we constitute and control the government (“we are the government”) or on the view that by deciding to live in the polity we choose voluntarily to abide by the government’s rules (“no one is forcing you to stay here”), the social democrat holds that taxation and interventions such as a minimum wage law are not coercive. The government-rule structure, as they see it, is a matter of “social contract” persisting through time and binding on the complete collection of citizens. The implication is that the whole of society is a club, a collectively owned property, administered by the government….

Members of the LL would hotly dispute the idea that “society is an organization to which we belong” and “a club, a collectively owned property, administered by the government.” Yet, at the same time, they seem to endorse state action that denies liberty in the name of liberty. Liberty is all right, in their view, as long as it produces outcomes of which they approve.

“Orwellian” and “doublethink” come to mind.

Political Correctness

“Political correctness” (or “politically correct,” as an adjectival phrase) refers to

language, ideas, policies, or behavior seen as seeking to minimize offense to racial, cultural, or other identity groups.

PCness exists at three connected levels: the individual, voluntary associations, and state action (which draws on and influences the other two).

1. At the individual level, PCness is an exaggerated case of good manners. A PC person refrains from speaking or behaving in ways that might offend or seem to denigrate an “identity group,” even at the expense of stereotyping and patronizing members of such groups (e.g., singling out for special attention, heaping fulsome praise).

2. At the next level, we find voluntary associations (e.g., churches, charities, political parties, academic faculties), whose members, because they share — or profess to share — certain ideas about “equality” and “social justice,” feel bound by those ideas to adopt language, ideas, policies, and behavior that stereotype, patronize, and give special treatment to certain “identity groups.” Some voluntary associations are organized solely for the purpose of seeking legislative and judicial enactment of special treatment, under the guise of “equal protection.”

3. This brings us to the “highest” level: state action. Here, individuals, members of voluntary associations, and government officials (armed with the power of the state) seek to advance the cause of special treatment through legislative and judicial processes, so that such treatment becomes a legal norm, even if it is not a social one.

4. Finally, state action is taken as a moral command by those who are easily led and eager-to-please, thus reinforcing PCness and legitimating its expansion at all three levels.

That PCness is a widespread phenomenon proves nothing about its rightness and a lot about human nature and the coercive power of the state. In spite of that, some libertarians, who (understandably) are anxious to distance themselves from Ron Paul and the Rockwell crowd, have become apologists for PCness. Will Wilkinson, for example, suggests that

most PC episodes mocked and derided by the right are not state impositions. They are generally episodes of the voluntary social enforcement of relatively newly established moral/cultural norms.

Wilkinson grossly simplifies the complex dynamics of PCness, which I sketch above. His so-called “newly established … norms” are, in fact, norms that have been embraced by insular élites (e.g., academics and think-tank denizens like Wilkinson) and then foisted upon “the masses” by the élites in charge of government and government-controlled institutions (e.g., tax-funded universities). Thus it is that proposals to allow same-sex marriage fare poorly when they are submitted to voters. Similarly, the “right” to an abortion, even 35 years after Roe v. Wade, remains far from universally accepted and meets greater popular resistance with the passage of time.

Roderick Long is another “libertarian” who endorses PCness:

Another issue that inflames many libertarians against political correctness is the issue of speech codes on campuses. Yes, many speech codes are daft. But should people really enjoy exactly the same freedom of speech on university property that they would rightfully enjoy on their own property? Why, exactly?

If the answer is that the purposes of a university are best served by an atmosphere of free exchange of ideas — is there no validity to the claim that certain kinds of speech might tend, through an intimidating effect, to undermine just such an atmosphere?…

At my university, several white fraternity members were recently disciplined for dressing up, some in Klan costumes and others in blackface, and enacting a mock lynching. Is the university guilty of violating their freedom of expression? I can’t see that it is. Certainly those students have a natural right to dress up as they please and engage in whatever playacting they like, so long as they conduct themselves peacefully. But there is no natural right to be a student at Auburn University.

Long’s argument is clever, but fallacious. The purposes of a university have nothing to do with the case. Speech is speech.* Long, a member of Auburn’s faculty, is rightly disgusted by the actions of the fraternity members he mentions, but disgust does not excuse the suppression of speech by a State university. It is true that there is no “natural right” to be a student at Auburn, but there is, likewise, no “natural right” not to be offended.

Long describes himself as a “left-libertarian market anarchist” (whatever that is). Interestingly, he also writes for, which is intertwined with Rockwell’s Ludwig von Mises Institute. It is ironic that Lew Rockwell, the Mises Institute, and those who affiliate with them are in bad odor with a long list of bloggers who characterize themselves as libertarians of one kind or another (e.g., here). Their displeasure centers on Ron Paul, his notorious newsletters (thought to have been written or co-written by Rockwell), Paul’s supporters at the Institute, and (for good measure) the Institute itself.

In an earlier post, I noted my agreement with David Friedman’s view of the affray:

There are a lot of different things going on in libertarian reactions to Ron Paul in general and the quotes from the Ron Paul newsletters in particular. One of them, I think, is a culture clash between different sorts of libertarians….

Loosely speaking, I think the clash can be described as between people who see non-PC speech as a positive virtue and those who see it as a fault–or, if you prefer, between people who approve of offending liberal sensibilities (“liberal” in the modern sense of the term) and those who share enough of those sensibilities to prefer not to offend them. The former group see the latter as wimps, the latter see the former as boors.

I added that “a bunch of moralist scolds have leaped at the opportunity to preach their respective, often contradictory, and sometimes wacky visions of libertarian purity.” I now see that there’s more to it. Here is Steven Horwitz, for example:

Yes, legislation like the Civil Rights Act of 1964 involved some interference with private property and the right of association, but it also did away with a great deal of state-sponsored discrimination and was, in my view, a net gain for liberty.

Well, some parts of the Civil Rights Act of 1964, together with its progeny — the Civil Rights Acts of 1968 and 1991 — did advance liberty, but many parts did not. A principled libertarian would acknowledge that, and parse the Acts into their libertarian and anti-libertarian components. A moral scold who really, really wants the state to impose his attitudes on others would presume — as Horwitz does — to weigh legitimate gains (e.g., voting rights) against unconscionable losses (e.g., property rights and the right of association). But presumptuousness comes naturally to Horwitz because he stands high above reality, in his ivory tower.

Will Wilkinson is simpatico with Horwitz:

Government attempts to guarantee the worth of our liberties by recognizing positive rights to a minimum income or certain services like health care often (but not always) undermine the framework of market and civil institutions most likely to enhance liberty over the long run, and should be limited. But this is really an empirical question about what really does maximize individuals’ chances of formulating and realizing meaningful projects and lives.

Within this framework, racism, sexism, etc., which strongly limit the useful exercise of liberty are clear evils. Now, I am ambivalent about whether the state ought to step in and do anything about it.

Wilkinson, like Horwitz, is quite willing to submit to the state (or have others do so), where state action passes some kind of cost-benefit test. Wilkinson, unlike Horwitz, seems to ignore the fact that the state has tried already to do something about racism, sexism, etc., in the Civil Rights Acts. To the extent that balancing tests are relevant to the question of liberty, the Civil Rights Acts have been costly (both economically and socially) and, in the end, both futile and inimical to the comity of the races and sexes. Moreover, as both Horwitz and Wilkinson fail to acknowledge, state action is a blunt instrument, in that it penalizes many for the acts of the (relatively) few.

In any event, what more could the state do than it has done already? Well, there is always “hate crime” legislation, which (as Nat Hentoff points out) is tantamount to “thought crime” legislation. Perhaps that would satisfy Horwitz, Wilkinson, and their brethren on the “libertarian” Left. And, if that doesn’t do the trick, there is always Cass Sunstein’s proposal for policing thought on the internet. Sunstein, at least, doesn’t pretend to be a libertarian.

O brave new world that hath such philosophers in’t!
* Except when it really isn’t speech; for example: sit-ins (trespass), child pornography (sexual exploitation of minors), and divulging military secrets (treason, in fact if not in name).

Cell Phones and Driving, Once More: Addendum

This is an addendum to “Cell Phones and Driving, Once More,” at Liberty Corner. In that post, I dispense with the attempt by Saurabh Bhargava and Vikram Pathania (B&P) to disprove the well established causal link between cell-phone use and traffic accidents through a poorly specified time-series analysis. (Their paper is “Driving Under the (Cellular) Influence: The Link Between Cell Phone Use and Vehicle Crashes,” AEI-Brookings Joint Center for Regulatory Studies, Working Paper 07-15, July 2007.) The question I address here is whether it is possible to quantify that link through time-series analysis.

Coming directly to the point, a rigorously quantitative time-series analysis is impossible because (a) some of the relevant variables cannot be quantified — item by item, along a common dimension — and (b) others are strongly correlated with each other.

The relevant variables that cannot be quantified properly are improvements in the design of automobiles and the streets and highways on which they travel. There simply have been too many different improvements over too long a period of time, during which other significant (and correlated) changes have taken place. There can be no doubt that the design of automobiles has evolved toward greater safety almost since their initial production in the 1890s. What were flimsy, open-bodied carriages with no protection for their occupants are now reinforced, air-bag and shoulder-harness-equipped juggernauts with safety glass, power brakes, and power steering. In parallel, city streets have evolved from unmarked, uncontrolled, unlighted buggy routes to comparatively broad, well-controlled, well-lighted avenues; and highways have evolved from rutted, dirt wagon tracks to comparatively smooth, wide, controlled-access expressways. Thus the combined, long-term effects of design improvements on traffic safety can be seen in aggregate statistics, to which I will come.

Relevant variables that are strongly correlated with each other are traffic fatalities per 100 million vehicle-miles (the dependent variable in this analysis); the proportion of young adults in the population, as measured by the percentage of persons 15-24 years old; the incidence of alcohol consumption, as measured in gallons of ethanol per year; per capita cell-phone use (in average monthly minutes); and the passage of time (measured in years), which is a proxy for improvements in the safety of motor vehicles. Here are the cross-correlations among those variables for the period 1970-2005 (1970 being the earliest year for which I have data on alcohol consumption):




[Cross-correlation table — fatality rate, percentage of persons aged 15-24, alcohol consumption, cell-phone use, and year, 1970-2005 — lost in formatting; only the “Cell phone” row labels survived.]

(The endnote to this post gives the sources for the various statistics discussed and presented in this analysis.)

Obviously, given the strong correlations between the percentage of persons aged 15-24, per capita alcohol consumption, and year, only one of those three variables can be accounted for meaningfully in a regression on the dependent variable, fatalities per 100 million vehicle-miles. Year is the obvious choice, in that it accounts not only for the percentage of 15-24 year olds and alcohol consumption, but also for improvements in the design of motor vehicles and highways.
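The collinearity problem can be illustrated with synthetic data. The series below are invented stand-ins (not the actual statistics from this post): two variables that merely trend smoothly with time end up almost perfectly correlated with the year, and therefore with each other, which is why only one of them can meaningfully enter the regression.

```python
import numpy as np

# Illustrative (synthetic) series, not the actual data: two variables that
# both trend with time, plus a little noise.
rng = np.random.default_rng(0)
year = np.arange(1970, 2006, dtype=float)

# A declining share (a stand-in for the percentage aged 15-24) and declining
# consumption (a stand-in for per capita ethanol), both driven mostly by trend.
share_15_24 = 19.0 - 0.12 * (year - 1970) + rng.normal(0, 0.1, year.size)
alcohol = 2.7 - 0.015 * (year - 1970) + rng.normal(0, 0.02, year.size)

# Rows are variables: year, share_15_24, alcohol.
corr = np.corrcoef([year, share_15_24, alcohol])
print(corr.round(3))
# The off-diagonal correlations are close to -1 and +1: the three series
# carry nearly the same information, so a regression can use only one.
```

Any of the three would “explain” a trending dependent variable about equally well; including two or more of them yields unstable coefficients with unreliable signs.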

That cell-phone use is negatively correlated with the fatality rate is merely an artifact of the general decline in the fatality rate, which began long before cell phones came into use. Similarly, the negative correlation between the percentage of 15-24 year olds and the volume of cell-phone use is an artifact of the trends prevailing during 1970-2005: a general decline in the percentage of 15-24 year olds (after 1977), accompanied by a swelling tide of cell-phone use.

Regression analysis illustrates these points. First, I used year as the sole explanatory variable. Despite the high R-squared of the regression (0.911), it lacks nuance; graphically, it is a straight line that bisects the meandering, downward curve of fatality rate (see below). Introducing 15-24 year olds and/or alcohol consumption into the regression would yield a better fit, but because those variables are so strongly correlated with time (and one another) their signs are either intuitively incorrect or their coefficients are statistically insignificant. (This is true for 15-24 year olds, even when the regression covers 1957-2005, the period for which I have data for the percentage of 15-24 year olds.)

Adding cell-phone use to year results in a better fit (R-squared = 0.948), and the coefficient for cell-phone use squares with the results of valid studies (i.e., it is significant and positive). But because of the exclusion of 15-24 year olds and alcohol consumption, cell-phone use carries too much weight. Here is the equation:

Annual traffic fatalities per 100mn vehicle-miles =
211.1
– (0.105 x year)
+ (0.0022 x number of cell-phone minutes/month/capita in a year)

The t-values of the intercept and coefficients are 21.847, -21.565, and 4.886, respectively (all significant at the 0.99 level). The adjusted R-squared of the equation is 0.945. The mean values of the dependent and explanatory variables are 2.52, 1987.5, and 50.602, respectively. The standard error of the estimate (0.232)/the mean of the dependent variable (2.522) = 0.092. The equation is significant at the 0.99 level.
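Because an OLS fit passes through the means of all its variables, the constant term implied by the reported slopes and means can be checked in a few lines (all values taken from the figures quoted above):

```python
# Recover the regression's constant term from the reported means and slopes.
# An OLS fit passes through the means of all variables, so:
#   mean(y) = intercept + b_year * mean(year) + b_cell * mean(cell)
mean_fatality_rate = 2.52     # fatalities per 100 million vehicle-miles
mean_year = 1987.5
mean_cell_minutes = 50.602    # monthly cell-phone minutes per capita
b_year = -0.105
b_cell = 0.0022

intercept = mean_fatality_rate - b_year * mean_year - b_cell * mean_cell_minutes
print(round(intercept, 1))  # roughly 211.1
```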

This equation, when viewed graphically, loses its charm:

It is obvious that the variable for cell-phone use carries too much weight; it over-explains the fatality rate. According to the equation, in 2005, when monthly cell-phone use had ballooned to more than 500 minutes per American, almost 80 percent of traffic fatalities were caused by cell-phone use. That’s an absurd result: an artifact of the difficulty of statistically analyzing traffic fatalities when key variables (time, 15-24 year olds, and alcohol consumption) are strongly correlated. I have no doubt that cell-phone use contributes much to traffic accidents and fatalities (see main post), but not as much as the equation suggests.
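The arithmetic behind that “almost 80 percent” figure is quick to check, using the cell-phone coefficient and round numbers for 2005 (more than 500 monthly minutes per capita; a fatality rate of 1.45 per 100 million vehicle-miles):

```python
# The over-attribution implied by the equation, for 2005.
b_cell = 0.0022              # fatalities per 100M vehicle-miles, per monthly minute
cell_minutes_2005 = 500      # approximate monthly minutes per capita in 2005
fatality_rate_2005 = 1.45    # fatalities per 100M vehicle-miles in 2005

# Share of the fatality rate the equation attributes to cell-phone use.
implied_share = b_cell * cell_minutes_2005 / fatality_rate_2005
print(round(implied_share, 2))  # about 0.76 -- implausibly high
```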

A more meaningful relationship is found in the strong, positive correlation (0.973) between cell-phone use and the portion of traffic fatalities that the passage of time fails to account for after 1998, that is, where the blue line crosses below the black line in the graph above. (Similarly, the “hump” in the black line that occurs around 1980, and the declivities that precede and follow it, can be attributed to the rise and fall of the population of 15-24 year olds and the consumption of alcohol.)

It’s time to pull back and look at the big picture. The rate of traffic fatalities has been declining for a long time, owing mainly to improvements in the design of autos and highways. Thus:

Even though a meaningful time-series analysis of traffic fatalities is impossible, it is possible to interpret broadly the history of traffic fatalities since 1900. The first thing to note, of course, is the strong negative relationship between the fatality rate and time, which is a proxy for the kinds of improvements in automobile and highway safety that I mention earlier. Those improvements obviously predate the ascendancy of Ralph Nader’s Unsafe at Any Speed (1965), and the ensuing hysteria about automobile safety. Consumers had, for a long time, been demanding — and getting — safer (and more reliable) automobiles. The market works, when you allow it to do its job.

The initial decline in the fatality rate, after 1909, marks the transition from open-sided, unenclosed, buggy-like conveyances to cars with closed sides and metal roofs. Improvements in highway design must have helped, too. Ironically, the drop in the fatality rate became more pronounced after the onset of Prohibition in 1920. It leveled off a bit in the late 1920s, when the “reckless youth of the Jazz Age” came to the fore, equipped with cars and bootleg gin. The rate then spiked at the (official) end of Prohibition (1933), suggesting that that ignoble experiment had some effect on Americans’ drinking habits. The slight bulge during World War II reflects the increasing unreliability of autos then in use; relatively few Americans could afford new cars during the Depression, and new cars weren’t built during the war. The vigorous descent of the fatality rate from 1945 to the early 1960s captures the effects of (a) the resumption of auto production after WWII and (b) continued improvements in auto and highway design. Later bulges and dips in the fatality rate can be traced to the influence of a growing, then declining, population of young adults and the (presumably related) rise and fall in per capita alcohol consumption. Then, along came the cell-phone eruption, with its tidal wave of inattentive drivers, as impaired as if they had been drinking. (The prospect of encountering a cell-phone-using drunk driver is frightening.)

Here are some observations and predictions:

  • In the 48 years from 1909 to 1957 — when the Interstate Highway System was in its infancy and eight years before Nader published Unsafe at Any Speed — the fatality rate dropped from 45.33 to 5.73 fatalities per 100 million vehicle-miles. That’s 39.6 fewer fatalities per 100 million vehicle-miles, a drop of 87 percent.
  • In the 48 years from 1957 to 2005 — the era of federalization — the fatality rate dropped to 1.45 fatalities per 100 million vehicle-miles. That’s 4.28 fewer fatalities per 100 million vehicle-miles, a drop of about 75 percent. The smaller absolute and relative decline during these 48 years than in the preceding ones can be explained, in part, by the Peltzman effect (discussed below).
  • Traffic fatalities will continue to drop at about the same rate, whether or not cell-phone bans are widely adopted and enforced. Why? Because technology will save the day. Moore’s law (a description of the declining cost of computing technology) will lead to cheap, reliable, sensor-controlled warning, steering, and braking systems.
  • But the already low fatality rate can’t go much lower, in absolute terms. It may drop another 70 to 80 percent in the next 48 years, from about 1.5 to about 0.3.
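The arithmetic in the bullets above can be verified directly from the quoted rates:

```python
# Fatality rates per 100 million vehicle-miles, from the figures above.
rate_1909, rate_1957, rate_2005 = 45.33, 5.73, 1.45

drop_pre = (rate_1909 - rate_1957) / rate_1909   # 1909-1957 decline
drop_post = (rate_1957 - rate_2005) / rate_1957  # 1957-2005 decline
print(round(100 * drop_pre), round(100 * drop_post))  # 87 75

# A further 80 percent decline over the next 48 years would leave a rate of:
print(round(rate_2005 * 0.2, 2))  # 0.29
```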

I now come to the Peltzman effect: “the hypothesized tendency of people to react to a safety regulation by increasing other risky behavior, offsetting some or all of the benefit of the regulation.” The effect is named after Sam Peltzman, a professor of economics at the University of Chicago, who in the 1970s originated the theory of offsetting behavior. Peltzman, writing in 2004, had this to say:

A recent article [here] by Alma Cohen and Linan Einav (2003) on the effects of mandatory seatbelt use laws…. shares with most such studies the crucial bottom line: The real-world effect of these laws on highway mortality is substantially less than it should be if there was no offsetting behavior. [Cohen and Einav] conclude that the increased belt usage occasioned by these laws should, in the absence of any behavioral response, have saved more than three times as many lives as were in fact saved.

Equally important, this kind of “regulatory failure” does not arise because the engineers at NHTSA are wrong about the effectiveness of the devices they prescribe. Most studies show that, if you are involved in a serious accident, you are much better off buckled than not and with an air bag rather than without. The auto safety literature attributes the shortfall, either implicitly or explicitly, to an offsetting increase in the likelihood of a serious accident.

Imagine the lives that would have been saved without the “help” of the Naderites of this world.

Fatality Rates. These are from the Statistical Abstract of the United States (online version), Table HS-41, Transportation Indicators for Motor Vehicles and Airlines: 1900 to 2001, and Table 1071, Motor Vehicle Accidents–Number and Deaths: 1980 to 2005.

Population aged 15-24. The numbers of persons aged 15-24 are from the Statistical Abstract, Table HS-3, Population by Age: 1900 to 2002, and Table 7, Resident Population by Age and Sex: 1980 to 2006. The same tables give total population, which I used to compute the percentage of the population aged 15-24.

Alcohol consumption. Estimates of annual, per capita consumption for 1970-2005 are from Per capita ethanol consumption for States, census regions, and the United States, 1970–2005 (National Institute on Alcohol Abuse and Alcoholism).

Per capita cell-phone use. I derived monthly cell-phone use, by year, from Trends in Telephone Service, February 2007 (Wireline Competition Bureau, Industry Analysis and Technology Division, Federal Communications Commission). I obtained total monthly cell-phone usage by multiplying the December values for the number of subscribers, given in tables 11-1 and 11-3, by the average number of minutes of use per month, given in table 11-3. The values for monthly minutes begin with 1993, so I estimated the values for 1984-92 by using the average of the values for 1993-98. To estimate per capita use, I divided total monthly minutes by the population of the U.S. (see above).

Cell Phones and Driving, Once More

Almost two years ago I wrote about research conducted by the National Highway Traffic Safety Administration and Virginia Tech’s Transportation Institute which finds, unsurprisingly, that inattention is a main cause of traffic accidents. Further,

[t]he most common distraction for drivers is the use of cell phones. [T]he number of crashes and near-crashes attributable to dialing is nearly identical to the number associated with talking or listening…. [D]ialing a hand-held device (typically a cell phone) [increased the risk of a crash] by almost three times.

Moreover, as the American Psychological Association points out,

[p]sychological research is showing that when drivers use cell phones, whether hand-held or hands-off [emphasis added], their attention to the road drops and driving skills become even worse than if they had too much to drink. Epidemiological research has found that cell-phone use is associated with a four-fold increase in the odds of getting into an accident [see below] – a risk comparable to that of driving with blood alcohol at the legal limit….

David Strayer, PhD, of the Applied Cognition Laboratory at the University of Utah has studied cell-phone impact for more than five years. His lab, using high-fidelity driving simulators while controlling for driving difficulty and time on task, has obtained unambiguous scientific evidence that cell-phone conversations disrupt driving performance. Human attention has a limited capacity, and studies suggest that talking on the phone causes a kind of “inattention blindness” to the driving scene.

In one study, when drivers talked on a cell phone, their reactions to imperative events (such as braking for a traffic light or a decelerating vehicle) were significantly slower than when they were not talking on the cell phone. Sometimes, drivers were so impaired that they were involved in a traffic accident. Listening to the radio or books on tape did not impair driving performance, suggesting that listening per se is not enough to interfere. However, being involved in a conversation takes attention away from the ability to process information about the driving environment well enough to safely operate a motor vehicle….

Disturbingly, forthcoming research [since reported in “A Comparison of the Cell Phone Driver and the Drunk Driver” and “Cell-Phone Induced Driver Distraction“] will show that talking on a cell phone (even hands-free) hurts driving even more than driving with blood alcohol at the legal limit (.08 wt/vol). When talking on a cell phone, drivers using a high-fidelity simulator were slower to brake and had more “accidents” than when they weren’t on the phone. Their impairment level was actually a little higher than that of people intoxicated by ethanol (alcohol).

The studies at Virginia Tech and the University of Utah rely on instrumented vehicles and simulators. Some skeptics dismiss the results of such studies because of their “artificiality.” But the results are consistent with after-the-fact analyses of the role of cell-phone use in actual accidents. See, for example, “Association between Cellular Telephone Calls and Motor Vehicle Collisions,” by Donald A. Redelmeier and Robert J. Tibshirani (New England Journal of Medicine, February 1997), and “Role of mobile phones in motor vehicle crashes resulting in hospital attendance: a case-crossover study,” by Suzanne P. McEvoy et al. (BMJ, a journal of the British Medical Association, July 12, 2005).

Redelmeier and Tibshirani analyzed 26,798 cell-phone calls over a 14-month period and found that

[t]he risk of a collision when using a cellular telephone was four times higher than the risk when a cellular telephone was not being used (relative risk, 4.3; 95 percent confidence interval, 3.0 to 6.5). The relative risk was similar for drivers who differed in personal characteristics such as age and driving experience; calls close to the time of the collision were particularly hazardous (relative risk, 4.8 for calls placed within 5 minutes of the collision, as compared with 1.3 for calls placed more than 15 minutes before the collision; P<0.001); and units that allowed the hands to be free (relative risk, 5.9) offered no safety advantage over hand-held units (relative risk, 3.9; P not significant).

McEvoy et al. queried “456 drivers aged ≥ 17 years who owned or used mobile phones and had been involved in road crashes necessitating hospital attendance between April 2002 and July 2004.” The results:
Driver’s use of a mobile phone up to 10 minutes before a crash was associated with a fourfold increased likelihood of crashing (odds ratio 4.1, 95% confidence interval 2.2 to 7.7, P<0.001).

All of the studies cited above are microscopic; that is, they examine the behaviors of specific drivers and/or the causes of specific accidents (or simulated accidents). They are also remarkably consistent in their findings: Using a cell phone while driving is risky — about as risky as driving while drunk.

Saurabh Bhargava and Vikram Pathania (hereafter B&P) are graduate students in economics at UC Berkeley who claim to have refuted the kinds of findings summarized above. Their effort is documented in “Driving Under the (Cellular) Influence: The Link Between Cell Phone Use and Vehicle Crashes” (AEI-Brookings Joint Center for Regulatory Studies, Working Paper 07-15, July 2007). As B&P explain, they

investigate[d] the causal link between cellular usage and crash rates by exploiting a natural experiment induced by a popular feature of cell phone plans in recent years—the discontinuity in marginal pricing at 9 pm on weekdays when plans transition from “peak” to “off-peak” pricing. We first document[ed] a jump in call volume of about 20-30% at “peak” to “off-peak” switching times for two large samples of callers from 2000-2001 and 2005. Using a double difference estimator which uses the era prior to price switching as a control (as well as weekends as a second control), we [found] no evidence for a rise in crashes after 9 pm on weekdays from 2002-2005.

What B&P found, in fact, is a slightly negative relationship between the rise in call volume and the accident rate. (See tables 5 and 6 on pages 28 and 29, and related discussion.) How could that be, if it is inherently reckless to use a cell phone while driving?
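B&P’s double-difference logic is easy to illustrate with toy numbers. The sketch below uses entirely hypothetical crash counts (none of them from B&P’s data) to show what the estimator measures: the change in crashes across the 9 p.m. boundary in the off-peak-pricing era, net of the change across that same boundary in the era before off-peak pricing existed.

```python
# Toy double-difference (difference-in-differences) illustration using
# hypothetical crash counts; none of these numbers come from B&P's paper.

# Crashes per hour: [hour before 9 pm, hour after 9 pm]
pre_era  = [100.0, 80.0]   # era before off-peak pricing (no call-volume jump at 9 pm)
post_era = [100.0, 83.0]   # era with off-peak pricing (call volume jumps at 9 pm)

# First differences: the change in crashes across 9 pm within each era.
d_pre  = pre_era[1]  - pre_era[0]    # -20.0 (normal late-evening decline)
d_post = post_era[1] - post_era[0]   # -17.0

# Double difference: the extra change attributable to the 9 pm call surge.
did = d_post - d_pre                 # +3.0 in this made-up example
print(did)
```

A positive double difference would indicate that the 9 p.m. surge in calls raised crashes; B&P report finding no such rise (indeed, a slightly negative estimate).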

B&P’s paradoxical results flow from serious shortcomings in their analysis:

  • The actual use of cell phones by drivers is poorly documented; B&P cite only broad averages based on survey samples.
  • The extent to which cell-phone use by drivers actually rises or falls at the switch-over certainly isn’t known.
  • The results rest on differences in accident rates between two periods: 1990-98 (before the introduction of “off-peak” pricing) and 2002-04 (after the introduction of “off-peak” pricing). But those two periods differ in potentially significant ways: the incidence of younger persons (i.e., more reckless drivers) in the population, the per capita consumption of alcohol, and the design of motor vehicles and highways. B&P acknowledge the second and third factors, but address none of them quantitatively. (See table A1, a summary of data sources, and table A2, which gives summary statistics.)
  • B&P conduct three additional analyses (page 30) that, they claim, confirm their “basic results.” First, they find (unsurprisingly) a negative correlation between accidents and cell-phone ownership over time, but they merely acknowledge “that there are unobserved variables which are correlated with the growth in cell phone ownership across regions and time.” Second, their examination of the relationship between accident rates and cell-phone ownership across areas of varying population density (metropolitan, urban/suburban, rural) is unnecessarily convoluted and, therefore, unconvincing. Third, they point to the apparent ineffectiveness of legislative bans on cell-phone use, as judged by fatal-accident rates in five jurisdictions, but they offer no statistics about the level of enforcement that accompanied or followed the bans.

The bottom line is that B&P’s analysis fails to control for time-related variations in critical variables. For reasons detailed in the addendum to this post, time-series analysis is inadequate to the task at hand.

B&P expose some relevant cross-section data, but neglect their implications in their haste to exonerate cell-phone use as a cause of accidents. Figures 2 and 3 (page 4) give indices of cell-phone calls and fatal crashes in 2005, in 10-minute bins from 8 p.m. to 10 p.m. A set of observations for a single year offers the advantage of controlling for time-related factors (proportion of young persons in the population, per capita alcohol consumption, and automobile and highway design). B&P do not divulge the data underlying figures 2 and 3, but — given the scale on which the figures are drawn — the data are readily discernible. Regression analysis yields this result:

Index of fatal-accident rate =
intercept
– (0.074 x number of minutes after 8 p.m.)
+ (9.787 if weekend, zero if weekday)
+ (0.199 x index of outgoing cell-phone calls)

The t-values of the intercept and coefficients are 2.469, -3.423, 6.183, and 2.670, respectively (all significant at the 0.95 level or higher). The adjusted R-squared of the equation is 0.695. The mean values of the dependent and explanatory variables are 49.692, 60, 0.5, and 130.385, respectively. The standard error of the estimate (3.984)/the mean of the dependent variable (49.692) = 0.080. The equation is significant at the 0.99 level.
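The elasticity arithmetic implied by these estimates can be reproduced from the coefficient and mean values alone. A minimal sketch, assuming (as the figures suggest) that the accident index is on a base-100 scale:

```python
# Reproduce the elasticity arithmetic implied by the regression estimates.
# Inputs are taken from the text; the base-100 scaling of the accident index
# is an assumption inferred from the way figures 2 and 3 are drawn.

coef_calls = 0.199        # coefficient on the index of outgoing cell-phone calls
mean_calls = 130.385      # mean of the call index
mean_accidents = 49.692   # mean of the fatal-accident index

# A 1-percent rise in call volume, in index points:
delta_calls = 0.01 * mean_calls             # about 1.304 points

# Resulting change in the accident index, in points (= percent of a base-100 index):
delta_accidents = coef_calls * delta_calls  # about 0.26 points

# Relative to the mean accident value, the rise in the accident *rate*:
pct_rise_in_rate = 100 * delta_accidents / mean_accidents  # about 0.52 percent

print(round(delta_accidents, 2), round(pct_rise_in_rate, 2))
```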

The signs of the intercept and the variables are intuitively correct. One would expect (a) a positive “baseline” rate of fatal accidents; (b) a negative relationship between the lateness of the day and the accident rate, as the number of vehicles on the road diminishes and the use of cell phones shifts from the highway to the home; (c) a higher accident rate on weekends, when there is more “partying,” especially among younger (i.e., more reckless) drivers; and (d) a positive relationship between cell-phone use and accidents.

In fact, at the mean values of the variables, a 1-percent rise in aggregate cell-phone use leads to a 0.26-percent rise in the index of fatal accidents, which is equivalent to a 0.52-percent rise in the rate of such accidents. Putting it another way, cell-phone use accounted for about 50 percent of fatal accidents during the hours of 8 p.m. to 10 p.m. in 2005. That may overstate the contribution of cell-phone use to fatal accidents, but (given the evidence cited earlier in this post) I have no doubt that it points in the right direction. For example:

  • If Redelmeier and Tibshirani (see above) are right about the relative risk of collision arising from cell-phone use (relative risk of 4.3 = 3.3 x baseline rate), and
  • about 15 percent of drivers are on cell phones between 8 p.m. and 10 p.m., and
  • fatal accidents rise in proportion to total accidents, then
  • an estimate of about 50 percent is not unreasonable.
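The chain of reasoning in the bullets above can be checked with back-of-the-envelope arithmetic. A sketch, treating the 15-percent share of drivers on phones as the stated assumption and using Redelmeier and Tibshirani’s relative risk of 4.3:

```python
# Back-of-the-envelope check of the bullet points above.
# p (share of drivers on phones at a given moment) is the text's assumption;
# rr is Redelmeier and Tibshirani's estimated relative risk.

p = 0.15   # share of drivers on cell phones, 8-10 p.m.
rr = 4.3   # relative risk of collision while on the phone

# Accidents relative to a counterfactual in which no one phones while driving
# (counterfactual total normalized to 1.0):
total = p * rr + (1 - p)       # about 1.495
excess = total - 1.0           # about 0.495

# Cell-phone use raises the accident total about 50 percent above the no-phone baseline:
pct_increase = 100 * excess / 1.0          # about 49.5

# As a share of accidents actually observed, the phone-attributable fraction is smaller:
attributable_fraction = excess / total     # about 0.33

print(round(pct_increase, 1), round(attributable_fraction, 2))
```

On this reading, “about 50 percent” is the rise over the no-phone baseline; measured against the accidents actually observed, the attributable share is closer to a third, which is consistent with the text’s concession that the estimate “may overstate” cell phones’ contribution.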

Despite having statistically exonerated cell-phone users as a menace to others, B&P concede the opposite. This is from the UC Berkeley press release announcing their paper:

The economists [B&P] don’t dispute that using cell phones while driving can be dangerous. Bhargava conducted his own personal experiment, talking on his cell phone while driving in Minnesota this summer. Acknowledging that he doesn’t often drive, much less drive and talk on the cell phone at the same time, Bhargava said he almost crashed twice on that trip.

“Our research should not be viewed as an endorsement to use cell phones in a negligent way,” he said. “It certainly may be risky for a marginal user.”

Pathania added another cautionary note: “Since we know that certain demographic groups such as teenagers frequently call and text while driving, and that they are also risky, inexperienced drivers, further research is needed in this area. Laws banning cell phone use in cars for such groups may well have some merit.”

Reality trumps cock-eyed statistical analysis every time.

The moral of the story is that cell phones and driving don’t mix. I am sticking with the bottom line of my earlier post:

[F]or the vast majority of drivers there is no alternative to the use of public streets and highways. Relatively few persons can afford private jets and helicopters for commuting and shopping. And as far as I know there are no private, drunk-drivers-and-cell-phones-banned highways. Yes, there might be a market for [such] highways, but that’s not the reality of here-and-now.

…I can avoid the (remote) risk of death by second-hand smoke by avoiding places where people smoke. But I cannot avoid the (less-than-remote) risk of death at the hands of a drunk or cell-phone yakker. Therefore, I say, arrest the drunks, cell-phone users, nail-polishers, newspaper-readers, and others of their ilk on sight; slap them with heavy fines; add jail terms for repeat offenders; and penalize them even more harshly if they take life, cause injury, or inflict property damage.

See the addendum at Liberty Corner II.

Mr. Laffer Speaks…

…and he is right:

Mark my words: If the Democrats succeed in implementing their plan to tax the rich and cut taxes on the middle and lower income earners, this country will experience a fiscal crisis of serious proportions that will last for years and years until a new Harding, Kennedy or Reagan comes along.

Related post: “The Laffer Curve, ‘Fiscal Responsibility,’ and Economic Growth” (26 Oct 2007)

The Future of Marriage

While Stephanie Coontz explains (hopes for) the decline of marriage — from her over-educated, yuppie, “liberal” perspective — ugly reality persists. From “About 80% of black babies are born to unwed moms,” Indianapolis Star, January 24, 2008:

About eight in 10 black children in Indiana are born to unwed parents — a start to life that sets them up for problems during adolescence and beyond, according to an Indiana Black Expo report [being released Friday]…

Child Trends, a nonpartisan national children’s research organization, reports children born to single mothers are more likely to:

  • Live in poverty and experience social and emotional problems.
  • Have low educational attainment, engage in sex at younger ages and have a premarital birth.
  • Enter adulthood neither in school nor employed, or have lower occupational status and income, and more troubled marriages and divorces than those born to married parents.

The issues that spin out of struggling single-parent families show up throughout the new Black Expo report, including the teen birth rate of 81 per 1,000 for blacks. That is almost twice the state’s overall teen birth rate of 43.5 per 1,000.

Contrast that slice of harsh reality with Coontz’s smug tone:

[T]he time has passed when we can construct our social policies, work schedules, health insurance systems, sex education programs — or even our moral and ethical beliefs about who owes what to whom — on the assumption that all long-term commitments and care-giving obligations should or can be organized through marriage. Of course we must seek ways to make marriage more possible for couples and to strengthen the marriages they contract. But we must be equally concerned to help couples who don’t marry become better co-parents, to help single parents and cohabiting couples meet their obligations, and to teach divorced parents how to minimize their conflicts and improve their parenting.

Who is this “we” of whom Coontz writes? Is it government? Is it “liberal” know-it-alls like Coontz? What we (the real we) need is less government involvement in family matters, not more. We certainly don’t need the amoral, socially destructive “help” of Coontz and her ilk.

Related posts:
A Century of Progress?
Feminist Balderdash
Libertarianism, Marriage, and the True Meaning of Family Values
The Left, Abortion, and Adolescence
Consider the Children
Same-Sex Marriage
“Equal Protection” and Homosexual Marriage
Marriage and Children
Abortion and the Slippery Slope
Equal Time: The Sequel
The Adolescent Rebellion Syndrome
Social Norms and Liberty
Parenting, Religion, Culture, and Liberty
A “Person” or a “Life”?
The Case against Genetic Engineering
How Much Jail Time?
The Political Case for Traditional Morality
Parents and the State
Ahead of His Time
“Family Values,” Liberty, and the State

Highways and Conservatives

A true conservative — one who favors limited government and private solutions to so-called social problems — does not support tax-funded highways, even when those highways are crowded. But Ross Douthat and Jonah Goldberg do, thereby revealing themselves as big-government “conservatives.”

Jonah Goldberg, as you probably know, is the author of Liberal Fascism. In his stance on the matter of highways, he reveals himself as a neoconservative, big-government, twenty-first-century fascist.

The answer to the problem of crowded highways isn’t to build more of them at taxpayers’ expense — in the style of Hitler and Mussolini — it is to let the private sector work its magic. Absent government control of highways and the taxes that support highways, more efficient modes of transportation would be offered by private carriers and manufacturers of transportation systems; employers would finally get serious about telecommuting; and some commuters might even opt for simpler lives or forms of employment that don’t require commuting.

In sum, the market and lifestyle distortions caused by tax-funded highways would be diminished, if not removed entirely. A pox on “highway fascism.”

UPDATE (01/25/08): In a related development, Below the Beltway passes along some good news for taxpayers:

The federal government will not fund the Metro extension to Dulles International Airport without drastic changes, officials said yesterday, effectively scuttling a $5 billion project planned for more than 40 years and widely considered crucial to the region’s economic future.

U.S. Transportation Secretary Mary Peters and Federal Transit Administration chief James S. Simpson stunned Virginia politicians at a meeting on Capitol Hill yesterday when they outlined what Simpson called “an extraordinarily large set of challenges” that disqualifies the project from receiving $900 million in federal money. Without that, the project would die.

“Federal money” is money taken from taxpayers across the United States.

Related post: “Traffic-Congestion Hysteria” (09 May 2005)

Poland: Where the "Bad News" is Good

Guest post:

These days when, as a conservative, even the “good news” is bad, it’s nice to hear some good “bad news.” By that, I mean “bad” to the socialist-minded media. I am speaking of Poland, a veritable recusant country in the hyper-liberal European Union. When the European Court of Human Rights recently ruled against France for barring a lesbian woman from adopting a child, Polish politicians took the lead in denouncing the decision. Apparently 90% of Poles oppose adoptions by alternative lifestylers.

Last spring, Warsaw played host to the World Congress of Families and defiantly opposed EU pressure for “same-sex marriages” and abortion. Poland has repeatedly stymied efforts to introduce homosexual propaganda as being subversive of public morals, especially as regards children. Currently Poland prohibits abortion in all but a few cases, despite political and economic penalties levied by the EU. And while this is not perfect, it is a far cry from the drive-through abortion practices of most countries. Fortunately, religious conservatives have mounted an ongoing (though momentarily defeated) effort to close even that contradictory loophole, which permits “eugenic abortions.”

No doubt Poland’s unusually strong social conservatism has much to do with its pugnacious Catholic heritage and legacy of successful non-violent resistance to totalitarian occupiers (both Nazis and Communists). One more bit of “bad news” for Euro-leftists is the tiny island nation of Malta, famed for its heroic resistance to both Ottoman Turkish and German sieges. Today it faces a new, more subtle political siege against its pro-life, pro-family government (see related item).

Back to the Drawing Board: Reflections on Architecture

Guest post:

A recent exhibit at the Library of Virginia, Never Built Virginia (January 11 – May 21, 2008), documents architectural designs that never made it off the drawing board. Ranging from prosaic 19th-century churches to ugly modern high-rises, the exhibit forms an interesting cultural and aesthetic chronicle. A few items stand out, like the magnificent Greco-Roman concept for the Library of Virginia, proposed in the 1930s. Unfortunately it was shelved in favor of a drab art-deco structure (not the best specimen of that style) when the library was rebuilt in 1940. The state library has since been relocated to a retro-modern, and not totally ungraceful, building just down the street.

Not all modernism is bad, but a little bit goes a long way. And when we are told that “Virginia’s deep-rooted traditionalism doomed many [architectural] schemes” we can be glad. Looking at plans from a few decades ago for the James River area—angular, massive poured-concrete structures—one is thankful that development was postponed until the recent neo-classical revival, when most of the buildings being put up exhibit tasteful Georgian lines to match the historic downtown.

One of the architects highlighted in Never Built Virginia is Haigh Jamgochian, a 1960s disciple of hyper-modernism. That he is a misanthropic recluse who has made a career (like so many modern “creative” people) of not actually doing anything seems appropriate. Admittedly his drawings and models are curious to look at, like the whimsical futurist predictions of old science fiction movies. Jamgochian cites the original Star Trek show as an early influence. But the minute you actually throw up such edifices on real streets, amidst venerable brick, stone, and stucco structures, the effect is monstrous. Jamgochian was not very successful in selling his designs, but plenty of disasters from the ’60s and ’70s remain scattered about Richmond to damage the landscape. Fortunately, Richmond is an established east-coast city, and enough of its traditional buildings have survived to maintain its character.

Perhaps the most that can be said for classic modernism is its symmetry. Of course symmetry is not enough to make a good building. But it’s impossible to imagine good design without it. In that respect postmodernism, with its chaotic fragmentation, is only a further step in the direction of artistic decay in which even traditional elements are haphazardly plundered in the way that barbarians of the Dark Ages appropriated bits and pieces from handsome temples and palaces to construct their poorly made hovels. The effect is to evoke not so much admiration as pity.

The Libertarian Culture Clash

David Friedman (Ideas) is right:

There are a lot of different things going on in libertarian reactions to Ron Paul in general and the quotes from the Ron Paul newsletters in particular. One of them, I think, is a culture clash between different sorts of libertarians….

Loosely speaking, I think the clash can be described as between people who see non-PC speech as a positive virtue and those who see it as a fault–or, if you prefer, between people who approve of offending liberal sensibilities (“liberal” in the modern sense of the term) and those who share enough of those sensibilities to prefer not to offend them. The former group see the latter as wimps, the latter see the former as boors.

What else is going on? Well, for one thing, a bunch of moralist scolds have leaped at the opportunity to preach their respective, often contradictory, and sometimes wacky visions of libertarian purity. When they have finished with Ron Paul and his gaggle of white supremacists and conspiracy theorists, they will return to bashing each other. Political purity may be self-satisfying, but it wins few converts and fewer elections.

Just to be clear about it, I hold no brief for Ron Paul.

The Arts: Where Regress is "Progress"

Bookworm (of Bookworm Room) shares my disdain of modern art forms, some of which I express and explain here:

“Speaking of Modern Art” (24 Jul 2004)
“Making Sense about Classical Music” (23 Aug 2004)
“An Addendum about Classical Music” (24 Aug 2004)
“My Views on ‘Classical’ Music, Vindicated” (02 Feb 2005)
“A Quick Note about Music” (29 Jun 2005)
“All That Jazz” (03 Nov 2006)

In the early decades of the twentieth century, the visual, auditory, and verbal arts became an “inside game.” Painters, sculptors, composers (of “serious” music), choreographers, and writers of fiction began to create works not for the enjoyment of audiences but for the sake of exploring “new” forms. Given that the various arts had been perfected by the early 1900s, the only way to explore “new” forms was to regress toward primitive ones — toward a lack of structure, as Bookworm calls it. Aside from its baneful influence on many true artists, the regression toward the primitive has enabled persons of inferior talent (and none) to call themselves “artists.” Thus modernism is banal when it is not ugly.

Painters, sculptors, etc., have been encouraged in their efforts to explore “new” forms by critics, by advocates of change and rebellion for its own sake (e.g., “liberals” and “bohemians”), and by undiscriminating patrons, anxious to be au courant. Critics have a special stake in modernism because they are needed to “explain” its incomprehensibility and ugliness to the unwashed.

The unwashed have nevertheless rebelled against modernism, and so its practitioners and defenders have responded with condescension, one form of which is the challenge to be “open minded” (i.e., to tolerate the second-rate and nonsensical). A good example of condescension is heard on Composers Datebook, a syndicated feature that runs on some NPR stations. Every Composers Datebook program closes by “reminding you that all music was once new,” as if to lump Arnold Schoenberg and John Cage with Johann Sebastian Bach and Ludwig van Beethoven.

All music, painting, sculpture, dance, and literature was once new, but not all of it is good. Much (most?) of what has been produced since 1900 is inferior, self-indulgent crap.

How to Think about Secession

At the risk of being called a “Doughface libertarian,” which I am not, I must express some reservations about Timothy Sandefur’s paper, “How Libertarians Ought to Think about the U.S. Civil War.”

Sandefur avers that “the Constitution does prohibit secession”; therefore, the States do not have the power to secede under the Tenth Amendment, which says:

The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.

But the Constitution nowhere expressly prohibits secession. Sandefur’s argument that the Constitution does prohibit secession is an inferential one that rests on his conclusion that the action of a State (qua State)

cannot change the nature of the federal Constitution as adopted in 1787: it is a binding government of the whole people of the United States. No state may unilaterally leave the union.

Actually, the people of each State were at liberty not to adopt the Constitution. The Constitution could have gone into effect upon being ratified by the conventions of nine of the thirteen States:

The ratification of the conventions of nine states, shall be sufficient for the establishment of this constitution…

In which case, however, the Constitution would have been binding only upon the States whose people ratified it; that is,

…between the states so ratifying the same.

That the people of all thirteen States did, eventually, ratify the Constitution is another matter. Four of the States could have remained outside the Union; that is, they could have “seceded” preemptively.

How could a State have the right to decline membership in the Union but not to withdraw from membership in the Union? Was the act of ratification equivalent to a Christian marriage vow (before Henry VIII)? It would seem so, according to the U.S. Supreme Court, which in Texas v. White (1868) anticipated Sandefur’s arguments; for example:

When…Texas became one of the United States, she entered into an indissoluble relation. All the obligations of perpetual union, and all the guaranties of republican government in the Union, attached at once to the State. The act which consummated her admission into the Union was something more than a compact; it was the incorporation of a new member into the political body. And it was final. The union between Texas and the other States was as complete, as perpetual, and as indissoluble as the union between the original States. There was no place for reconsideration, or revocation, except through revolution, or through consent of the States.

Considered therefore as transactions under the Constitution, the ordinance of secession, adopted by the convention and ratified by a majority of the citizens of Texas, and all the acts of her legislature intended to give effect to that ordinance, were absolutely null. They were utterly without operation in law. The obligations of the State, as a member of the Union, and of every citizen of the State, as a citizen of the United States, remained perfect and unimpaired. It certainly follows that the State did not cease to be a State, nor her citizens to be citizens of the Union.

But such fine reasoning, which echoes the pre-Civil War position of Union loyalists, did not prevent the secession (or rebellion) of eleven States. It would have been bad — bad for slaves, bad for the defense of a diminished Union — had the South prevailed in its effort to withdraw from the Union. But the failure of the South’s effort, in the end, was due to the force of arms, not the intentions of the Framers of the Constitution. Justice Grier fully grasped that point in his dissent from the majority in Texas v. White:

Is Texas one of these United States? Or was she such at the time this [case] was filed, or since?
This is to be decided as a political fact, not as a legal fiction. This court is bound to know and notice the public history of the nation….
It is true that no organized rebellion now exists there, and the courts of the United States now exercise jurisdiction over the people of that province. But this is no test of the State’s being in the Union; Dacotah is no State, and yet the courts of the United States administer justice there as they do in Texas. The Indian tribes, who are governed by military force, cannot claim to be States of the Union. Wherein does the condition of Texas differ from theirs?… I can only submit to the fact as decided by the political position of the government; and I am not disposed to join in any essay to prove Texas to be a State of the Union, when Congress have decided that she is not. It is a question of fact, I repeat, and of fact only. Politically, Texas is not a State in this Union. Whether rightfully out of it or not is a question not before the court.

Legalistic arguments about secession are irrelevant, even if they are intellectually entertaining. Secession is a political issue, and as Clausewitz said, “war is the continuation of politics by other means.” In paraphrase of Stalin, I ask: How many divisions does the Supreme Court (or a blogging lawyer) have?

For a deeper analysis of secession, see “How Libertarians Ought to Think about the Constitution.”

Is Inflation Inevitable?

Inflation is inevitable as long as government spending, taxation, and regulation continue to inhibit productivity gains by stifling innovation, entrepreneurship, and risk-taking. The historical record shows as much:

Real GDP is nominal (current-dollar) GDP divided by the GDP deflator, a measure of changes in the overall level of prices for the goods and services that make up GDP. I derived five-year averages from the estimates of real GDP and the GDP deflator for 1790 through 2006, as provided by Louis D. Johnston and Samuel H. Williamson, “The Annual Real and Nominal GDP for the United States, 1790 – Present,” Economic History Services, July 27, 2007.

UPDATE (01/30/08): The averages for 2005 include estimates of real GDP and the GDP deflator for 2007, as issued by the Bureau of Economic Analysis on January 30, 2008.
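The deflator arithmetic just described is straightforward. A minimal illustration with made-up numbers (these are not values from the Johnston-Williamson series):

```python
# Illustrative deflator arithmetic with hypothetical numbers, not the actual
# series: real GDP = nominal GDP / (deflator / 100), with the deflator
# indexed to 100 in some base year.

nominal_gdp = 13_800.0   # nominal GDP, billions of current dollars (hypothetical)
deflator = 115.0         # GDP deflator, base year = 100 (hypothetical)

# Prices are 15 percent above the base year, so real output is correspondingly lower:
real_gdp = nominal_gdp / (deflator / 100)   # billions of base-year dollars

print(round(real_gdp, 1))  # 12000.0
```

A falling deflator (deflation) with rising real GDP, as in much of the nineteenth century, means nominal GDP understates the growth of real output.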

Before the early 1900s — before federal income taxes were made constitutional, before government spending rose from less than 10 percent to about 30 percent of GDP, before the Federal Reserve was created, and before the nation’s businesses were engulfed in a regulatory tsunami — the U.S. experienced prolonged periods of deflation, accompanied by rapid economic growth.

The only sustained periods of deflation since 1900 occurred in conjunction with the deep (but relatively brief) recession of the early 1920s and the Great Depression of the 1930s.

The real issue is not inflation per se; it is government. Inflation is a symptom of chronic, government-induced economic weakness. There is no way, really, to “fight inflation” but to remove the heavy hand of government from the economy.

Related posts:
“The Destruction of Income and Wealth by the State” (01 Jan 2005)
“Why Government Spending Is Inherently Inflationary” (18 Sep 2005)
“Ten Commandments of Economics” (02 Dec 2005)
“More Commandments of Economics” (06 Dec 2005)
“Liberty, General Welfare, and the State” (06 Feb 2006)
“Monopoly and the General Welfare” (25 Feb 2006)
“The Causes of Economic Growth” (08 Apr 2006)
“Slopes, Ratchets, and the Death Spiral of Liberty” (03 Aug 2006)
“The Anti-Phillips Curve” (25 Aug 2006)
“Median Household Income and Bad Government” (18 Sep 2006)
“Toward a Capital Theory of Value” (12 Jan 2007)
“Things to Come” (27 Jun 2007)
“The Laffer Curve, ‘Fiscal Responsibility,’ and Economic Growth” (26 Oct 2007)
“A Political Compass: Locating the United States” (13 Nov 2007)
“Intellectuals and Capitalism” (15 Jan 2008)

Religion and the Inculcation of Values

Apropos the preceding post and “Religion and the Inculcation of Morality,” I offer these thoughts by Christopher Dawson:

[T]he Liberal movement, with its humanitarian idealism and its belief in the law of nature and the rights of man, owes its origin to an irregular union between the humanist tradition and a religious ideal that was inspired by Christian moral values, though not by Christian faith…. [T]he whole development of liberalism and humanitarianism, which has been of such immense importance in the history of the modern world, derived its spiritual impetus from the Christian tradition that it attempted to replace, and when that tradition disappears this spiritual impetus is lost, and liberalism in its turn is replaced by the crudity and amoral ideology of the totalitarian state.

“Europe in Eclipse” (1954), compiled in
The Dynamics of World History

UPDATE (01/19/08): Relatedly, Mark Steyn writes today:

…Jonah Goldberg has a brilliant new book out called Liberal Fascism, which I hope to address at length in the weeks ahead. I note, however, that American liberals, not surprisingly, don’t care for the title. As it happens, the phrase is H.G. Wells’s, and he meant it approvingly. Unity [Mitford]’s dreamboat Fuhrer described himself as “a man of the left.”… Even when they’re not in thrall to the personality dictators, a big chunk of Western elites have a strange yen for the sterner ways of distant cultures, from Hillary Clinton’s Hallmark sentimentalization (“It Takes A Village,” etc.) of a tribal existence that’s truly nasty, brutish and short to Germaine Greer’s more explicit defence of “female genital mutilation.” Late in life, Miss Greer has finally found a form of patriarchal oppression that gets her groove back as much as National Socialism did Unity Mitford’s.

If you’re unlucky, it’s not just the elites who fall for ideologically exotic suitors. It would seem to me, given how easily the Continent embraced all the most idiotic “isms” three-quarters of a century ago, that it will surely take up some equally unlovely ones as it faces its perfect storm of an aging native population, a surging Muslim immigrant population, and an unsustainable welfare state…

A Western nation voluntarily embracing sharia? Sounds silly. But so does Unity Mitford. Liberal democracy is squaresville and predictable, small-scale and unheroic, deeply unglamorous compared to the alternatives. And kind of boring. Until it’s gone.

A Sensible Atheist Speaks

David Friedman (Ideas) writes:

Part of my skepticism with regard to the efforts of my fellow atheists [e.g., Richard Dawkins and Sam Harris] to demonstrate how absurd the opposing position is comes from knowing a fair number of intelligent, reasonable, thoughtful people who believe in God–including one I am married to. Part comes from weaknesses I can perceive in the foundations for my own view of the world. At some point, I think, each of us is using the superb pattern recognition software that evolution has equipped us with to see a coherent pattern in the world around us–and since the problem is a harder one than the software was designed to deal with, it isn’t that surprising that we sometimes get different answers.

UPDATE (01/20/08): Friedman ends a follow-up post with this thought:

My own conclusion, as before, is that I do not think God exists. But neither do I think that conclusion so obviously true that all reasonable people ought to accept it.


Related posts:
“The Universe…Four Possibilities” (07 Jan 2007)
“The Greatest Mystery” (24 Dec 2007)
“Religion and the Inculcation of Morality,” which links to many other related posts (12 Nov 2007)

Whither the Stock Market? (II)

UPDATED (03/12/08)

On November 14, 2007, I wrote:

Is it possible that the current bull market reached a temporary peak in May of this year, and is now descending toward a secondary bottom that it will not reach for a few years?

This was my tentative answer, then:

A reversal that lasts a year or two seems entirely possible to me.

My less tentative answer, now, is that the stock market (as measured by the Dow Jones Wilshire 5000 Composite Index) has crossed into “bear country.” That is, it has met the two conditions which indicate a “correction” or bear market that will last for months or years:

  • the index has dropped below its 250-trading-day average, and
  • the 250-day average is moving downward (if imperceptibly).
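The two conditions above can be sketched in code. This is a minimal, illustrative Python version, not anything the post itself provides: the function names and the toy price series are my own, and a real check would of course use actual DWC closing values rather than made-up numbers.

```python
def sma(values, window=250):
    """Trailing simple moving average; None until the window fills."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)  # not enough history yet
        else:
            out.append(sum(values[i + 1 - window:i + 1]) / window)
    return out

def bear_signal(closes, window=250):
    """True when both bear-market conditions hold:
    the latest close sits below the moving average,
    and the moving average itself is falling."""
    avg = sma(closes, window)
    if len(avg) < 2 or avg[-1] is None or avg[-2] is None:
        return False  # too little data to judge
    below_average = closes[-1] < avg[-1]
    average_falling = avg[-1] < avg[-2]
    return below_average and average_falling
```

On a steadily declining series, `bear_signal` fires; on a rising one, it stays quiet, matching the rule that both conditions must hold at once.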

To see that this is so, go to BigCharts.

1. At the top of the page, in the box for symbol or keyword, type “DWC” and click on the “advanced chart” button.

2. A list of “companies” will appear. Select “Dow Jones Wilshire 5000 Composite Index” by clicking on that item’s icon, which is labeled “A.”

3. Then, make the following entries or selections in the panel on the left side of the screen:

Time Frame
Time — select “1 year”
Frequency — select “daily”

Moving averages — select “SMA” and type “250” in the box to the right of that

Chart style
Price display — select “logarithmic”
Chart size — select “medium”

At the bottom, click on “save chart settings.” Then, return to the top of the panel and click “draw chart.” Change the length of time to “1 month,” “2 months,” “3 months,” and “6 months,” then redraw the chart each time.

What you will see in each chart (as of today) is a dip in the 250-trading-day average. More obviously, you will see that the value of the index has moved below the 250-day average. It is therefore likely that the market has entered a downward phase that could last for months or years.

To see why, change your “Time” selection to “all data” and redraw the chart. The resulting graphic shows 25 years of the index and its 250-day average for the last 24 years. You can see that a market downturn of several months’ or years’ duration has ensued whenever the index has dropped below its 250-day average and the 250-day average has turned down.

On the other side of the coin, how can you know — for sure — when a downturn has ended and the market is in recovery? Answer: The end of a downturn is confirmed when the index rises through the 250-day average while the 250-day average itself is rising.
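The recovery rule is the mirror image of the bear-market test, and it can be sketched the same way. Again, this is my own illustrative Python, with hypothetical function names and toy data, not anything from the post:

```python
def moving_average(values, window=250):
    """Trailing simple moving average of the last `window` values."""
    return sum(values[-window:]) / window

def recovery_confirmed(closes, window=250):
    """True when the latest close is above the moving average
    and the moving average is rising -- the mirror image of the
    two bear-market conditions."""
    if len(closes) < window + 1:
        return False  # need window + 1 closes to compare two averages
    avg_today = moving_average(closes, window)
    avg_yesterday = moving_average(closes[:-1], window)
    return closes[-1] > avg_today and avg_today > avg_yesterday
```

A steadily rising series confirms recovery; a falling one does not, since neither condition is met.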

Regardless of the current state of the market, please remember this:

Don’t bail out now, unless you absolutely, positively need the money. I could be wrong about the reversal. In any event, stocks are for the long run.

P.S. By my reckoning, every downturn in the 250-day average since 1970 has signaled a recession.