Where Your Tax Dollars Go — Installment 9,999,999,999

From the web site of a taxpayer-funded think tank:

Throughout The X Corporation, employees, in small groups and through individual efforts, are making a difference in the quality of their communities and in the lives of their friends, neighbors, and the many who are in need. Through innumerable acts of kindness and concern and commitments of time and boundless energy, a culture has grown within X that shows that extraordinary acts by ordinary people can make a difference in the world.

Giving back to the community is a priority at The X Corporation….

How about focusing on the work that taxpayers are paying you for? That would be the best way to “give back to the community.”

P.S. This is small potatoes. The real ripoff is in the absurdly high salaries paid by such tax-exempt organizations.

P.P.S. Oh, yes, and what would we do without diversity? Here’s what The X Corporation has to say about its diversity program, which is run by a full-time Diversity Coordinator:

Differing points of view, different frames of reference, and a broad range of life experiences bring an energy to the workplace that has helped X become a world leader in producing effective, insightful analysis. [Actually, if this organization is a “world leader” in anything, it’s b.s. Good analysis requires educated intelligence and perspicacity.]

X maintains an all-inclusive diversity program. Supported by an advisory committee, X’s diversity efforts are designed to ensure that we attract, develop, respect, motivate, and retain highly skilled people without regard to race, ethnicity, gender, personal background, religious beliefs, sexual orientation, education, or position within X. [An all-inclusive diversity program, by definition, ensures the hiring of people — some of them not highly skilled — precisely because of their race, etc., etc.]

See all the great things you get for your tax dollars?

It Is the Economy — And a Few Other Things

Predicting presidential elections is fairly easy. To begin with, the incumbent or the nominee of the incumbent party wins most of the time — 60 percent of the time from 1920 through 2000. (I’ve picked 1920 as the starting point for this analysis because it marks the birth of the modern, post-Teddy Roosevelt, more-or-less laissez-faire Republican Party.)

Why do incumbents or the incumbent party’s nominee tend to win? Because the private economy grows most of the time. (My measure of growth in the private economy is the change in real GDP per capita, net of government spending.) Naive voters (that’s most of them) tend to blame or credit the current president for the state of the economy. Such blame or credit is seldom merited.

With three exceptions, which I’ll come to, the incumbent or the incumbent party’s nominee lost the election when the private economy was shrinking in the election year or was about where it had been four years earlier. Such was the case in 1920 (Cox lost to Harding), 1932 (Hoover lost to FDR), 1952 (Stevenson lost to Eisenhower), 1960 (Nixon lost to JFK), 1976 (Ford lost to Carter), 1980 (Carter lost to Reagan), and 1992 (Bush I lost to Clinton). The more attractive personalities of Eisenhower (vs. Stevenson), JFK (vs. Nixon), and Reagan (vs. Carter) may also help to explain their victories. Clinton’s personality may have helped him beat Bush I, but he was helped greatly by Perot, who probably siphoned far more votes from Bush I than he did from Clinton.

The only three election outcomes that violate the rule of growth are those of 1944, 1968, and 2000. The private economy was growing in 1944, but it had shrunk since 1940. However, it had shrunk because of the massive diversion of resources to a popular war. People understood that and stuck with Roosevelt, albeit by a smaller margin than in 1932, 1936, and 1940.

In 1968 and 2000, by contrast, incumbent vice presidents (Humphrey and Gore) failed to win election in their own right, despite strong economic growth.

The 1968 election was, as much as anything else, a referendum on the war in Vietnam and the cultural war in America. Humphrey was the target of the electorate’s rage for the failure of the war in Vietnam and for the rise of the counter-culture. Nixon’s victory over Humphrey would have been even larger had Wallace not captured a large part of the counter-counter-culture vote.

The 2000 election was Gore’s to lose, and he lost it, but barely. Yes, the U.S. Supreme Court made the right decision, albeit for the wrong reason. The Florida Supreme Court tried, selectively, to give some voters a second chance after they had failed to cast proper ballots, in contravention of the rules adopted by the Florida legislature and therefore in contravention of the U.S. Constitution.

The strong desire of conservative voters to punish Gore for Clinton’s sins was probably offset by the strong desire of liberal voters to vindicate Clinton by electing Gore. Gore may have been hurt by the contrast between his own rather scolding, school-marmish personality and Clinton’s sunny personality, but Bush certainly wasn’t (and isn’t) an Eisenhower, JFK, or Reagan. Gore simply ran a bad campaign; he zigged left and zagged right, always transparently pandering to one interest group or another. He ultimately lost the election because many people on the far left suspected his bona fides and cast their votes for Nader. And there went Florida. Gore was the Titanic of presidential candidates — seemingly robust but fatally vulnerable.

What will happen in 2004? Bush stands a good chance if the economy continues to rebound strongly until election day. But he will probably lose if the economy stutters or if voters see Iraq as a second Vietnam. A major terrorist attack in the United States could cut either way. It will be close.

Fair’s Fair

Doctor Proposes Not Treating Some Lawyers

AP reports that Dr. J. Chris Hawk, a South Carolina surgeon, asked “the American Medical Association to endorse refusing care to attorneys involved in medical malpractice.” According to AP, Dr. Hawk “said he made the proposal to draw attention to rising medical malpractice costs. The resolution asks that the AMA tell doctors that — except in emergencies — it is not unethical to refuse care to plaintiffs’ attorneys and their spouses.”

Of course, there were loud protests, and Dr. Hawk withdrew his proposal. But I like the idea.

Next target: politicians who meddle with health care.

Respect for Presidents

In the previous post I referred to Ronald Reagan as President Reagan and Mr. Reagan, whereas I called Bill Clinton plain old Clinton. I’m not of the school that accords every president or ex-president the same degree of respect when it comes to honorifics. Presidents must earn my respect by their actions. I’m their employer, after all.

Here’s how I think of the men who have served as president since the end of World War II:

Truman — sometimes Mr. Truman and sometimes Harry; it depends on which Truman I’m thinking of at the moment, the no-nonsense Truman or the partisan Democrat.

Eisenhower — Ike, with respect for his soldiering days, or President Eisenhower.

Kennedy — Jack or JFK is the best I can do for the playboy of the West Wing.

Johnson — LBJ or something unprintable for the biggest crook on this list, after Nixon.

Nixon — Nixon, when I’m being kind; otherwise, only Tricky Dick fits the man.

Ford — Jerry, because it rhymes with ordinary.

Carter — Jimmy, because it’s the undignified label he gave himself; better than he deserves. (A “great ex-president” my foot.)

Reagan — Mr. Reagan or President Reagan; Gipper is too familiar for a man who was pleasant but dignified.

Bush I — Bush I is the best I can do for a career bureaucrat who had to have his turn in the White House.

Clinton — Clinton, when I’m being kind; otherwise, Slick Willie or something unprintable for this walking amalgam of JFK, LBJ, and Nixon.

Bush II — Bush II, when he’s being a compassionate conservative; otherwise, President Bush.

Presidential Values

My local paper, apparently unable to bear any longer the outpouring of adulation for President Reagan, today ran a front-page article on his failings as a father. The headline, “Reagan championed family values, but had complicated relationships with his own children,” all but accuses the late president of moral hypocrisy. Now, how are Mr. Reagan’s supposed parental shortcomings related to his accomplishments as a president? They’re not, as far as I can see. But the liberal press simply cannot stand by and let Mr. Reagan pass into history unsullied.

If Mr. Reagan’s personal life is irrelevant to his performance as president, why was there all that fuss about Bill Clinton’s extra-curricular activities with Monica Lewinsky? Democrats persist in saying that the impeachment of Clinton was “just about sex.” But it wasn’t.

Clinton was impeached for lying under oath before a federal grand jury and for obstructing justice in the Paula Jones case. Whatever Jones may have said publicly after she settled her suit against Clinton, she had nevertheless chosen freely to sue him. Her suit became a federal case, under the laws that Clinton had sworn implicitly to uphold in his capacity as president.

Oh, but Clinton wasn’t guilty because the Senate didn’t convict him on any of the articles of impeachment. Wrong! Clinton was guilty, but he wasn’t convicted because Democrats — to a man and woman — effectively refused to hear the evidence. They had made it clear from the beginning that the trial would be a farce. And it was. Some Republican senators, knowing that Clinton couldn’t be convicted, chose to vote “not guilty” out of political expediency. Clinton would have been convicted if Democrats had acted in good faith.

If you need more evidence of Clinton’s guilt than the articles of impeachment, consider this: In the aftermath of the impeachment trial, Clinton was disbarred in his home State of Arkansas and by the U.S. Supreme Court.

Now, tell me why the media must flaunt Mr. Reagan’s purported failings as a father.

A Few More Thoughts

The passing of Ronald Reagan reminds me of two enduring truths, which those who sneered at him never grasped. The first truth is that you can’t have peace with dignity unless you’re prepared for war. The second truth is that free markets — not government programs — offer the surest path out of poverty.

In Memoriam

Ronald W. Reagan: February 6, 1911 – June 5, 2004. He brought us peace through strength and prosperity through greater self-reliance. The greatest president of the 20th century now belongs to the ages.

Will the Libertarian Party’s Candidate Swing the Election to Kerry?

No way. The Libertarian Party has outdone itself this year. I predict that its current presidential nominee — one Michael Badnarik — will garner fewer popular votes than any Libertarian candidate has since 1980. That year marked the party’s high-water mark in presidential politics, when its nominee received 921,199 popular votes. It’s been pretty much downhill since then. The second-best showing of 485,798 popular votes in 1996 was followed by 382,892 popular votes in 2000. (Stats courtesy of the LP web site.)

An article posted yesterday on The Chattanoogan.com indicates the quality of Mr. Badnarik’s intellect:

The reason we can’t find a relationship between the Constitution and our current government is that there is none. [Oh really, none at all, not even the three branches of the federal government?] If I can win the Libertarian nomination, there’s no reason I can’t win this election. [Of course, it would take an unprecedented and undetected failure of most voting machines in the United States.] We have a unique opportunity to change the world. [What an original thought!]

The article goes on to say, “Badnarik urged the national audience to reject the ‘wasted vote’ argument…” Right. Well, if the LP candidate in 2000 had received as many wasted votes as the LP candidate in 1996, the election probably would have gone to Gore, without the need for a long recount in Florida. If Libertarian Party members don’t like Bush’s “compassionate conservatism,” they’d surely hate Gore’s “ultra-compassionate liberalism.”

Badnarik scattered a few more pearls of wisdom:

If you were in prison and faced a 50% chance of death by lethal injection, a 45% chance of the electric chair, and had a 5% chance of escape, would you vote for lethal injection because it was the most likely outcome, or would you try for escape? [What an absolutely compelling metaphor. Will it fit on a bumper sticker?] Voting Libertarian is our only chance for political survival. Choosing the lesser of two evils is still choosing evil. [It may be, but if you don’t choose the lesser evil, you get the greater one.]

My heart may be libertarian but my mind is Republican.

A Bigger Beast

Spending by state and local governments in the United States is five times as large as the federal government’s nondefense spending (about which see my previous post). Real (constant-dollar) spending by state and local governments increased by a multiple of 10 from 1945 to 2003. The population of the United States merely doubled in that same period. Thus the average American’s real tax bill for municipal services is five times larger today than it was in 1945.
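The arithmetic behind that conclusion is simple enough to check. This is a minimal sketch using the round multiples quoted above (the variable names are mine, for illustration):

```python
# Checking the per-capita arithmetic above, using the post's round figures.
spending_multiple = 10     # real state-and-local spending, 2003 vs. 1945
population_multiple = 2    # U.S. population, 2003 vs. 1945

# Per-capita real spending grows by the ratio of the two multiples.
per_capita_multiple = spending_multiple / population_multiple
print(per_capita_multiple)  # 5.0
```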

It’s evident that not enough of the loot has been spent on courts, policing, emergency services, and roads. No, our modern, “relevant” municipal governments have seen fit to bless us with such things as free bike trails for yuppies, free concerts that mainly attract people who can afford to pay for their own entertainment, all kinds of health services, housing subsidies, support for the arts(?), public access channels on cable TV, grandiose edifices in which municipal governments hatch and oversee their grandiose schemes, and much, much, more.

Then there are public schools…

UPDATE: The good news about state and local spending is that its real rate of growth has dropped since 2000. The bad news is that the slowdown coincided with a recession and period of slow economic recovery. The good news is that municipal spending is a beast with thousands of necks, and each of them can be throttled at the state and local level, given the will to do so.

Starving the Beast

There’s an interesting post by Tyler Cowen of The Volokh Conspiracy as to whether “depriving the government of tax revenue actually limits government spending.” The links in Cowen’s post lead to other VC posts on the same subject (here, here, and here) and to a paper by Bill Niskanen and Peter Van Doren of the Cato Institute (where I once roosted for a spell).

Here’s the “starve the beast” hypothesis, according to Niskanen and Van Doren:

For nearly three decades, many conservatives and libertarians have argued that reducing federal tax rates, in addition to increasing long-term economic growth, would reduce the growth of federal spending by “starving the beast.” This position has recently been endorsed, for example, by Nobel laureates Milton Friedman and Gary Becker in separate Wall Street Journal columns in 2003.

It seems to me that the notion of starving the beast is really an outgrowth of an older, simpler notion that might well have been called “strangle the beast.”

The notion was (and still is, in some quarters) that the intrusive civilian agencies of the federal government, which have grown rampantly since the 1930s, ought to be slashed, if not abolished. There’s no need for fancy tricks like cutting taxes first, just grab the beast by the budget and choke it.

There’s more than money at stake, of course — there’s liberty and economic growth. The deregulation movement, which finally gained some traction during Carter’s administration, reflects the long-held view that many (most?) civilian agencies have a powerfully debilitating influence by virtue of their regulatory powers and ingrained anti-business attitudes. But I’ll focus on the money that feeds the beast.

Niskanen and Van Doren’s figure of merit is spending as a share of GDP. But it’s the absolute, real size of the beast’s budget that matters. Bigger is bigger — and bigger agencies can cause more mischief than smaller ones. So, my figure of merit is real growth in nondefense spending.

What about defense spending, which Niskanen and Van Doren lump with nondefense spending in their analysis? Real nondefense spending has risen almost without interruption since 1932, with the only significant exception coming in 1940-45, when World War II cured the Depression and drastically changed our spending priorities. Real defense spending, on the other hand, has risen and fallen several times since 1932, in response to exogenous factors, namely, the need to fight hot wars and win a cold one. Niskanen and Van Doren glibly dismiss the essentially exogenous nature of defense spending by saying

that the prospect for a major war has been substantially higher under a unified government. American participation in every war in which the ground combat lasted more than a few days — from the war of 1812 to the current war in Iraq — was initiated by a unified government. One general reason is that each party in a divided government has the opportunity to block the most divisive measures proposed by the other party.

First, defense outlays increased markedly through most of Reagan’s presidency, even though a major war was never imminent. The buildup served a strategy that led to the eventual downfall of the USSR. Reagan, by the way, lived with divided government throughout his presidency. Second, wars are usually (not always, but usually) broadly popular when they begin. Can you imagine a Republican Congress trying to block a declaration of war after the Japanese had bombed Pearl Harbor? Can you imagine a Democrat Congress trying to block Bush II’s foray into Afghanistan after 9/11? For that matter, can you imagine a Democrat-controlled Congress blocking Bush I’s Gulf War Resolution? Congress was then in Democratic hands, and it nevertheless authorized the Gulf War. Niskanen and Van Doren seem to dismiss this counter-example because the ground war lasted only 100 hours. But we fielded a massive force for the Gulf War (it was no Grenada), and we certainly didn’t expect the ground war to end so quickly.

As I was saying, domestic spending is the beast to be strangled. (I’m putting aside here the “sacred beasts” that are financed by transfer payments: Social Security, Medicare, etc.) How has the domestic beast fared over the past 30-odd years? Quite well, thank you.

There is a very strong — almost perfect — relationship between real nondefense spending and the unemployment rate for the years 1969 through 2001, that is, from the Nixon-Ford administration through the years of Carter, Reagan, Bush I, and Clinton. Using a linear regression with five pairs of observations, one pair for each administration, I find that the growth ratio of real nondefense spending over a presidency is a linear function of the growth ratio of the unemployment rate. Specifically:

S = 1.0315 + 0.11286U

where S = real nondefense spending at end of a presidency/real nondefense spending at beginning of a presidency

U = unemployment rate at end of a presidency/unemployment rate at beginning of a presidency.

The adjusted R-squared for the regression is .997. The t-stats are 228.98 for the constant term and 39.75 for U.
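For readers who want to apply the fitted relationship, here is a minimal sketch. The coefficients are the ones reported above; the input value is hypothetical, since the underlying unemployment ratios for each administration aren’t reproduced in this post:

```python
# The fitted relationship reported above:
#   S = real nondefense spending at end of a presidency / at its beginning
#   U = unemployment rate at end of a presidency / at its beginning

def predicted_spending_ratio(u_ratio: float) -> float:
    """Predicted spending ratio S, per S = 1.0315 + 0.11286 * U."""
    return 1.0315 + 0.11286 * u_ratio

# Hypothetical example: if the unemployment rate exactly doubled over a
# presidency (U = 2.0), the formula predicts that real nondefense spending
# grew by roughly 26 percent.
s = predicted_spending_ratio(2.0)
print(round((s - 1.0) * 100, 1))  # 25.7
```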

In words, the work of the New Deal and Fair Deal had been capped by the enactment of the Great Society in the Kennedy-Johnson era. The war over domestic spending was finished, and the big spenders had won. Real nondefense spending continued to grow, but more systematically than it had from 1933 to 1969. From 1969 through 2001, each administration (abetted or led by Congress, of course) increased real nondefense spending according to an implicit formula that reflects the outcome of political-bureaucratic bargaining. It enabled the beast to grow, but at a rate that wouldn’t evoke images of a new New Deal or Great Society.

Divided government certainly hampered the ability of Republican administrations (Nixon-Ford, Reagan, Bush I) to strangle the beast, had they wanted to. But it’s not clear that they wanted to very badly. Nixon was, above all, a pragmatist. Moreover, he was preoccupied by foreign affairs (including the extrication of the U.S. from Vietnam), and then by Watergate. Ford was only a caretaker president, and too “nice” into the bargain. Reagan talked a good game, but he had to swallow increases in nondefense spending as the price of his defense buildup. Bush I simply lacked the will and the power to strangle the beast.

Bureaucratic politics also enters the picture. It’s hard to strangle a domestic agency once it has been established. Most domestic agencies have vocal and influential constituencies, in Congress and amongst the populace. Then there are the presidential appointees who run the bureaucracies. Even Republican appointees usually come to feel “ownership” of the bureaucracies they’re tapped to lead.

What happened before 1969?

The beast — a creature of the New Deal — grew prodigiously through 1940, when preparations for war, and war itself, brought an end to the Great Depression. Real nondefense spending grew by a factor of 3.6 during 1933-40. If the relationship for 1969-2001 had been in effect then, real nondefense spending would have increased by only 10 percent.

Truman and the Democrats in control of Congress were still under the spell of their Depression-inspired belief in the efficacy of big government and counter-cyclical fiscal policy. The post-war recession helped their cause, because most Americans feared a return of the Great Depression, which was still a vivid memory. Real nondefense spending increased 2.8 times during the Truman years. If the relationship for 1969-2001 had been in effect, real nondefense spending would have increased by only 20 percent.

The excesses of the Truman years caused a backlash against “big government” that the popular Eisenhower was able to exploit, to a degree, in spite of divided government. Even though the unemployment rate more than doubled during Ike’s presidency, real domestic spending went up by only 9 percent. That increase would have been 28 percent if the relationship for 1969-2001 had been in effect. But even Ike couldn’t resist temptation. After four years of real cuts in nondefense spending, he gave us the interstate highway program: another bureaucracy — and one with a nationwide constituency.

The last burst of the New Deal came in the emotional aftermath of Kennedy’s assassination and Lyndon Johnson’s subsequent landslide victory. Real nondefense spending in the Kennedy-Johnson years rose by 56 percent, even though the unemployment rate dropped by 48 percent during those years. The 56 percent increase in real spending would have been only 8 percent if the 1969-2001 relationship had applied.

As for Bush II, through the end of 2003 he was doing a bit better than average, by the standards of 1969-2001 — but not significantly better. He now seems to have become part of the problem instead of being the solution. In any event, the presence of the federal government has become so pervasive, and so important to so many constituencies, that any real effort to strangle the beast would evoke loud cries of “meanie, meanie” — cries that a self-styled “compassionate conservative” couldn’t endure.

Events since 1969 merely illustrate the fact that the nation and its politicians have moved a long way toward symbiosis with big government. The beast that frightened conservatives in the 1930s, 1940s, and 1950s has become a household pet, albeit one with sharp teeth. Hell, we’ve even been trained to increase his rations every year.

Tax cuts won’t starve the beast — Friedman, Becker, and other eminent economists to the contrary. Tax increases, on the other hand, would only stimulate the beast’s appetite.

The lesson of history, in this case, is that only a major war — on the scale of World War II — might cause us to cut the beast’s rations. And who wants that?

UPDATE: If Bush II wins a second term, might he become the Ike (or even Coolidge) of this decade? As Mike Rappaport of The Right Coast says,

I’ll believe it when I see it, but this is at least a good sign:

The White House put government agencies on notice this month that if President Bush is reelected, his budget for 2006 may include spending cuts for virtually all agencies in charge of domestic programs, including education, homeland security and others that the president backed in this campaign year.

If Bush II wins — and if Republicans retain control of Congress — it’s possible. But don’t count on it.

The timing of this announcement may be intended to whip up enthusiasm for Bush’s re-election among conservative Republicans, who have been wondering what sets Bush apart from a free-spending Democrat, aside from the war in Iraq. And some of those same conservative Republicans, apparently suffering from an overload of media scandal-mongering and defeatism, have begun to wonder about the war, as well.

Toward Domestic Tranquility

Let’s agree that 10 years from now the United States will be divided geographically into two separate nations — U.S. Red and U.S. Blue. The boundaries will be established by the electoral votes cast in the 2004 election (States voting for Bush will be U.S. Red and States voting for Kerry will be U.S. Blue).

If you want to live in U.S. Red (or Blue) and you currently live in a Blue (Red) State, you’ll have 10 years in which to migrate. All the transition details (new constitutions, terms of trade, defense treaties, etc.) can be worked out. Let’s go for it.

By Their Supreme Court Appointments Ye Shall Know Them

Nixon: Rehnquist (later appointed Chief Justice by Reagan) — Belongs in the second tier, all by himself. His instincts are statist rather than libertarian, but he tries to adhere to the original meaning of the Constitution.

Ford: Stevens — What do you expect from Ford? A Republican in name only who interprets the Constitution the way a blind umpire interprets the strike zone.

Carter: He made no appointments, luckily for the nation.

Reagan: O’Connor, Scalia, Kennedy — O’Connor and Kennedy make up the third tier; they vacillate between libertarianism and statism. Scalia’s originalism usually overcomes his instinctive statism; he’s in the top tier with Thomas.

Bush I: Souter, Thomas — Typically conflicted Bush I appointments; from another John Paul Stevens to the best appointment since the 1920s.

Clinton: Ginsburg, Breyer — Clinton failed to nationalize health care, but stuck us with these two crypto-socialists.

Fear of the Free Market — Part III

If it’s unnecessary to regulate health care — as I’ve argued in Part I (April 8) and Part II (April 11) of this series — can we take the next step and denationalize it? Can we forgo other forms of nationalization (particularly Social Security) and the regulation of other industries (e.g., telecommunications, banking, and securities)?

The prospect of deregulating health care; giving up Medicare, Medicaid, or Social Security; and leaving consumers generally “at the mercy of the market” may seem unthinkable. So let us think about it.

Regulation and nationalization (an extreme form of regulation) restrict competition and therefore reduce the supply and quality of regulated products and services. Many have argued, rather persuasively, that individuals would be far better off with the privatization of Social Security. (See, for example, my posts of March 5.) Moreover, there is ample evidence that proper deregulation leads to higher quality and lower prices. Phone service, for example, is not only cheaper (in real terms) but indisputably better, given the range of options available to consumers. Air travel, to take another example, is also cheaper (in real terms) and certainly better for the great majority of travelers who prefer more legroom to the so-called meals that airlines used to serve in coach class.

Why, despite sound arguments and concrete evidence, do most Americans tend to resist denationalization and deregulation? Their resistance arises from two things: risk aversion (both personal and paternalistic) and economic illiteracy.

Risk aversion is revealed in questions like these: Will I choose the right doctor? Will he choose the right medicine? Will that over-the-counter drug poison me? Will I save enough for retirement? What about my parents, my children, my friends, and the elderly poor? The answers are:

• Licensing of doctors doesn’t ensure your doctor’s competence or help you choose the right doctor.

• The FDA’s approval of drugs doesn’t ensure that your doctor will choose the right drug for you or a drug that’s safe for you.

• That over-the-counter drug is unlikely to poison you, especially if the one you choose has been on the market for at least a few years.

• Your parents, children, and all the rest (even you) would have plenty of money for retirement living (including private medical insurance) if the government didn’t collect taxes for Social Security, Medicare, and other welfare programs. The elderly poor would be taken care of by greater charitable donations (afforded by lower taxes) and relatively small, strictly means-tested, welfare programs.

I could go on and on about other components of our over-regulated economy, but I think you get the idea. There is little risk of coming to harm in a free-market economy, where individuals learn to look out for themselves, especially if they are backed by strict enforcement of tough laws against deception and fraud. Conversely, the rewards of a free-market economy are great: more competition, higher quality, lower prices, greater output, higher employment, and higher incomes (from which to fund minimal welfare programs for those who are truly dependent on society because no one else can meet their needs).

Economic illiteracy blinds people to the benefits that flow from a truly free-market economy. The illiterates (that’s most of us) therefore become easy prey for the real beneficiaries of nationalization and regulation, what Bruce Yandle aptly calls “Bootleggers and Baptists”:

• The “bootleggers” are market incumbents (as represented by the American Medical Association and the American Bar Association, for example) who benefit from the suppression of competition (as bootleggers did during Prohibition).

• The “Baptists” are self-appointed guardians of our health and well-being (the sum of all our risk-averse fears, you might say).

Economics can be as abstruse as the physics of special relativity. But it rests on two things that are easily remembered:

• Incentives matter.

• There’s no such thing as a free lunch.

Nationalization and regulation suppress incentives and therefore weaken the economy. The benefits of nationalization and regulation come at a high cost, but we tend to focus on our own benefits (the “free lunch”) and forget the cost (the taxes we pay for benefits that go to others).

Drunk Driver Appointed Traffic Court Judge

If there was any lingering doubt about the corruptness of the 9/11 Commission, Attorney General John Ashcroft dispelled it yesterday in his testimony before the Commission.

Ashcroft disclosed that Commissioner Jamie Gorelick was the author of the policy that erected what Ashcroft properly described as “[t]he single greatest structural cause for September 11…the wall that segregated criminal investigators and intelligence agents.” Ashcroft continued, “Government erected this wall. Government buttressed this wall. And before September 11, government was blinded by this wall.” Specifically,

In the days before September 11, the wall…impeded the investigation into Zacarias Moussaoui, Khalid al-Midhar and Nawaf al-Hazmi. After the FBI arrested Moussaoui, agents became suspicious of his interest in commercial aircraft and sought approval for a criminal warrant to search his computer. The warrant was rejected because FBI officials feared breaching the wall.

When the CIA finally told the FBI that al-Midhar and al-Hazmi were in the country in late August, agents in New York searched for the suspects. But because of the wall, FBI Headquarters refused to allow criminal investigators who knew the most about the most recent al Qaeda attack to join the hunt for the suspected terrorists.

At that time, a frustrated FBI investigator wrote Headquarters, quote, “Whatever has happened to this — someday someone will die — and wall or not — the public will not understand why we were not more effective and throwing every resource we had at certain ‘problems’. Let’s hope the National Security Law Unit will stand behind their decision then, especially since the biggest threat to us, UBL, is getting the most protection.”

Of course Gorelick didn’t foresee the particular, horrific terrorist acts we call 9/11, just as a drunk driver doesn’t foresee the particular, horrific accident caused by his drunkenness.

If Gorelick’s policy hadn’t become known immediately after 9/11 — and hadn’t been rectified already — Ashcroft’s testimony would have contained the only true “bombshell” to emerge thus far from the 9/11 hearings.

Fear of the Free Market — Part II

In Part I of this series (second post under April 8, 2004), I pointed out that

[i]t is easier to list those markets in which the government doesn’t intervene (namely, “black markets”) than it is to list those markets in which the government does intervene. There simply isn’t a lawful business activity that isn’t affected by government regulation….[G]overnment intervention in the market for any product or service tends to reduce the supply of that product or service.

Health care, being something almost everyone needs (like electricity and phone service), has been regulated to the point of being nationalized (see Part I). Yet it is unclear that the regulation of health care does anything but restrict our access to doctors and drugs. Licensing exams have no meaningful effect on our ability to choose competent doctors (see Part I).

What about FDA approval of drugs? The FDA doesn’t test drugs; it prescribes testing procedures for drugs. The responsibility for testing falls to the maker of the drug. According to statistics published on the FDA web site, the FDA ultimately approves about 20% of applications for new drugs. The three phases of the FDA’s prescribed testing process last at least a year and sometimes six years or longer. What does the FDA hope to accomplish through its approval process? Here’s some of what the FDA’s Ken Flieger has to say:

Most of us understand that drugs intended to treat people have to be tested in people. These tests, called clinical trials, determine if a drug is safe and effective, at what doses it works best, and what side effects it causes–information that guides health professionals and, for nonprescription drugs, consumers in the proper use of medicines.

Clinical testing isn’t the only way to discover what effects drugs have on people. Unplanned but alert observation and careful scrutiny of experience can often suggest drug effects and lead to more formal study. But such observations are usually not reliable enough to serve as the basis for important, scientifically valid conclusions. Controlled clinical trials, in which results observed in patients getting the drug are compared to the results in similar patients receiving a different treatment, are the best way science has come up with to determine what a new drug really does. That’s why controlled clinical trials are the only legal basis for FDA to conclude that a new drug has shown “substantial evidence of effectiveness.”

It boils down to safety and effectiveness. But safety and effectiveness are also your doctor’s concern. Do you suppose that your doctor would prescribe a drug that its manufacturer hadn’t thoroughly tested for safety and effectiveness? Of course, your doctor might well flub his diagnosis (something that happens a lot, despite the medical licensing exam) and prescribe the wrong medication. Or your doctor might diagnose you correctly but prescribe a medication that produces an unpleasant side effect. In summary, the safety and effectiveness of the drugs your doctor prescribes depend mainly on your doctor’s competence.

Misadventure is more likely with non-prescription (over-the-counter) drugs. As the FDA acknowledges, “Most OTC drug products have been marketed for many years, prior to the laws that require proof of safety and effectiveness before marketing.” Very interesting. Like prescription drugs, OTC drugs used to be available without the FDA’s imprimatur. That is, individuals used to be trusted to buy and use OTC drugs wisely, but then the FDA got into the act. Why? According to the FDA:

Languishing in Congress for five years, the bill that would replace the 1906 [Food and Drugs Act] was ultimately enhanced and passed in the wake of a therapeutic disaster in 1937. A Tennessee drug company marketed a form of the new sulfa wonder drug that would appeal to pediatric patients, Elixir Sulfanilamide. However, the solvent in this untested product was a highly toxic chemical analogue of antifreeze; over 100 people died, many of whom were children. The public outcry not only reshaped the drug provisions of the new law to prevent such an event from happening again, it propelled the bill itself through Congress. This was neither the first nor the last time Congress presented a public health bill to a president only after a therapeutic disaster. FDR signed the Food, Drug, and Cosmetic Act on 25 June 1938.

The new law brought cosmetics and medical devices under control, and it required that drugs be labeled with adequate directions for safe use. Moreover, it mandated pre-market approval of all new drugs, such that a manufacturer would have to prove to FDA that a drug were safe before it could be sold. It irrefutably prohibited false therapeutic claims for drugs, although a separate law granted the Federal Trade Commission jurisdiction over drug advertising. The act also corrected abuses in food packaging and quality, and it mandated legally enforceable food standards. Tolerances for certain poisonous substances were addressed. The law formally authorized factory inspections, and it added injunctions to the enforcement tools at the agency’s disposal.

And on it went:

Enforcement of the new law came swiftly. Within two months of the passage of the act, the FDA began to identify drugs such as the sulfas that simply could not be labeled for safe use directly by the patient–they would require a prescription from a physician. The ensuing debate by the FDA, industry, and health practitioners over what constituted a prescription and an over-the-counter drug was resolved in the Durham-Humphrey Amendment of 1951. From the 1940s to the 1960s, the abuse of amphetamines and barbiturates required more regulatory effort by FDA than all other drug problems combined.

Notice that the focus is always on abuses and never on successes. Here’s what The Cato Institute’s Handbook for Congress has to say about the FDA:

As an agency, the FDA has a strong incentive to delay allowing products to reach the market. After all, if a product that helps millions of individuals causes adverse reactions or even death for a few, the FDA will be subject to adverse publicity with critics asking why more tests were not conducted. Certainly, it is desirable to make all pharmaceutical products as safe as possible. But every day that the FDA delays approving a product for market, many patients who might be helped suffer or die needlessly.

For example, Dr. Louis Lasagna, director of Tufts University’s Center for the Study of Drug Development, estimates that the seven-year delay in the approval of beta-blockers as heart medication cost the lives of as many as 119,000 Americans. During the three and a half years it took the FDA to approve the drug Interleukin-2, 25,000 Americans died of kidney cancer even though the drug had already been approved for use in nine other countries. Eugene Schoenfeld, a cancer survivor and president of the National Kidney Cancer Association, maintains that “IL-2 is one of the worst examples of FDA regulation known to man.”

In the past two decades patients’ groups have become more vocal in demanding timely access to new medication. AIDS sufferers led the way. After all, if an individual is expected to live for only two more years, three more years spent testing the efficacy of a prospective treatment does that person no good. The advent of the Internet has allowed individuals suffering from specific ailments and patient groups to use websites and chat rooms to exchange information and to give them an opportunity to take more control of their own treatment. They now can track the progress of possible treatments as they are tested for safety and efficacy and are quite conscious of how FDA-imposed delays can stand in the way of their good health and even their lives….

[I]n a free society individuals should be free to take care of their physical well-being as they see fit. The advent of the Internet gives individuals even more access to information about medical products and treatments. Individuals should be allowed to choose the treatments they think best. Such liberty does not open the door for fraud or abuse any more than does a free market in other products. In fact, informed consent by patients probably will become more sophisticated as the market for information about medical treatments becomes more free and open.

Government regulation of health-care products and services makes them harder to get and more expensive than the products and services that would be delivered in the absence of regulation. Would quality suffer in a free-market health-care system? It might in some cases, but competition among producers and providers would lead to an overall increase in quality, in response to consumers’ demands for competent medical practitioners and effective drugs.

If it’s unnecessary to regulate health care, can we take the next step and de-nationalize it? What about other industries and types of economic activity? Stay tuned for Part III of this series.

Fear of the Free Market — Part I

In So When Are We Going to Get That Free-Market Health Care Everyone’s Complaining About?, Trent McBride guesstimates that with the addition of the prescription drug benefit to Medicare “our health care system will be paid for by explicit or implicit public funds at a rate of 65-70%.” By “explicit or implicit public funds” he means direct payments (e.g., Medicare, Medicaid, and the VA) plus the sundry regulatory activities (e.g., FDA approval of new drugs) that are funded by taxes. McBride therefore characterizes the health-care system as “marginally nationalized.” He asks, “if we have a nationalized health-care system now, and that system is [considered] broken, is more nationalization the way to go?”

Sasha Volokh objects to McBride’s characterization of the health-care system as “nationalized” because what matters is not only “who pays but also…who controls.” Apparently, in Sasha Volokh’s view, Medicare doesn’t count as a form of nationalization because beneficiaries get to choose their doctors. In this regard, it’s important to recall the old variation on the Golden Rule: “Them what has the gold makes the rules.” I might get to choose my doctor from a government-approved list, but not all good doctors will be on that list, nor will all the treatments I might like to have. It would cost me more to go to doctors who aren’t on the list and to receive non-approved treatments, but I may not be able to afford either because my wealth has been depleted by many years of paying into Medicare. Bottom line: Medicare is most certainly a form of nationalization.

Government’s effective control of the health-care system is only a notorious example of government’s distortion of free-market mechanisms. It is easier to list those markets in which the government doesn’t intervene (namely, “black markets”) than it is to list those markets in which the government does intervene. There simply isn’t a lawful business activity that isn’t affected by government regulation.

If, for example, I wished to turn this blog into a business by selling advertising space on it, I would (or should) get a business license from the city, pay property tax on my computer (as a piece of business equipment), keep a set of business books for tax purposes, file a special income tax return (Schedule C, at a minimum), and pay additional Social Security taxes at the rate for self-employed persons. If business thrived and I hired someone to help me produce the blog (or handle the paperwork), that would compound my compliance problem and the cost of dealing with it.

Alternatively, I could ignore the law and run the risk of being caught and fined or even imprisoned. That’s a risk that I might take for the sake of a low-profile blog. It’s not a risk that I would take for the sake of making big bucks as an untrained, unlicensed M.D., though it is a risk that others (sometimes trained but unlicensed doctors) have been willing to take.

In summary, government intervention in the market for any product or service tends to reduce the supply of that product or service.

But, but, but…the proponents of regulation say…if government didn’t require doctors to pass licensing exams people wouldn’t know if they were being served by “good” or “bad” docs (not to mention lawyers, electricians, plumbers, and beauticians). Similarly, if the FDA didn’t approve drugs, people wouldn’t know if they were buying efficacious drugs or snake oil. And so on and so forth.

Are all medical school graduates equally competent? Are all medical school graduates who pass licensing exams equally competent? Is the doctor who barely passes the exam significantly better than the doctor who barely flunks it? The correct answer in every instance is “no.”

Do medical licensing exams weed out a large percentage of incompetent doctors? It’s not obvious that they do. Statistics for takers of the <a href="http://www.usmle.org/news/2002perf.htm">U.S. Medical Licensing Examination</a> in 2002 indicate that about 85% of first-time takers of the exam from allopathic (conventional) medical schools in the U.S. and Canada successfully complete all three steps of the exam. With re-takes, the percentage successfully completing all three steps is expected to be 97%. Osteopaths have a lower success rate — 60% for first-time takers — but they represent only 2% of the first-time takers from U.S. and Canadian medical schools.

The only real weeding-out takes place among graduates of medical schools outside the U.S. and Canada. First-time takers from those medical schools have only a 34% success rate. This weeding-out may reflect incompetence in English — even though applicants had to pass an English-language proficiency exam — as much as it does incompetence in medicine. These results suggest a simple strategy of avoiding doctors who weren’t trained in the U.S. or Canada — a strategy that many Americans follow instinctively.

As for graduates of medical schools in the U.S. and Canada, you’re on your own. When you go to a licensed doctor for the first time you will probably have no clue about that doctor’s competence. You can avoid the relatively few doctors who have been disciplined because most States now make such information available online. You can get recommendations from family, friends, and acquaintances, but those recommendations may tell you more about a doctor’s “bedside manner” than about his or her competence. And in some large cities you can find lists of the “best” doctors, by specialty, in local magazines, though you will have no idea of the criteria underlying such lists. In the end, you’ll simply hope that your doctor is competent, if not warm and fuzzy.

You’ll learn from experience whether your doctor seems competent, just as you’ll learn from experience whether your auto mechanic is competent (and honest) or merely a smiling face. So much for licensing as a boon to consumer choice.

Strategic Vision

Washington’s strategic vision was to break free of British rule. He persevered and the newborn United States survived to childhood.

Lincoln’s strategic vision was to preserve the Union and the ideals of the Declaration of Independence. He persevered and the Union passed tumultuously from adolescence to vigorous adulthood.

Roosevelt’s strategic vision was to cure the adult nation of its Depression. His apparent success — which was owed in fact to a horrific war — sapped the nation’s vigor by leading it into long-term dependence on government.

Reagan’s strategic vision was to cure the nation of its dependence on government, to restore it to vigorous adulthood. He failed because the nation’s addiction was too strong to be broken by a mere president, unaided by Congress.

Clinton’s strategic vision — pursued from his adolescence — was to become president. He persevered and the nation sank deeper into senile dependence on government.

Polls, Party Preferences, and Polarization

The Pew Research Center’s web site includes a page entitled The 2004 Political Landscape: Evenly Divided and Increasingly Polarized. The first graph on that page shows party identification in the U.S. between 1937 and 2003. The graph also (unintentionally) shows why polls are so unreliable:

1. The incumbent president’s popularity strongly affects what people tell pollsters about party affiliation. There has been a consistent swing toward the opposite party as the popularity of incumbent presidents has waned or plummeted. This phenomenon can be seen toward the end of every presidency from Truman’s through Clinton’s, and most notably toward the end of Nixon’s disgraced presidency.

2. The core of each party’s constituency has changed drastically during the past seven decades. Remember when New England was reliably Republican and the “Solid South” was a bastion of the Democrat Party? Remember when there was more than a handful of liberal Republicans and conservative Democrats in Congress? The realignment of party affiliations wasn’t sudden. It began in 1948, when many Southerners found it possible not to vote for a Democrat. It continued in 1952, when popular Ike ran as a Republican. It accelerated in 1960, when the Democrats nominated Catholic JFK, much to the consternation of many Southerners. It got another boost in 1968, when Democrats got on the wrong side of the culture war. And it continued well into the 1980s, thanks largely to Carter’s ineptness and the left’s continuing dominance within the Democrat Party. Polling results about party preferences were largely meaningless during the 40 years from 1948 to 1988 because personal as well as regional party alignments were in almost constant flux during that period.

Pollsters — and pundits — are nevertheless fond of drawing sweeping inferences from flawed statistics. An inference that has played prominently since the close presidential election of 2000 is that the nation has become “polarized.” That is, many States have become reliably “Red” (Republican) and “Blue” (Democrat), instead of vacillating from one election to the next. In this case, the pollsters and pundits are right, but they would have been just as right in the 1940s and 1950s, when Republicans reliably held New England and Democrats solidly held the South. So why is “polarization” now such a big issue?

It’s a big issue because the Democrat Party no longer enjoys the large (but illusory) plurality that it enjoyed from New Deal days until the 1980s. “Polarization” is bad only if it means that your favorite party is no longer the dominant party.

The underlying fear, of course, is that today’s “polarization” may become tomorrow’s Republican dominance. As another graph on the Pew page indicates, Democrats tend to be older than Republicans. That is, Democrats are dying at a faster rate than Republicans.

Rating Books, Movies, and Presidents

I have found that I rate books, movies, and music as follows:


• I have (or would gladly) read, see, or hear it more than once. (***)

• Once was enough, but I enjoyed it most of the time. (**)

• I made it to the end. (*)

• I tried but gave up on it. (0)

One person’s *** book or movie won’t be another person’s *** book or movie. By the same token, I’ve given up on many a book and movie that critics and friends have raved about. Among my *** books are Edith Wharton’s Ethan Frome, John Fowles’s The Magus, and Stephen King’s The Stand. Some of my *** movies are “The Philadelphia Story,” “Gunga Din,” and “My Man Godfrey.” Books and movies that I’ve given the goose egg include James Joyce’s Ulysses and Finnegans Wake, anything I’ve tried by Martha Grimes and Elizabeth George, and such film “classics” as “Z” and “Last Year at Marienbad.”

Although I’ve read a lot of books and seen a lot of movies that rate ** and *, my preferences in music tend to be binary. Almost anything written between 1700 and 1900 gets *** (the tedious compositions of Wagner, Mahler, and Bruckner being the most notable exceptions). I give a big fat 0 to almost anything written after 1900 by a so-called serious composer: the likes of Berg, Stravinsky, Shostakovich, Poulenc, Britten, Hovhaness, Glass, and their more recent offshoots. For music written after 1900, I turn to Gershwin, Lehar, Friml, Kern, bluegrass, jazz (written before 1940), and rock of the 1960s to early 1980s.

Now that I’ve lived through, and remember, 10 complete presidencies — from Truman’s through Clinton’s — here’s how I’d rate them on my book/movie/music scale:


Truman **

Eisenhower ***

Kennedy *

Johnson 0

Nixon 0

Ford *

Carter 0

Reagan ***

Bush I *

Clinton 0

You can try this at home.

A Few (Strange) Ideas About the Election

Suppose the presidential election were to end in a tie (269 electoral votes for Bush, the same for Kerry). The election would then go to the House of Representatives, where Bush would win the required majority of States. Would Kerry take the election to the Supreme Court, claiming that the House had “thwarted the will of the people”? Perhaps he could claim that the votes of the Southern States didn’t count because they had seceded from the Union.

An idea that seemed good two weeks ago is good no longer. Some thought that Bush should or would drop Cheney and put Condi Rice on the ticket. Now Condi is somewhat tarnished by her run-in with Dick (tells-two-tales) Clarke. So where should Bush turn for a “sexy” VP candidate? How about Jodie Foster, who’s rumored to be a Republican?

Will the election again come down to a “long count” in Florida? Why not? I’m confident that the voters of Florida can screw up any kind of ballot: paper, punch card, touch-screen, or whatever. Then the mind readers will again try to interpret “the will of the people.” Perhaps this time the Supreme Court will call a halt to the mind-reading for the right reason: Every voter has one opportunity to cast a ballot that reflects his will. That’s it. Try again next election.

By election day Bush will have slid so far to the left and Kerry so far to the right that they will differ only with respect to the war in Iraq. Bush will say it was necessary. Kerry will say that it was necessary but inadvisable without the support of the UN. Thus the election will be a referendum on the UN. That bodes well for Bush.