At the Movies: The Best and Worst Years

Further thoughts on the decline of the movie industry. (Earlier thoughts, replete with details, here.) Based on my ratings of films released since 1930, these are the best vintages:

1933 (56 percent rated 8 or higher on a 10-point scale)
1934 (63%)
1936 (55%)
1938 (75%)
1939 (59%)
1941 (65%)
1954 (57%)*
1974 (60%)**

And these are the worst (also see footnote ***):

1963 (16%)
1969 (15%)
1976 (12%)
1978 (6%)
1985 (15%)
1996 (16%)
2007 (8%)

Excellent films (rating of 8 or higher) as a percentage of films seen, by decade of release:

1930s – 52%
1940s – 36%
1950s – 32%
1960s – 31%
1970s – 28%
1980s – 27%
1990s – 22%
2000s – 22%

Some things have improved markedly over the years (e.g., the quality of automobiles and personal computers). Some things have not: government and entertainment, especially.

Movies are no longer as compelling and entertaining as they used to be. Why? For me, it’s film-makers’ growing reliance on profanity, obscenity, violence, unrealistic graphics, and “social realism” (i.e., depressing situations, anti-capitalist propaganda). To rent a recently released movie (even one that has garnered good reviews) is to play “Hollywood roulette.”
__________
* An aberration in what I call the “Abysmal Years”: 1943-1965.
** An aberration in what I call the “Vile Years”: 1966-present.
*** Tied at 17% are 1943, 1944, 1975, 1991, 1998, and 2005 — all among the Abysmal and Vile Years.

Optimality, Liberty, and the Golden Rule

I ended a recent post by saying that

the only rights that can be claimed universally are negative rights (the right not to be attacked, robbed, etc.). Positive rights (the right to welfare benefits, a job based on one’s color or gender, etc.) are not rights, properly understood, because they benefit some persons at the expense of others. Positive rights are not rights; they are privileges.

Liberty, in other words, can be understood as Pareto-optimality, in which a right should be recognized only when doing so makes “at least one individual better off without making any other individual worse off.”
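Stated formally (a minimal sketch of the quoted criterion; the symbols and utility notation are mine, added for illustration, and do not appear in the post): recognizing a proposed right R is justified, on Pareto grounds, only if

    \[
    \exists\, i:\; u_i(R) > u_i(Q)
    \qquad \text{and} \qquad
    \forall\, j:\; u_j(R) \ge u_j(Q),
    \]

where Q is the status quo and u_k denotes person k's well-being under each state of affairs.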

This is not an argument for preserving the status quo. It is, rather, an argument against having gone as far down the road to serfdom as we have gone in the United States. The regulatory-welfare state, to which we have evolved, is rife with privileges: the harming of some persons for the benefit of others. Those privileges have been bestowed in two essential ways: (a) the redistribution of income, and (b) the regulation of economic and social affairs to the economic and social benefit of narrow interests (“bootleggers and Baptists”).

Liberty — rightly understood as a Pareto-optimal endowment of rights — is possible only when the Golden Rule is, in fact, the rule. As I say here,

the Golden Rule… encapsulates a lesson learned over the eons of human coexistence. That lesson? If I desist from harming others, they (for the most part) will desist from harming me…. The exceptions usually are dealt with by codifying the myriad instances of the Golden Rule (e.g., do not steal, do not kill) and then enforcing those instances through communal action (i.e., justice and defense).

Why communal (state) action and not purely private, contractual arrangements for justice and defense, as anarcho-capitalists propose? Because there is, now, no alternative to state action. The state has been commandeered by Leftist ideals. It feeds parasites, coddles criminals, and verges on acquiescence to our enemies. The restoration of liberty (or something more like it) is, therefore, impossible unless and until

social and fiscal conservatives… recapture the levers of power and undo the damage that the state has done to liberty over the past century.

There will always be a state. The real issues are these: Who will control the state, and to what ends?

Related posts:
“But Wouldn’t Warlords Take Over?” (24 Jul 2005)
“Liberty As a Social Compact” (28 Feb 2006)
“The Source of Rights” (06 Sep 2006)
“The Golden Rule, for Libertarians” (02 Aug 2007)
“Anarchistic Balderdash” (17 Aug 2007)
“The Fear of Consequentialism” (26 Nov 2007)
“‘Family Values,’ Liberty, and the State” (07 Dec 2007)
“Rights and Liberty” (12 Dec 2007)

An Exercise in Futility

The Libertarian Party’s presidential candidate in 1980, Ed Clark, received about 1.1 percent of the popular votes cast in the presidential election. That election marked the first appearance of the LP candidate on the ballots of 50 States, up from 2 States in 1972 and 32 States in 1976. The LP candidate has been on the ballot of 50 States, or nearly that, in every election since 1980, excepting the election of 1984 (39 States).

The LP’s “breakthrough” in 1980 proved not to be a breakthrough at all. The following graph tells the story. The black line represents the percentage of popular votes received by the LP candidate in each election. The blue line represents the average percentage (0.36) for the elections from 1984 through 2004.

Sources:
http://en.wikipedia.org/wiki/Libertarian_Party_(United_States)
http://www.lp.org/organization/history.shtml
http://uselectionatlas.org/

Remember 1980, that year of great disaffection with Jimmy Carter and the boomlet for John Anderson, who wound up with 6.6 percent of the popular vote? The LP was still a novelty, and the LP ballot line was new to many voters. “A libertarian (whatever that is), why not?” Thus the wastage of almost 1 million votes on Ed Clark.

Voters haven’t been as generous to the LP since 1980. Not even in 1992, when Ross Perot capitalized on the disaffection with G.H.W. Bush and drew 19 percent of the popular vote, probably swinging the election to Bill (not fit for the Supreme Court) Clinton.

Why does the LP keep wasting its money on presidential candidates, potentially causing the defeat of a Republican, just as the votes cast by Floridians for Ralph Nader in 2000 cost the Democrats that year’s election? Stubborn pride? Too “pure” to play in the same sandbox as Republicans? Who knows?

I’ll vote for a Republican — any Republican, even a Bush-type — before wasting my vote on a Libertarian Party candidate.

P.S. (12/17/07): Ron Paul seems to understand. The erstwhile LP candidate for president (0.47 percent of the popular vote in 1988) has won and held his seat in the U.S. House by running as a Republican. Paul’s candidacy for the GOP nomination makes him visible to the public, and will do far more to inject libertarian ideas into the political mainstream than would another futile run on the Libertarian ticket. If Paul were to run on the LP ticket after losing the GOP nomination, he might do even better than Ed Clark did in 1980, but only because of his (Paul’s) exposure to the public via the GOP race.

P.P.S. (12/17/07): What’s my position on Ron Paul? I like his federalist, limited-government principles. I don’t like his extreme isolationism. And I don’t like his apparent willingness to accept the support of kooks, conspiracy theorists, and racists. UPDATE (12/20/07): On the third point, this doesn’t look good. One out of three isn’t a good average, in my book. I pass on Paul. UPDATE (12/27/07): The item linked in the previous update has since been updated. Paul probably is not playing footsie with racist whites. That said, I’m still against him because of his extreme isolationism. UPDATE (01/11/08): There are so many smoking guns about Ron Paul in this piece that it is impossible for me to believe that the man is, in any way, a libertarian. For evidence that he is just plain nuts, see this.

Related posts:
I Wish It Were Thus
My Advice to the LP
Great Minds Agree, More or Less
Good Advice for the Libertarian Party

An Immodest Journalistic Proposal

This one is touted by David Hazinski, an associate professor at the University of Georgia’s Grady School of Journalism:

Supporters of “citizen journalism” argue it provides independent, accurate, reliable information that the traditional media don’t provide. While it has its place, the reality is it really isn’t journalism at all, and it opens up information flow to the strong probability of fraud and abuse. The news industry should find some way to monitor and regulate this new trend (emphasis added).

The “news industry” (a.k.a. the mainstream media) isn’t already a hotbed of “fraud and abuse”? How about Rather-gate? How about anti-war propaganda that’s thinly disguised as news? How about the daily contributions to global-warming hysteria? How about the MSM’s pervasive anti-Republican, big-government slant? And on, and on.

As Hazinski observes, “without any real standards, anyone has a right to declare himself or herself a journalist.” And “anyone” does just that — every day — on ABC, CBS, CNN, NBC, MSNBC; in the pages of The New York Times, The Washington Post, Newsweek; etc., etc., etc.

Hazinski proposes this:

Journalism schools such as mine at the University of Georgia should create mini-courses to certify citizen journalists in proper ethics and procedures, much as volunteer teachers, paramedics and sheriff’s auxiliaries are trained and certified.

How can he say, with a straight face, that J-schools are fit to certify anyone’s “proper ethics”? By that standard, Osama bin Laden would be qualified to certify the borders of Israel.

Hazinski acknowledges the argument that “standards could infringe on freedom of the press and journalism shouldn’t be regulated. But,” he adds, “we have already seen the line between news and entertainment blur enough to destroy significant credibility.” Actually, the line between news and reality has been blurred — nay, obliterated — by the MSM.

Citizen journalism is precisely what’s needed to push the MSM in the direction of accuracy, honesty, and balance. “Professional journalists” like Hazinski don’t want that. They want to keep feeding us their Left-biased distortions and lies — without fear of contradiction.

UPDATE (12/17/07): There’s a related post at The Future of News.

A Superficially Sensible Proposal

Cato’s Tom Firey warms to the idea of raising taxes:

[R]eaders of this blog probably won’t like last weekend’s column [in The New York Times‘s “Economic View”], penned by Cornell economist Robert Frank. Frank argues that “realistic proposals for solving our budget problems must include higher revenue,” i.e., new taxes or tax increases. However, he says, those proposals are being blocked by “powerful anti-tax rhetoric [that] has made legislators at every level of government afraid to talk publicly about a need to raise taxes.”…

Frank has spent much of his academic career arguing for raising taxes on wealthier people so as to create greater income equality (some of his work can be found here, here, and here). It would thus be expected that a Cato analyst would bash Frank’s column like a piñata. But I believe there’s merit to what he writes….

Why have the tax cuts not slowed government growth? Because Uncle Sam is quite happy to borrow money. Frank points out that the national debt has increased $3 trillion since 2002, and it will likely rise an additional $5 trillion over the next decade. As NYU law professor Dan Shaviro notes in this 2004 Regulation cover story, that debt is future taxes….

This leads to the core problem of borrow-and-spend public finance: Because today’s taxpayers receive government services without paying the full cost, they (and their political leaders) are not forced to consider:

  • Is this service worth its cost?
  • Would we be better off if government spent its money differently?
  • Would we be better off if government did not tax that money away from us, but we instead spent it privately?

Instead, borrow-and-spend lets both the Big Government crowd and the Anti-Taxes crowd get what they want: the Big Government folks can keep expanding government and the Anti-Taxes folks pay lower taxes — for now.

That’s why there’s merit to Frank’s column — if we were to pay, today, the full cost of government, we’d give much more thought to the opportunity cost of government spending. I strongly suspect there’d be much less demand for government services and much stronger outcry against current spending and spending proposals….

So, Prof. Frank, I say bully for you! If we follow your proposal, I think we’ll move several steps closer to limited government.

Firey’s argument makes sense if you read it quickly and uncritically. But three things are wrong with it. First, more borrowing today doesn’t necessarily require a proportionate increase in taxes tomorrow. As I discuss here, the government’s debt can rise interminably in a growing economy. (See also this and this for my views about the notion that government borrowing “crowds out” private investment.)

Second, tax increases usually mean higher marginal tax rates. (“Soak the rich.”) But economic growth is financed and fueled by people at the high end of the income distribution, and by people who strive for the high end. Higher taxes = slower economic growth. It’s as simple as that.

Third, higher tax rates won’t change the political equation. Government spending comprises myriad specific programs, each with its own constituency (in and out of government). Voters and interest groups support politicians who promise (and deliver) specific programs that seem desirable (like the proverbial free lunch). Firey lists some of those programs:

Medicare Part D, the proposed farm bill, the latest round of energy subsidies, more and more corporate welfare, No Child Left Behind, and a whole new, giant federal agency

Would the supporters of Medicare Part D have backed off had they thought that taxes would rise in order to fund Part D? I very much doubt it. Despite the prospect of a tax increase, Part D would have yielded a net financial gain for its beneficiaries, psychic gains and political clout for the private interest groups that pushed it, a larger government bureaucracy (which is a plus, in Washington), and so on. In the perverse world of government, higher spending is an excuse for raising taxes. (Harry Hopkins, FDR’s close adviser, is said to have put it this way: “We shall tax and tax, and spend and spend, and elect and elect.”)

The real villain of the piece is the Supreme Court, for its failure to enforce the constitutional doctrine of limited and enumerated powers — a failure that, in large part, can be traced to the New Deal era. The Court has allowed the federal government to do things for which the federal government has no constitutional mandate. Moreover, the unleashed federal government has fostered (through mandates and grants) the transformation of State and local governments from being providers of basic services (e.g., schools, streets, police, and courts) to being providers of a panoply of “social services.”

In sum, the rise of big government cannot be traced to low taxes. It can be traced, instead, to the failure of the executive and legislative branches of the federal government to honor the Constitution, and the failure of the Supreme Court to enforce it.

Economists As Moral Relativists

Russell Roberts — an econ prof at George Mason University with whom I usually agree — says this about the use of performance-enhancing drugs by baseball players:

When everyone cheats, it’s not cheating any more.

First, not “everyone” cheated. Second, cheating is cheating.

One example does not prove that all economists are moral relativists. But, in more than forty years of associating with economists and reading their work, I have observed that most economists focus on efficiency to the exclusion of morality.

How’s that for a generalization?

An FDR Reader

Thanks to John Ray for bringing my attention to these items:

“How FDR Made the Depression Worse,” by Robert Higgs (Feb 1995)
“Tough Questions for Defenders of the New Deal,” by Jim Powell (06 Nov 2003)
“The Real Deal,” by Amity Shlaes (25 Jun 2007)

Related posts at Liberty Corner include:

“Getting it Perfect” (04 May 2004)
“The Economic Consequences of Liberty” and an addendum, “The Destruction of Income and Wealth by the State” (01 Jan 2005)
“Calling a Nazi a Nazi” (12 Mar 2006)
“Things to Come” (27 Jun 2007)
“FDR and Fascism” (30 Sep 2007)
“A Political Compass: Locating the United States” (13 Nov 2007)
“The Modern Presidency: A Tour of American History since 1900” (01 Dec 2007)

Our descent into statism didn’t begin with FDR. (His cousin Teddy got the ball rolling downhill.) But FDR compounded an economic crisis, then exploited it to put us firmly on the path to the nanny state. The rest, as they say, is history.

Thus we now have a “compassionate conservative” as president, and several “Republican” candidates for president who would have been comfortable as New Deal Democrats. Calvin Coolidge must be spinning in his grave at hypersonic speed.

Ron Paul Roundup

Guest post:

A perusal of NRO commentary puts the Ron Paul campaign in perspective. It’s acknowledged that he is reaching an audience that no one else is. The question is, what is that audience?

On May 28, Jim Geraghty observes that “while supporters of the ten non-Ron-Paul GOP candidates tend to like at least some other Republican candidates besides their favorite, Ron Paul supporters only like Ron Paul.” This kind of exclusivism is never a good thing. One senses that fans of Paul are so fixated on a few key points (opposition to the war and some far-reaching free-market views) that they can’t see the forest for the trees.

On October 21, Geraghty says “with some begrudging admiration” that “Ron Paul is, like Howard Dean in 2004, the only candidate who could spawn a movement that will last beyond his candidacy.” It sounds like a replay of Buchanan in 2000. Wouldn’t it be better if Paul and his people were willing to work with other Republicans and push them in the right direction on certain issues? When they insist on being divisive (a tactic that favors the left in the long run), I have to question their intentions.

On December 10, Jonah Goldberg refers to the militant optimism of Ron Paul supporters. They can’t accept the fact that he won’t win the presidency. It’s not a question of enthusiasm; it’s a detachment from reality. Paul fans think their candidate’s woes are the fault of a media conspiracy. This overlooks the fact that “Huckabee is much, much more popular than Ron Paul. And he got there with less money and, until recently, arguably less media exposure.”

Finally, there is Mona Charen’s article, “What Paul Is Running For.” Now, unlike her, I admit that Paul’s pro-life credentials (no small item these days) are impressive. Personally, the man appears impeccable. But he lacks political savvy. As Charen says:

Ron Paul is too cozy with kooks and conspiracy theorists. As syndicated radio host Michael Medved has pointed out, Ron Paul’s newspaper column was carried by the American Free Press (a parent publication of the Hitler-praising Barnes Review). Paul may not have been aware of this. But though invited by Medved to disavow any connection, Paul has so far failed to respond.

I’ve heard the same complaint from friends who are staunch social conservatives. When Paul’s campaign received a contribution from notorious racist Don Black, Paul did nothing to distance himself from the fringe element.

Rights and Liberty

The most quoted sentence of the Declaration of Independence, I daresay, is this one:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

The Founders’ trinity of rights (life, liberty, and the pursuit of happiness) really constitutes a unitary right, which I simply call liberty. I do so because liberty (as a separate right) is meaningless without life and the ability to pursue happiness. Thus we have this: rights ≡ liberty (rights and liberty are identical). The identity of rights and liberty is consistent with this definition of liberty:

3. A right or immunity to engage in certain actions without control or interference.

The odd thing about the Founders’ equation (rights ≡ liberty) is that they believed in “natural rights” (“unalienable rights”). Today’s believers in “natural rights” would argue that such rights exist independently of liberty; that is, one always has one’s “natural rights” (whatever those might be), regardless of the state of one’s liberty.

I have put paid to that notion here, here, here, and here. The only rights that a person has are those which he can claim through social custom, common law, statutory law, contract, or constitution — depending on which of them applies and prevails in a given situation. Moreover, rights have no reality unless they are enforceable and can be restored after having been violated.

I do not mean to imply that the restoration of rights is automatic, even in a polity where the rule of law generally prevails. Rights sometimes cannot be restored; for example:

  • A victim of murder no longer has any rights (though his estate might). The victim’s murderer is prosecuted and punished for the sake of the living — for justice (i.e., vengeance) and its deterrent effect.
  • A person who permanently loses something to a criminal (e.g., an eye or a fortune) no longer has the use of that which was lost. His pursuit of happiness is, therefore, impaired permanently.

Further, the restoration of the rights lost by most Americans over the past century is highly doubtful. Rights vanish as liberty recedes. Liberty recedes as the state broadens its scope beyond justice and defense, expands its regulatory regime, redistributes income, and “enables” some citizens at the expense of others.

Finally, but most importantly, the only rights that can be claimed universally are negative rights (the right not to be attacked, robbed, etc.). Positive rights (the right to welfare benefits, a job based on one’s color or gender, etc.) are not rights, properly understood, because they benefit some persons at the expense of others. Positive rights are not rights; they are privileges.

(See also Part II of “Practical Libertarianism.”)

Election 2008: Third Forecast

My eighth forecast is here.

The Presidency – Method 1

Intrade posts odds on which party’s nominee will win in each State and, therefore, take each State’s electoral votes. I assign all of a State’s electoral votes to the party that is expected to win that State. Where the odds are 50-50, I split the State’s electoral votes between the two parties.
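A minimal sketch of that allocation rule, in Python, may make it concrete. The state names, probabilities, and function name below are hypothetical illustrations of my own, not Intrade data:

    # Method-1 rule: give all of a State's electoral votes to the party the odds
    # favor; split the votes evenly when the odds are exactly 50-50.
    def allocate_electoral_votes(state_odds):
        dem = rep = 0.0
        for state, (p_dem_win, electoral_votes) in state_odds.items():
            if p_dem_win > 0.5:
                dem += electoral_votes
            elif p_dem_win < 0.5:
                rep += electoral_votes
            else:  # odds are 50-50, so split the State's electoral votes
                dem += electoral_votes / 2
                rep += electoral_votes / 2
        return dem, rep

    # Hypothetical example (not actual Intrade odds):
    sample = {"Ohio": (0.55, 20), "Texas": (0.30, 34), "Nevada": (0.50, 5)}
    print(allocate_electoral_votes(sample))  # -> (22.5, 36.5)

Applied to the odds for all 50 States (and the District of Columbia), the rule yields totals like those reported below.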

As of today, the odds point to this result:

Democrat, 306 electoral votes

Republican, 232 electoral votes

(A slight gain for the Dems since the first forecast, 11/16/07, and second forecast, 11/18/07.)

The Presidency – Method 2

I have devised a “secret formula” for estimating the share of electoral votes cast for the winner of the presidential election. I describe the formula’s historical accuracy in my second forecast. The formula currently yields these estimates of the outcome of next year’s presidential election (CORRECTED, 12/13/07):

Democrat nominee — 274 to 313 EVs

Republican nominee — 225 to 264 EVs

This is a much better outlook for the Dems than the one I issued on November 18. It is attributable mainly to the decline of Hillary Clinton’s prospects for her party’s nomination. Clinton, in spite of her strength within the Democrat Party, would be a weaker nominee than Barack Obama. As Obama gains ground on Clinton, a Democrat victory becomes more likely — as of now. Obama could become damaged goods by the time he emerges from a bitterly fought contest for his party’s nomination.

U.S. House and Senate

Later.

* * *
How did I do in 2004? See this and this.

John Warner’s Exit Strategy

Guest post:

John Warner (R-Va.) has never been a favorite with real conservatives (here’s why). Now, as he tries to exit the Senate as gracelessly as possible after thirty misspent years, I doubt they’ll change their minds.

There’s nothing worse than an aging politician, facing the prospect of eternity, who thinks that he’ll ease his conscience by becoming a liberal. Actually, he’s been a liberal on many issues over the years. Now he’s just more consistent.

Having opposed costly and intrusive “greenhouse gas” limits as recently as 2005, he suddenly answered the environmentalist altar call and came down on the side of Al Gore. However, he assures us that this is all in the name of national security, since he lies awake at night worrying that the U.S. military might face new climatic threats!

“Patriotism,” as Samuel Johnson said, “is the last refuge of a scoundrel.”

"Family Values," Liberty, and the State

Among Maverick Philosopher’s answers to Dennis Prager’s 23 questions in “Are You a Liberal?” is this gem:

The State is involved in marriage in order to promote a legitimate common interest, namely, that there be healthy families in which men are tamed, women are protected, and children are socialized. There is no reason to extend these protections to any two people who choose to cohabit.

MP points to a deeper truth, which is that civil society (and thus liberty) depends to a large extent (though not exclusively) on the exaltation of heterosexual marriage and “family values.” As I say here:

Liberty requires a consensus about harms and the boundaries of mutual restraint — the one being the complement of the other. Agreed harms are to be avoided mainly through self-restraint. Societal consensus and mutual restraint must, therefore, go hand in hand.

Looked at in that way, it becomes obvious that liberty is embedded in society and preserved through order. There may be societally forbidden acts that, to an outsider, would seem not to cause harm but which, if permitted within a society, would unravel the mutual restraint upon which ordered liberty depends. . . .

What happens to self-restraint, honesty, and mutual aid outside the emotional and social bonds of family, friendship, community, church, and club can be seen quite readily in the ways in which we treat one another when we are nameless or faceless to each other. Thus we become rude (and worse) as drivers, e-mailers, bloggers, spectators, movie-goers, mass-transit commuters, shoppers, diners-out, and so on. Which is why, in a society much larger than a clan, we must resort to the empowerment of governmental agencies to enforce mutual restraint, mutual defense, and honesty within the society — as well as to protect society from external enemies.

But liberty begins at home. Without the civilizing influence of traditional families, friendships, and social organizations, police and courts would be overwhelmed by chaos. Liberty would be a hollower word than it has become, largely because of the existence of other governmental units that have come to specialize in the imposition of harms on the general public in the pursuit of power and in the service of special interests (which enables the pursuit of power). Those harms have been accomplished in large part by the intrusion of government into matters that had been the province of families, voluntary social organizations, and close-knit communities. . . .

The state has, in the past century, undone much of what society had put in place for its own protection. For example, here’s Arnold Kling, writing about Jennifer Roback Morse’s Love and Economics:

Morse argues that the incentives of government programs, such as Social Security, can have the same [destructive] consequences [as government decrees].

It is convenient for us who are young to forget about old people if their financial needs are taken care of…But elderly people want and need attention from their children and grandchildren…This, then, is the ultimate trouble with the government spending other people’s money for the support of one part of the family. Other people’s money relieves us from some of the personal responsibility for the other members of our family. Parents are less accountable for instilling good work habits, encouraging work effort…Young people are less accountable for the care of particular old people, since they are forcibly taxed to support old people in general. (p. 116-117)

Most Western nations have created a cycle of dependency with respect to single motherhood. Government programs, such as welfare payments or taxpayer-funded child care, are developed to “support” single mothers. This in turn encourages more single motherhood. This enlarges the constituency for such support programs, leading politicians to broaden such programs.

Earlier in the same article, Kling says:

There are a number of issues that provide sources of friction between market libertarianism and “family values” conservatism. They concern personal behavior, morality, and the law.

Should gambling, prostitution, and recreational drugs be legalized? Market libertarianism answers in the affirmative, but “family values” conservatives would disagree.

Another potential source of friction is abortion. It is not a coincidence that the abortion issue became prominent during the sexual revolution of the late 1960’s and early 1970’s. That was a period in which social attitudes about sex-without-consequences underwent a reversal. Prior to 1960, sex-without-consequences generally was frowned upon. By 1975, sex-without-consequences was widely applauded. In that context, abortion rights were considered a victory for sexual freedom. Libertarians tend to take the pro-choice side.

Gay marriage is another legacy of the sexual revolution. Again, it tends to divide libertarians from “family values” conservatives.

One compromise, which Morse generally endorses, is to use persuasion rather than government in the family-values struggle. That is a compromise that I would favor, although unlike Morse, I approach the issue primarily as a libertarian.

If one views a strong state and a strong family as incompatible, then a case can be made that taking the state out of issues related to prostitution or abortion or marriage actually helps serve family values. If people know that they cannot rely on the state to arbitrate these issues, then they will turn to families, religious institutions, and other associations within communities to help strengthen our values.

I am unpersuaded, for the simple reason that society cannot rebuild its norms without the state’s help. Having sent the wrong signals about “family values,” in general, and about heterosexual marriage, in particular, the state has wrought much harm; for example:

Abuse risk higher as kids live without two biological parents

Thursday, November 15, 2007

– David Crary, AP National Writer

….[M]any scholars and front-line caseworkers who monitor America’s families see the abusive-boyfriend syndrome as part of a broader trend that deeply worries them. They note an ever-increasing share of America’s children grow up in homes without both biological parents, and say the risk of child abuse is markedly higher in the nontraditional family structures.

“This is the dark underbelly of cohabitation,” said Brad Wilcox, a sociology professor at the University of Virginia. “Cohabitation has become quite common, and most people think, ‘What’s the harm?’ The harm is we’re increasing a pattern of relationships that’s not good for children.”…

[T]here are many…studies that, taken together, reinforce the concerns. Among the findings:

-Children living in households with unrelated adults are nearly 50 times as likely to die of inflicted injuries as children living with two biological parents, according to a study of Missouri abuse reports published in the journal of the American Academy of Pediatrics in 2005.

-Children living in stepfamilies or with single parents are at higher risk of physical or sexual assault than children living with two biological or adoptive parents, according to several studies co-authored by David Finkelhor, director of the University of New Hampshire’s Crimes Against Children Research Center.

-Girls whose parents divorce are at significantly higher risk of sexual assault, whether they live with their mother or their father, according to research by Robin Wilson, a family law professor at Washington and Lee University….

Census data leaves no doubt that family patterns have changed dramatically in recent decades as cohabitation and single-parenthood became common. Thirty years ago, nearly 80 percent of America’s children lived with both parents. Now, only two-thirds of them do. Of all families with children, nearly 29 percent are now one-parent families, up from 17 percent in 1977.

The net result is a sharp increase in households with a potential for instability, and the likelihood that adults and children will reside in them who have no biological tie to each other….

“It comes down to the fact they don’t have a relationship established with these kids,” she said. “Their primary interest is really the adult partner, and they may find themselves more irritated when there’s a problem with the children.”

And the beat goes on:

The teen birth rate in the United States rose in 2006 for the first time since 1991, and unmarried childbearing also rose significantly, according to preliminary birth statistics released [December 5, 2007] by the Centers for Disease Control and Prevention (CDC)….

The study also shows unmarried childbearing reached a new record high in 2006. The total number of births to unmarried mothers rose nearly 8 percent to 1,641,700 in 2006. This represents a 20 percent increase from 2002, when the recent upswing in nonmarital births began. The biggest jump was among unmarried women aged 25-29, among whom there was a 10 percent increase between 2005 and 2006.

In addition, the nonmarital birth rate also rose sharply, from 47.5 births per 1,000 unmarried females in 2005 to 50.6 per 1,000 in 2006 — a 7-percent 1-year increase and a 16 percent increase since 2002.

The study also revealed that the percentage of all U.S. births to unmarried mothers increased to 38.5 percent, up from 36.9 percent in 2005.

And on:

An ETS study reported by the NYT finds that four family variables (including the proportion of single-parent families) explain two-thirds of the variation in school performance.

And on:

[M]arriage qua marriage tends to be a much more important indicator of well-being, both for children and for parents, in the United States than it does in Europe. Perhaps this will not always be so; perhaps the coexistence, in the 1990s and early Oughts, of falling crime and higher rates of out-of-wedlock births are a leading indicator of the Swedenization of American social norms. But I doubt it, not least because the secondary consequences of family breakdown, persistent inequality and social immobility chief among them, appear to have worsened over the last decade….

What the state has sundered, it must mend. With respect to marriage, for example, I have argued that

it is clear that the kind of marriage a free society needs is heterosexual marriage, which…is a primary civilizing force. I now therefore reject the unrealistic (perhaps even ill-considered) position that the state ought to keep its mitts off marriage. I embrace, instead, the realistic, consequentialist position that society — acting through the state — ought to uphold the special status of heterosexual marriage by refusing legal recognition to other forms of marriage. That is, the state should refuse to treat marriage as if it were mainly (or nothing but) an arrangement to acquire certain economic advantages or to legitimate relationships that society, in the main, finds illegitimate.

The alternative is to advance further down the slippery slope toward societal disintegration and into the morass of ills which accompany that disintegration. (We’ve seen enough societal disintegration and costly consequences since the advent of the welfare state to know that the two go hand in hand.) The recognition of homosexual marriage by the state — though innocuous to many, and an article of faith among most libertarians and liberals — is another step down that slope. When the state, through its power to recognize marriage, bestows equal benefits on homosexual marriage, it will next bestow equal benefits on other domestic arrangements that fall short of traditional, heterosexual marriage. And that surely will weaken heterosexual marriage, which is the axis around which the family revolves….

….Although it’s true that traditional, heterosexual unions have their problems, those problems have been made worse, not better, by the intercession of the state. (The loosening of divorce laws, for example, signaled that marriage was to be taken less seriously, and so it has been.) Nevertheless, the state — in its usual perverse wisdom — may create new problems for society by legitimating same-sex marriage, thus signaling that traditional marriage is just another contractual arrangement in which any combination of persons may participate. Heterosexual marriage — as Jennifer Roback Morse explains — is a primary and irreplicable civilizing force. The recognition of homosexual marriage by the state will undermine that civilizing force. The state will be saying, in effect, “Anything goes. Do your thing. The courts, the welfare system, and the taxpayer — above all — will ‘pick up the pieces.’” And so it will go.

Of course, the state not only continues to undermine heterosexual marriage (except where the general will is consulted through referenda), but it also continues to undermine other social norms. There is, for example, the new California statute known as SB777,

a public education bill prohibiting schools and teachers from “reflecting adversely” on gays and lesbians. The Democratic majority in both houses of the California State Legislature supported the bill, which was signed into law by Republican Governor Arnold Schwarzenegger [in October 2007]. Meanwhile, pro-family groups are mobilized and will challenge this bill with a petition to force a statewide referendum. The law’s implementation has been postponed by court order until January 1, pending results of statewide signature gathering….

Although the bill adds “sexual orientation” to the list of protected groups in this State, supporters are playing down its significance, calling it instead a safety measure. Meanwhile, critics, including Republican members of the Legislature, all of whom were opposed, regard the legislation as an attack on the family….

On the face of it, the changes are simple and uncomplicated. Existing provisions of the Education Code that prohibit discrimination against members of protected racial, ethnic, national, gender, ancestral and disabled groups have been changed to include “sexual orientation,” which specifically refers to members of homosexual, lesbian and bisexual groups.

Supporters of this change are right in one particular, at least from their point of view: the bill simply “updates” existing law. That is, existing law already gives protected status to members of designated groups. The question has always been whether that effectively denies protection (or provides less) to all not belonging to those groups, including especially white males. This sort of categorizing, critics have argued, puts members of all groups in gender conflict while opening the door to protected status for illegal aliens….

Not surprisingly, SB 777 adds the term “sexual orientation” to the categories of persons protected against “hate crimes,” a concept based on the assumption that a violent crime against a gay or lesbian person is particularly heinous, more so than when committed against others.

The main part of the bill concerns public education. All public school districts will be responsible for forbidding any discrimination and monitoring for compliance. Thus, while religious schools whose tenets conflict with homosexuality and lesbianism are exempted, publicly funded alternative and charter schools are not. The new law specifies that “No teacher shall give instruction nor shall a school district sponsor any activity that reflects adversely upon persons because of [sexual orientation].” (Italics in text.) The same requirement is laid on districts’ adoption or use of textbooks or other instructional materials.

The bill does not define what constitutes adverse reflection. It could mean anything from simple good manners, which is wholly defensible, to failure to “celebrate” the homosexual lifestyle, as prominent writers such as Harvey Mansfield have noted. According to its advocates, diversity comprehends all protected groups, who should be celebrated and not merely tolerated.

According to press reports, only in September did State Sen. Sheila Kuehl, D-Santa Monica, the bill’s main sponsor, remove language that would have forbidden in public schools the use of “mom,” “dad,” “husband” and “wife.” It is a fair question whether removing those terms has changed the intent or effect of this legislation.

This latest development in the so-called “culture wars” once again raises the question whether it is a zero-sum game in which, in this case, the protection against adverse reflection on gays and lesbians necessarily entails a rejection of traditional morality and even human categories. Many do not believe so. Time will tell whether this is part of a slippery slope or merely equal justice. Still, when the straightforward concept of protection takes the broader form of not “reflecting adversely,” one wonders whether the object is not tolerance but privilege.

I have no doubt that the object is privilege. The effect of SB777, should it take effect, will be to suppress and further denigrate heterosexual marriage and all that it stands for. On that point, WND quotes Meredith Turney, the legislative liaison for Capitol Resource Institute:

[State Superintendent of Public Instruction Jack] O’Connell, the bill’s author Sen. Sheila Kuehl and Gov. Schwarzenegger have all maintained the party line that SB 777 merely “streamlines” existing anti-discrimination laws. However, these attempts to discredit the public outcry against SB 777’s policies are disingenuous and misleading. In fact, SB 777 goes far beyond implementing anti-discrimination and harassment policies for public schools.

The new law states that “No teacher shall give instruction nor shall a school district sponsor any activity that promotes a discriminatory bias because of” … (homosexuality, bisexuality, and transsexual or transgender status). Including instruction and activities in the anti-discrimination law goes much further than “streamlining.” This incremental and deceitful approach to achieving their goals is a favorite and effective tactic of liberals. Expanding the law is not “streamlining” the law.

Mr. O’Connell’s doublespeak reveals his – and his peers’ – arrogant attitude toward their “gullible” constituents. In fact, parents are not stupid and they recognize that their authority is being undermined by such subversive school policies. This is nothing less than an attempt to confuse the public about the true intention of SB 777.

The terms “mom and dad” or “husband and wife” could promote discrimination against homosexuals if a same-sex couple is not also featured.

Parents want the assurance that when their children go to school they will learn the fundamentals of reading, writing and arithmetic – not social indoctrination regarding alternative sexual lifestyles. Now that SB 777 is law, schools will in fact become indoctrination centers for sexual experimentation.

Just as taxpayer-funded universities have become indoctrination centers for political correctness and other Leftist dogmas.

Yesteryear’s rebels (the “kids” of the ’60s and ’70s) didn’t get everything they wanted through their protests and riots. So they found a more effective way to destroy the social fabric, which — being perpetual adolescents — they despise. The more effective way was to seize the levers of power in academia and government, and thence to dissolve the social cohesion upon which liberty depends.

No, the answer isn’t to take the state out of the “family values” business. The answer is for social and fiscal conservatives to recapture the levers of power and undo the damage that the state has done to liberty over the past century.

There will always be a state. The real issues are these: Who will control the state, and to what ends?

Related posts:

A Century of Progress?
Feminist Balderdash
Libertarianism, Marriage, and the True Meaning of Family Values
The Left, Abortion, and Adolescence
Consider the Children
Same-Sex Marriage
“Equal Protection” and Homosexual Marriage
Marriage and Children
Abortion and the Slippery Slope
Equal Time: The Sequel
The Adolescent Rebellion Syndrome
Social Norms and Liberty
Parenting, Religion, Culture, and Liberty
A “Person” or a “Life”?
The Case against Genetic Engineering
How Much Jail Time?
The Political Case for Traditional Morality
Anarchy, Minarchy, and Liberty
Parents and the State
Academic Bias
Ahead of His Time

Culture Watch: Whoopi vs. Sherri

Guest post:

One has to be even-handed in passing out the dunce awards to two wannabe pundits: Whoopi Goldberg and Sherri Shepherd. The media has, predictably, jumped all over Shepherd’s historical howlers in a debate with Whoopi Goldberg on The View.

It began with Joy Behar referring to the philosophies of pagan Greece. Shepherd felt obliged to point out that Christians predated the ancient Greeks, and even the Hebrews. Earlier this year she admitted that she didn’t know if the world was flat or round (see related story).

I suppose this will “prove” to militant Christian-bashers what idiots believers really are. Actually it proves to me just how unqualified most celebrities are to discuss anything, from foreign policy to ecology. I almost suspect ABC, which hosts the show, of putting up an inept, if well-meaning, dupe to represent the “conservative Christian” perspective. Ann Coulter, she ain’t.

Theologically speaking, the incident confirms what I’ve believed all along—that dumbed-down religion of the Elmer Gantry/tent revival variety is not representative of serious Christian belief. Shepherd’s personal conduct and comportment are as frivolous as her metaphysics.

But we shouldn’t let Shepherd’s sparring partner off the hook either. Just a few weeks earlier on The View, Goldberg made the remarkable observation that America is “not as free as it was when I was a kid” (when segregation was still in effect). Maybe Ms. Goldberg thinks she’s reprising her role as the superhuman Guinan of Star Trek: The Next Generation. But, if you ask me, it’s just the leftwing equivalent of flat-earthism.

The Economic Divide on the Right: Distributists vs. Capitalists

Guest post:

Too much capitalism does not mean too many capitalists, but too few capitalists.—G.K. Chesterton

Chesterton probably never realized how close to the truth he really came, and it’s unfortunate that this flash of paradox did not enlighten his views on economics in general. He was an advocate of distributism. This is a “third way” economic school popular in certain traditional conservative and Catholic circles. However, it also has its conservative (and Catholic) critics.

I note this as an ex-distributist who admits to the validity of some of Hilaire Belloc’s insights in his foundational distributist tome, The Servile State, without agreeing to its solution. Belloc’s main grievance is what he sees as the collusion of big government and big business in the creation of economic monopolies, a collusion also lamented by the free-market theorist Friedrich Hayek, who spoke positively of aspects of Belloc’s work in The Road to Serfdom. Seen in that light, the problem is not that there is too much capitalism, but that there isn’t enough. The roadblock to economic independence is heavy taxation and regulation, which ultimately weigh more heavily on the small entrepreneur than on the big capitalist. The latter has the clout necessary to get around such problems or even turn them to his own advantage. Unfortunately, distributism takes many anti-capitalist myths at face value and tries to bring about greater property ownership (a noble goal) through yet more regulation, seeking a redistribution of wealth that is hardly distinguishable from socialism.

My own disenchantment with distributists initially had more to do with their methods than their ideas. They often advanced their position as a point of political (and theological) correctness, despite the fact that very few advocates of that system have a solid grounding in economic theory. For that reason, I am pleased to see that the Catholic libertarian Tom Woods (writing with Marcus Epstein and Walter Block in The Independent Review) offers a perceptive and generally fair critique of the concept. The essay takes into account the good intentions of Belloc and his friend Chesterton (men who made invaluable contributions in so many other areas). But in the end, sincerity cannot make up for flawed reasoning. As the article states,

the creation and maintenance of a distributist economy would have required state action on a scale that, given Chesterton’s and Belloc’s arguments against socialism, would have violated their own principles. Moreover, the distributist argument, although superficially plausible, advanced a number of key claims in a manner that suggested they were beyond debate, when in fact they were quite debatable—and in some cases flatly false. An enormous body of scholarly work published since the time Chesterton and Belloc wrote has undermined their narrative in virtually every particular. (“Chesterton and Belloc: A Critique,” The Independent Review, Spring 2007).

This is more than editorializing: the authors cite hard economic and historical facts. So please take the time to read the essay if you’re unconvinced.

N.B. While I don’t agree with The Independent Review that the market paradigm can be applied to all instances of human activity, free trade can still do much good if left to operate in a sensible and just manner.

I Am Happy to Report…

…that I am not a “liberal” (i.e., a statist).

Maverick Philosopher answers “no” to every one of the 23 questions asked by Dennis Prager in “Are You a Liberal?” As do I.

That makes me — most decidedly — an “un-liberal.”

Mike Huckabee and the View from Planet Rockwell

Guest post:

In my Buchananite days I avidly read Lew Rockwell (anarcho-capitalist protégé of the late Murray Rothbard). But one of the things that gradually turned me off was his gratuitous antagonism towards all politicians except those who fit his very narrow, Jansenist-like, political philosophy. Only a few are saved, sola anarchia, while the rest of the unenlightened belong to the massa damnata.

So I’m hardly surprised that Rockwell is gunning for Mike Huckabee, a candidate I’ve increasingly learned to admire and respect. Huckabee is an intelligent communicator and a principled man who will state in his campaign ads that “life begins at conception.” Some give him low marks for his economic views, but considering that we haven’t had a real fiscal conservative as president in nearly 80 years (the last being Calvin Coolidge, in office from 1923 to 1929), I’m not going to get too worked up. After all, Ronald Reagan was berated for his “voodoo economics.” But then we learned that there was more to conservatism than just economics. There are also foreign policy and social issues.

But others thought Reagan was “basically a cretin.” Who was that? Some splenetic liberal at The Washington Post or The New York Times? No, it was Murray Rothbard writing about “eight dreary, miserable, mind-numbing years, the years of the Age of Reagan” that were coming to an end in 1989. I guess he was piqued that no one had chosen him to lead the Free World. Rothbard presumably could have told everyone that the fall of the Berlin Wall, and the subsequent rollback of Communism, was no big deal because Soviet Russia had never been a threat in the first place. Instead, it was his wisdom that disarmament and isolationism would bring world peace. So now we know where Lew Rockwell is coming from when he complains that Huckabee is no more than a backwoods religious zealot who believes in, of all things, the state-sanctioned death penalty!

Now I realize that conservatives have many opinions on the viability and quality of the Republican candidates out there. They may not share my enthusiasm for Huckabee. But this example (once again) of anarcho-libertarian contrariety makes me wonder why anyone cares what Rockwell has to say about anything.

Season of Our Discontent

Guest post:

Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.
—W. B. Yeats, “The Second Coming”

These lines from Yeats’s apocalyptic poem come to me as I read Citizens, Simon Schama’s definitive study of the French Revolution.

As Schama makes clear, many of the crises that erupted into open sedition under Louis XVI were due more to changing perceptions than actual problems. France of the late 18th century was hardly worse off than any other country, but its national fiber had been undermined by decades of muckraking journalism and excessive criticism of the government (sometimes lewd and pornographic in nature), coupled with a cynical mood in philosophy and morals. Now compare that with the level of obstructionism among certain elements in this country today, not only on the left but increasingly on the far right. Nor is it surprising that the two extremes will frequently combine against the middle, often for no other reason than exploitative publicity seeking rather than hard principles.

The danger is that when you have people who insist there is no chance of political reform, only collapse and radical change, it becomes a self-fulfilling prophecy. In the France of the 1770s and ‘80s there were similar challenges to reform on the part of entrenched vested interests as well as the ideological rebels. Schama makes the daring observation that the revolution originated at the top — from disaffected and self-seeking aristocrats who long had an axe to grind with the crown, who wanted to use the discontent of the lower orders for their own ends. As so often happens in revolutions, that discontent grew out of control and the mob turned on its masters. We know how that happens on the left. But it can also happen when desperate conservatives foolishly attempt to dabble in the volatile mix of crisis politics and heated populism. In 1933 Adolf Hitler exploited “useful idiots” on the right to gain his electoral victory.

Many of the grievances against the French system under Louis XVI were legitimate. But every time the king attempted genuine reform, he was thwarted and eventually blamed for the failure of others. His enemies weren’t interested in making the existing order better. Of course, the biggest challenge for the French monarchy was that (unlike England or the United States) it lacked “voter confidence” because it was wedded to a clumsy and unpopular absolutism. In our own time, the problem is that extremists across the spectrum would undermine the representative, moderate nature of our institutions, resulting in similar levels of disaffection. While no system is perfect, violent polemics exaggerate society’s difficulties while distracting people from long-term, responsible solutions.

History Lessons

The following is adapted from an introduction that I wrote almost three years ago for “The Modern Presidency: A Tour of American History since 1900,” in its original incarnation.

Chief among the lessons of American history since 1900 is the price we have paid for allowing government to become so powerful. Most Americans today take for granted a degree of government involvement in their lives that would have shocked the Americans of 1900. The growth of governmental power has undermined the voluntary social institutions upon which civil society depends for orderly evolution: family, church, club, and community. The results are evident in the incidence of crime, broken homes, and drug use; in the resort to sex, violence, sensationalism, and banality as modes of entertainment; and generally in the social fragmentation and alienation that beset Americans — in spite of their prosperity.

The other edge of the governmental sword is interference in economic affairs through taxation and regulation. Such interference, which has grown exponentially since the early 1900s, has blunted Americans’ incentives to work hard, invent, innovate, and create new businesses. The result is that Americans — as prosperous as they are — are far less prosperous than they would be had they not ceded so much economic power to government.

Because of the growth of governmental power, much of the freedom that attends Americans’ prosperity is largely illusory: Americans actually have less freedom than they used to have — and much less freedom than envisioned by the founding generation that fought for America’s independence and wrote its Constitution. I am referring not to the imagined excesses of the current administration, which is vigorously and constitutionally defending American citizens against foreign predators. I am referring to such real things as:

  • the diminution of free speech in the name of campaign-finance “reform”
  • the denial of property rights, the right to work, and freedom of association for the sake of racial and sexual “equality”
  • the seizure of private property for private use in the name of “economic development”
  • the interference of government in almost every aspect of commerce, from deciding what may and may not be produced to how it must be produced, advertised, and sold — all to ensure that we do not make mistakes from which we can learn and profit
  • exorbitant taxation at every level of government, which denies those persons who have earned money lawfully the right to decide how to use it lawfully and gives that money, instead, to parasites in and out of government.

Those are the kinds of abuses of governmental power that Americans have acquiesced in — and even clamored for. It is those abuses that should outrage politicians and pundits — and the masses who swallow their distortions and their socialistic agenda.

For a detailed analysis, rich with links to supporting posts and articles, see “A Political Compass: Locating the United States.”

The Modern Presidency: A Tour of American History since 1900

This post traces, through America’s presidencies from the first Roosevelt to the second Bush, the main themes of American history since the turn of the twentieth century. This is a companion-piece to “Presidential Legacies.” The didactic style of the present post reflects its original purpose: to give my grandchildren some insights into American history that aren’t found in standard textbooks.

Theodore Roosevelt (1858-1919) was elected Vice President as a Republican in 1900, when William McKinley was elected to a second term as President. Roosevelt became President when McKinley was assassinated in September 1901. Roosevelt was re-elected President in 1904. He served almost two full terms as President, from September 14, 1901, to March 4, 1909. (Before 1937, a President’s term of office began on March 4 of the year following his election to office.)

Roosevelt was an “activist” President. Roosevelt used what he called the “bully pulpit” of the presidency to gain popular support for programs that exceeded the limits set in the Constitution. Roosevelt was especially willing to use the power of government to regulate business and to break up companies that had become successful by offering products that consumers wanted. Roosevelt was typical of politicians who inherited a lot of money and didn’t understand how successful businesses provided jobs and useful products for less-wealthy Americans.

Roosevelt was more like the Democrat Presidents of the Twentieth Century. He did not like the “weak” government envisioned by the authors of the Constitution. The authors of the Constitution designed a government that would allow people to decide how to live their own lives (as long as they didn’t hurt other people) and to run their own businesses as they wished to (as long as they didn’t cheat other people). The authors of the Constitution thought government should exist only to protect people from criminals and foreign enemies.

William Howard Taft (1857-1930), a close friend of Theodore Roosevelt, served as President from March 4, 1909, to March 4, 1913. Taft ran for the presidency as a Republican in 1908 with Roosevelt’s support. But Taft didn’t carry out Roosevelt’s anti-business agenda aggressively enough to suit Roosevelt. So, in 1912, when Taft ran for re-election as a Republican, Roosevelt ran for election as a Progressive (a newly formed political party). Many Republican voters decided to vote for Roosevelt instead of Taft. The result was that a Democrat, Woodrow Wilson, won the most electoral votes. Although Taft was defeated for re-election, he later became Chief Justice of the United States, making him the only person ever to have served as head of the executive and judicial branches of the U.S. Government.

Thomas Woodrow Wilson (1856-1924) served as President from March 4, 1913, to March 4, 1921. (Wilson didn’t use his first name, and was known officially as Woodrow Wilson.) Wilson is the only President to have earned the degree of doctor of philosophy. Wilson’s field of study was political science, and he had many ideas about how to make government “better.” But “better” government, to Wilson, was “strong” government of the kind favored by Theodore Roosevelt.

Wilson was re-elected in 1916 because he promised to keep the United States out of World War I, which had begun in 1914. But Wilson changed his mind in 1917 and asked Congress to declare war on Germany. After the war, Wilson tried to get the United States to join the League of Nations, an international organization that was supposed to prevent future wars by having nations assemble to discuss their differences. The U.S. Senate, which must approve the treaties by which America joins such organizations, refused to ratify the treaty, so the United States never joined the League of Nations. The League did not succeed in preventing future wars because wars are started by leaders who don’t want to discuss their differences with other nations.

Warren Gamaliel Harding (1865-1923), a Republican, was elected in 1920 and inaugurated on March 4, 1921. Harding asked voters to reject the kind of government favored by Democrats, and voters gave Harding what is known as a “landslide” victory; he received 60 percent of the votes cast in the 1920 election for president, one of the highest percentages ever recorded. Harding’s administration was about to become involved in a major scandal when Harding died suddenly on August 3, 1923, while he was on a trip to the West Coast. The exact cause of Harding’s death is unknown, but he may have had a stroke when he learned of the impending scandal, which involved Albert Fall, Secretary of the Interior. Fall had secretly allowed some of his business associates to lease government land for oil-drilling, in return for personal loans.

There were a few other scandals, but Harding probably had nothing to do with any of them. Because of the scandals, most historians say that they consider Harding to have been a poor President. But that isn’t the real reason for their dislike of Harding. Most historians, like most college professors, favor “strong” government. Historians don’t like Harding because he didn’t use the power of government to interfere in the nation’s economy. An important result of Harding’s policy (called laissez-faire, or “hands off”) was high employment and increasing prosperity during the 1920s.

John Calvin Coolidge (1872-1933), who was Harding’s Vice President, became President upon Harding’s death in 1923. (Coolidge didn’t use his first name, and was known as Calvin.) Coolidge was elected President in 1924. He served as President from August 3, 1923, to March 4, 1929. Coolidge continued Harding’s hands-off policy of not interfering in the economy, and people continued to become more prosperous as businesses grew and hired more people and paid them higher wages. Coolidge was known as “Silent Cal” because he was a man of few words. He said only what was necessary for him to say, and he meant what he said. That was in keeping with his approach to the presidency. He was not the “activist” that reporters and historians like to see in the presidency; he simply did the job required of him by the Constitution, which was to execute the laws of the United States. The country prospered as a result. Coolidge chose not to run for re-election in 1928, even though he was quite popular.

Herbert Clark Hoover (1874-1964), a Republican who had been Secretary of Commerce under Coolidge, was elected to the presidency in 1928. Hoover won 58 percent of the popular vote, an endorsement of the hands-off policy of Harding and Coolidge. Hoover’s administration is known mostly for the huge drop in the price of stocks (shares of corporations, which are bought and sold in places known as stock exchanges), and for the Great Depression that was caused partly by the “Crash” — as it became known. The rate of unemployment (the percentage of American workers without jobs) rose from 3 percent just before the Crash to 25 percent by 1933, at the depth of the Great Depression.

The Crash had two main causes. First, stock prices began to rise sharply in the late 1920s. That caused many persons to borrow money in order to buy stocks, in the hope that prices would continue to rise. If prices kept rising, buyers could sell their stocks at a profit and repay the money they had borrowed. Second, when stock prices got very high in the fall of 1929, some buyers began to worry that prices would fall, so they began to sell their stocks. That drove down the price of stocks, and caused more buyers to sell in the hope of getting out of the stock market before prices fell further. But prices went down so quickly that almost everyone who owned stocks lost money. Prices kept going down. By 1933, many stocks had become worthless and most stocks were selling for only a small fraction of the prices that they had sold for before the Crash.
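
To make the arithmetic of buying stocks with borrowed money concrete, here is a minimal sketch (using made-up dollar figures, not actual 1929 numbers) of how borrowing magnifies both gains and losses:

```python
# A hypothetical buyer puts up $1,000 of his own money and borrows $4,000
# to buy $5,000 worth of stock. The $4,000 loan must be repaid in full no
# matter what the stock does, so every price swing falls on the buyer's $1,000.

def own_money_left(own_money, borrowed, price_change):
    """What remains of the buyer's own money after the stock moves by price_change."""
    invested = own_money + borrowed
    value_after = invested * (1 + price_change)
    return value_after - borrowed  # the loan is repaid first

print(own_money_left(1_000, 4_000, +0.10))  # +10% move ->  1500.0 (a 50% gain)
print(own_money_left(1_000, 4_000, -0.20))  # -20% move ->     0.0 (wiped out)
print(own_money_left(1_000, 4_000, -0.40))  # -40% move -> -1000.0 (still owes money)
```

A fall of only 20 percent was enough to wipe out a buyer who had borrowed most of the purchase price, which is one reason the selling fed on itself.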

Because so many people had borrowed money to buy stocks, they went broke when stock prices dropped. When they went broke, they spent less money and were unable to pay their other debts. That had a ripple effect throughout the economy. Banks had less money to lend. Because people were buying less from businesses, and because businesses couldn’t get loans to stay in business, many businesses closed and people lost their jobs. Then the people who lost their jobs had less money to spend, and so more people lost their jobs.

The effects of the Great Depression were felt in other countries because Americans couldn’t afford to buy as much as they used to from other countries. Also, Congress passed a law known as the Smoot-Hawley Tariff Act, which President Hoover signed. The Smoot-Hawley Act raised tariffs (taxes) on items imported into the United States, which meant that Americans bought even less from foreign countries. Foreign countries passed similar laws, which meant that foreigners began to buy less from Americans, which put more Americans out of work.

The economy would have recovered quickly, as it had in the past when stock prices fell and unemployment rose. But the actions of government — raising tariffs and making loans harder to get — only made things worse. What could have been a brief recession turned into the Great Depression. People were frightened. They blamed President Hoover for their problems, although President Hoover didn’t cause the Crash. Hoover ran for re-election in 1932, but he lost to Franklin Delano Roosevelt, a Democrat.

Franklin Delano Roosevelt (1882-1945), known as FDR, served as President from March 4, 1933 until his death on April 12, 1945, just a month before V-E Day. FDR was elected to the presidency in 1932, 1936, 1940, and 1944 — the only person elected more than twice. Roosevelt was a very popular President because he served during the Depression and World War II, when most Americans — having lost faith in themselves — sought reassurance that “someone was in charge.” FDR was not universally popular; his share of the popular vote rose from 57 percent in 1932 to 61 percent in 1936, but then dropped to 55 percent in 1940 and 54 percent in 1944. Americans were coming to understand what FDR’s opponents knew at the time, and what objective historians have said since:

  • FDR’s efforts to bring America out of the Depression only made it worse.
  • FDR’s leadership during World War II faltered toward the end, when he was gravely ill and allowed the Soviet Union to take over Eastern Europe.

FDR’s program to end the Depression was known as the New Deal. It consisted of welfare and make-work programs, which put people to work on government projects instead of making useful things. It also consisted of higher taxes and other restrictions on business, which discouraged people from starting and investing in businesses, the very activity that cures unemployment.

Roosevelt did try to face up to the growing threat from Germany and Japan. However, he wasn’t able to do much to prepare America’s defenses because of strong isolationist and anti-war feelings in the country. Those feelings were the result of America’s involvement in World War I. (Similar feelings in Great Britain kept that country from preparing for war with Germany, which encouraged Hitler’s belief that he could easily conquer Europe.)

When America went to war after Japan’s attack on Pearl Harbor, Roosevelt proved to be an able and inspiring commander-in-chief. But toward the end of the war his health was failing and he was influenced by close aides who were pro-communist and sympathetic to the Soviet Union (Union of Soviet Socialist Republics, or USSR). Roosevelt allowed Soviet forces to claim Eastern Europe, including half of Germany. Roosevelt also encouraged the formation of the United Nations, where the Soviet Union (now Russia) has had a strong voice because it was made a permanent member of the Security Council, the policy-making body of the UN. As a member of the Security Council, Russia can obstruct actions proposed by the United States.

Roosevelt’s appeasement of the USSR caused Josef Stalin (the Soviet dictator) to believe that the U.S. had weak leaders who would not challenge the USSR’s efforts to spread Communism. The result was the Cold War, which lasted for 45 years. During the Cold War the USSR developed nuclear weapons, built large military forces, kept a tight rein on countries behind the Iron Curtain (in Eastern Europe), and expanded its influence to other parts of the world.

Stalin’s belief in the weakness of U.S. leaders was largely correct, until Ronald Reagan became President. As I will discuss, Reagan’s policies led to the end of the Cold War.

Harry S. Truman (1884-1972), who was FDR’s Vice President, became President upon FDR’s death. Truman was elected in his own right in 1948, so he served as President from April 12, 1945, until January 20, 1953 — almost two full terms. Truman made one right decision during his presidency: he approved the dropping of atomic bombs on Japan. Although hundreds of thousands of Japanese were killed by the bombs, the Japanese soon surrendered. If the Japanese hadn’t surrendered then, U.S. forces would have invaded Japan and millions of American and Japanese lives would have been lost in the battles that followed the invasion.

Truman ordered drastic reductions in the defense budget because he thought that Stalin was an ally of the United States. (Truman, like FDR, had advisers who were Communists.) Truman changed his mind about defense budgets, and about Stalin, when Communist North Korea attacked South Korea in 1950. The attack on South Korea came after Truman’s Secretary of State (the man responsible for relations with other countries) made a speech about countries that the United States would defend. South Korea was not one of those countries.

When South Korea was invaded, Truman asked General of the Army Douglas MacArthur to lead the defense of South Korea. MacArthur planned and executed the amphibious landing at Inchon, which turned the war in favor of South Korea and its allies. The allied forces then succeeded in pushing the front line far into North Korea. Communist China then entered the war on the side of North Korea. MacArthur wanted to counterattack Communist Chinese bases and supply lines in Manchuria, but Truman wouldn’t allow that. Truman then “fired” MacArthur because MacArthur spoke publicly about his disagreement with Truman’s decision. The Chinese Communists pushed allied forces back and the Korean War ended in a deadlock, just about where it had begun, near the 38th parallel.

In the meantime, Communist spies had stolen the secret plans for making atomic bombs. They were able to do that because Truman refused to hear the truth about Communist spies who were working inside the government. By the time Truman left office the Soviet Union had manufactured nuclear weapons, had strengthened its grip on Eastern Europe, and was beginning to expand its influence into the Third World (the less-developed nations of Africa, Asia, the Middle East, and Latin America).

Truman was very unpopular by 1952. As a result he chose not to run for re-election, even though he could have done so. (The Twenty-Second Amendment to the Constitution, which limits a President to two elected terms, was adopted while Truman was President, but it didn’t apply to him.)

Dwight David Eisenhower (1890-1969), a Republican, served as President from January 20, 1953 to January 20, 1961. Eisenhower (also known by his nickname, “Ike”) received 55 percent of the popular vote in 1952 and 57 percent in 1956; his Democrat opponent in both elections was Adlai Stevenson. The Republican Party chose Eisenhower as a candidate mainly because he had become famous as a general during World War II. Republican leaders thought that by nominating Eisenhower they could end the Democrats’ twenty-year hold on the presidency. The Republican leaders were right about that, but in choosing Eisenhower as a candidate they rejected the Republican Party’s traditional stand in favor of small government.

Eisenhower was a “moderate” Republican. He was not a “big spender” but he did not try to undo all of the new government programs that had been started by FDR and Truman. Traditional Republicans eventually fought back and, in 1964, nominated a small-government candidate named Barry Goldwater. I will discuss him when I get to President Lyndon B. Johnson.

Eisenhower was a popular President, and he was a good manager, but he gave the impression of being “laid back” and not “in charge” of things. The news media had led Americans to believe that “activist” Presidents are better than laissez-faire Presidents, and so there was by 1960 a lot of talk about “getting the country moving again” — as if it were the job of the President to “run” the country.

John Fitzgerald Kennedy (1917-1963), a Democrat, was elected in 1960 to succeed President Eisenhower. Kennedy, who became known as JFK, served from January 20, 1961, until November 22, 1963, when he was assassinated in Dallas, Texas. JFK was elected narrowly (he received just 50 percent of the popular vote), but one reason that he won was his image of “vigorous youth” (he was 27 years younger than Eisenhower). In fact, JFK had been in bad health for most of his life. He seemed to be healthy only because he used a lot of medications. Those medications probably impaired his judgment and would have caused him to die at a relatively early age if he hadn’t been assassinated.

Late in Eisenhower’s administration a Communist named Fidel Castro had taken over Cuba, which is only 90 miles south of Florida. The Central Intelligence Agency then began to work with anti-Communist exiles from Cuba. The exiles were going to attempt an invasion of Cuba at a place called the Bay of Pigs. In addition to providing the necessary military equipment, the U.S. was also going to provide air support during the invasion.

JFK succeeded Eisenhower before the invasion took place, in April 1961. JFK approved changes in the invasion plan that resulted in the failure of the invasion. The most important change was to discontinue air support for the invading forces. The exiles were defeated, and Castro has remained firmly in control of Cuba.

The failed invasion caused Castro to turn to the USSR for military and economic assistance. In exchange for that assistance, Castro agreed to allow the USSR to install medium-range ballistic missiles in Cuba. That led to the so-called Cuban Missile Crisis in 1962. Many historians give Kennedy credit for resolving the crisis and avoiding a nuclear war with the USSR. The Russians withdrew their missiles from Cuba, but JFK had to agree to withdraw American missiles from bases in Turkey.

The myth that Kennedy had stood up to the Russians made him more popular in the U.S. His major accomplishment, which Democrats today like to ignore, was to initiate tax cuts, which became law after his assassination. The Kennedy tax cuts helped to make America more prosperous during the 1960s by giving people more money to spend, and by encouraging businesses to expand and create jobs.

The assassination of JFK on November 22, 1963, in Dallas was a shocking event. It also led many Americans to believe that JFK would have become a great President if he had lived and been re-elected to a second term. There is little evidence that JFK would have become a great President. His record in Cuba suggests that he would not have done a good job of defending the country.

Lyndon Baines Johnson (1908-1973), also known as LBJ, was Kennedy’s Vice President and became President upon Kennedy’s assassination. LBJ was re-elected in 1964; he served as President from November 22, 1963 to January 20, 1969. LBJ’s Republican opponent in 1964 was Barry Goldwater, who was an old-style Republican conservative, in favor of limited government and a strong defense. LBJ portrayed Goldwater as a threat to America’s prosperity and safety, when it was LBJ who was the real threat. Americans were still in shock about JFK’s assassination, and so they rallied around LBJ, who won 61 percent of the popular vote.

LBJ is known mainly for two things: his “Great Society” program and the war in Vietnam. The Great Society program was an expansion of FDR’s New Deal. It included such things as the creation of Medicare, which is medical care for retired persons that is paid for by taxes. Medicare is an example of a “welfare” program. Welfare programs take money from people who earn it and give money to people who don’t earn it. The Great Society also included many other welfare programs, such as more benefits for persons who are unemployed. The stated purpose of the expansion of welfare programs under the Great Society was to end poverty in America, but that didn’t happen. The reason it didn’t happen is that when people receive welfare they don’t work as hard to take care of themselves and their families, and they don’t save enough money for their retirement. Welfare actually makes people worse off in the long run.

America’s involvement in Vietnam began in the 1950s, when Eisenhower was President. South Vietnam was under attack by Communist guerrillas, who were sponsored by North Vietnam. Small numbers of U.S. forces were sent to South Vietnam to train and advise South Vietnamese forces. More U.S. advisers were sent by JFK, but within a few years after LBJ became President he had turned the war into an American-led defense of South Vietnam against Communist guerrillas and regular North Vietnamese forces. LBJ decided that it was important for the U.S. to defeat a Communist country and stop Communism from spreading in Southeast Asia.

However, LBJ was never willing to commit enough forces in order to win the war. He allowed air attacks on North Vietnam, for example, but he wouldn’t invade North Vietnam because he was afraid that the Chinese Communists might enter the war. In other words, like Truman in Korea, LBJ was unwilling to do what it would take to win the war decisively. Progress was slow and there were a lot of American casualties from the fighting in South Vietnam. American newspapers and TV began to focus attention on the casualties and portray the war as a losing effort. That led a lot of Americans to turn against the war, and college students began to protest the war (because they didn’t want to be drafted). Attention shifted from the war to the protests, giving the world the impression that America had lost its resolve. And it had.

LBJ had become so unpopular because of the war in Vietnam that he decided not to run for President in 1968. Most of the candidates for President campaigned by saying that they would end the war. In effect, the United States had announced to North Vietnam that it would not fight the war to win. The inevitable outcome was the withdrawal of U.S. forces from Vietnam, which finally happened in 1973, under LBJ’s successor, Richard Nixon. South Vietnam was left on its own, and it fell to North Vietnam in 1975.

Richard Milhous Nixon (1913-1994) was a Republican. He won the election of 1968 by beating the Democrat candidate, Hubert H. Humphrey (who had been LBJ’s Vice President), and a third-party candidate, George C. Wallace. Nixon and Humphrey each received 43 percent of the popular vote; Wallace received 14 percent. If Wallace had not been a candidate, most of the votes cast for him probably would have been cast for Nixon.

Even though Nixon received less than half of the popular vote, he won the election because he received a majority of electoral votes. Electoral votes are awarded to the winner of each State’s popular vote. Nixon won a lot more States than Humphrey and Wallace, so Nixon became President.
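
As a rough illustration of the winner-take-all arithmetic described above, here is a minimal sketch (with invented state totals, not the actual 1968 returns) of how electoral votes are tallied:

```python
from collections import Counter

def electoral_count(states):
    """states: list of (electoral_votes, {candidate: popular_votes}) per State."""
    totals = Counter()
    for electoral_votes, popular in states:
        winner = max(popular, key=popular.get)  # a plurality carries the whole State
        totals[winner] += electoral_votes
    return totals

# Three imaginary States; the numbers are purely illustrative.
toy_states = [
    (10, {"Nixon": 510_000, "Humphrey": 490_000, "Wallace": 100_000}),
    (6,  {"Humphrey": 300_000, "Nixon": 290_000, "Wallace": 50_000}),
    (9,  {"Wallace": 400_000, "Nixon": 350_000, "Humphrey": 250_000}),
]
print(electoral_count(toy_states))
# Counter({'Nixon': 10, 'Wallace': 9, 'Humphrey': 6})
```

Even a narrow win within a State delivers all of that State’s electoral votes, which is how Nixon could win a clear electoral majority in 1968 with only 43 percent of the national popular vote.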

Nixon won re-election in 1972, with 61 percent of the popular vote, by beating a Democrat (George McGovern) who would have expanded LBJ’s Great Society and cut America’s armed forces even more than they were cut after the Vietnam War ended. Nixon’s victory was more a repudiation of McGovern than it was an endorsement of Nixon. His second term ended in disgrace when he resigned the presidency on August 9, 1974.

Nixon called himself a conservative, but he did nothing during his presidency to curb the power of government. He did not cut back on the Great Society. He spent a lot of time on foreign policy. But Nixon’s diplomatic efforts did nothing to make the USSR and Communist China friendlier to the United States. Nixon had shown that he was essentially a weak President by allowing U.S. forces to withdraw from Vietnam. Dictatorial rulers do not respect countries that display weakness.

Nixon was the first (and only) President who resigned from office. He resigned because the House of Representatives was ready to impeach him. An impeachment is like a criminal indictment; it is a set of charges against the holder of a public office. If Nixon had been impeached by the House of Representatives, he would have been tried by the Senate. If two-thirds of the Senators had voted to convict him he would have been removed from office. Nixon knew that he would be impeached and convicted, so he resigned.

The main charge against Nixon was that he ordered his staff to cover up his involvement in a crime that happened in 1972, when Nixon was running for re-election. The crime was a break-in at the headquarters of the Democratic Party in Washington, D.C. The purpose of the break-in was to obtain documents that might help Nixon’s re-election effort. The men who participated in the break-in were hired by aides to Nixon, and Nixon himself probably authorized the break-in. Nixon certainly authorized the effort to cover up the involvement of his aides in the break-in. All of the details about the break-in and Nixon’s involvement were revealed as a result of investigations by Congress, which were helped by reporters who were doing their own investigative work. Because the Democratic Party’s headquarters was located in the Watergate Building in Washington, D.C., this episode became known as the Watergate Scandal.

Gerald Rudolph Ford (1913-2006), who was Nixon’s Vice President at the time Nixon resigned, became President on August 9, 1974, and served until January 20, 1977. Ford had succeeded Spiro T. Agnew as Vice President; Agnew resigned on October 10, 1973, because he had been taking bribes while he was Governor of Maryland (the job he had before becoming Vice President).

Ford became the first Vice President chosen in accordance with the Twenty-Fifth Amendment to the Constitution. That amendment spells out procedures for filling vacancies in the presidency and vice presidency. When Vice President Agnew resigned, President Nixon nominated Ford as Vice President, and the nomination was approved by a majority vote of the House and Senate. Then, when Ford became President, he nominated Nelson Rockefeller to fill the vice presidency, and Rockefeller was likewise confirmed by the House and Senate.

Ford ran for re-election in 1976, but he was defeated by James Earl Carter, mainly because of the Watergate Scandal. Ford was not involved in the scandal, but voters often cast votes for silly reasons. Carter’s election was a rejection of Richard Nixon, who had left office two years earlier, not a vote of confidence in Carter.

James Earl (“Jimmy”) Carter (1924 – ), a Democrat who had been Governor of Georgia, received only 50 percent of the popular vote. He was defeated for re-election in 1980, so he served as President from January 20, 1977 to January 20, 1981.

Carter was an ineffective President who failed at the most important duty of a President, which is to protect Americans from foreign enemies. His failure came late in his term of office, during the Iran Hostage Crisis. The Shah of Iran had ruled the country for 38 years. He was overthrown in 1979 by a group of Muslim clerics (religious men) who disliked the Shah’s pro-American policies. In November 1979 a group of students loyal to the new Muslim government of Iran invaded the American embassy in Tehran (Iran’s capital city) and took 66 hostages. Carter approved rescue efforts, but they were poorly planned. The hostages were still captive by the time of the presidential election in 1980. Carter lost the election largely because of his feeble rescue efforts.

In recent years Carter has become an outspoken critic of America’s foreign policy. Carter is sympathetic to America’s enemies and he opposes strong military action in defense of America.

Ronald Wilson Reagan (1911-2004), a Republican, succeeded Jimmy Carter as President. Reagan won 51 percent of the popular vote in 1980. Reagan would have received more votes, but a former Republican (John Anderson) ran as a third-party candidate and took 7 percent of the popular vote. Reagan was re-elected in 1984 with 59 percent of the popular vote. He served as President from January 20, 1981, until January 20, 1989.

Reagan had two goals as President: to reduce the size of government and to increase America’s military strength. He was unable to reduce the size of government because, for most of his eight years in office, Democrats were in control of Congress. But Reagan was able to get Congress to approve large reductions in income-tax rates. Those reductions led to more spending on consumer goods and more investment in the creation of new businesses. As a result, Americans had more jobs and higher incomes.

Reagan succeeded in rebuilding America’s military strength. He knew that the only way to defeat the USSR, without going to war, was to show the USSR that the United States was stronger. A lot of people in the United States opposed spending more on military forces; they thought that it would cause the USSR to spend more. They also thought that a war between the U.S. and USSR would result. Reagan knew better. He knew that the USSR could not afford to keep up with the United States. Reagan was right. Not long after the end of his presidency, the countries of Eastern Europe saw that the USSR was really a weak country, and they began to break away from Soviet control. Residents of Berlin demolished the Berlin Wall, which had been erected in 1961 to keep East Berliners from crossing over into West Berlin. East Germany was freed from Communist rule, and it reunited with West Germany. The USSR collapsed, and many of the countries that had been part of the USSR became independent. We owe the end of the Soviet Union and its influence to President Reagan’s determination to defeat the threat posed by the Soviet Union.

George Herbert Walker Bush (1924 – ), a Republican, was Reagan’s Vice President. He won 54 percent of the popular vote when he defeated his Democrat opponent, Michael Dukakis, in the election of 1988. Bush lost the election of 1992. He served as President from January 20, 1989 to January 20, 1993.

The main event of Bush’s presidency was the Gulf War of 1990-1991. Iraq, whose ruler was Saddam Hussein, invaded the small neighboring country of Kuwait. Kuwait produces and exports a lot of oil. The occupation of Kuwait by Iraq meant that Saddam Hussein might have been able to control the amount of oil shipped to other countries, including Europe and the United States. If Hussein had been allowed to control Kuwait, he might have moved on to Saudi Arabia, which produces much more oil than Kuwait. President Bush asked Congress to approve military action against Iraq. Congress approved the action, although most Democrats voted against giving President Bush authority to defend Kuwait. The war ended in a quick defeat for Iraq’s armed forces. But President Bush decided not to allow U.S. forces to finish the job and end Saddam Hussein’s reign as ruler of Iraq.

Bush’s other major blunder was to raise taxes, which helped to cause a recession. The country was recovering from the recession in 1992, when Bush ran for re-election, but his opponents were able to convince voters that Bush hadn’t done enough to end the recession. In spite of his quick (but incomplete) victory in the Persian Gulf War, Bush lost his bid for re-election because voters were concerned about the state of the economy.

William Jefferson Clinton (1946 – ), a Democrat, defeated George H.W. Bush in the 1992 election by gaining a majority of the electoral vote. But Clinton won only 43 percent of the popular vote. Bush won 37 percent, and 19 percent went to H. Ross Perot, a third-party candidate who received many votes that probably would have been cast for Bush.

Clinton’s presidency got off to a bad start when he sent to Congress a proposal that would have put health care under government control. Congress rejected the plan, and a year later (in 1994) voters went to the polls in large numbers to elect Republican majorities to the House and Senate.

Clinton was able to win re-election in 1996, but he received only 49 percent of the popular vote. He was re-elected mainly because fewer Americans were out of work and incomes were rising. This economic “boom” was a continuation of the recovery that began under President Reagan. Clinton got credit for the “boom” of the 1990s, which occurred in spite of tax increases passed by Congress while it was still controlled by Democrats.

Clinton was perceived as a “moderate” Democrat because he tried to balance the government’s budget; that is, he tried not to spend more money than the government was receiving in taxes. He was eventually able to balance the budget, but only because he cut defense spending. In addition, Clinton made several bad decisions about defense issues. In 1993 he withdrew American troops from Somalia instead of continuing with the military mission there after some troops were captured and killed by Somali militiamen. In 1994 he signed an agreement with North Korea that was supposed to keep North Korea from developing nuclear weapons, but the North Koreans continued to work on building nuclear weapons because they had fooled Clinton. By 1998 Clinton knew that al Qaeda had become a major threat when terrorists bombed two U.S. embassies in Africa, but Clinton failed to go to war against al Qaeda. Only after terrorists struck a Navy ship, the USS Cole, in 2000 did Clinton declare terrorism to be a major threat. By then, his term of office was almost over.

Clinton was the second President to be impeached. The House of Representatives impeached him in 1998. He was charged with perjury (lying under oath) committed while he was the defendant (the person accused of wrong-doing) in a lawsuit. The Senate didn’t convict Clinton because every Democrat senator refused to vote for conviction, in spite of overwhelming evidence that Clinton was guilty. The day before Clinton left office he acknowledged his guilt by agreeing to a five-year suspension of his law license. A federal judge later found Clinton guilty of contempt of court for his misleading testimony and fined him $90,000.

Clinton was involved in other scandals during his presidency, but he remains popular with many people because he is good at giving the false impression that he is a nice, humble person.

Clinton’s scandals had more effect on his Vice President, Al Gore, who ran for President as the nominee of the Democrat Party in 2000. His main opponent was George W. Bush, a Republican. A third-party candidate named Ralph Nader also received a lot of votes. The election of 2000 was the closest presidential election since 1876. Bush and Gore each won 48 percent of the popular vote; Nader won 3 percent. The winner of the election was decided by the outcome of the vote in Florida. That outcome was the subject of legal proceedings for six weeks and finally had to be decided by the U.S. Supreme Court.

Initial returns in Florida gave that State’s electoral votes to Bush, which meant that he would become President. But the Supreme Court of Florida decided that election officials should violate Florida’s election laws and keep recounting the ballots in certain counties. Those counties were selected because they had more Democrats than Republicans, and so it was likely that recounts would favor Gore, the Democrat. The case finally went to the U.S. Supreme Court, which decided that the Florida Supreme Court was wrong. The U.S. Supreme Court ordered an end to the recounts, and Bush was declared the winner of Florida’s electoral votes.

George Walker Bush (1946 – ), a Republican, is the second son of a President to become President. (The first was John Quincy Adams, the sixth President, whose father, John Adams, was the second President. Also, Benjamin Harrison, the 23rd President, was the grandson of William Henry Harrison, the ninth President.) Bush won re-election in 2004, with 51 percent of the popular vote. He has served as President since January 20, 2001.

President Bush’s major accomplishment before September 11, 2001, was to get Congress to cut taxes. The tax cuts were necessary because the economy had been in a recession since 2000. The tax cuts gave people more money to spend and encouraged businesses to expand and create new jobs. The economy has improved a lot because of President Bush’s tax cuts.

The terrorist attacks on September 11, 2001, caused President Bush to give most of his time and attention to the War on Terror. The invasion of Afghanistan, late in 2001, was part of a larger campaign to disrupt terrorist activities. Afghanistan was ruled by the Taliban, a group that gave support and shelter to al Qaeda terrorists. The U.S. quickly defeated the Taliban and destroyed al Qaeda bases in Afghanistan.

The invasion of Iraq, which took place in 2003, was also intended to combat al Qaeda, but in a different way. Iraq, under Saddam Hussein, had been an enemy of the U.S. since the Persian Gulf War of 1990-1991. Hussein was trying to acquire deadly weapons to use against the U.S. and its allies. Hussein was also giving money to terrorists and sheltering them in Iraq. The defeat of Hussein, which came quickly after the invasion of Iraq, was intended to establish a stable, friendly government in the Middle East. It would serve as a base from which U.S. forces could operate against Middle Eastern governments that shelter terrorists, and it would serve as a model for other Middle Eastern countries, many of which are dictatorships.

The invasion of Iraq has produced some of the intended results, but there is much unrest there because of long-standing animosity between Sunni Muslims and Shi’a Muslims. There is also much defeatist talk about Iraq — especially by Democrats and the media. That defeatist talk helps to encourage those who are creating unrest in Iraq. It gives them hope that the U.S. will abandon Iraq, just as it abandoned Vietnam more than 30 years earlier.

UPDATE (12/02/07): The final three paragraphs about the War in Iraq are slightly dated, though their thrust is correct. For further reading about Saddam’s aims and his ties to Al Qaeda, go to my “Resources” page and scroll to the heading “War and Peace.”

Regarding defeatist talk by Democrats and the media, I note especially a recent post at Wolf Howling, “Have Our Copperheads Found Their McClellan in Retired LTG Sanchez?” The author writes:

Several commentators have noted the similarity between our modern day Democrats and the Copperheads of the Civil War. The Copperheads were the virulently anti-war wing that took control of the Democratic party in the 1860’s. Their rhetoric of the day reads like a modern press release from our Democratic Party leadership. Their central meme was that the Civil War was unwinnable and should be concluded….

At the[ir] convention [in 1864], the Democrats nominated retired General George B. McClellan for President. Lincoln had chosen McClellan to command the Union Army in 1861 and then assigned him to command the Army of the Potomac. Lincoln subsequently relieved McClellan of command in 1862 for his less than stellar performance on the battlefield. McClellan became a bitter and vocal opponent of Lincoln, harshly critical of Lincoln’s prosecution of the war. McClellan and the Copperheads maintained that meme even as the facts on the ground changed drastically with victories by General Sherman in Atlanta and General Sheridan in the Shenandoah Valley.

Thus it is not hard to see in McClellan many parallels to retired Lt. Gen. Ricardo Sanchez, the one time top commander in Iraq. Sanchez held the top military position in Iraq during the year after the fall of the Hussein regime, when the insurgency took root and the Abu Ghraib scandal came to light. His was not a successful command and his remarks since show a bitter man.

There’s more about contemporary Copperheads in these posts:

Shall We All Hang Separately?
Foxhole Rats
Foxhole Rats, Redux
Know Thine Enemy
The Faces of Appeasement
Whose Liberties Are We Fighting For?
Words for the Unwise
More Foxhole Rats
Moussaoui and “White Guilt”
The New York Times: A Hot-Bed of Post-Americanism
Post-Americans and Their Progeny
“Peace for Our Time”
Anti-Bush or Pro-Treason?
Parsing Peace