Fascism

David N. Mayer (MayerBlog) parses “fascism.” He uses the term

in its broadest sense, as a political philosophy holding among its essential precepts the claims that individuals have no inherent rights, and that their interests are subordinate to, and therefore may be sacrificed for the sake of, the presumed collective good, whatever it’s called – “society,” “the race,” “the state,” the “Volk,” “the nation,” “the people,” “the proletariat,” “the common good,” or “the public interest.” Purists may object that what I’m really calling “fascism” would be more properly termed collectivism, and that my use of the term fascism is not only historically incorrect but also deliberately provocative – and to a great extent, they’d be right. In defending my use of the term, however, I’d note that as originally coined by Benito Mussolini, the fascist dictator of 1930s Italy, the term referred to the fasces, the bundle of rods wrapped around an axe carried by the lictors who guarded government officials in ancient Rome, where it symbolized the sovereign authority of the state. In this original sense of the term, fascism thus is roughly the equivalent of “statism,” the form of collectivism in which the entity known as “the state” holds the highest political authority in society…. I have an additional justification for using the term fascism. Notwithstanding the arguments of political scientists – who would distinguish fascism from other collectivist –isms such as communism, socialism, or national socialism (Nazism) – these distinctions are really irrelevant because all these forms of collectivism are equally pernicious to, and destructive of, individual rights and freedom. Leftists like to use the terms fascism or fascist as pejoratives because they naively believe that socialism is somehow less evil than collectivism of “the right” – that the murder of millions of people killed by Lenin and Stalin in the Soviet Union, by Mao in Red China, or by Pol Pot in communist Cambodia somehow was less evil than the murder of millions of people killed by Hitler’s regime in Nazi Germany or Mussolini’s regime in fascist Italy. Leftists have no legitimate claim on the truth, and neither do they have any monopoly on use of the terms fascism or fascist as pejoratives.

Mayer, in a typically long post at his excellent blog, goes on to tackle the

“Four Fascisms” of 2008 … : (1) Eco-Fascism, the tyranny of radical environmentalists, including the global-warming hoax and other myths propagated by “green” activists as a rationale for imposing their agenda on us by force; (2) Nanny-State Fascism, the tyranny of the health police, who seek to turn everyone into wards of the state, including the movement pushing for “universal” health care – that is, government monopolization of the health care industry (what used to be called, and still is, socialized medicine); (3) Demopublican/ Replicrat Fascism, the tyranny of the two-party political system in the United States, particularly dangerous in 2008 as an election year; and last, (4) Islamo-Fascism, the danger of militant, fundamentalist Islam to the United States and the rest of the civilized world.

Go there and read. All of it. You may not agree with Mayer in every detail (I don’t), but he aims at the right targets and hits them hard.

Related posts:
“FDR and Fascism” (20 Sep 2007)
“A Political Compass: Locating the United States” (13 Nov 2007)
“The Modern Presidency: A Tour of American History since 1900” (01 Dec 2007)

Cell Phones and Driving, Once More: Addendum

This is an addendum to “Cell Phones and Driving, Once More,” at Liberty Corner. In that post, I dispense with the attempt by Saurabh Bhargava and Vikram Pathania (B&P) to disprove the well-established causal link between cell-phone use and traffic accidents through a poorly specified time-series analysis. (Their paper is “Driving Under the (Cellular) Influence: The Link Between Cell Phone Use and Vehicle Crashes,” AEI-Brookings Joint Center for Regulatory Studies, Working Paper 07-15, July 2007.) The question I address here is whether it is possible to quantify that link through time-series analysis.

Coming directly to the point, a rigorously quantitative time-series analysis is impossible because (a) some of the relevant variables cannot be quantified — item by item, along a common dimension — and (b) others are strongly correlated with each other.

The relevant variables that cannot be quantified properly are improvements in the design of automobiles and the streets and highways on which they travel. There simply have been too many different improvements over too long a period of time, during which other significant (and correlated) changes have taken place. There can be no doubt that the design of automobiles has evolved toward greater safety almost since their initial production in the 1890s. What were flimsy, open-bodied carriages with no protection for their occupants are now reinforced, air-bag- and shoulder-harness-equipped juggernauts with safety glass, power brakes, and power steering. In parallel, city streets have evolved from unmarked, uncontrolled, unlighted buggy routes to comparatively broad, well-controlled, well-lighted avenues; and highways have evolved from rutted, dirt wagon tracks to comparatively smooth, wide, controlled-access expressways. Only the combined, long-term effects of design improvements on traffic safety can be seen, in the aggregate statistics to which I will come.

Relevant variables that are strongly correlated with each other are traffic fatalities per 100 million vehicle-miles (the dependent variable in this analysis); the proportion of young adults in the population, as measured by the percentage of persons 15-24 years old; the incidence of alcohol consumption, as measured in gallons of ethanol per year; per capita cell-phone use (in average monthly minutes); and the passage of time (measured in years), which is a proxy for improvements in the safety of motor vehicles. Here are the cross-correlations among those variables for the period 1970-2005 (1970 being the earliest year for which I have data on alcohol consumption):

              Fatalities    15-24    Alcohol    Cell phone     Year
Fatalities                  0.884     0.799       -0.466      -0.954
15-24            0.884                0.963       -0.429      -0.918
Alcohol          0.799      0.963                 -0.500      -0.885
Cell phone      -0.466     -0.429    -0.500                    0.644
Year            -0.954     -0.918    -0.885       0.644

(The endnote to this post gives the sources for the various statistics discussed and presented in this analysis.)
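
For readers who want to reproduce a table like this, the cross-correlations are a one-liner in pandas. Here is a minimal sketch, assuming a hypothetical file, traffic_vars.csv, with one row per year (1970-2005) and the illustrative column names shown; the file and names are stand-ins, not my actual worksheet.

```python
# A cross-correlation matrix like the one above, via pandas.
# "traffic_vars.csv" and the column names are hypothetical stand-ins,
# with one row per year (1970-2005).
import pandas as pd

df = pd.read_csv("traffic_vars.csv")
cols = [
    "fatalities_per_100m_vmt",   # dependent variable
    "pct_15_24",                 # percent of population aged 15-24
    "ethanol_gal_per_capita",    # annual per capita alcohol consumption
    "cell_minutes_per_capita",   # average monthly cell-phone minutes
    "year",
]
print(df[cols].corr().round(3))  # pairwise Pearson correlations
```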

Obviously, given the strong correlations among the percentage of persons aged 15-24, per capita alcohol consumption, and year, only one of those three variables can meaningfully be included in a regression on the dependent variable, fatalities per 100 million vehicle-miles. Year is the obvious choice, in that it accounts not only for the percentage of 15-24 year olds and alcohol consumption, but also for improvements in the design of motor vehicles and highways.

That cell-phone use is negatively correlated with the fatality rate is merely an artifact of the general decline in the fatality rate, which began long before cell phones came into use. Similarly, the negative correlation between the percentage of 15-24 year olds and the volume of cell-phone use is an artifact of the trends prevailing during 1970-2005: a general decline in the percentage of 15-24 year olds (after 1977), accompanied by a swelling tide of cell-phone use.

Regression analysis illustrates these points. First, I used year as the sole explanatory variable. Despite the high R-squared of the regression (0.911), it lacks nuance; graphically, it is a straight line that bisects the meandering, downward curve of the fatality rate (see below). Introducing 15-24 year olds and/or alcohol consumption into the regression would yield a better fit, but because those variables are so strongly correlated with time (and one another) their signs are either intuitively incorrect or their coefficients are statistically insignificant. (This is true for 15-24 year olds, even when the regression covers 1957-2005, the period for which I have data for the percentage of 15-24 year olds.)

Adding cell-phone use to year results in a better fit (R-squared = 0.948), and the coefficient for cell-phone use squares with the results of valid studies (i.e., it is significant and positive). But because of the exclusion of 15-24 year olds and alcohol consumption, cell-phone use carries too much weight. Here is the equation:

Annual traffic fatalities per 100 million vehicle-miles =
211.255
– (0.105 x year)
+ (0.0022 x number of cell-phone minutes/month/capita in a year)

The t-values of the intercept and coefficients are 21.847, -21.565, and 4.886, respectively (all significant at the 0.99 level). The adjusted R-squared of the equation is 0.945. The mean values of the dependent and explanatory variables are 2.52, 1987.5, and 50.602, respectively. The ratio of the standard error of the estimate (0.232) to the mean of the dependent variable (2.52) is 0.092. The equation is significant at the 0.99 level.
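
Both regressions can be refit with standard tools. A minimal sketch using statsmodels, continuing the hypothetical traffic_vars.csv layout from the snippet above; the coefficients and t-values reported here will emerge only from the underlying data.

```python
# Refitting the two regressions described above with statsmodels.
# Continues the hypothetical traffic_vars.csv layout; the reported
# coefficients and t-values depend on the underlying data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("traffic_vars.csv")
y = df["fatalities_per_100m_vmt"]

# Model 1: year alone (R-squared of about 0.911, per the text)
m1 = sm.OLS(y, sm.add_constant(df[["year"]])).fit()

# Model 2: year plus cell-phone minutes (R-squared of about 0.948)
m2 = sm.OLS(y, sm.add_constant(df[["year", "cell_minutes_per_capita"]])).fit()

print(m1.rsquared, m2.rsquared)
print(m2.params)   # intercept, year, and cell-phone coefficients
print(m2.tvalues)  # compare with the t-values reported above
```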

This equation, when viewed graphically, loses its charm:

It is obvious that the variable for cell-phone use carries too much weight; it over-explains the fatality rate. According to the equation, in 2005, when monthly cell-phone use had ballooned to more than 500 minutes per American, almost 80 percent of traffic fatalities were caused by cell-phone use. That’s an absurd result: an artifact of the difficulty of statistically analyzing traffic fatalities when key variables (time, 15-24 year olds, and alcohol consumption) are strongly correlated. I have no doubt that cell-phone use contributes much to traffic accidents and fatalities (see main post), but not as much as the equation suggests.
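
The arithmetic behind that “almost 80 percent” figure is easy to check. A sketch, assuming a round 508 monthly minutes per capita (the text says only “more than 500”) and the actual 2005 rate of 1.45 fatalities per 100 million vehicle-miles (see the bullets below):

```python
# Checking the "almost 80 percent" claim implied by the equation above.
# 508 monthly minutes is an assumed round number ("more than 500" in the
# text); 1.45 is the actual 2005 rate per 100 million vehicle-miles.
cell_coef = 0.0022            # the equation's cell-phone coefficient
cell_term = cell_coef * 508   # ~1.12 fatalities per 100 million vehicle-miles
share = cell_term / 1.45      # share of the actual 2005 fatality rate
print(f"{share:.0%}")         # ~77%, i.e., "almost 80 percent"
```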

A more meaningful relationship is found in the strong, positive correlation (0.973) between cell-phone use and the portion of traffic fatalities that the passage of time fails to account for after 1998, that is, where the blue line crosses below the black line in the graph above. (Similarly, the “hump” in the black line that occurs around 1980, and the declivities that precede and follow it, can be attributed to the rise and fall of the population of 15-24 year olds and the consumption of alcohol.)
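
That residual correlation can be computed directly: fit the year-only trend, take the residuals for the years after 1998, and correlate them with cell-phone use. A sketch, again using the hypothetical file and columns from the earlier snippets:

```python
# The residual comparison described above: fit the year-only trend,
# then correlate post-1998 residuals with cell-phone use. Reuses the
# hypothetical traffic_vars.csv layout from the earlier snippets.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("traffic_vars.csv")
trend = sm.OLS(df["fatalities_per_100m_vmt"],
               sm.add_constant(df[["year"]])).fit()

late = df["year"] > 1998   # the years after 1998
resid = trend.resid[late]  # actual rate minus trend-predicted rate
print(resid.corr(df.loc[late, "cell_minutes_per_capita"]))  # ~0.97 per the text
```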

It’s time to pull back and look at the big picture. The rate of traffic fatalities has been declining for a long time, owing mainly to improvements in the design of autos and highways. Thus:

Even though a meaningful time-series analysis of traffic fatalities is impossible, it is possible to interpret broadly the history of traffic fatalities since 1900. The first thing to note, of course, is the strong negative relationship between the fatality rate and time, which is a proxy for the kinds of improvements in automobile and highway safety that I mentioned earlier. Those improvements obviously predate the ascendancy of Ralph Nader’s Unsafe at Any Speed (1965), and the ensuing hysteria about automobile safety. Consumers had, for a long time, been demanding — and getting — safer (and more reliable) automobiles. The market works, when you allow it to do its job.

The initial decline in the fatality rate, after 1909, marks the transition from open-sided, unenclosed, buggy-like conveyances to cars with closed sides and metal roofs. Improvements in highway design must have helped, too. Ironically, the drop in the fatality rate became more pronounced after the onset of Prohibition in 1920. It leveled off a bit in the late 1920s, when the “reckless youth of the Jazz Age” came to the fore, equipped with cars and bootleg gin. The rate then spiked at the (official) end of Prohibition (1933), suggesting that that ignoble experiment had some effect on Americans’ drinking habits. The slight bulge during World War II reflects the increasing unreliability of autos then in use; relatively few Americans could afford new cars during the Depression, and new cars weren’t built during the war. The vigorous descent of the fatality rate from 1945 to the early 1960s captures the effects of (a) the resumption of auto production after WWII and (b) continued improvements in auto and highway design. Later bulges and dips in the fatality rate can be traced to the influence of a growing, then declining, population of young adults and the (presumably related) rise and fall in per capita alcohol consumption. Then, along came the cell-phone eruption, with its tidal wave of inattentive drivers, as impaired as if they had been drinking. (The prospect of encountering a cell-phone-using drunk driver is frightening.)

Here are some observations and predictions:

  • In the 48 years from 1909 to 1957 — when the Interstate Highway System was in its infancy and eight years before Nader published Unsafe at Any Speed — the fatality rate dropped from 45.33 to 5.73 fatalities per 100 million vehicle-miles. That’s 39.6 fewer fatalities per 100 million vehicle-miles, a drop of 87 percent.
  • In the 48 years from 1957 to 2005 — the era of federalization — the fatality rate dropped to 1.45 fatalities per 100 million vehicle-miles. That’s 4.28 fewer fatalities per 100 million vehicle-miles, a drop of about 75 percent. The smaller absolute and relative decline during these 48 years than in the preceding ones can be explained, in part, by the Peltzman effect (discussed below).
  • Traffic fatalities will continue to drop at about the same rate, whether or not cell-phone bans are widely adopted and enforced. Why? Because technology will save the day. Moore’s law (a description of the declining cost of computing technology) will lead to cheap, reliable, sensor-controlled warning, steering, and braking systems.
  • But the already low fatality rate can’t go much lower, in absolute terms. It may drop another 70 to 80 percent in the next 48 years, from about 1.5 to about 0.3.

I now come to the Peltzman effect: “the hypothesized tendency of people to react to a safety regulation by increasing other risky behavior, offsetting some or all of the benefit of the regulation.” The effect is named after Sam Peltzman, a professor of economics at the University of Chicago, who in the 1970s originated the theory of offsetting behavior. Peltzman, writing in 2004, had this to say:

A recent article [here] by Alma Cohen and Liran Einav (2003) on the effects of mandatory seatbelt use laws … shares with most such studies the crucial bottom line: The real-world effect of these laws on highway mortality is substantially less than it should be if there was no offsetting behavior. [Cohen and Einav] conclude that the increased belt usage occasioned by these laws should, in the absence of any behavioral response, have saved more than three times as many lives as were in fact saved.

Equally important, this kind of “regulatory failure” does not arise because the engineers at NHTSA are wrong about the effectiveness of the devices they prescribe. Most studies show that, if you are involved in a serious accident, you are much better off buckled than not and with an air bag rather than without. The auto safety literature attributes the shortfall, either implicitly or explicitly, to an offsetting increase in the likelihood of a serious accident.

Imagine the lives that would have been saved without the “help” of the Naderites of this world.
__________
SOURCES

Fatality Rates. These are from the Statistical Abstract of the United States (online version), Table HS-41, Transportation Indicators for Motor Vehicles and Airlines: 1900 to 2001, and Table 1071, Motor Vehicle Accidents–Number and Deaths: 1980 to 2005.

Population aged 15-24. The numbers of persons aged 15-24 are from the Statistical Abstract, Table HS-3, Population by Age: 1900 to 2002, and Table 7, Resident Population by Age and Sex: 1980 to 2006. The same tables give total population, which I used to compute the percentage of the population aged 15-24.

Alcohol consumption. Estimates of annual, per capita consumption for 1970-2005 are from Per capita ethanol consumption for States, census regions, and the United States, 1970–2005 (National Institute on Alcohol Abuse and Alcoholism).

Per capita cell-phone use. I derived monthly cell-phone use, by year, from Trends in Telephone Service, February 2007 (Wireline Competition Bureau, Industry Analysis and Technology Division, Federal Communications Commission). I obtained total monthly cell-phone usage by multiplying the December values for the number of subscribers, given in tables 11-1 and 11-3, by the average number of minutes of use per month, given in table 11-3. The values for monthly minutes begin with 1993, so I estimated the values for 1984-92 by using the average of the values for 1993-98. To estimate per capita use, I divided total monthly minutes by the population of the U.S. (see above).
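
In other words, each year’s figure is (December subscribers × average monthly minutes per subscriber) ÷ total U.S. population. A minimal sketch of that computation, with placeholder magnitudes rather than actual FCC figures:

```python
# The per capita cell-phone measure described above:
# (December subscribers x average monthly minutes per subscriber) / population.
# The magnitudes below are placeholders, not actual FCC figures.
def per_capita_minutes(subscribers: float, avg_minutes: float,
                       population: float) -> float:
    """Average monthly cell-phone minutes per U.S. resident for one year."""
    return subscribers * avg_minutes / population

print(per_capita_minutes(2.0e8, 750.0, 2.96e8))  # ~507 minutes per capita
```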

Back to the Drawing Board: Reflections on Architecture

Guest post:

A recent exhibit at the Library of Virginia, Never Built Virginia (January 11 – May 21, 2008), documents architectural designs that never made it off the drawing board. Ranging from prosaic 19th-century churches to ugly modern high rises, the designs form an interesting cultural and aesthetic chronicle. A few items stand out, like the magnificent Greco-Roman concept for the Library of Virginia, proposed in the 1930s. Unfortunately, it was shelved in favor of a drab art deco structure (not the best specimen of that style) when the library was rebuilt in 1940. The state library has since been relocated to a retro-modern, and not totally ungraceful, building just down the street.

Not all modernism is bad, but a little bit goes a long way. And when we are told that “Virginia’s deep-rooted traditionalism doomed many [architectural] schemes,” we can be glad. Looking at plans from a few decades ago for the James River area—angular, massive poured-concrete structures—one is thankful that development was postponed until the recent neo-classical revival, when most of the buildings being put up exhibit tasteful Georgian lines to match the historic downtown.

One of the architects highlighted in Never Built Virginia is Haigh Jamgochian, a 1960s disciple of hyper-modernism. That he is a misanthropic recluse who has made a career (like so many modern “creative” people) by not actually doing anything seems appropriate. Admittedly his drawings and models are curious to look at, like the whimsical futurist predictions of old science fiction movies. Jamgochian cites the original Star Trek show as an early influence. But the minute you actually throw up such edifices on real streets, amidst venerable brick, stone and stucco structures, the effect is monstrous. Jamgochian was not very successful in selling his designs, but there are still plenty of disasters from the ’60s and ’70s scattered about Richmond to damage the landscape. Fortunately, Richmond is an established East Coast city, and enough of its traditional buildings have survived to maintain its character.

Perhaps the most that can be said for classic modernism is its symmetry. Of course symmetry is not enough to make a good building, but it is impossible to imagine good design without it. In that respect postmodernism, with its chaotic fragmentation, is a further step in the direction of artistic decay: even traditional elements are haphazardly plundered, much as the barbarians of the Dark Ages appropriated bits and pieces of handsome temples and palaces to construct their poorly made hovels. The effect is to evoke not so much admiration as pity.

On This Date

Wikipedia has several lists of events associated with January 21.

Quotation of the Day

Mark Steyn quotes Arnold Toynbee’s A Study of History:

Civilizations die from suicide, not murder.

Precisely.

There are valid, libertarian reasons not to accept everything that is claimed to be a libertarian cause (e.g., sodomistic “marriage,” abortion on demand, and absolute freedom of speech). Those reasons are libertarian in that they go to the foundation of liberty, which can exist only in a civil society founded on the mutual respect, trust, and restraint that arise from the observance of socially evolved norms. The undoing of those norms by the state in the name of liberty is a form of civilizational suicide.

Related posts:
“Rights and Liberty” (12 Dec 2007)
“Optimality, Liberty, and the Golden Rule” (18 Dec 2007)

An FDR Reader

Thanks to John Ray for bringing these items to my attention:

“How FDR Made the Depression Worse,” by Robert Higgs (Feb 1995)
“Tough Questions for Defenders of the New Deal,” by Jim Powell (06 Nov 2003)
“The Real Deal,” by Amity Shlaes (25 Jun 2007)

Related posts at Liberty Corner include:

“Getting it Perfect” (04 May 2004)
“The Economic Consequences of Liberty” and an addendum, “The Destruction of Income and Wealth by the State” (01 Jan 2005)
“Calling a Nazi a Nazi” (12 Mar 2006)
“Things to Come” (27 Jun 2007)
“FDR and Fascism” (30 Sep 2007)
“A Political Compass: Locating the United States” (13 Nov 2007)
“The Modern Presidency: A Tour of American History since 1900” (01 Dec 2007)

Our descent into statism didn’t begin with FDR. (His cousin Teddy got the ball rolling downhill.) But FDR compounded an economic crisis, then exploited it to put us firmly on the path to the nanny state. The rest, as they say, is history.

Thus we now have a “compassionate conservative” as president, and several “Republican” candidates for president who would have been comfortable as New Deal Democrats. Calvin Coolidge must be spinning in his grave at hypersonic speed.

History Lessons

The following is adapted from an introduction that I wrote almost three years ago for “The Modern Presidency: A Tour of American History since 1900,” in its original incarnation.

Chief among the lessons of American history since 1900 is the price we have paid for allowing government to become so powerful. Most Americans today take for granted a degree of government involvement in their lives that would have shocked the Americans of 1900. The growth of governmental power has undermined the voluntary social institutions upon which civil society depends for orderly evolution: family, church, club, and community. The results are evident in the incidence of crime, broken homes, and drug use; in the resort to sex, violence, sensationalism, and banality as modes of entertainment; and generally in the social fragmentation and alienation that beset Americans — in spite of their prosperity.

The other edge of the governmental sword is interference in economic affairs through taxation and regulation. Such interference, which has grown exponentially since the early 1900s, has blunted Americans’ incentives to work hard, invent, innovate, and create new businesses. The result is that Americans — as prosperous as they are — are far less prosperous than they would be had they not ceded so much economic power to government.

Because of the growth of governmental power, much of the freedom that attends Americans’ prosperity is largely illusory: Americans actually have less freedom than they used to have — and much less freedom than envisioned by the founding generation that fought for America’s independence and wrote its Constitution. I am referring not to the imagined excesses of the current administration, which is vigorously and constitutionally defending American citizens against foreign predators. I am referring to such real things as:

  • the diminution of free speech in the name of campaign-finance “reform”
  • the denial of property rights, the right to work, and freedom of association for the sake of racial and sexual “equality”
  • the seizure of private property for private use in the name of “economic development”
  • the interference of government in almost every aspect of commerce, from deciding what may and may not be produced to how it must be produced, advertised, and sold — all to ensure that we do not make mistakes from which we can learn and profit
  • exorbitant taxation at every level of government, which denies those persons who have earned money lawfully the right to decide how to use it lawfully and gives that money, instead, to parasites in and out of government.

Those are the kinds of abuses of governmental power that Americans have acquiesced in — and even clamored for. It is those abuses that should outrage politicians and pundits — and the masses who swallow their distortions and their socialistic agenda.

For a detailed analysis, rich with links to supporting posts and articles, see “A Political Compass: Locating the United States.”

The Modern Presidency: A Tour of American History since 1900

This post traces, through America’s presidencies from the first Roosevelt to the second Bush, the main themes of American history since the turn of the twentieth century. This is a companion-piece to “Presidential Legacies.” The didactic style of the present post reflects its original purpose: to give my grandchildren some insights into American history that aren’t found in standard textbooks.

Theodore Roosevelt (1858-1919) was elected Vice President as a Republican in 1900, when William McKinley was elected to a second term as President. Roosevelt became President when McKinley was assassinated in September 1901. Roosevelt was re-elected President in 1904. He served almost two full terms as President, from September 14, 1901, to March 4, 1909. (Before 1937, a President’s term of office began on March 4 of the year following his election to office.)

Roosevelt was an “activist” President. He used what he called the “bully pulpit” of the presidency to gain popular support for programs that exceeded the limits set in the Constitution. He was especially willing to use the power of government to regulate business and to break up companies that had become successful by offering products that consumers wanted. Roosevelt was typical of politicians who inherited a lot of money and didn’t understand how successful businesses provided jobs and useful products for less-wealthy Americans.

Roosevelt was more like the Democrat Presidents of the Twentieth Century. He did not like the “weak” government envisioned by the authors of the Constitution. The authors of the Constitution designed a government that would allow people to decide how to live their own lives (as long as they didn’t hurt other people) and to run their own businesses as they wished to (as long as they didn’t cheat other people). The authors of the Constitution thought government should exist only to protect people from criminals and foreign enemies.

William Howard Taft (1857-1930), a close friend of Theodore Roosevelt, served as President from March 4, 1909, to March 4, 1913. Taft ran for the presidency as a Republican in 1908 with Roosevelt’s support. But Taft didn’t carry out Roosevelt’s anti-business agenda aggressively enough to suit Roosevelt. So, in 1912, when Taft ran for re-election as a Republican, Roosevelt ran for election as a Progressive (a newly formed political party). Many Republican voters decided to vote for Roosevelt instead of Taft. The result was that a Democrat, Woodrow Wilson, won the most electoral votes. Although Taft was defeated for re-election, he later became Chief Justice of the United States, making him the only person ever to have served as head of the executive and judicial branches of the U.S. Government.

Thomas Woodrow Wilson (1856-1924) served as President from March 4, 1913, to March 4, 1921. (Wilson didn’t use his first name, and was known officially as Woodrow Wilson.) Wilson is the only President to have earned the degree of doctor of philosophy. Wilson’s field of study was political science, and he had many ideas about how to make government “better.” But “better” government, to Wilson, was “strong” government of the kind favored by Theodore Roosevelt.

Wilson was re-elected in 1916 because he promised to keep the United States out of World War I, which had begun in 1914. But Wilson changed his mind in 1917 and asked Congress to declare war on Germany. After the war, Wilson tried to get the United States to join the League of Nations, an international organization that was supposed to prevent future wars by having nations assemble to discuss their differences. The U.S. Senate, which must approve America’s membership in international organizations, refused to approve joining the League of Nations. The League did not succeed in preventing future wars because wars are started by leaders who don’t want to discuss their differences with other nations.

Warren Gamaliel Harding (1865-1923), a Republican, was elected in 1920 and inaugurated on March 4, 1921. Harding asked voters to reject the kind of government favored by Democrats, and voters gave Harding what is known as a “landslide” victory; he received 60 percent of the votes cast in the 1920 election for president, one of the highest percentages ever recorded. Harding’s administration was about to become involved in a major scandal when Harding died suddenly on August 3, 1923, while he was on a trip to the West Coast. The exact cause of Harding’s death is unknown, but he may have had a stroke when he learned of the impending scandal, which involved Albert Fall, Secretary of the Interior. Fall had secretly allowed some of his business associates to lease government land for oil-drilling, in return for personal loans.

There were a few other scandals, but Harding probably had nothing to do with any of them. Because of the scandals, most historians say that they consider Harding to have been a poor President. But that isn’t the real reason for their dislike of Harding. Most historians, like most college professors, favor “strong” government. Historians don’t like Harding because he didn’t use the power of government to interfere in the nation’s economy. An important result of Harding’s policy (called laissez-faire, or “hands off”) was high employment and increasing prosperity during the 1920s.

John Calvin Coolidge (1872-1933), who was Harding’s Vice President, became President upon Harding’s death in 1923. (Coolidge didn’t use his first name, and was known as Calvin.) Coolidge was elected President in 1924. He served as President from August 3, 1923, to March 4, 1929. Coolidge continued Harding’s policy of not interfering in the economy, and people continued to become more prosperous as businesses grew and hired more people and paid them higher wages. Coolidge was known as “Silent Cal” because he was a man of few words. He said only what was necessary for him to say, and he meant what he said. That was in keeping with his approach to the presidency. He was not the “activist” that reporters and historians like to see in the presidency; he simply did the job required of him by the Constitution, which was to execute the laws of the United States. Coolidge chose not to run for re-election in 1928, even though he was quite popular.

Herbert Clark Hoover (1874-1964), a Republican who had been Secretary of Commerce under Coolidge, was elected to the presidency in 1928. Hoover won 58 percent of the popular vote, an endorsement of the hands-off policy of Harding and Coolidge. Hoover’s administration is known mostly for the huge drop in the price of stocks (shares of corporations, which are bought and sold in places known as stock exchanges), and for the Great Depression that was caused partly by the “Crash” — as it became known. The rate of unemployment (the percentage of American workers without jobs) rose from 3 percent just before the Crash to 25 percent by 1933, at the depth of the Great Depression.

The Crash grew out of speculation. The prices of shares in businesses (called stocks) began to rise sharply in the late 1920s. That caused many persons to borrow money in order to buy stocks, in the hope that the price of stocks would continue to rise. If the price of stocks continued to rise, buyers could sell their stocks at a profit and repay the money they had borrowed. But when stock prices got very high in the fall of 1929, some buyers began to worry that prices would fall, so they began to sell their stocks. That drove down the price of stocks, and caused more buyers to sell in the hope of getting out of the stock market before prices fell further. But prices went down so quickly that almost everyone who owned stocks lost money. Prices of stocks kept going down. By 1933, many stocks had become worthless and most stocks were selling for only a small fraction of the prices that they had sold for before the Crash.

Because so many people had borrowed money to buy stocks, they went broke when stock prices dropped. When they went broke, they were unable to pay their other debts. That had a ripple effect throughout the economy. As people went broke they spent less money and were unable to pay their debts. Banks had less money to lend. Because people were buying less from businesses, and because businesses couldn’t get loans to stay in business, many businesses closed and people lost their jobs. Then the people who lost their jobs had less money to spend, and so more people lost their jobs.

The effects of the Great Depression were felt in other countries because Americans couldn’t afford to buy as much as they used to from other countries. Also, Congress passed a law known as the Smoot-Hawley Tariff Act, which President Hoover signed. The Smoot-Hawley Act raised tariffs (taxes) on items imported into the United States, which meant that Americans bought even less from foreign countries. Foreign countries passed similar laws, which meant that foreigners began to buy less from Americans, which put more Americans out of work.

The economy would have recovered quickly, as it had done in the past when stock prices fell and unemployment increased. But the actions of government — raising tariffs and making loans harder to get — only made things worse. What could have been a brief recession turned into the Great Depression. People were frightened. They blamed President Hoover for their problems, although President Hoover didn’t cause the Crash. Hoover ran for re-election in 1932, but he lost to Franklin Delano Roosevelt, a Democrat.

Franklin Delano Roosevelt (1882-1945), known as FDR, served as President from March 4, 1933 until his death on April 12, 1945, just a month before V-E Day. FDR was elected to the presidency in 1932, 1936, 1940, and 1944 — the only person elected more than twice. Roosevelt was a very popular President because he served during the Depression and World War II, when most Americans — having lost faith in themselves — sought reassurance that “someone was in charge.” FDR was not universally popular; his share of the popular vote rose from 57 percent in 1932 to 61 percent in 1936, but then dropped to 55 percent in 1940 and 54 percent in 1944. Americans were coming to understand what FDR’s opponents knew at the time, and what objective historians have said since:

  • FDR’s efforts to bring America out of the Depression only made it worse.
  • FDR’s leadership during World War II faltered toward the end, when he was gravely ill and allowed the Soviet Union to take over Eastern Europe.

FDR’s program to end the Depression was known as the New Deal. It consisted of welfare programs, which put people to work on government projects instead of making useful things. It also consisted of higher taxes and other restrictions on business, which discouraged people from starting and investing in businesses, which is the cure for unemployment.

Roosevelt did try to face up to the growing threat from Germany and Japan. However, he wasn’t able to do much to prepare America’s defenses because of strong isolationist and anti-war feelings in the country. Those feelings were the result of America’s involvement in World War I. (Similar feelings in Great Britain kept that country from preparing for war with Germany, which encouraged Hitler’s belief that he could easily conquer Europe.)

When America went to war after Japan’s attack on Pearl Harbor, Roosevelt proved to be an able and inspiring commander-in-chief. But toward the end of the war his health was failing and he was influenced by close aides who were pro-communist and sympathetic to the Soviet Union (Union of Soviet Socialist Republics, or USSR). Roosevelt allowed Soviet forces to claim Eastern Europe, including half of Germany. Roosevelt also encouraged the formation of the United Nations, where the Soviet Union (now Russia) has had a strong voice because it was made a permanent member of the Security Council, the policy-making body of the UN. As a member of the Security Council, Russia can obstruct actions proposed by the United States.

Roosevelt’s appeasement of the USSR caused Josef Stalin (the Soviet dictator) to believe that the U.S. had weak leaders who would not challenge the USSR’s efforts to spread Communism. The result was the Cold War, which lasted for 45 years. During the Cold War the USSR developed nuclear weapons, built large military forces, kept a tight rein on countries behind the Iron Curtain (in Eastern Europe), and expanded its influence to other parts of the world.

Stalin’s belief in the weakness of U.S. leaders was largely correct, until Ronald Reagan became President. As I will discuss, Reagan’s policies led to the end of the Cold War.

Harry S. Truman (1884-1972), who was FDR’s Vice President, became President upon FDR’s death. Truman was re-elected in 1948, so he served as President from April 12, 1945, until January 20, 1953 — almost two full terms. Truman made one right decision during his presidency. He approved the dropping of atomic bombs on Japan. Although hundreds of thousands of Japanese were killed by the bombs, the Japanese soon surrendered. If the Japanese hadn’t surrendered then, U.S. forces would have invaded Japan and millions of American and Japanese lives would have been lost in the battles that followed the invasion.

Truman ordered drastic reductions in the defense budget because he thought that Stalin was an ally of the United States. (Truman, like FDR, had advisers who were Communists.) Truman changed his mind about defense budgets, and about Stalin, when Communist North Korea attacked South Korea in 1950. The attack on South Korea came after Truman’s Secretary of State (the man responsible for relations with other countries) made a speech about countries that the United States would defend. South Korea was not one of those countries.

When South Korea was invaded, Truman asked General of the Army Douglas MacArthur to lead the defense of South Korea. MacArthur planned and executed the amphibious landing at Inchon, which turned the war in favor of South Korea and its allies. The allied forces then succeeded in pushing the front line far into North Korea. Communist China then entered the war on the side of North Korea. MacArthur wanted to counterattack Communist Chinese bases and supply lines in Manchuria, but Truman wouldn’t allow that. Truman then “fired” MacArthur because MacArthur spoke publicly about his disagreement with Truman’s decision. The Chinese Communists pushed allied forces back and the Korean War ended in a deadlock, just about where it had begun, near the 38th parallel.

In the meantime, Communist spies had stolen the secret plans for making atomic bombs. They were able to do that because Truman refused to hear the truth about Communist spies who were working inside the government. By the time Truman left office the Soviet Union had manufactured nuclear weapons, had strengthened its grip on Eastern Europe, and was beginning to expand its influence into the Third World (the nations of Africa and the Middle East).

Truman was very unpopular by 1952. As a result he chose not to run for re-election, even though he could have done so. (The Twenty-second Amendment to the Constitution, which bars a person from being elected President more than twice, was adopted while Truman was President, but it didn’t apply to him.)

Dwight David Eisenhower (1890-1969), a Republican, served as President from January 20, 1953 to January 20, 1961. Eisenhower (also known by his nickname, “Ike”) received 55 percent of the popular vote in 1952 and 57 percent in 1956; his Democrat opponent in both elections was Adlai Stevenson. The Republican Party chose Eisenhower as a candidate mainly because he had become famous as a general during World War II. Republican leaders thought that by nominating Eisenhower they could end the Democrats’ twenty-year hold on the presidency. The Republican leaders were right about that, but in choosing Eisenhower as a candidate they rejected the Republican Party’s traditional stand in favor of small government.

Eisenhower was a “moderate” Republican. He was not a “big spender” but he did not try to undo all of the new government programs that had been started by FDR and Truman. Traditional Republicans eventually fought back and, in 1964, nominated a small-government candidate named Barry Goldwater. I will discuss him when I get to President Lyndon B. Johnson.

Eisenhower was a popular President, and he was a good manager, but he gave the impression of being “laid back” and not “in charge” of things. The news media had led Americans to believe that “activist” Presidents are better than laissez-faire Presidents, and so there was by 1960 a lot of talk about “getting the country moving again” — as if it were the job of the President to “run” the country.

John Fitzgerald Kennedy (1917-1963), a Democrat, was elected in 1960 to succeed President Eisenhower. Kennedy, who became known as JFK, served from January 20, 1961, until November 22, 1963, when he was assassinated in Dallas, Texas. JFK was elected narrowly (he received just 50 percent of the popular vote), but one reason that he won was his image of “vigorous youth” (he was 27 years younger than Eisenhower). In fact, JFK had been in bad health for most of his life. He seemed to be healthy only because he used a lot of medications. Those medications probably impaired his judgment and would have caused him to die at a relatively early age if he hadn’t been assassinated.

Late in Eisenhower’s administration a Communist named Fidel Castro had taken over Cuba, which is only 90 miles south of Florida. The Central Intelligence Agency then began to work with anti-Communist exiles from Cuba. The exiles were going to attempt an invasion of Cuba at a place called the Bay of Pigs. In addition to providing the necessary military equipment, the U.S. was also going to provide air support during the invasion.

JFK succeeded Eisenhower before the invasion took place, in April 1961. JFK approved changes in the invasion plan that resulted in the failure of the invasion. The most important change was to discontinue air support for the invading forces. The exiles were defeated, and Castro has remained firmly in control of Cuba.

The failed invasion caused Castro to turn to the USSR for military and economic assistance. In exchange for that assistance, Castro agreed to allow the USSR to install medium-range ballistic missiles in Cuba. That led to the so-called Cuban Missile Crisis in 1962. Many historians give Kennedy credit for resolving the crisis and avoiding a nuclear war with the USSR. The Russians withdrew their missiles from Cuba, but JFK had to agree to withdraw American missiles from bases in Turkey.

The myth that Kennedy had stood up to the Russians made him more popular in the U.S. His major accomplishment, which Democrats today like to ignore, was to initiate tax cuts, which became law after his assassination. The Kennedy tax cuts helped to make America more prosperous during the 1960s by giving people more money to spend, and by encouraging businesses to expand and create jobs.

The assassination of JFK on November 22, 1963, in Dallas was a shocking event. It also led many Americans to believe that JFK would have become a great President if he had lived and been re-elected to a second term. There is little evidence that JFK would have become a great President. His record in Cuba suggests that he would not have done a good job of defending the country.

Lyndon Baines Johnson (1908-1973), also known as LBJ, was Kennedy’s Vice President and became President upon Kennedy’s assassination. LBJ was re-elected in 1964; he served as President from November 22, 1963 to January 20, 1969. LBJ’s Republican opponent in 1964 was Barry Goldwater, who was an old-style Republican conservative, in favor of limited government and a strong defense. LBJ portrayed Goldwater as a threat to America’s prosperity and safety, when it was LBJ who was the real threat. Americans were still in shock about JFK’s assassination, and so they rallied around LBJ, who won 61 percent of the popular vote.

LBJ is known mainly for two things: his “Great Society” program and the war in Vietnam. The Great Society program was an expansion of FDR’s New Deal. It included such things as the creation of Medicare, which is medical care for retired persons that is paid for by taxes. Medicare is an example of a “welfare” program. Welfare programs take money from people who earn it and give money to people who don’t earn it. The Great Society also included many other welfare programs, such as more benefits for persons who are unemployed. The stated purpose of the expansion of welfare programs under the Great Society was to end poverty in America, but that didn’t happen. The reason it didn’t happen is that when people receive welfare they don’t work as hard to take care of themselves and their families, and they don’t save enough money for their retirement. Welfare actually makes people worse off in the long run.

America’s involvement in Vietnam began in the 1950s, when Eisenhower was President. South Vietnam was under attack by Communist guerrillas, who were sponsored by North Vietnam. Small numbers of U.S. forces were sent to South Vietnam to train and advise South Vietnamese forces. More U.S. advisers were sent by JFK, but within a few years after LBJ became President he had turned the war into an American-led defense of South Vietnam against Communist guerrillas and regular North Vietnamese forces. LBJ decided that it was important for the U.S. to defeat a Communist country and stop Communism from spreading in Southeast Asia.

However, LBJ was never willing to commit enough forces in order to win the war. He allowed air attacks on North Vietnam, for example, but he wouldn’t invade North Vietnam because he was afraid that the Chinese Communists might enter the war. In other words, like Truman in Korea, LBJ was unwilling to do what it would take to win the war decisively. Progress was slow and there were a lot of American casualties from the fighting in South Vietnam. American newspapers and TV began to focus attention on the casualties and portray the war as a losing effort. That led a lot of Americans to turn against the war, and college students began to protest the war (because they didn’t want to be drafted). Attention shifted from the war to the protests, giving the world the impression that America had lost its resolve. And it had.

LBJ had become so unpopular because of the war in Vietnam that he decided not to run for President in 1968. Most of the candidates for President campaigned by saying that they would end the war. In effect, the United States had announced to North Vietnam that it would not fight the war to win. The inevitable outcome was the withdrawal of U.S. forces from Vietnam, which finally happened in 1973, under LBJ’s successor, Richard Nixon. South Vietnam was left on its own, and it fell to North Vietnam in 1975.

Richard Milhous Nixon (1913-1994) was a Republican. He won the election of 1968 by beating the Democrat candidate, Hubert H. Humphrey (who had been LBJ’s Vice President), and a third-party candidate, George C. Wallace. Nixon and Humphrey each received 43 percent of the popular vote; Wallace received 14 percent. If Wallace had not been a candidate, most of the votes cast for him probably would have been cast for Nixon.

Even though Nixon received less than half of the popular vote, he won the election because he received a majority of electoral votes. Electoral votes are awarded to the winner of each State’s popular vote. Nixon won a lot more States than Humphrey and Wallace, so Nixon became President.

Nixon won re-election in 1972, with 61 percent of the popular vote, by beating a Democrat (George McGovern) who would have expanded LBJ’s Great Society and cut America’s armed forces even more than they were cut after the Vietnam War ended. Nixon’s victory was more a repudiation of McGovern than it was an endorsement of Nixon. His second term ended in disgrace when he resigned the presidency on August 9, 1974.

Nixon called himself a conservative, but he did nothing during his presidency to curb the power of government. He did not cut back on the Great Society. He spent a lot of time on foreign policy. But Nixon’s diplomatic efforts did nothing to make the USSR and Communist China friendlier to the United States. Nixon had shown that he was essentially a weak President by allowing U.S. forces to withdraw from Vietnam. Dictatorial rulers do not respect countries that display weakness.

Nixon was the first (and only) President who resigned from office. He resigned because the House of Representatives was ready to impeach him. An impeachment is like a criminal indictment; it is a set of charges against the holder of a public office. If Nixon had been impeached by the House of Representatives, he would have been tried by the Senate. If two-thirds of the Senators had voted to convict him he would have been removed from office. Nixon knew that he would be impeached and convicted, so he resigned.

The main charge against Nixon was that he ordered his staff to cover up his involvement in a crime that happened in 1972, when Nixon was running for re-election. The crime was a break-in at the headquarters of the Democratic Party in Washington, D.C. The purpose of the break-in was to obtain documents that might help Nixon’s re-election effort. The men who participated in the break-in were hired by aides to Nixon, and Nixon himself probably authorized the break-in. Nixon certainly authorized the effort to cover up the involvement of his aides in the break-in. All of the details about the break-in and Nixon’s involvement were revealed as a result of investigations by Congress, which were helped by reporters who were doing their own investigative work. Because the Democratic Party’s headquarters was located in the Watergate Building in Washington, D.C., this episode became known as the Watergate Scandal.

Gerald Rudolph Ford (1913-2006), who was Nixon’s Vice President at the time Nixon resigned, became President on August 9, 1974, and served until January 20, 1977. Ford had earlier succeeded Spiro T. Agnew as Vice President; Agnew resigned on October 10, 1973, because he had been taking bribes while he was Governor of Maryland (the job he had before becoming Vice President).

Ford became the first Vice President chosen in accordance with the Twenty-Fifth Amendment to the Constitution. That amendment spells out procedures for filling vacancies in the presidency and vice presidency. When Vice President Agnew resigned, President Nixon nominated Ford as Vice President, and the nomination was approved by a majority vote of the House and Senate. Then, when Ford became President, he nominated Nelson Rockefeller to fill the vice presidency, and Rockefeller was likewise confirmed by majority votes of the House and Senate.

Ford ran for re-election in 1976, but he was defeated by James Earl Carter, mainly because of the Watergate Scandal. Ford was not involved in the scandal, but voters often cast votes for silly reasons. Carter’s election was a rejection of Richard Nixon, who had left office two years earlier, not a vote of confidence in Carter.

James Earl (“Jimmy”) Carter (1924 – ), a Democrat who had been Governor of Georgia, received only 50 percent of the popular vote. He was defeated for re-election in 1980, so he served as President from January 20, 1977 to January 20, 1981.

Carter was an ineffective President who failed at the most important duty of a President, which is to protect Americans from foreign enemies. His failure came late in his term of office, during the Iran Hostage Crisis. The Shah of Iran had ruled the country for 38 years. He was overthrown in 1979 by a group of Muslim clerics (religious men) who disliked the Shah’s pro-American policies. In November 1979 a group of students loyal to the new Muslim government of Iran invaded the American embassy in Tehran (Iran’s capital city) and took 66 hostages. Carter approved rescue efforts, but they were poorly planned. The hostages were still captive at the time of the presidential election in 1980. Carter lost the election largely because of his feeble rescue efforts.

In recent years Carter has become an outspoken critic of America’s foreign policy. Carter is sympathetic to America’s enemies and he opposes strong military action in defense of America.

Ronald Wilson Reagan (1911-2004), a Republican, succeeded Jimmy Carter as President. Reagan won 51 percent of the popular vote in 1980. Reagan would have received more votes, but a former Republican (John Anderson) ran as a third-party candidate and took 7 percent of the popular vote. Reagan was re-elected in 1984 with 59 percent of the popular vote. He served as President from January 20, 1981, until January 20, 1989.

Reagan had two goals as President: to reduce the size of government and to increase America’s military strength. He was unable to reduce the size of government because, for most of his eight years in office, Democrats were in control of Congress. But Reagan was able to get Congress to approve large reductions in income-tax rates. Those reductions led to more spending on consumer goods and more investment in the creation of new businesses. As a result, Americans had more jobs and higher incomes.

Reagan succeeded in rebuilding America’s military strength. He knew that the only way to defeat the USSR, without going to war, was to show the USSR that the United States was stronger. A lot of people in the United States opposed spending more on military forces; they thought that it would cause the USSR to spend more. They also thought that a war between the U.S. and USSR would result. Reagan knew better. He knew that the USSR could not afford to keep up with the United States. Reagan was right. Not long after the end of his presidency the countries of Eastern Europe saw that the USSR was really a weak country, and they began to break away from the USSR. Residents of Berlin demolished the Berlin Wall, which the USSR had erected in 1961 to keep East Berliners from crossing over into West Berlin. East Germany was freed from Communist rule, and it reunited with West Germany. The USSR collapsed, and many of the countries that had been part of the USSR became independent. We owe the end of the Soviet Union and its influence to President Reagan’s determination to defeat the threat posed by the Soviet Union.

George Herbert Walker Bush (1924 – ), a Republican, was Reagan’s Vice President. He won 54 percent of the popular vote when he defeated his Democrat opponent, Michael Dukakis, in the election of 1988. Bush lost the election of 1992. He served as President from January 20, 1989 to January 20, 1993.

The main event of Bush’s presidency was the Gulf War of 1990-1991. Iraq, whose ruler was Saddam Hussein, invaded the small neighboring country of Kuwait. Kuwait produces and exports a lot of oil. The occupation of Kuwait by Iraq meant that Saddam Hussein might have been able to control the amount of oil shipped to other countries, including Europe and the United States. If Hussein had been allowed to control Kuwait, he might have moved on to Saudi Arabia, which produces much more oil than Kuwait. President Bush asked Congress to approve military action against Iraq. Congress approved the action, although most Democrats voted against giving President Bush authority to defend Kuwait. The war ended in a quick defeat for Iraq’s armed forces. But President Bush decided not to allow U.S. forces to finish the job and end Saddam Hussein’s reign as ruler of Iraq.

Bush’s other major blunder was to raise taxes, which helped to cause a recession. The country was recovering from the recession in 1992, when Bush ran for re-election, but his opponents were able to convince voters that Bush hadn’t done enough to end the recession. In spite of his quick (but incomplete) victory in the Persian Gulf War, Bush lost his bid for re-election because voters were concerned about the state of the economy.

William Jefferson Clinton (1946 – ), a Democrat, defeated George H.W. Bush in the 1992 election by gaining a majority of the electoral vote. But Clinton won only 43 percent of the popular vote. Bush won 37 percent, and 19 percent went to H. Ross Perot, a third-party candidate who received many votes that probably would have been cast for Bush.

Clinton’s presidency got off to a bad start when he sent to Congress a proposal that would have put health care under government control. Congress rejected the plan, and a year later (in 1994) voters went to the polls in large numbers to elect Republican majorities to the House and Senate.

Clinton was able to win re-election in 1996, but he received only 49 percent of the popular vote. He was re-elected mainly because fewer Americans were out of work and incomes were rising. This economic “boom” was a continuation of the recovery that began under President Reagan. Clinton got credit for the “boom” of the 1990s, which occurred in spite of tax increases passed by Congress while it was still controlled by Democrats.

Clinton was perceived as a “moderate” Democrat because he tried to balance the government’s budget; that is, he tried not to spend more money than the government was receiving in taxes. He was eventually able to balance the budget, but only because he cut defense spending. In addition, Clinton made several bad decisions about defense issues. In 1993 he withdrew American troops from Somalia, instead of continuing with the military mission there, after some troops were captured and killed by Somali militiamen. In 1994 he signed an agreement with North Korea that was supposed to keep North Korea from developing nuclear weapons, but the North Koreans continued to work on building nuclear weapons because they had fooled Clinton. By 1998 Clinton knew that al Qaeda had become a major threat, when terrorists bombed two U.S. embassies in Africa, but he failed to go to war against al Qaeda. Only after terrorists struck a Navy ship, the USS Cole, in 2000 did Clinton declare terrorism to be a major threat. By then, his term of office was almost over.

Clinton was the second President to be impeached. The House of Representatives impeached him in 1998. He was charged with perjury (lying under oath) when he was the defendant (the person charged with wrongdoing) in a lawsuit. The Senate didn’t convict Clinton because every Democrat senator refused to vote for conviction, in spite of overwhelming evidence that Clinton was guilty. The day before Clinton left office he acknowledged his guilt by agreeing to a five-year suspension of his law license. A federal judge later found Clinton guilty of contempt of court for his misleading testimony and fined him $90,000.

Clinton was involved in other scandals during his presidency, but he remains popular with many people because he is good at giving the false impression that he is a nice, humble person.

Clinton’s scandals had more effect on his Vice President, Al Gore, who ran for President as the nominee of the Democrat Party in 2000. His main opponent was George W. Bush, a Republican. A third-party candidate named Ralph Nader also received a lot of votes. The election of 2000 was the closest presidential election since 1876. Bush and Gore each won 48 percent of the popular vote; Nader won 3 percent. The winner of the election was decided by the outcome of the vote in Florida. That outcome was the subject of legal proceedings for six weeks, and it finally had to be decided by the U.S. Supreme Court.

Initial returns in Florida gave that State’s electoral votes to Bush, which meant that he would become President. But the Supreme Court of Florida decided that election officials should violate Florida’s election laws and keep recounting the ballots in certain counties. Those counties were selected because they had more Democrats than Republicans, and so it was likely that recounts would favor Gore, the Democrat. The case finally went to the U.S. Supreme Court, which decided that the Florida Supreme Court was wrong. The U.S. Supreme Court ordered an end to the recounts, and Bush was declared the winner of Florida’s electoral votes.

George Walker Bush (1946 – ), a Republican, is the second son of a President to become President. (The first was John Quincy Adams, the sixth President, whose father, John Adams, was the second President. Also, Benjamin Harrison, the 23rd President, was the grandson of William Henry Harrison, the ninth President.) Bush won re-election in 2004, with 51 percent of the popular vote. He has served as President since January 20, 2001.

President Bush’s major accomplishment before September 11, 2001, was to get Congress to cut taxes. The tax cuts were necessary because the economy had been in a recession since 2000. The tax cuts gave people more money to spend and encouraged businesses to expand and create new jobs. The economy has improved a lot because of President Bush’s tax cuts.

The terrorist attacks on September 11, 2001, caused President Bush to give most of his time and attention to the War on Terror. The invasion of Afghanistan, late in 2001, was part of a larger campaign to disrupt terrorist activities. Afghanistan was ruled by the Taliban, a group that gave support and shelter to al Qaeda terrorists. The U.S. quickly defeated the Taliban and destroyed al Qaeda bases in Afghanistan.

The invasion of Iraq, which took place in 2003, was also intended to combat al Qaeda, but in a different way. Iraq, under Saddam Hussein, had been an enemy of the U.S. since the Persian Gulf War of 1990-1991. Hussein was trying to acquire deadly weapons to use against the U.S. and its allies. Hussein was also giving money to terrorists and sheltering them in Iraq. The defeat of Hussein, which came quickly after the invasion of Iraq, was intended to establish a stable, friendly government in the Middle East. It would serve as a base from which U.S. forces could operate against Middle Eastern government that shelter terrorists, and it would serve as a model for other Middle Eastern countries, many of which are dictatorships.

The invasion of Iraq has produced some of the intended results, but there is much unrest there because of long-standing animosity between Sunni Muslims and Shi’a Muslims. There is also much defeatist talk about Iraq — especially by Democrats and the media. That defeatist talk helps to encourage those who are creating unrest in Iraq. It gives them hope that the U.S. will abandon Iraq, just as it abandoned Vietnam more than 30 years earlier.

UPDATE (12/02/07): The final three paragraphs about the War in Iraq are slightly dated, though their thrust is correct. For further reading about Saddam’s aims and his ties to Al Qaeda, go to my “Resources” page and scroll to the heading “War and Peace.”

Regarding defeatist talk by Democrats and the media, I note especially a recent post at Wolf Howling, “Have Our Copperheads Found Their McClellan in Retired LTG Sanchez?” The author writes:

Several commentators have noted the similarity between our modern day Democrats and the Copperheads of the Civil War. The Copperheads were the virulently anti-war wing that took control of the Democratic party in the 1860’s. Their rhetoric of the day reads like a modern press release from our Democratic Party leadership. Their central meme was that the Civil War was unwinnable and should be concluded….

At the[ir] convention [in 1864], the Democrats nominated retired General George B. McClellan for President. Lincoln had chosen McClellan to command the Union Army in 1861 and then assigned him to command the Army of the Potomac. Lincoln subsequently relieved McClellan of command in 1862 for his less than stellar performance on the battlefield. McClellan became a bitter and vocal opponent of Lincoln, harshly critical of Lincoln’s prosecution of the war. McClellan and the Copperheads maintained that meme even as the facts on the ground changed drastically with victories by General Sherman in Atlanta and General Sheridan in the Shenandoah Valley.

Thus it is not hard to see in McClellan many parallels to retired Lt. Gen. Ricardo Sanchez, the one time top commander in Iraq. Sanchez held the top military position in Iraq during the year after the fall of the Hussein regime, when the insurgency took root and the Abu Ghraib scandal came to light. His was not a successful command and his remarks since show a bitter man.

There’s more about contemporary Copperheads in these posts:

Shall We All Hang Separately?
Foxhole Rats
Foxhole Rats, Redux
Know Thine Enemy
The Faces of Appeasement
Whose Liberties Are We Fighting For?
Words for the Unwise
More Foxhole Rats
Moussaoui and “White Guilt”
The New York Times: A Hot-Bed of Post-Americanism
Post-Americans and Their Progeny
“Peace for Our Time”
Anti-Bush or Pro-Treason?
Parsing Peace

Ahead of His Time

The problem that faces us today … is due to the inherent contradictions of an abnormal state of culture. The natural tendency … is for … society to give itself up passively to the machinery of modern cosmopolitan life. But this is no solution. It leads merely to the breaking down of the old structure of society and the loss of the traditional moral standards without creating anything which can take their place.

As in the decline of the ancient world, the family is steadily losing its form and its social significance, and the state absorbs more and more of the life of its members. The home is no longer a centre of social activity; it has become merely a sleeping place for a number of independent wage-earners. The functions which were formerly fulfilled by the head of the family are now being taken over by the state, which educates the children and takes the responsibility for their maintenance and health.

From Christopher Dawson’s essay, “The Patriarchal Family in History” (1933), collected in The Dynamics of World History (1956). (Paragraph break added: LC.)

Let us hope for an incremental bit of progress on one front: parental choice in the schooling of children. (By progress, of course, I don’t mean the kind of “progress” sought by regressive “progressives,” who would have us and our progeny bow to the almighty state — as long as they control it.)

FDR and Fascism

A blogger (to whom I will not link) once tried to disparage me by referring to my position that (in his words) “Franklin Roosevelt, Adolph Hitler and Joseph Stalin were all essentially dictators.” I suppose that the blogger in question believes Hitler and Stalin to have been dictators. His poorly expressed complaint, therefore, is my lumping of FDR with Hitler and Stalin.

I doubt that the not-to-be-named blogger considers FDR a saint, or even a praiseworthy president. Such a view would be inconsistent with the blogger’s (rather murky) paleo-conservative/libertarian views. The blogger’s apparent aim was not to defend FDR but to discredit me by suggesting that my view of FDR is beyond the pale.*

To the contrary, however, the perception of FDR as a dictator (or dictator manqué) with a fascistic agenda is of long standing and arises from respectable sources. Albert Jay Nock, an early and outspoken opponent of the New Deal — and a paleo-libertarian of the sort admired by the blogger in question — certainly saw Roosevelt’s fascistic agenda for what it was. Many mainstream politicians also attacked Roosevelt’s aims; for example:

While the First New Deal of 1933 had broad support from most sectors, the Second New Deal challenged the business community. Conservative Democrats, led by Al Smith, fought back with the American Liberty League, savagely attacking Roosevelt and equating him with Marx and Lenin.[21]

That Smith and others were unsuccessful in their opposition to FDR’s agenda does not alter the essentially fascistic nature of that agenda.

Now comes David Boaz’s “Hitler, Mussolini, Roosevelt: What FDR had in common with the other charismatic collectivists of the 30s,” a review of Wolfgang Schivelbusch’s Three New Deals: Reflections on Roosevelt’s America, Mussolini’s Italy, and Hitler’s Germany, 1933–1939. Toward the end of the review, Boaz writes:

Why isn’t this book called Four New Deals? Schivelbusch does mention Moscow repeatedly…. But Stalin seized power within an already totalitarian system; he was the victor in a coup. Hitler, Mussolini, and Roosevelt, each in a different way, came to power as strong leaders in a political process. They thus share the “charismatic leadership” that Schivelbusch finds so important.

…B.C. Forbes, the founder of the eponymous magazine, denounced “rampant Fascism” in 1933. In 1935 former President Herbert Hoover was using phrases like “Fascist regimentation” in discussing the New Deal. A decade later, he wrote in his memoirs that “the New Deal introduced to Americans the spectacle of Fascist dictation to business, labor and agriculture,” and that measures such as the Agricultural Adjustment Act, “in their consequences of control of products and markets, set up an uncanny Americanized parallel with the agricultural regime of Mussolini and Hitler.” In 1944, in The Road to Serfdom, the economist F.A. Hayek warned that economic planning could lead to totalitarianism. He cautioned Americans and Britons not to think that there was something uniquely evil about the German soul. National Socialism, he said, drew on collectivist ideas that had permeated the Western world for a generation or more.

In 1973 one of the most distinguished American historians, John A. Garraty of Columbia University, created a stir with his article “The New Deal, National Socialism, and the Great Depression.” Garraty was an admirer of Roosevelt but couldn’t help noticing, for instance, the parallels between the Civilian Conservation Corps and similar programs in Germany. Both, he wrote, “were essentially designed to keep young men out of the labor market. Roosevelt described work camps as a means for getting youth ‘off the city street corners,’ Hitler as a way of keeping them from ‘rotting helplessly in the streets.’ In both countries much was made of the beneficial social results of mixing thousands of young people from different walks of life in the camps. Furthermore, both were organized on semimilitary lines with the subsidiary purposes of improving the physical fitness of potential soldiers and stimulating public commitment to national service in an emergency.”

And in 1976, presidential candidate Ronald Reagan incurred the ire of Sen. Edward Kennedy (D-Mass.), pro-Roosevelt historian Arthur M. Schlesinger Jr., and The New York Times when he told reporters that “fascism was really the basis of the New Deal.”

You get the idea by now, I hope. The correlation of FDR’s regime with those of Hitler and Mussolini (not to mention Stalin’s) is hardly discredited or beyond the pale.

Boaz writes, also, about the ends and means of the New Deal:

On May 7, 1933, just two months after the inauguration of Franklin Delano Roosevelt, the New York Times reporter Anne O’Hare McCormick wrote that the atmosphere in Washington was “strangely reminiscent of Rome in the first weeks after the march of the Blackshirts, of Moscow at the beginning of the Five-Year Plan.…America today literally asks for orders.” The Roosevelt administration, she added, “envisages a federation of industry, labor and government after the fashion of the corporative State as it exists in Italy.”

That article isn’t quoted in Three New Deals, a fascinating study by the German cultural historian Wolfgang Schivelbusch. But it underscores his central argument: that there are surprising similarities between the programs of Roosevelt, Mussolini, and Hitler….

The dream of a planned society infected both right and left. Ernst Jünger, an influential right-wing militarist in Germany, reported his reaction to the Soviet Union: “I told myself: granted, they have no constitution, but they do have a plan. This may be an excellent thing.” As early as 1912, FDR himself praised the Prussian-German model: “They passed beyond the liberty of the individual to do as he pleased with his own property and found it necessary to check this liberty for the benefit of the freedom of the whole people,” he said in an address to the People’s Forum of Troy, New York.

American Progressives studied at German universities, Schivelbusch writes, and “came to appreciate the Hegelian theory of a strong state and Prussian militarism as the most efficient way of organizing modern societies that could no longer be ruled by anarchic liberal principles.” The pragmatist philosopher William James’ influential 1910 essay “The Moral Equivalent of War” stressed the importance of order, discipline, and planning….

In the North American Review in 1934, the progressive writer Roger Shaw described the New Deal as “Fascist means to gain liberal ends.” He wasn’t hallucinating. FDR’s adviser Rexford Tugwell wrote in his diary that Mussolini had done “many of the things which seem to me necessary.” Lorena Hickok, a close confidante of Eleanor Roosevelt who lived in the White House for a spell, wrote approvingly of a local official who had said, “If [President] Roosevelt were actually a dictator, we might get somewhere.” She added that if she were younger, she’d like to lead “the Fascist Movement in the United States.” At the National Recovery Administration (NRA), the cartel-creating agency at the heart of the early New Deal, one report declared forthrightly, “The Fascist Principles are very similar to those we have been evolving here in America.”

Roosevelt himself called Mussolini “admirable” and professed that he was “deeply impressed by what he has accomplished.”…

Schivelbusch argues that “Hitler and Roosevelt were both charismatic leaders who held the masses in their sway—and without this sort of leadership, neither National Socialism nor the New Deal would have been possible.” This plebiscitary style established a direct connection between the leader and the masses. Schivelbusch argues that the dictators of the 1930s differed from “old-style despots, whose rule was based largely on the coercive force of their praetorian guards.” Mass rallies, fireside radio chats—and in our own time—television can bring the ruler directly to the people in a way that was never possible before.

To that end, all the new regimes of the ’30s undertook unprecedented propaganda efforts. “Propaganda,” Schivelbusch writes, “is the means by which charismatic leadership, circumventing intermediary social and political institutions like parliaments, parties, and interest groups, gains direct hold upon the masses.” The NRA’s Blue Eagle campaign, in which businesses that complied with the agency’s code were allowed to display a “Blue Eagle” symbol, was a way to rally the masses and call on everyone to display a visible symbol of support. NRA head Hugh Johnson made its purpose clear: “Those who are not with us are against us.”…

Program and propaganda merged in the public works of all three systems. The Tennessee Valley Authority, the autobahn, and the reclamation of the Pontine marshes outside Rome were all showcase projects, another aspect of the “architecture of power” that displayed the vigor and vitality of the regime.

If FDR’s aims were fascistic — and clearly they were — why didn’t the U.S. become a police state, in the mold of Germany, Italy, and the Soviet Union? Boaz concludes:

To compare is not to equate, as Schivelbusch says. It’s sobering to note the real parallels among these systems. But it’s even more important to remember that the U.S. did not succumb to dictatorship. Roosevelt may have stretched the Constitution beyond recognition, and he had a taste for planning and power previously unknown in the White House. But he was not a murderous thug. And despite a population that “literally waited for orders,” as McCormick put it, American institutions did not collapse. The Supreme Court declared some New Deal measures unconstitutional. Some business leaders resisted it. Intellectuals on both the right and the left, some of whom ended up in the early libertarian movement, railed against Roosevelt. Republican politicians (those were the days!) tended to oppose both the flow of power to Washington and the shift to executive authority.

Germany had a parliament and political parties and business leaders, and they collapsed in the face of Hitler’s movement. Something was different in the United States. Perhaps it was the fact that the country was formed by people who had left the despots of the Old World to find freedom in the new, and who then made a libertarian revolution. Americans tend to think of themselves as individuals, with equal rights and equal freedom. A nation whose fundamental ideology is, in the words of the recently deceased sociologist Seymour Martin Lipset, “antistatism, laissez-faire, individualism, populism, and egalitarianism” will be far more resistant to illiberal ideologies.

In other words, Americans eluded fascism not because of FDR’s intentions but (in part) because FDR wasn’t “a murderous thug” and (in the main) because of the strength of our “national character.”

Will our character enable us to resist the next FDR? Given the changes in our character since the end of World War II, I very much doubt it.

(For more about FDR’s regime, its objectives, and its destructive consequences, see this, this, and this.)
__________
* That the blogger was trying to discredit me in order to discredit someone related to me is only one bit of evidence of the blogger’s intellectual ineptitude. Further evidence is found in his resort to name calling and logical inconsistency. For example, I am, in one sentence, guilty of “extreme libertarianism” and, in another, an attacker of extreme libertarians, that is, those who “adhere[] to the [non-aggression] principle with deranged fervor” (my words).

As for my so-called extreme libertarianism, if the blogger had bothered to read my blog carefully he would have found plenty of evidence that I am far from being an extreme, individualistic, anti-state libertarian. See, for example, this post and the compilation of posts referenced therein, both of which I published more than a month before the blogger attacked me and my views about FDR.

I could say much more about the blogger’s rabid irrationality, but the main point of this post is FDR’s barely contained fascistic agenda, so I will stop here. Happily for the blogosphere, the blogger-not-to-be-named-here seems to have suspended his blogging operation.

Not Enough Boots: The Why of It

REVISED 08/30/07

This post complements these:

Not Enough Boots
Defense as the Ultimate Social Service
I Have an Idea
The Price of Liberty
How to View Defense Spending
The Best Defense…

Iran foments much of the so-called insurgency in Iraq. Iraqi terrorists might be infiltrating the U.S. via Mexico. These are military problems, not diplomatic or law-enforcement problems. Yet, we are unable to respond to those problems militarily because we continue on the course that was set when Truman surrendered Korea to the Communists. What is that course? This:*

[Graph: the black, red, and blue lines discussed below, plotting shares of GDP after defense spending and after all government spending and social transfers, and the ratio between them.]

Source: Derived from National Income and Product Accounts tables 1.1.5 and 3.1, available here.

I have argued that defense spending should be independent of GDP; that is, it should be geared to the threats we face, and not be set at an arbitrary percentage of GDP. I have not changed my position.

I am pointing out, here, that non-defense spending has displaced defense spending. The black and red lines in the graph highlight that displacement; the blue line in the graph measures it.

The black line plots the percentage of GDP that is available after defense spending (a percentage that I call the “peace dividend”). The red line plots the percentage of GDP available after government spending and social transfer payments at all levels of government: national, State, and local. Note that the red line moves downward even as the black line moves upward. The growing distance between the lines measures the rising share of GDP that is commandeered by government for non-defense spending and social transfer payments.**

The blue line takes us to the heart of the matter. The blue line plots all non-defense spending plus social transfers as a percentage of the peace dividend. That percentage has nearly doubled since the end of the Korean War.
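
Purely for concreteness, here is a minimal Python sketch of how the three lines are computed, using invented percentages rather than the NIPA figures behind the actual graph; the function names and the two data points are mine, for illustration only.

    # All figures are percentages of GDP. The numbers are made up to
    # illustrate the computation, not taken from the NIPA tables.

    def peace_dividend(defense):
        # Black line: share of GDP left after defense spending.
        return 100.0 - defense

    def after_government(defense, nondefense, transfers):
        # Red line: share of GDP left after all government spending
        # and social transfer payments.
        return 100.0 - defense - nondefense - transfers

    def displacement(defense, nondefense, transfers):
        # Blue line: non-defense spending plus transfers as a
        # percentage of the peace dividend.
        return 100.0 * (nondefense + transfers) / peace_dividend(defense)

    # (year, defense, non-defense spending, social transfers), % of GDP
    data = [(1953, 13.0, 10.0, 3.0),    # hypothetical Korean War-era mix
            (2006,  4.0, 14.0, 12.0)]   # hypothetical recent mix

    for year, d, g, t in data:
        print(year,
              round(peace_dividend(d)),          # 87, then 96
              round(after_government(d, g, t)),  # 74, then 70
              round(displacement(d, g, t)))      # 15, then 27

Even in this toy version the blue line nearly doubles while the black line barely moves, which is the pattern described above: the peace dividend grows a little, and non-defense spending plus transfers swallows ever more of it.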

In fact, there has been for more than seventy years a felt need among politicians to respond to the voices that call loudly for more “social services.” Such services — in the minds of politicians — must come at the expense of defense spending. And so they do, even though defense is the ultimate “social service.”

Imagine, then, the defenses we could mount if non-defense spending plus social transfers had not risen above ten percent of GDP. What is special about ten percent? That was a satisfactory share for non-defense spending plus social transfers from the end of the Civil War through the early 1900s — an era of rapid economic growth. In fact, that share remained satisfactory through 1929. It took the unique, government-fostered Great Depression to create in the minds of most Americans the false idea that we need government to ensure economic growth and take care of “social needs.”

So, here we are, rich in “social services” — except by the standards of no-growth Euro-socialists who free-ride on our defenses — but vulnerable to terrorists and opportunists (like those in Iran and Russia).

Other related posts:
The Destruction of Wealth and Income by the State
Things to Come
__________
* Here is a longer view of the trends: [graph omitted]. This figure makes more obvious the growing allocation of the peace dividend to non-defense spending and social transfers. That trend began with the onset of the Great Depression, but it was interrupted by World War II — the last war in which the United States committed itself to total victory. The trend re-started at the end of World War II and became almost irreversible after the end of the Korean War, which is the point I chose to emphasize above. The low level of defense spending in the 1930s and the sudden drop after World War II arose from the (false) perception that the United States and its overseas interests did not then face serious threats from abroad. The peace dividends of the 1930s and late 1940s were declared before the peace had been won. The peace dividend of the 1990s and 2000s has cost us the ability to deter Russia and Iran (among others) while we respond (half-heartedly) to terrorism and those who foment it.

** I include social transfer payments in the analysis because, even though they are not government expenditures (by the standards of economic accounting), they do take money away from those who earn it and give it to those who do not. Social transfers therefore diminish the incentives that foster economic growth. Moreover, the tax revenues generated for the purpose of making such transfers could be applied to defense. That would create a “transition” problem with respect to Social Security, but it is a problem with a straightforward solution.

Presidential Legacies

UPDATED, WHERE NOTED

I have written several times about presidents and the presidency. This time I focus on the dual legacy of the presidents: the legacy they brought to the presidency and the legacy they bestowed on it. I begin with a selection of pre-twentieth century presidents, then rip through the Teddy Roosevelt-George W. Bush succession. (I indicate parenthetically the years of each president’s birth and death, and the years in which his presidency began and ended.)

George Washington (1732-99, 1789-97) — a Virginia plantation owner of a “middling” social rank who learned at an early age to take responsibility for large endeavors. Without his fierce determination to succeed, the United States might never have been born. His natural dignity set the standard for all presidents, a standard met by too few (if any) of his successors.

Thomas Jefferson (1743-1826, 1801-09) — born of circumstances similar to those of Washington, but an architect of ideas and a political schemer more than a straightforward man of action. The range of Jefferson’s erudition and intellectual curiosity set the standard for all presidents, a standard that has yet to be met by any of his successors.

Andrew Jackson (1767-1845, 1829-37) — a brawling, backwoods populist. Jackson’s mythical status unfortunately helped to make a virtue of vulgarity, and it set the stage for the “populism” that plagues us still.

Abraham Lincoln (1809-65, 1861-65) — the quintessential American: from humble beginnings to the nation’s highest office. Lincoln’s brilliance as a wartime leader and rhetorician validates his iconic status. Lincoln wavered on the issue of slavery — insofar as allowing slavery to continue in some parts of the nation might have preserved the Union. In the end, Lincoln preserved the Union and led the way to the abolition of slavery. (Lincoln’s current “libertarian” detractors, notably one Thomas DiLorenzo, would have had him sacrifice the Union because — they claim wrongly — slavery would soon have ended out of economic necessity.) UPDATE: A comment by my son stirs me to add that the revealed preference of libertarian extremists is for States’ rights over emancipation, when it comes to a choice between the two. Now, I generally favor States’ rights, but I draw the line at slavery (if not other things). As I wrote here, in a different connection, “an attack on States’ rights isn’t always a vice.”

Ulysses S. Grant (1822-85, 1869-77) — a farmer’s son and career soldier who rose to greatness during the Civil War, when the nation most needed greatness. Grant’s presidency coincided with the upheaval and rancor of Reconstruction, and so he should be remembered as being — with Lincoln — a savior of the Union.

Theodore Roosevelt (Jr.) (1858-1919, 1901-09) — a busy-body from “old money” with crackpot ideas. Roosevelt’s image as “man of the people” rests on his constitutional inability to stick to the proper business of government, unlike his predecessor but one in the presidency, (Stephen) Grover Cleveland. TR’s trust-busting meddlesomeness put us on the road to the regulatory-welfare state (i.e., socialism).

William Howard Taft (1857-1930, 1909-13) — TR’s temperamental negative in every way but money (if Ohio money can be called “old money”). Taft did not scapegoat business in the way that TR did, but Taft was not a small-government conservative. (He pushed for the Sixteenth Amendment, which authorized the federal income tax, for example.) Taft simply restored some semblance of dignity to the presidency, both during his term of office and, by association, through his later service as Chief Justice of the United States.

(Thomas) Woodrow Wilson (1856-1924, 1913-21) — a preacher’s son whose sense of self-worth was vastly inflated by his acquisition of a Ph.D., a professorship, and the presidency of Princeton University. Wilson won re-election on his promise not to enter the Great War, a promise on which he soon reneged. Wilson — through his championship of the League of Nations — injected into American politics the naive, dangerous, and persistent belief that international strife can be averted and alleviated through super-national organizations.

Warren Gamaliel Harding (1865-1923, 1921-23) — a bourgeois vulgarian who was in over his head. Harding’s perceived weakness — his reliance on unscrupulous cronies — overshadows the fact that, for a time, the nation had a respite from the regulatory activism of Wilson’s regime. Harding, by his death in office, bequeathed us…

(John) Calvin Coolidge (1872-1933, 1923-29) — Harding’s dignified, reticent successor. Had Coolidge chosen to run for re-election in 1928 he probably would have won. And if he had won, his inbred conservatism probably would have kept him from trying to “cure” the recession that began in 1929. Thus, we might not have had the Great Depression, FDR, the New Deal, etc., etc., etc.

Herbert Clark Hoover (1874-1964, 1929-33) — a bright, rich technocrat whose people skills were less than zero. Hoover’s bass-ackward efforts to bring the economy out of recession (e.g., signing the Smoot-Hawley tariff bill) contributed in large part to the onset of the Great Depression. Thus came FDR, the New Deal, etc., etc., etc. UPDATE: My son reminds me that Hoover “was a great anti-Communist and left an important legacy for many later conservatives.” Indeed, he did. That legacy includes the establishment, by Hoover, of the Hoover Institution at Stanford University. The Hoover Institution was for many decades the only conservative American think-tank. It is, to this day, a redoubt for scholars and writers of the conservative-libertarian strain.

Franklin Delano Roosevelt (1882-1945, 1933-45) — a professional pol with old money who fell in with professors and communists and became a more dangerous version of his cousin Teddy. FDR was a “man of the people” only because the people were desperate for a father figure. He was, in fact, a thinly disguised dictator whose New Deal worsened the Depression and established the regulatory-welfare state as a permanent fixture in America. FDR’s imperious style set the tone for presidencies to come. His conciliatory gestures toward Stalin were aped by…

Harry S Truman (1884-1972, 1945-53) — the feisty, “common man” in the White House. Truman’s vaunted folksiness and decisiveness overshadow the flaw he shared with FDR: blindness to the foreign and domestic threat of Communism. Truman’s unwillingness to respond effectively to Communist China’s aggression in Korea emboldened the USSR to tighten its grip on Eastern Europe and test America’s resolve through third-world proxies.

Dwight David Eisenhower (1890-1969, 1953-61) — a popular general whose ready smile and garbled syntax belied his natural dignity, steely determination, and cunning. If Ike had been a conservative (in the mold of Robert A. Taft) and not a middle-of-the-road Republicrat, he might have deployed his popularity in the service of smaller government. As it was, his main legacies were (a) the vast pork-barrel program known as the Interstate Highway System, (b) a tacit acceptance of the “containment strategy” (e.g., inaction in the face of the Soviets’ brutal suppression of the Hungarian uprising of 1956), and (c) the repudiation of what he called the military-industrial complex. The second and third actions served to encourage the Soviet Union’s imperial aims.

John Fitzgerald Kennedy (1917-63, 1961-63) — a bootlegger’s son whose “charisma” and displays of “vigah” belied his moral sleaziness and poor health. JFK’s good relations with the media led to the creation of the myth that he somehow acted courageously in resolving the so-called Cuban missile crisis. But Kennedy’s actions actually had dire, long-run consequences for the U.S. As I wrote here:

[T]he Bay of Pigs invasion, which the Kennedy administration botched, would make Castro more popular in Cuba. The botched invasion pushed Castro closer to the USSR, which led to the Cuban missile crisis.

JFK’s inner circle was unwilling to believe that Soviet missile facilities were en route to Cuba, and therefore unable to act before the facilities were installed. JFK’s subsequent unwillingness to attack the missile facilities made it plain to Khrushchev that the Berlin Wall (erected in 1961) would not fall and that the U.S. would not risk armed confrontation with the USSR (conventional or nuclear) for the sake of the peoples behind the Iron Curtain. Thus the costly and tension-ridden Cold War persisted for almost another three decades.

I should add that Kennedy’s willingness to withdraw missiles from Turkey — a key element of the settlement with the USSR — played into Nikita Khrushchev’s hands, further emboldening the Soviet regime. Some legacy.

Lyndon Baines Johnson (1908-73, 1963-69) — a corrupt, vulgar man of humble beginnings, whose deep-seated feelings of inferiority manifested themselves (as they often do) in power-lust and egomania. It is impossible to say whether LBJ or his successor, Richard Nixon, was the more loathsome person ever to become president. LBJ’s main “gifts” to the nation were (a) the extension and entrenchment of the New Deal, via the Great Society, and (b) a half-hearted commitment to the unnecessary war in Vietnam, from which anti-war (i.e., pro-appeasement-and-surrender) forces in the U.S. have been drawing sustenance for 40 years.

Richard Milhous Nixon (1913-94, 1969-74) — the Republican Party’s LBJ, who — because he was a Republican — garners more loathing than LBJ. Nixon, following Johnson as he did, multiplied the scorn that Americans had begun to develop for the “imperial” presidency. (It was too little, too late, however. Americans would be more free and prosperous today had TR and FDR been subjected to popular scorn for their imperiousness.) More specifically, Nixon failed to bring a timely or honorable end to the war in Vietnam; he imposed price controls in a (misguided) effort to deal with inflation; and he legitimated the brutal regime of China’s dictator, Mao Zedong. Nixon’s singular legacies are (a) the Nixon Halloween mask, (b) the line “I am not a crook,” and (c) the not-so-mysterious mystery of the 18-1/2 minute gap. (For the youngsters among you, that gap was found in a tape of Nixon’s conversation with his henchmen about covering up the White House’s involvement in the Watergate break-in.)

Gerald Rudolph Ford (born Leslie King Jr.) (1913-2006, 1974-77) — a son of the Middle West, as moderate in politics as he was mild in manner. Ford, whose life’s ambition was to serve as Speaker of the House of Representatives, had to settle for the presidency that devolved upon him when Nixon resigned in disgrace. Had Ford allowed Nixon to be punished for his role in Watergate, Ford might have been elected president in his own right, thus sparing us the regime of…

James Earl Carter (1924-, 1977-81) — a wealthy businessman who exudes false humility and suffers from the “guilt” of being a white, Christian American. He therefore became a white, Christian, anti-American — a trait that has become glaringly obvious in his post-presidential years. Carter’s signal “accomplishments” as president were two. First, he deepened the country’s “malaise” by whining about it. Second, he did too little, too late, in reaction to the seizure of America’s embassy, and the Americans in it, by Iranian thugs. Carter’s ineffectual response to those Iranian thugs encouraged the belief that Americans would accede to terrorists’ demands.

Ronald Wilson Reagan (1911-2004, 1981-89) — a man of innate dignity (belying his career as a second-rate film star) and thoughtful, articulate conservatism (belying the portrayal of him as a “dunce” by his liberal detractors). Reagan was unable to dismantle (or even do much damage to) the welfare-regulatory state that arose from the New Deal and Great Society, but he was able to vanquish the Soviet Union, without firing a shot.

George Herbert Walker Bush (1924-, 1989-93) — born to wealth and verbal ineptitude (a trait inherited by his son George W.). Bush’s presidency was notable mainly for the Gulf War of 1991 and, in particular, Bush’s failure to oust Saddam Hussein when given an opportunity to do so easily and decisively. (I need say no more about that.) Bush’s betrayal of his promise of “no new taxes,” a brief recession that had ended before he left office, and his inability to play the “common man” with any degree of verisimilitude caused him to lose his bid for re-election to…

William Jefferson Clinton (born William Jefferson Blythe) (1946-, 1993-2001) — our trailer trash president, known mainly for sexual predation (if not worse) and an ability to cry on cue. The latter trait caused him to be popular among bleeding-heart types (though he was twice elected with a minority of the popular vote). The former trait was forgiven readily by the same hard-core liberals who would have called for the castration of a Republican with Clinton’s sexual track record. Clinton’s legacy is two-fold: the emasculation (no pun intended) of the armed forces (that’s how he erased the budget deficit) and the elevation of his (oft-betrayed) wife to the status of “serious politician.”

George Walker Bush (1946-, 2001-) — a big-government “conservative” whose track record on fiscal matters is no worse and no better than that of his post-World War II predecessors. As I see it now, Bush will leave us with three main accomplishments. First, his tax cuts will prove to have helped the economy, thus shoring up the case for so-called supply-side economics. Second, he did what his father should have done in 1991: depose Saddam Hussein. Third, and most importantly, unlike Truman, Carter, Reagan (yes, Reagan), and Clinton, he has refused steadfastly to cut and run in the face of inferior but troublesome enemy forces in Iraq. If the situation in Iraq and the Middle East stabilizes — as it could well do — the nation and the world (eventually) will be grateful to G.W. Bush for his resolve in the face of fanatical terrorists, fanatical Leftists (at home and abroad), inconstant conservatives (of the cut-and-run variety), and fickle public opinion.

Related: Presidents of the United States at American History Since 1900

Naming the Presidents

To see how quickly you can type the last names of all U.S. presidents, go here. The time limit is 10 minutes. I finished with 7:53 remaining; that is, I did it in 2:07.

SPOILERS AHEAD

Entering each of the names Adams (John and John Quincy), Harrison (William Henry and Benjamin), Johnson (Andrew and Lyndon), Roosevelt (Theodore and Franklin Delano), and Bush (George H.W. and George W.) accounts for two presidents. Cleveland, who served two non-consecutive terms, is covered by entering his name once.

The most “neglected” presidents — those who have been named only 49 or 50 percent of the time by more than 88,000 “guessers” (as I post this) — are (chronologically) Fillmore, Pierce, Buchanan, Hayes, Arthur, and Harding. Andrew Johnson probably would be in that group were it not for Lyndon Johnson.

The presidents most often named have been Bush (94 percent), Washington (93 percent), Clinton (90 percent), Lincoln (89 percent), Kennedy (86 percent), and Nixon (85 percent).

It rankles that Clinton is named more often than Lincoln (if only slightly). It is consoling that the Roosevelts (at 84 percent) do not outrank Washington or Lincoln.

That only 93 percent of the entries have named Washington is a testament to the low estate of American education and/or the vast geographical reach of the web.

Generations

Here is a good summary of Generations: The History of America’s Future, 1584 to 2069, which I read 10-15 years ago. The authors’ historiographic technique consists of after-the-fact generalizations that lead them to conclude that there are four basic generational personalities, which occur in repetitive cycles. It is those cycles that dictate the course of American history — according to the authors.

The generational analysis is of dubious value, because of its reductionism. Human nature and history just aren’t that simple. But the analysis does provide a hook on which to hang a neat summary of American history. The book is worth reading for its unique perspective on that history, not for its pseudo-scientific explanation of it.

The First Roosevelt

Bryan Caplan notes that

Time has put Teddy Roosevelt on the cover of its 5th Annual Special Issue, and the coverage stretches the limits of sycophancy. It reminds me of my high school history textbook, which praised any President who backed new regulations or started a war.

Thomas Sowell weighs in:

Theodore Roosevelt was indeed a landmark figure in the development of American politics and government, but in a very different sense from the way he is portrayed in Time magazine. In fact, the way that Theodore Roosevelt has been celebrated by many in the media and among the intelligentsia tells us more about them than about the first President Roosevelt. . . .

According to Time magazine, TR believed that “government had the right to moderate the excesses of free enterprise.” Just what were these excesses? According to Time, “poverty, child labor, dreadful factory conditions.”

All these things were attributed to the growth of industrial capitalism — without the slightest evidence that any of them was better before the growth of industrial capitalism. Nothing is easier than to imagine some ideal past or future society or to imagine that the net result of government intervention is bound to be a plus.

Sowell goes on to put the boot into that belief.

My own views about TR’s influence on America can be found in two posts. Here I point to the beginnings of the regulatory-welfare state during TR’s presidency (1901-9):

What happened around 1906? First, the regulatory state began to encroach on American industry with the passage of the Food and Drug Act and the vindictive application of the Sherman Antitrust Act, beginning with Standard Oil (the Microsoft of its day).

And here — in my antidote to standard history texts for schoolchildren — I have more to say about the First Roosevelt; for example:

Roosevelt was an “activist” President. Roosevelt used what he called the “bully pulpit” of the presidency to gain popular support for programs that exceeded the limits set in the Constitution. Roosevelt was especially willing to use the power of government to regulate business and to break up companies that had become successful by offering products that consumers wanted. Roosevelt was typical of politicians who inherited a lot of money and didn’t understand how successful businesses provided jobs and useful products for less-wealthy Americans.

It ran in the family.

How the Great Depression Ended

UPDATED BELOW

Conventional wisdom has it that the entry of the United States into World War II caused the end of the Great Depression in this country. My variant is that World War II led to a “glut” of private saving because (1) government spending caused full employment, but (2) workers and businesses were forced to save much of their income because the massive shift of output toward the war effort forestalled spending on private consumption and investment goods. The resulting cash “glut” fueled post-war consumption and investment spending.

Robert Higgs, research director of the Independent Institute, has a different theory, which he spells out in “Regime Uncertainty: Why the Great Depression Lasted So Long and Why Prosperity Resumed After the War” (available here), the first chapter of his new book, Depression, War, and Cold War. (Thanks to Don Boudreaux of Cafe Hayek for the pointer.) Here, from “Regime Uncertainty . . . ,” is Higgs’s summary of his thesis:

I shall argue here that the economy remained in the depression as late as 1940 because private investment had never recovered sufficiently after its collapse during the Great Contraction. During the war, private investment fell to much lower levels, and the federal government itself became the chief investor, directing investment into building up the nation’s capacity to produce munitions. After the war ended, private investment, for the first time since the 1920s, rose to and remained at levels sufficient to create a prosperous and normally growing economy.

I shall argue further that the insufficiency of private investment from 1935 through 1940 reflected a pervasive uncertainty among investors about the security of their property rights in their capital and its prospective returns. This uncertainty arose, especially though not exclusively, from the character of federal government actions and the nature of the Roosevelt administration during the so-called Second New Deal from 1935 to 1940. Starting in 1940 the makeup of FDR’s administration changed substantially as probusiness men began to replace dedicated New Dealers in many positions, including most of the offices of high authority in the war-command economy. Congressional changes in the elections from 1938 onward reinforced the movement away from the New Deal, strengthening the so-called Conservative Coalition.

From 1941 through 1945, however, the less hostile character of the administration expressed itself in decisions about how to manage the war-command economy; therefore, with private investment replaced by direct government investment, the diminished fears of investors could not give rise to a revival of private investment spending. In 1945 the death of Roosevelt and the succession of Harry S Truman and his administration completed the shift from a political regime investors perceived as full of uncertainty to one in which they felt much more confident about the security of their private property rights. Sufficiently sanguine for the first time since 1929, and finally freed from government restraints on private investment for civilian purposes, investors set in motion the postwar investment boom that powered the economy’s return to sustained prosperity notwithstanding the drastic reduction of federal government spending from its extraordinarily elevated wartime levels.

Higgs’s explanation isn’t inconsistent with mine, but it’s incomplete. Higgs overlooks the powerful influence of the large cash balances that individuals and corporations had accumulated during the war years. It’s true that because the war was a massive resource “sink” those cash balances didn’t represent real assets. But the cash was there, nevertheless, waiting to be spent on consumption goods and to be made available for capital investments through purchases of equities and debt.

It helped that the war dampened FDR’s hostility to business, and that FDR’s death ushered in a somewhat less radical regime. Those developments certainly fostered capital investment. But the capital investment couldn’t have taken place (or not nearly as much of it) without the “glut” of private saving during World War II. The relative size of that “glut” can be seen here:

[Graph: gross and net private saving, showing the wartime bulge in gross private saving.] Derived from Bureau of Economic Analysis, National Income and Product Accounts Tables: 5.1, Saving and Investment. Gross private saving is analogous to cash flow; net private saving is analogous to cash flow less an allowance for depreciation. The bulge in gross private saving represents pent-up demand for consumption and investment spending, which was released after the war.
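
A one-line arithmetic illustration of those definitions, with invented dollar figures (the real series come from the BEA table cited above):

    # Invented figures, in billions of dollars, purely to illustrate
    # the caption's definitions.
    gross_private_saving = 40.0   # analogous to cash flow
    depreciation = 10.0           # allowance for wear and tear on capital
    net_private_saving = gross_private_saving - depreciation
    print(net_private_saving)     # 30.0: cash flow less depreciation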

World War II did bring about the end of the Great Depression, not directly by full employment during the war but because that full employment created a “glut” of saving. After the war that “glut” jump-started

  • capital spending by businesses, which — because of FDR’s demise — invested more than they otherwise would have; and
  • private consumption spending, which — because of the privations of the Great Depression and the war years — would have risen sharply regardless of the political climate.

UPDATE: Robert Higgs, in an e-mail to me dated 06/24/06, submitted the following comment:

I happened upon your blog post that deals with my ideas about why the depression lasted so long and about the way in which the war related to the genuine prosperity that returned in 1946 for the first time since 1929. I appreciate the publicity, of course. I suggest, however, that you read my entire book, especially, with regard to the points you make on your blog, its chapter 5, “From Central Planning to the Market: The American Transition, 1945-47” (originally published in the Journal of Economic History, September 1999). I show there that the “glut of savings” idea, which is an old one, indeed perhaps even the standard theory of the successful postwar reconversion, does not fit the facts of what happened in 1945-47.

Here is my reply of 10/12/06:

I apologize for the delay in replying to your e-mail about my post… Your book, Depression, War, and Cold War, has not yet made it to the top of my Amazon.com wish list, but I have found “From Central Planning to the Market: The American Transition, 1945-47” on the Independent Institute’s website (here). If the evidence and arguments you adduce there are essentially the same as in chapter 5 of your book, I see no reason to reject the “glut of savings” idea, which is an old one, as I knew when I wrote the post. But, because it is not necessarily an old one to everyone who might read my blog, it is worth repeating — to the extent that it has merit.

At the end of the blog post I summarized the causes of the end of the Great Depression, as I see them:

World War II did bring about the end of the Great Depression, not directly by full employment during the war but because that full employment created a “glut” of saving. After the war that “glut” jump-started

  • capital spending by businesses, which — because of FDR’s demise — invested more than they otherwise would have; and
  • private consumption spending, which — because of the privations of the Great Depression and the war years — would have risen sharply regardless of the political climate.

In the web version of chapter 5 of your book you attribute increased capital spending to an improved business outlook (owing to FDR’s demise) and (in the section on the Recovery of the Postwar Economy, under Why the Postwar Investment Boom?) to “a combination of the proceeds of sales of previously acquired government bonds, increased current retained earnings (attributable in part to reduced corporate-tax liabilities), and the proceeds of corporate securities offerings” to the public. It seems that those “previously acquired government bonds” must have arisen from the “glut” of corporate saving during World War II.

What about the “glut” of personal saving, which you reject as the main source of increased consumer demand after World War II? In the online version of chapter 5 (in the section on the Recovery of the Postwar Economy, under Why the Postwar Consumption Boom?) you say:

The potential for a reduction of the personal saving rate (personal saving relative to disposable personal income) was huge after V-J Day. During the war the personal saving rate had risen to extraordinary levels: 23.6 percent in 1942, 25.0 percent in 1943, 25.5 percent in 1944, and 19.7 percent in 1945. Those rates contrasted with prewar rates that had hovered around 5 percent during the more prosperous years (for example, 5.0 percent in 1929, 5.3 percent in 1937, 5.1 percent in 1940). After the war, the personal saving rate fell to 9.5 percent in 1946 and 4.3 percent in 1947 before rebounding to the 5 to 7 percent range characteristic of the next two decades. After having saved at far higher rates than they would have chosen in the absence of the wartime restrictions, households quickly reduced their rate of saving when the war ended.

That statement seems entirely consistent with the proposition that consumers spent more after the war because they had the money to spend — money that they had acquired during the war when their opportunities for spending it were severely restricted. Not so fast, you would say: What about the fact that “individuals did not reduce their holdings of liquid assets after the war” (your statement)? They didn’t need to. Money is fungible. If consumers had more money coming in (as they did), they could spend more while maintaining the same level of liquid assets — because they had a “backlog” of saving. Here’s my take:

  • Higher post-war incomes didn’t just happen, they were the result of higher rates of investment and consumption spending.
  • The higher rate of investment spending was due, in part, to corporate saving during the war and, in part, to individuals’ purchases of corporate securities and equities.
  • At bottom, the wartime “glut” of personal saving enabled the postwar saving rate to decline to a more normal level, thus allowing consumers to buy equities and securities — and to spend more — without drawing down on their liquid assets.

Granted, business and personal saving during World War II was not nearly as large in real terms as it was on paper — given the very high real cost of the war effort. But it was the availability of paper savings that strongly influenced the behavior of businesses and consumers after the war.
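
A toy calculation may help to make the fungibility argument concrete. The saving rates below are the ones Higgs cites (about 25 percent during the war, 9.5 percent in 1946, 4.3 percent in 1947); the income figures and the starting balance of liquid assets are invented. The result is the pattern described above: consumption rises year by year, yet liquid assets are never drawn down, because current income covers the higher spending while the falling saving rate still adds a little to the balance.

    # Saving rates from the Higgs passage quoted above; income figures
    # and the starting balance are invented for illustration.
    liquid_assets = 5000.0                  # hypothetical wartime "backlog"
    years = [(3000.0, 0.25),                # wartime: forced saving
             (3500.0, 0.095),               # 1946: rate falls
             (3800.0, 0.043)]               # 1947: rate near prewar norm

    for income, rate in years:
        saving = income * rate
        consumption = income - saving       # spending rises each year
        liquid_assets += saving             # the balance never has to fall
        print(round(consumption), round(liquid_assets))
    # Prints (rounded): 2250 5750, then 3168 6082, then 3637 6246.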

Perhaps I am misinterpreting the evidence you present in chapter 5. And perhaps there is more in other chapters of your book that I should take into account. I will be grateful for a reply, if and when you have the time.

I will further update this post if Mr. Higgs replies to my note of October 12, 2006.

62 Years Ago Today

June 6, 1944: [photo omitted]
If we Americans could be united against Nazi Germany, why can we not be united against equally evil Islamic terrorists and their enablers, at home and abroad? What has happened to us in the last 62 years? Think about it. The answers will come to you, if you let them. (Here’s a hint.)

(Photo courtesy of Hot Air.)

The Real Thomas Jefferson

David N. Mayer of MayerBlog posted “Thomas Jefferson, Man vs. Myth” yesterday in observance of the 263rd anniversary of Jefferson’s birth. Mayer debunks a lot of bunk that’s been written — and believed — about Jefferson, including his standing as the “father of American democracy”:

Many people today – including historians, political scientists, and even Jefferson scholars – misunderstand Jefferson’s commitment to republicanism and particularly his advocacy of “self-government,” confusing it with democracy. But democracy is government by the majority of the people; republican government is government by the representatives of the people; and limited, constitutional, republican government – the American system – is government by the people’s representatives whose power is limited by various constraints imposed by the constitution. “Self-government,” as Jefferson understood it, meant, literally, individuals governing themselves, without the interference of government. Early in his presidency Jefferson wrote, “Our people in a body are wise, because they are under the unrestrained and unperverted operation of their own understandings.” He viewed the United States as the leading model to the world for “the interesting experiment of self‑government”; that it was the nation’s destiny to show the world “what is the degree of freedom and self‑government in which a society may venture to leave it’s individual members.” To “leave” them to do what? To be free – to govern themselves.

Mayer, who devotes a section of the post to a clear-eyed assessment of Jefferson (no idolater is Mayer), also writes about the Sally Hemings myth and several aspects of Jefferson’s belief system, including his deism and embrace of free markets. Read the whole thing.

American History Since 1900

I have completed Part One of “American History Since 1900.” I am writing the series for my grandchildren, as an alternative to the standard history texts, which extol the virtues of big government and ooze political correctness.

Part One, which is about the Presidents of the United States in the 20th and 21st centuries, is organized chronologically. It discusses the major events during each President’s time in office. Part Two will give more details about major world events that have affected the United States, and will then focus on major political, social, and economic trends in the United States. Part Three will discuss the major technological advances that enable Americans of today to live much better than Americans of 1900. Part Four will explain how the growth of government power since 1900 has made Americans much worse off than they otherwise would be.

A major theme of this history is the role of government in the lives of Americans; the growth of that role has been the major development in American history since 1900. Many Americans today take for granted a degree of government involvement in their lives that would have shocked Americans of 1900. There are other important themes in this history, but the growth of government power overshadows everything else. Why is that? Because the growth of government power means that Americans have less freedom than they once had, and far less than was envisioned by the founding generation that fought for America’s independence and wrote its Constitution.

The Heart of the Matter

From Mark Steyn’s appreciation of the late Eugene McCarthy (1916-2005):

Forty years after McCarthy’s swift brutal destruction of the most powerful Democrat in the second half of the 20th century [LBJ], it remains unclear whether his party will ever again support a political figure committed to waging serious war, any war: Carter confined himself to a disastrous helicopter rescue mission in Iran; Clinton bombed more countries in a little over six months than the supposed warmonger Bush has hit in six years, but, unless you happened to be in that Sudanese aspirin factory or Belgrade embassy, it was always desultory and uncommitted. Even though the first Gulf War was everything they now claim to support – UN-sanctioned, massive French contribution, etc – John Kerry and most of his colleagues voted against it. Joe Lieberman is the lonesomest gal in town as an unashamedly pro-war Democrat, and even Hillary Clinton’s finding there are parts of the Democratic body politic which are immune to the restorative marvels of triangulation. Gene McCarthy’s brief moment in the spotlight redefined the party’s relationship with the projection of military force. That’s quite an accomplishment. Whether it was in the long-term strategic interests of either the party or American liberalism is another question. Yet those few months in the snows of New Hampshire linger over the Democratic landscape like an eternal winter.

As I once put it, the

Democrat Party began its veer to the hard left in 1968, with Eugene McCarthy’s anti-war candidacy. McCarthy didn’t win the party’s nomination that year, but his strong showing made reflexive anti-war rhetoric a respectable staple of Democrat discourse.

The Democrats proceeded in 1972 to nominate George McGovern, who seems moderate only by contrast with Ramsey Clark and Michael Moore. Since McGovern’s ascendancy, the left-wing nuts generally have dominated the party — in voice if not in numbers. Nominees since McGovern: Carter (a latter-day Tokyo Rose), Mondale (Carter’s one-term accomplice), Dukakis, Clinton, Gore, and Kerry — all well to the left of the mainstream (to borrow some Democrat rhetoric). Bill Clinton (of the failed plan to socialize health care) became a moderate only because he faced Republican majorities in Congress. Clinton lately [in his comments about the war in Iraq] has been showing his true colors.

(Thanks to Ed Driscoll for the pointer to Steyn’s piece.)