Baseball Roundup: Pennant Droughts, Post-Season Play, and Seven-Game World Series

The occasion for this post is the end of the 2019 World Series, which was unique in one way: It is the only Series in which the road team won every game. I begin with a discussion of pennant droughts — the number of years that the 30 teams in MLB have gone without winning a league championship or a World Series. Next is a dissection of post-season play, which has devolved into something like a game of chance rather than a contest between the best teams of each league. I close with a recounting and analysis of the classic World Series — the 40 that have gone to seven games.

PENNANT DROUGHTS

Everyone in the universe knows that when the Chicago Cubs won the National League championship in 2016, that feat ended what had been the longest pennant drought of the 16 old-line franchises in the National and American Leagues. The mini-bears had gone 71 years since winning the NL championship in 1945. The Cubs went on to win the 2016 World Series; their previous win had occurred 108 years earlier, in 1908.

Here are the most recent league championships and World Series wins by the other old-line National League teams: Atlanta (formerly Boston and Milwaukee) Braves — 1999, 1995; Cincinnati Reds — 1990, 1990; Los Angeles (formerly Brooklyn) Dodgers — 2018, 1988; Philadelphia Phillies — 2009, 2008; Pittsburgh Pirates — 1979, 1979; San Francisco (formerly New York) Giants — 2014, 2014; and St. Louis Cardinals — 2013, 2011.

The American League lineup looks like this: Baltimore Orioles (formerly Milwaukee Brewers and St. Louis Browns) — 1983, 1983; Boston Red Sox — 2018, 2018; Chicago White Sox — 2005, 2005; Cleveland Indians — 2016 (previously 1997), 1948; Detroit Tigers — 2012, 1984; Minnesota Twins (formerly Washington Senators) — 1991, 1991; New York Yankees — 2009, 2009; and Oakland (formerly Philadelphia and Kansas City) Athletics — 1990, 1989.

What about the expansion franchises, of which there are 14? I won’t separate them by league because two of them (Milwaukee and Houston) have switched leagues since their inception. Here they are, in this format: Team (year of creation) — year of last league championship, year of last WS victory (a small bookkeeping sketch follows the list):

Arizona Diamondbacks (1998) — 2001, 2001

Colorado Rockies (1993) — 2007, never

Houston Astros (1962) — 2019, 2017

Kansas City Royals (1969) — 2015, 2015

Los Angeles Angels (1961) — 2002, 2002

Miami Marlins (1993) — 2003, 2003

Milwaukee Brewers (1969, as Seattle Pilots) — 1982, never

New York Mets (1962) — 2015, 1986

San Diego Padres (1969) — 1998, never

Seattle Mariners (1977) — never, never

Tampa Bay Rays (1998) — 2008, never

Texas Rangers (1961, as expansion Washington Senators) — 2011, never

Toronto Blue Jays (1977) — 1993, 1993

Washington Nationals (1969, as Montreal Expos) — 2019, 2019
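Drought lengths are just subtraction from the current season. Here is a minimal bookkeeping sketch (Python), with a handful of entries transcribed from the lists above; the rest follow the same pattern:

```python
# A minimal sketch: drought lengths as of 2019, computed from the
# (last pennant, last World Series win) pairs listed above.

AS_OF = 2019

teams = {
    "Chicago Cubs": (2016, 2016),
    "Cleveland Indians": (2016, 1948),
    "Seattle Mariners": (None, None),   # never won either
    "Texas Rangers": (2011, None),      # a pennant, but never a WS win
    "Washington Nationals": (2019, 2019),
}

for team, (pennant, ws) in teams.items():
    p = "never" if pennant is None else f"{AS_OF - pennant} years"
    w = "never" if ws is None else f"{AS_OF - ws} years"
    print(f"{team}: pennant drought {p}, World Series drought {w}")
```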

POST-SEASON PLAY — OR, MAY THE BEST TEAM LOSE

The first 65 World Series (1903 and 1905-1968) were contests between the best teams in the National and American Leagues. The winner of a season-ending Series was therefore widely regarded as the best team in baseball for that season (except by the fans of the losing team and other soreheads).

The advent of divisional play in 1969 meant that the Series could include a team that wasn’t the best in its league. From 1969 through 1993, when participation in the Series was decided by a single postseason playoff between division winners (1981 excepted), the leagues’ best teams met in only 10 of 24 series.

The advent of three-tiered postseason play in 1995 and four-tiered postseason play in 2012 has only made matters worse.

By the numbers:

  • Postseason play originally consisted of a World Series (period) involving 1/8 of major-league teams — the best in each league. Postseason play now involves 1/3 of major-league teams and 9 postseason rounds (a wild-card game, 2 division series, and a championship series in each league, plus the inter-league World Series).
  • Only 4 of the 25 Series from 1995 through 2019 featured the best teams of both leagues, as measured by W-L record.
  • Of the 25 Series from 1995 through 2019, only 9 were won by the best team in a league.
  • Of the same 25 Series, 12 (48 percent) were won by the better of the two teams, as measured by W-L record. Of the 65 Series played before 1969, 35 were won by the team with the better W-L record and 2 involved teams with the same W-L record. So before 1969 the team with the better W-L record won 35/63 of the time, for an overall average of 56 percent. That’s not significantly different from the result for the 25 Series played in 1995-2019, but the teams in the earlier era were always their league’s best, which is no longer true (see the sketch after this list).
  • From 1995 through 2019, a league’s best team (based on W-L record) appeared in a Series only 18 of 50 possible times — 7 times for the NL, 11 times for the AL. A random draw among teams qualifying for post-season play would have resulted in the selection of each league’s best team about 9 times.
  • Division winners opposed each other in just over half (13/25) of the Series from 1995 through 2019.
  • Wild-card teams appeared in 11 of those Series, with all-wild-card Series in 2002 and 2014.
  • Wild-card teams occupied just over 1/4 of the slots in the 1995-2019 Series — 13 out of 50.
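The arithmetic behind those percentages is simple enough to spell out. A minimal sketch (the counts are taken from the text above, not recomputed from raw standings):

```python
# Percentages quoted in the list above; counts come from the text.

print(f"1995-2019, better team won: {12 / 25:.0%}")        # 48%
print(f"pre-1969, better team won:  {35 / (65 - 2):.0%}")  # 56% (2 even matchups excluded)
print(f"wild-card share of berths:  {13 / 50:.0%}")        # 26%, just over 1/4
```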

The winner of the World Series used to be its league’s best team over the course of the entire season, and the winner had to beat the best team in the other league. Now, the winner of the World Series usually can claim nothing more than having won the most postseason games. Why not eliminate the 162-game regular season, select the postseason contestants at random, and go straight to postseason play?

Here are the World Series pairings for 1995-2019 (National League teams listed first; + indicates winner of World Series; the sketch after the list tallies the rankings):

1995 –
Atlanta Braves (division winner; .625 W-L, best record in NL)+
Cleveland Indians (division winner; .694 W-L, best record in AL)

1996 –
Atlanta Braves (division winner; .593, best in NL)
New York Yankees (division winner; .568, 2nd-best in AL)+

1997 –
Florida Marlins (wild-card team; .568, 2nd-best in NL)+
Cleveland Indians (division winner; .534, 4th-best in AL)

1998 –
San Diego Padres (division winner; .605, 3rd-best in NL)
New York Yankees (division winner; .704, best in AL)+

1999 –
Atlanta Braves (division winner; .636, best in NL)
New York Yankees (division winner; .605, best in AL)+

2000 –
New York Mets (wild-card team; .580, 4th-best in NL)
New York Yankees (division winner; .540, 5th-best in AL)+

2001 –
Arizona Diamondbacks (division winner; .568, 4th-best in NL)+
New York Yankees (division winner; .594, 3rd-best in AL)

2002 –
San Francisco Giants (wild-card team; .590, 4th-best in NL)
Anaheim Angels (wild-card team; .611, 3rd-best in AL)+

2003 –
Florida Marlins (wild-card team; .562, 3rd-best in NL)+
New York Yankees (division winner; .623, best in AL)

2004 –
St. Louis Cardinals (division winner; .648, best in NL)
Boston Red Sox (wild-card team; .605, 2nd-best in AL)+

2005 –
Houston Astros (wild-card team; .549, 3rd-best in NL)
Chicago White Sox (division winner; .611, best in AL)+

2006 –
St. Louis Cardinals (division winner; .516, 5th-best in NL)+
Detroit Tigers (wild-card team; .586, 3rd-best in AL)

2007 –
Colorado Rockies (wild-card team; .552, 2nd-best in NL)
Boston Red Sox (division winner; .593, tied for best in AL)+

2008 –
Philadelphia Phillies (division winner; .568, 2nd-best in NL)+
Tampa Bay Rays (division winner; .599, 2nd-best in AL)

2009 –
Philadelphia Phillies (division winner; .574, 2nd-best in NL)
New York Yankees (division winner; .636, best in AL)+

2010 —
San Francisco Giants (division winner; .568, 2nd-best in NL)+
Texas Rangers (division winner; .556, 4th-best in AL)

2011 —
St. Louis Cardinals (wild-card team; .556, 4th-best in NL)+
Texas Rangers (division winner; .593, 2nd-best in AL)

2012 —
San Francisco Giants (division winner; .580, 3rd-best in NL)+
Detroit Tigers (division winner; .543, 7th-best in AL)

2013 —
St. Louis Cardinals (division winner; .599, best in NL)
Boston Red Sox (division winner; .599, best in AL)+

2014 —
San Francisco Giants (wild-card team; .543, 4th-best in NL)+
Kansas City Royals (wild-card team; .549, 4th-best in AL)

2015 —
New York Mets (division winner; .556, 5th-best in NL)
Kansas City Royals (division winner; .586, best in AL)+

2016 —
Chicago Cubs (division winner; .640, best in NL)+
Cleveland Indians (division winner; .584, 2nd-best in AL)

2017 —
Los Angeles Dodgers (division winner; .642, best in NL)
Houston Astros (division winner; .623, best in AL)+

2018 —
Los Angeles Dodgers (division winner; .564, 3rd-best in NL)
Boston Red Sox (division winner; .667, best in AL)+

2019 —
Washington Nationals (wild-card team; .574, 3rd-best in NL)+
Houston Astros (division winner; .660, best in AL)
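The list above lends itself to a quick tally. A minimal sketch, with each Series coded by the finishers’ within-league W-L ranks as given above (1 = best; the 2007 Red Sox, tied for the AL’s best record, are coded as rank 1):

```python
# Tally of the 1995-2019 pairings above: (year, NL rank, AL rank, winner).
# Ranks are each team's W-L rank within its league, transcribed from the list.

series = [
    (1995, 1, 1, "NL"), (1996, 1, 2, "AL"), (1997, 2, 4, "NL"),
    (1998, 3, 1, "AL"), (1999, 1, 1, "AL"), (2000, 4, 5, "AL"),
    (2001, 4, 3, "NL"), (2002, 4, 3, "AL"), (2003, 3, 1, "NL"),
    (2004, 1, 2, "AL"), (2005, 3, 1, "AL"), (2006, 5, 3, "NL"),
    (2007, 2, 1, "AL"), (2008, 2, 2, "NL"), (2009, 2, 1, "AL"),
    (2010, 2, 4, "NL"), (2011, 4, 2, "NL"), (2012, 3, 7, "NL"),
    (2013, 1, 1, "AL"), (2014, 4, 4, "NL"), (2015, 5, 1, "AL"),
    (2016, 1, 2, "NL"), (2017, 1, 1, "AL"), (2018, 3, 1, "AL"),
    (2019, 3, 1, "NL"),
]

nl_best = sum(nl == 1 for _, nl, _, _ in series)
al_best = sum(al == 1 for _, _, al, _ in series)
both = sum(nl == al == 1 for _, nl, al, _ in series)
# 7, 12, 4 -- the AL count becomes 11 if the 2007 tie is excluded
print(nl_best, al_best, both)
```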

THE SEVEN-GAME WORLD SERIES

The seven-game World Series holds the promise of high drama. That promise is fulfilled if the Series stretches to a seventh game and that game goes down to the wire. Courtesy of Baseball-Reference.com, here are the scores of the deciding games of every seven-game Series:

1909 – Pittsburgh (NL) 8 – Detroit (AL) 0

1912 – Boston (AL) 3 – New York (NL) 2 (10 innings)

1924 – Washington (AL) 4 – New York (NL) 3 (12 innings)

1925 – Pittsburgh (NL) 9 – Washington (AL) 7

1926 – St. Louis (NL) 3 – New York (AL) 2

1931 – St. Louis (NL) 4 – Philadelphia (AL) 2

1934 – St. Louis (NL) 11 – Detroit (AL) 0

1940 – Cincinnati (NL) 2 – Detroit (AL) 1

1945 – Detroit (AL) 9 – Chicago (NL) 3

1946 – St. Louis (NL) 4 – Boston (AL) 3

1947 – New York (AL) 5 – Brooklyn (NL) 2

1952 – New York (AL) 4 – Brooklyn (NL) 2

1955 – Brooklyn (NL) 2 – New York (AL) 0

1956 – New York (AL) 9 – Brooklyn (NL) 0

1957 – Milwaukee (NL) 5 – New York (AL) 0

1958 – New York (AL) 6 – Milwaukee (NL) 2

1960 – Pittsburgh (NL) 10 – New York (AL) 9 (decided by Bill Mazeroski’s home run in the bottom of the 9th)

1962 – New York (AL) 1 – San Francisco (NL) 0

1964 – St. Louis (NL) 7 – New York (AL) 5

1965 – Los Angeles (NL) 2 – Minnesota (AL) 0

1967 – St. Louis (NL) 7 – Boston (AL) 2

1968 – Detroit (AL) 4 – St. Louis (NL) 1

1971 – Pittsburgh (NL) 2 – Baltimore (AL) 1

1972 – Oakland (AL) 3 – Cincinnati (NL) 2

1973 – Oakland (AL) 5 – New York (NL) 2

1975 – Cincinnati (NL) 4 – Boston (AL) 3

1979 – Pittsburgh (NL) 4 – Baltimore (AL) 1

1982 – St. Louis (NL) 6 – Milwaukee (AL) 3

1985 – Kansas City (AL) 11 – St. Louis (NL) 0

1986 – New York (NL) 8 – Boston (AL) 5

1987 – Minnesota (AL) 4 – St. Louis (NL) 2

1991 – Minnesota (AL) 1 – Atlanta (NL) 0 (10 innings)

1997 – Florida (NL) 3 – Cleveland (AL) 2 (11 innings)

2001 – Arizona (NL) 3 – New York (AL) 2 (decided in the bottom of the 9th)

2002 – Anaheim (AL) 4 – San Francisco (NL) 1

2011 – St. Louis (NL) 6 – Texas (AL) 2

2014 – San Francisco (NL) 3 – Kansas City (AL) 2

2016 – Chicago (NL) 8 – Cleveland (AL) 7 (10 innings)

2017 – Houston (AL) 5 – Los Angeles (NL) 1

2019 – Washington (NL) 6 – Houston (AL) 2

Summary statistics:

36 percent (40) of the 111 best-of-seven Series have gone to the limit of seven games (another four Series were played in a best-of-nine format, but none went to nine games).

22 of the 40 Series were decided by 1 or 2 runs.

15 of those Series were decided by 1 run (7 times in extra innings or the winning team’s last at-bat).

21 of the 40 Series were won by the team that was behind after five games.

6 of the 40 Series were won by the team that was behind after four games.

There were 4 consecutive seven-game Series in 1955-58, all involving the New York Yankees (more than 1/4 of the Yankees’ Series — 11 of 40 — went to seven games).
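These tallies are easy to check mechanically. A minimal sketch, with the deciding-game scores above coded as (year, winning runs, losing runs):

```python
# Margin-of-victory checks on the deciding-game scores listed above.

games = [
    (1909, 8, 0), (1912, 3, 2), (1924, 4, 3), (1925, 9, 7), (1926, 3, 2),
    (1931, 4, 2), (1934, 11, 0), (1940, 2, 1), (1945, 9, 3), (1946, 4, 3),
    (1947, 5, 2), (1952, 4, 2), (1955, 2, 0), (1956, 9, 0), (1957, 5, 0),
    (1958, 6, 2), (1960, 10, 9), (1962, 1, 0), (1964, 7, 5), (1965, 2, 0),
    (1967, 7, 2), (1968, 4, 1), (1971, 2, 1), (1972, 3, 2), (1973, 5, 2),
    (1975, 4, 3), (1979, 4, 1), (1982, 6, 3), (1985, 11, 0), (1986, 8, 5),
    (1987, 4, 2), (1991, 1, 0), (1997, 3, 2), (2001, 3, 2), (2002, 4, 1),
    (2011, 6, 2), (2014, 3, 2), (2016, 8, 7), (2017, 5, 1), (2019, 6, 2),
]

margins = [w - l for _, w, l in games]
print(len(games), "seven-game Series")                      # 40
print(sum(m == 1 for m in margins), "decided by 1 run")     # 15
print(sum(m <= 2 for m in margins), "decided by 1-2 runs")  # 22
```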

Does the World Series deliver high drama? If a seven-game Series is high drama, the World Series has delivered about 1/3 of the time. If high drama means a seven-game Series in which the final game was decided by 1 run, the World Series has delivered about 1/8 of the time. If high drama means a seven-game Series in which the final game was decided by only 1 run in extra innings or the winning team’s final at-bat, the World Series has delivered only 1/16 of the time.

The rest of the time the World Series is merely an excuse to fill seats and sell advertising, inasmuch as it’s seldom a contest between the best team in each league.

GDP Trivia

Bearing in mind Arnold Kling’s reservations (and my own) about aggregate economic data, I will nevertheless entertain you with some trivial factoids on the occasion of the release of the advance estimate of GDP for the 3rd quarter of 2019.

First, the post-World War II business-cycle record:

Graphically (with short cycles omitted):

The current cycle is the second-longest since the end of World War II, but also the least robust.

Note the large gap between the (low) peak growth rates experienced in recent cycles (purple, pale green, and red lines) and the (higher) ones experienced in earlier cycles. The peak for the current cycle (if you can call it a peak) occurred early (in the 5th quarter after the bottom of the Great Recession). Such a low peak so early in the cycle broke a pattern that had held since the end of World War II:

The red diamond represents the current cycle. Earlier cycles are represented by black dots, and the robust regression equation applies to those cycles.
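For readers who want to see the mechanics of such a fit, here is a sketch of a robust regression in the spirit of the one described above. The arrays are hypothetical placeholders, not the actual cycle data behind the chart, so only the method is meaningful:

```python
# Sketch of a robust regression like the one described above.
# The arrays below are HYPOTHETICAL placeholders, not the charted data.
import numpy as np
import statsmodels.api as sm

peak_quarter = np.array([3, 4, 5, 6, 8, 9, 11])              # hypothetical: quarter of peak
peak_growth = np.array([8.1, 7.5, 7.0, 6.2, 5.5, 4.9, 4.1])  # hypothetical: peak growth, %

X = sm.add_constant(peak_quarter)
fit = sm.RLM(peak_growth, X, M=sm.robust.norms.HuberT()).fit()
print(fit.params)  # intercept and slope, with outlying cycles downweighted
```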

I won’t be surprised if economists discover that the weakness of the current business cycle is due to Obama’s economic policies (and rhetoric), just as economists (unsurprisingly) discovered that FDR’s policies deepened and prolonged the Great Depression.

What’s in a Trend?

I sometimes forget myself and use “trend”. Then I see a post like “Trends for Existing Home Sales in the U.S.” and am reminded why “trend” is a bad word. This graphic is the centerpiece of the post:

There was a sort of upward trend from June 2016 until August 2017, but the trend stopped. So it wasn’t really a trend, was it? (I am here using “trend” in the way that it seems to be used generally, that is, as a direction of movement into the future.)

After a sort of flat period, the trend turned upward again, didn’t it? No, because the trend had been broken, so a new trend began in the early part of 2018. But it was a trend only until August 2018, when it became a different trend — mostly downward for several months.

Is there a flat trend now, or as the author of the piece puts it: “Existing home sales in the U.S. largely continued treading water through August 2019”? Well, that was the trend — temporary pattern is a better descriptor — but it doesn’t mean that the value of existing-home sales will continue to hover around $1.5 trillion.

The moral of the story: The problem with “trend” is that it implies a direction of movement into the future — that the future will look a lot like the past. But a trend is only a trend for as long as it lasts. And who knows how long it will last, that is, when it will stop?
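A toy computation makes the point: fit a straight line to successive windows of a pure random walk, a series that by construction has no trend at all, and every window still reports a “trend” (synthetic data, purely for illustration):

```python
# A "trend" is just the sign of a locally fitted slope, and it flips
# as the window moves. Synthetic random-walk data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0, 1, 60))  # a random walk: no true trend

window = 12
for start in range(0, len(series) - window, window):
    y = series[start:start + window]
    slope = np.polyfit(np.arange(window), y, 1)[0]
    print(f"months {start:2d}-{start + window - 1}: "
          f"{'up' if slope > 0 else 'down'} ({slope:+.2f}/month)")
```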

I hope to start a trend toward the disuse of “trend”. My hope is futile.

Automate the Ball-Strike Call?

Adam Kilgore addresses the issue:

[Umpire Lance] Barksdale’s faulty ball-strike calls did not define the Houston Astros’ 7-1 victory in Game 5 of the World Series, and they did not deserve credit reserved for Gerrit Cole or blame assigned to Washington’s quiet bats and leaky bullpen. But they did overtake the conversation during the game, and they will provide a backdrop as Major League Baseball continues a seemingly inevitable — if potentially misguided — creep toward robot umpires.

All game, the Nationals fumed over borderline calls that went against them. Immediately and decisively, technology allowed them, their fans and anybody with an Internet connection to validate their anger….

It is precisely that scenario that prompts MLB’s consideration of an automated ball-strike system. Players, media and fans have instant access to data compiled by TrackMan and synthesized into binary outcomes. Ball or strike. Right or wrong….

The next logical step, of course, is that if everybody can see clear-cut results immediately, why shouldn’t they be used to determine outcomes rather than a failure-prone set of human eyes?…

The introduction of the system in the majors would come with undesirable consequences, some of them unintended and some unforeseen. It would change the way the sport looks as we know it. For 150 years, a pitcher who missed his spot in the strike zone and made his catcher lunge awkwardly often was punished with a ball; those would become strikes. The three-dimensional nature of the strike zone, and the human eye’s ability to recognize how a 90-mph projectile flies through that plot, means balls in the dirt have always been balls, even if they clip the very front of the zone at the knees. Those would become strikes. It would also eradicate the skill of pitch framing or expanding the zone throughout the game, skills that make baseball richer.

The final paragraph above is unmitigated horsesh**t.

All of the so-called undesirable consequences cited would result from actual enforcement of the actual strike zone. That would be a big plus because (a) the zone would be enforced consistently and (b) there would be far less controversy about ball and strike calls.
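For what it’s worth, the core of an automated call is a simple geometric test. Here is a minimal sketch (not MLB’s actual TrackMan pipeline; the dimensions and the sample trajectory are illustrative assumptions):

```python
# A minimal sketch of automated ball-strike adjudication: treat the zone as
# a 3-D volume and the pitch as a sampled trajectory. Illustrative only.

PLATE_HALF_WIDTH = 17.0 / 2 / 12  # plate is 17 inches wide; work in feet
PLATE_DEPTH = 17.0 / 12           # front-to-back depth of the zone, in feet

def is_strike(trajectory, knee_height, letter_height):
    """trajectory: list of (x, y, z) points in feet -- x across the plate,
    y from the front edge toward the catcher, z height. Strike if any
    sampled point lies inside the 3-D zone, so a breaking ball that clips
    the front edge at the knees is a strike even if it lands in the dirt."""
    return any(
        -PLATE_HALF_WIDTH <= x <= PLATE_HALF_WIDTH
        and 0.0 <= y <= PLATE_DEPTH
        and knee_height <= z <= letter_height
        for x, y, z in trajectory
    )

# A curve that nicks the front of the zone at the knees, then dives:
pitch = [(0.1, 0.0, 1.52), (0.1, 0.7, 1.1), (0.1, 1.4, 0.7)]
print(is_strike(pitch, knee_height=1.5, letter_height=3.4))  # True
```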

Players, managers, and fans would quickly adapt to the subtle changes in the way the game is played. The “way that the sport looks as we know it” has changed dramatically — but slowly — for 150 years. But Kilgore is too young to appreciate that fact of life.

I have been in favor of automated ball-strike calls for many years. I’m conservative, which means that I’m in favor of demonstrably beneficial changes. I guess that makes Kilgore of The Washington Post a reactionary. How ironic.

Handicapping the 2019 World Series: Game 6 (and Maybe Game 7)

The Astros and Nats both played 20 other teams during the regular season. They didn’t play each other, but they had 12 opponents in common, and they compiled similar records against those opponents: The Astros won 36 games and lost 24 for an overall W-L average of .600. The Nats won 35 games and lost 24 for an overall average of .593.

But … here’s the kicker. Game 6 (and maybe game 7) will be played in Houston. The Astros played at home against 10 of the 12 common opponents. The Nats played on the road against 11 of the 12 common opponents. The Astros’ home record against the 10 teams was 19-12, for a W-L average of .613. The Nats’ road record against the 11 teams was 16-13, for a .552 W-L average. Moreover, the Astros compiled a 50-31 (.617) record at home, while the Nats went 43-38 (.531) on the road.
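The arithmetic, spelled out (records as given above):

```python
# W-L averages from the records quoted in the text.

def wl(wins, losses):
    return wins / (wins + losses)

print(f"vs. common opponents: Astros {wl(36, 24):.3f}, Nats {wl(35, 24):.3f}")
print(f"common-opponent home/road: Astros {wl(19, 12):.3f} (home), "
      f"Nats {wl(16, 13):.3f} (road)")
print(f"full-season splits: Astros {wl(50, 31):.3f} (home), "
      f"Nats {wl(43, 38):.3f} (road)")
```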

My numbers are in sync with the betting line. Take the Astros if you’re a betting person. I’m not, but I expect them to win the Series.

But I won’t be at all surprised if the Nats pull off an upset. Single events don’t have probabilities. Non-random events (like physical games) don’t have probabilities. Single, non-random events are unpredictable, which is why people bet on them. If they were predictable, all bets would be off.

Blogroll Restored

I have finally updated and restored my blogroll, which you will find by scrolling down the sidebar. It includes links to all of the blogs and sites that I read regularly.

Why Are Interest Rates So Low? (II)

Six years ago, I opined that

borrowers have become less keen about borrowing; that is, they lack confidence about future prospects for income (in the case of households) and returns on investment (in the case of businesses). Why should that be?

If the post-World War II trend is any indication — and I believe that it is — the American economy is sinking into stagnation. Here is the long view [growth rates are inflation-adjusted, final entry updated]:

  • 1790-1861 — annual growth of 4.1 percent — a booming young economy, probably at its freest
  • 1866-1907 — annual growth of 4.3 percent — a robust economy, fueled by (mostly) laissez-faire policies and the concomitant rise of technological innovation and entrepreneurship
  • 1970-2018 — annual growth of 2.7 percent — sagging under the cumulative weight of “progressivism,” New Deal legislation, LBJ’s “Great Society” (with its legacy of the ever-expanding and oppressive welfare/transfer-payment schemes: Medicare, Medicaid, a more generous package of Social Security benefits), and an ever-growing mountain of regulatory restrictions. [All further compounded by Obama’s expansion of Medicare and Medicaid and acceleration of regulatory activity, some of which Trump has reversed, but most of which still throttles the economy.] (The sketch after this list shows what such differences in growth rates compound to.)
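Those rate differences look small but compound enormously. A short sketch using the rates in the list above:

```python
# What the era-average growth rates above compound to over 50 years.

for era, rate in [("1790-1861", 0.041), ("1866-1907", 0.043), ("1970-2018", 0.027)]:
    print(f"{era}: $1 of real output grows to ${(1 + rate) ** 50:.2f} in 50 years")
```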

Arnold Kling, citing a piece by Andrew McAfee, suggests another reason:

[C]ould this decoupling [of economic growth from resource use] be responsible for low interest rates?… As long as economic growth required more use of resources, you expect a positive return from storing resources. You get a positive interest rate out of that. But when growth is decoupled, you do not expect a positive return from storing resources. If you want to create a store of value with a positive rate of return, you need to find some productive investment.

But storing resources is only part of the picture. The interest rates that producers pay depend on (a) what they expect in the way of future profits and (b) the availability of funds. Even if profitability is rising because of more efficient resource use, rates could be falling because — as a commenter on Kling’s post notes — there is a steady increase in global savings.

Why would that be? Because households (and businesses with large cash balances) have more disposable income as real incomes rise (and profit margins grow). Some of that increment is made available to corporate borrowers through direct purchases of corporate debt and purchases of mutual funds and ETF shares. Even historically low interest rates on corporate debt will attract buyers because the alternatives (low rates on bank deposits and money-market certificates) are worse.

So it would seem that the long-standing slowdown in the U.S. economy isn’t the whole answer to the question. But it remains part of the answer. Interest rates would be higher if the dead hand of government were lifted from the economy’s carcass.

Not-So-Random Thoughts (XXIV)

“Not-So-Random Thoughts” is an occasional series in which I highlight writings by other commentators on varied subjects that I have addressed in the past. Other entries in the series can be found at these links: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI, XXII, and XXIII. For more in the same style, see “The Tenor of the Times” and “Roundup: Civil War, Solitude, Transgenderism, Academic Enemies, and Immigration“.

CONTENTS

The Transgender Trap: A Political Nightmare Becomes Reality

Spygate (a.k.a. Russiagate) Revisited

More Evidence for Why I Don’t Believe in “Climate Change”

Thoughts on Mortality

Assortative Mating, Income Inequality, and the Crocodile Tears of “Progressives”


The Transgender Trap: A Political Nightmare Becomes Reality

Begin here and here, then consider the latest outrage.

First, from Katy Faust (“Why It’s Probably Not A Coincidence That The Mother Transing Her 7-Year-Old Isn’t Biologically Related“, The Federalist, October 24, 2019):

The story of seven-year-old James, whom his mother has pressured to become “Luna,” has been all over my newsfeed. The messy custody battle deserves every second of our click-bait-prone attention: Jeffrey Younger, James’s father, wants to keep his son’s body intact, while Anne Georgulas, James’s mother, wants to allow for “treatment” that would physically and chemically castrate him.

The havoc that divorce wreaks in a child’s life is mainstage in this tragic case. Most of us children of divorce quickly learn to act one way with mom and another way with dad. We can switch to a different set of rules, diet, family members, bedtime, screen time limits, and political convictions in that 20-minute ride from mom’s house to dad’s.

Unfortunately for little James, the adaptation he had to make went far beyond meat-lover’s pizza at dad’s house and cauliflower crusts at mom’s: it meant losing one of the most sacred aspects of his identity—his maleness. His dad loved him as a boy, so he got to be himself when he was at dad’s house. But mom showered love on the version of James she preferred, the one with the imaginary vagina.

So, as kids are so apt to do, when James was at her house, he conformed to the person his mother loved. This week a jury ruled that James must live like he’s at mom’s permanently, where he can “transition” fully, regardless of the cost to his mental and physical health….

Beyond the “tale of two households” that set up this court battle, and the ideological madness on display in the proceedings, something else about this case deserves our attention: one of the two parents engaged in this custodial tug-of-war isn’t biologically related to little James. Care to guess which one? Do you think it’s the parent who wants to keep him physically whole? It’s not.

During her testimony Georgulas stated she is not the biological mother of James or his twin brother Jude. She purchased eggs from a biological stranger. This illuminates a well-known truth in the world of family and parenthood: biological parents are the most connected to, invested in, and protective of their children.

Despite the jury’s unfathomable decision to award custody of James to his demented mother, there is hope for James. Walt Heyer picks up the story (“Texas Court Gives 7-Year-Old Boy A Reprieve From Transgender Treatments“, The Federalist, October 25, 2019):

Judge Kim Cooks put aside the disappointing jury’s verdict of Monday against the father and ruled Thursday that Jeffrey Younger now has equal joint conservatorship with the mother, Dr. Anne Georgulas, of their twin boys.

The mother no longer has unfettered authority to manipulate her 7-year old boy into gender transition. Instead both mother and father will share equally in medical, psychological, and other decision-making for the boys. Additionally, the judge changed the custody terms to give Younger an equal amount of visitation time with his sons, something that had been severely limited….

For those who need a little background, here’s a recap. “Six-year-old James is caught in a gender identity nightmare. Under his mom’s care in Dallas, Texas, James obediently lives as a trans girl named ‘Luna.’ But given the choice when he’s with dad, he’s all boy—his sex from conception.

“In their divorce proceedings, the mother has charged the father with child abuse for not affirming James as transgender, has sought restraining orders against him, and is seeking to terminate his parental rights. She is also seeking to require him to pay for the child’s visits to a transgender-affirming therapist and transgender medical alterations, which may include hormonal sterilization starting at age eight.”

All the evidence points to a boy torn between pleasing two parents, not an overwhelming preference to be a girl….

Younger said at the trial he was painted as paranoid and in need of several years of psychotherapy because he doesn’t believe his young son wants to be a girl. But many experts agree that transgendering young children is hazardous.

At the trial, Younger’s expert witnesses testified about these dangers and provided supporting evidence. Dr. Stephen Levine, a psychiatrist renowned for his work on human sexuality, testified that social transition—treating them as the opposite sex—increases the chance that a child will remain gender dysphoric. Dr. Paul W. Hruz, a pediatric endocrinologist and professor of pediatrics and cellular biology at Washington University School of Medicine in Saint Louis, testified that the risks of social transition are so great that the “treatment” cannot be recommended at all.

Are these doctors paranoid, too? Disagreement based on scientific evidence is now considered paranoia requiring “thought reprogramming.” That’s scary stuff when enforced by the courts….

The jury’s 11-1 vote to keep sole managing conservatorship from the father shows how invasive and acceptable this idea of confusing children and transitioning them has become. It’s like we are watching a bad movie where scientific evidence is ignored and believing the natural truth of male and female biology is considered paranoia. I can testify from my life experience the trans-life movie ends in unhappiness, regret, detransitions, or sadly, suicide.

The moral of the story is that the brainwashing of the American public by the media may have advanced to the tipping point. The glory that was America may soon vanish with a whimper.


Spygate (a.k.a. Russiagate) Revisited

I posted my analysis of “Spygate” well over a year ago, and have continually updated the appended list of supporting references. The list continues to grow as evidence mounts to support the thesis that the Trump-Russia collusion story was part of a plot hatched at the highest levels of the Obama administration and executed within the White House, the CIA, and the Department of Justice (including especially the FBI).

Margot Cleveland addresses the case of Michael Flynn (“Sidney Powell Drops Bombshell Showing How The FBI Trapped Michael Flynn“, The Federalist, October 25, 2019):

Earlier this week, Michael Flynn’s star attorney, Sidney Powell, filed under seal a brief in reply to federal prosecutors’ claims that they have already given Flynn’s defense team all the evidence they are required by law to provide. A minimally redacted copy of the reply brief has just been made public, and with it shocking details of the deep state’s plot to destroy Flynn….

What is most striking, though, is the timeline Powell pieced together from publicly reported text messages withheld from the defense team and excerpts from documents still sealed from public view. The sequence Powell lays out shows that a team of “high-ranking FBI officials orchestrated an ambush-interview of the new president’s National Security Advisor, not for the purpose of discovering any evidence of criminal activity—they already had tapes of all the relevant conversations about which they questioned Mr. Flynn—but for the purpose of trapping him into making statements they could allege as false” [in an attempt to “flip” Flynn in the Spygate affair]….

The timeline continued to May 10 when McCabe opened an “obstruction” investigation into President Trump. That same day, Powell writes, “in an important but still wrongly redacted text, Strzok says: ‘We need to lock in [redacted]. In a formal chargeable way. Soon.’” Page replies: “I agree. I’ve been pushing and I’ll reemphasize with Bill [Priestap].”

Powell argues that “both from the space of the redaction, its timing, and other events, the defense strongly suspects the redacted name is Flynn.” That timing includes Robert Mueller’s appointment as special counsel on May 17, and then the reentering of Flynn’s 302 on May 31, 2017, “for Special Counsel Mueller to use.”

The only surprise (to me) is evidence cited by Cleveland that Comey was deeply embroiled in the plot. I have heretofore written off Comey as an opportunist who was out to get Trump for his own reasons.

In any event, Cleveland reinforces my expressed view of former CIA director John Brennan’s central role in the plot (“All The Russia Collusion Clues Are Beginning To Point Back To John Brennan“, The Federalist, October 25, 2019):

[I]f the media reports are true, and [Attorney General William] Barr and [U.S. attorney John] Durham have turned their focus to Brennan and the intelligence community, it is not a matter of vengeance; it is a matter of connecting the dots in congressional testimony and reports, leaks, and media spin, and facts exposed during the three years of panting about supposed Russia collusion. And it all started with Brennan.

That’s not how the story went, of course. The company story ran that the FBI launched its Crossfire Hurricane surveillance of the Trump campaign on July 31, 2016, after learning that a young Trump advisor, George Papadopoulos, had bragged to an Australian diplomat, Alexander Downer, that the Russians had dirt on Hillary Clinton….

But as the Special Counsel Robert Mueller report made clear, it wasn’t merely Papadopoulos’ bar-room boast at issue: It was “a series of contacts between Trump Campaign officials and individuals with ties to the Russian government,” that the DOJ and FBI, and later the Special Counsel’s office investigated.

And who put the FBI on to those supposedly suspicious contacts? Former CIA Director John Brennan….

The evidence suggests … that Brennan’s CIA and the intelligence community did much more than merely pass on details about “contacts and interactions between Russian officials and U.S. persons involved in the Trump campaign” to the FBI. The evidence suggests that the CIA and intelligence community—including potentially the intelligence communities of the UK, Italy, and Australia—created the contacts and interactions that they then reported to the FBI as suspicious.

The Deep State in action.


More Evidence for Why I Don’t Believe in “Climate Change”

I’ve already adduced a lot of evidence in “Why I Don’t Believe in Climate Change” and “Climate Change“. One of the scientists to whom I give credence is Dr. Roy Spencer of the Climate Research Center at the University of Alabama-Huntsville. Spencer agrees that CO2 emissions must have an effect on atmospheric temperatures, but is doubtful about the magnitude of the effect.

He revisits a point that he has made before, namely, that there is no “preferred” state of the climate (“Does the Climate System Have a Preferred Average State? Chaos and the Forcing-Feedback Paradigm“, Roy Spencer, Ph.D., October 25, 2019):

If there is … a preferred average state, then the forcing-feedback paradigm of climate change is valid. In that system of thought, any departure of the global average temperature from the Nature-preferred state is resisted by radiative “feedback”, that is, changes in the radiative energy balance of the Earth in response to the too-warm or too-cool conditions. Those radiative changes would constantly be pushing the system back to its preferred temperature state…

[W]hat if the climate system undergoes its own, substantial chaotic changes on long time scales, say 100 to 1,000 years? The IPCC assumes this does not happen. But the ocean has inherently long time scales — decades to millennia. An unusually large amount of cold bottom water formed at the surface in the Arctic in one century might take hundreds or even thousands of years before it re-emerges at the surface, say in the tropics. This time lag can introduce a wide range of complex behaviors in the climate system, and is capable of producing climate change all by itself.

Even the sun, which we view as a constantly burning ball of gas, produces an 11-year cycle in sunspot activity, and even that cycle changes in strength over hundreds of years. It would seem that every process in nature organizes itself on preferred time scales, with some amount of cyclic behavior.

This chaotic climate change behavior would impact the validity of the forcing-feedback paradigm as well as our ability to determine future climate states and the sensitivity of the climate system to increasing CO2. If the climate system has different, but stable and energy-balanced, states, it could mean that climate change is too complex to predict with any useful level of accuracy [emphasis added].

Which is exactly what I say in “Modeling and Science“.


Thoughts on Mortality

I ruminated about it in “The Unique ‘Me’“:

Children, at some age, will begin to understand that there is death, the end of a human life (in material form, at least). At about the same time, in my experience, they will begin to speculate about the possibility that they might have been someone else: a child born in China, for instance.

Death eventually loses its fascination, though it may come to mind from time to time as one grows old. (Will I wake up in the morning? Is this the day that my heart stops beating? Will I be able to break my fall when the heart attack happens, or will I just go down hard and die of a fractured skull?)

Bill Vallicella (Maverick Philosopher) has been ruminating about it in recent posts. This is from his “Six Types of Death Fear” (October 24, 2019):

1. There is the fear of nonbeing, of annihilation….

2. There is the fear of surviving one’s bodily death as a ghost, unable to cut earthly attachments and enter nonbeing and oblivion….

3. There is the fear of post-mortem horrors….

4. There is the fear of the unknown….

5. There is the fear of the Lord and his judgment….

6. Fear of one’s own judgment or the judgment of posterity.

There is also — if one is in good health and enjoying life — the fear of losing what seems to be a good thing, namely, the enjoyment of life itself.


Assortative Mating, Income Inequality, and the Crocodile Tears of “Progressives”

Mating among human beings has long been assortative in various ways, in that the selection of a mate has been circumscribed or determined by geographic proximity, religious affiliation, clan rivalries or alliances, social relationships or enmities, etc. The results have sometimes been propitious, as Gregory Cochran points out in “An American Dilemma” (West Hunter, October 24, 2019):

Today we’re seeing clear evidence of genetic differences between classes: causal differences. People with higher socioeconomic status have (on average) higher EA polygenic scores. Higher scores for cognitive ability, as well. This is of course what every IQ test has shown for many decades….

Let’s look at Ashkenazi Jews in the United States. They’re very successful, averaging upper-middle-class. So you’d think that they must have high polygenic scores for EA (and they do).

Were they a highly selected group?  No: most were from Eastern Europe. “Immigration of Eastern Yiddish-speaking Ashkenazi Jews, in 1880–1914, brought a large, poor, traditional element to New York City. They were Orthodox or Conservative in religion. They founded the Zionist movement in the United States, and were active supporters of the Socialist party and labor unions. Economically, they concentrated in the garment industry.”

And there were a lot of them: it’s harder for a sample to be very unrepresentative when it makes up a big fraction of the entire population.

But that can’t be: that would mean that Europeans Jews were just smarter than average.  And that would be racist.

Could it be result of some kind of favoritism?  Obviously not, because that would be anti-Semitic.

Cochran obviously intends sarcasm in the final two paragraphs. The evidence for the heritability of intelligence is, as he says, quite strong. (See, for example, my “Race and Reason: The Achievement Gap — Causes and Implications” and “Intelligence“.) Were it not for assortative mating among Ashkenazi Jews, they wouldn’t be the most intelligent ethnic-racial group.

Branko Milanovic specifically addresses the “hot” issue in “Rich Like Me: How Assortative Mating Is Driving Income Inequality“ (Quillette, October 18, 2019):

Recent research has documented a clear increase in the prevalence of homogamy, or assortative mating (people of the same or similar education status and income level marrying each other). A study based on a literature review combined with decennial data from the American Community Survey showed that the association between partners’ level of education was close to zero in 1970; in every other decade through 2010, the coefficient was positive, and it kept on rising….

At the same time, the top decile of young male earners have been much less likely to marry young women who are in the bottom decile of female earners. The rate has declined steadily from 13.4 percent to under 11 percent. In other words, high-earning young American men who in the 1970s were just as likely to marry high-earning as low-earning young women now display an almost three-to-one preference in favor of high-earning women. An even more dramatic change happened for women: the percentage of young high-earning women marrying young high-earning men increased from just under 13 percent to 26.4 percent, while the percentage of rich young women marrying poor young men halved. From having no preference between rich and poor men in the 1970s, women currently prefer rich men by a ratio of almost five to one….

High income and wealth inequality in the United States used to be justified by the claim that everyone had the opportunity to climb up the ladder of success, regardless of family background. This idea became known as the American Dream. The emphasis was on equality of opportunity rather than equality of outcome….

The American Dream has remained powerful both in the popular imagination and among economists. But it has begun to be seriously questioned during the past ten years or so, when relevant data have become available for the first time. Looking at twenty-two countries around the world, Miles Corak showed in 2013 that there was a positive correlation between high inequality in any one year and a strong correlation between parents’ and children’s incomes (i.e., low income mobility). This result makes sense, because high inequality today implies that the children of the rich will have, compared to the children of the poor, much greater opportunities. Not only can they count on greater inheritance, but they will also benefit from better education, better social capital obtained through their parents, and many other intangible advantages of wealth. None of those things are available to the children of the poor. But while the American Dream thus was somewhat deflated by the realization that income mobility is greater in more egalitarian countries than in the United States, these results did not imply that intergenerational mobility had actually gotten any worse over time.

Yet recent research shows that intergenerational mobility has in fact been declining. Using a sample of parent-son and parent-daughter pairs, and comparing a cohort born between 1949 and 1953 to one born between 1961 and 1964, Jonathan Davis and Bhashkar Mazumder found significantly lower intergenerational mobility for the latter cohort.

Milanovic doesn’t mention the heritability of intelligence, which is bound to be generally higher among children of high-IQ parents (like Ashkenazi Jews and East Asians), and the strong correlation between intelligence and income. Does this mean that assortative mating should be banned and “excess” wealth should be confiscated and redistributed? Elizabeth Warren and Bernie Sanders certainly favor the second prescription, which would have a disastrous effect on the incentive to become rich and therefore on economic growth.

I addressed these matters in “Intelligence, Assortative Mating, and Social Engineering“:

So intelligence is real; it’s not confined to “book learning”; it has a strong influence on one’s education, work, and income (i.e., class); and because of those things it leads to assortative mating, which (on balance) reinforces class differences. Or so the story goes.

But assortative mating is nothing new. What might be new, or more prevalent than in the past, is a greater tendency for intermarriage within the smart-educated-professional class instead of across class lines, and for the smart-educated-professional class to live in “enclaves” with their like, and to produce (generally) bright children who’ll (mostly) follow the lead of their parents.

How great are those tendencies? And in any event, so what? Is there a potential social problem that will have to be dealt with by government because it poses a severe threat to the nation’s political stability or economic well-being? Or is it just a step in the voluntary social evolution of the United States — perhaps even a beneficial one?…

[Lengthy quotations from statistical evidence and expert commentary.]

What does it all mean? For one thing, it means that the children of top-quintile parents reach the top quintile about 30 percent of the time. For another thing, it means that, unsurprisingly, the children of top-quintile parents reach the top quintile more often than children of second-quintile parents, who reach the top quintile more often than children of third-quintile parents, and so on.

There is nevertheless a growing, quasi-hereditary, smart-educated-professional-affluent class. It’s almost a sure thing, given the rise of the two-professional marriage, and given the correlation between the intelligence of parents and that of their children, which may be as high as 0.8. However, as a fraction of the total population, membership in the new class won’t grow as fast as membership in the “lower” classes because birth rates are inversely related to income.

And the new class probably will be isolated from the “lower” classes. Most members of the new class work and live where their interactions with persons of “lower” classes are restricted to boss-subordinate and employer-employee relationships. Professionals, for the most part, work in office buildings, isolated from the machinery and practitioners of “blue collar” trades.

But the segregation of housing on class lines is nothing new. People earn more, in part, so that they can live in nicer houses in nicer neighborhoods. And the general rise in the real incomes of Americans has made it possible for persons in the higher income brackets to afford more luxurious homes in more luxurious neighborhoods than were available to their parents and grandparents. (The mansions of yore, situated on “Mansion Row,” were occupied by the relatively small number of families whose income and wealth set them widely apart from the professional class of the day.) So economic segregation is, and should be, as unsurprising as a sunrise in the east.

None of this will assuage progressives, who like to claim that intelligence (like race) is a social construct (while also claiming that Republicans are stupid); who believe that incomes should be more equal (theirs excepted); who believe in “diversity,” except when it comes to where most of them choose to live and school their children; and who also believe that economic mobility should be greater than it is — just because. In their superior minds, there’s an optimum income distribution and an optimum degree of economic mobility — just as there is an optimum global temperature, which must be less than the ersatz one that’s estimated by combining temperatures measured under various conditions and with various degrees of error.

The irony of it is that the self-segregated, smart-educated-professional-affluent class is increasingly progressive….

So I ask progressives, given that you have met the new class and it is you, what do you want to do about it? Is there a social problem that might arise from greater segregation of socio-economic classes, and is it severe enough to warrant government action? Or is the real “problem” the possibility that some people — and their children and children’s children, etc. — might get ahead faster than other people — and their children and children’s children, etc.?

Do you want to apply the usual progressive remedies? Penalize success through progressive (pun intended) personal income-tax rates and the taxation of corporate income; force employers and universities to accept low-income candidates (whites included) ahead of better-qualified ones (e.g., your children) from higher-income brackets; push “diversity” in your neighborhood by expanding the kinds of low-income housing programs that helped to bring about the Great Recession; boost your local property and sales taxes by subsidizing “affordable housing,” mandating the payment of a “living wage” by the local government, and applying that mandate to contractors seeking to do business with the local government; and on and on down the list of progressive policies?

Of course you do, because you’re progressive. And you’ll support such things in the vain hope that they’ll make a difference. But not everyone shares your naive beliefs in blank slates, equal ability, and social homogenization (which you don’t believe either, but are too wedded to your progressive faith to admit). What will actually be accomplished — aside from tokenism — is social distrust and acrimony, which had a lot to do with the electoral victory of Donald J. Trump, and economic stagnation, which hurts the “little people” a lot more than it hurts the smart-educated-professional-affluent class….

The solution to the pseudo-problem of economic inequality is benign neglect, which isn’t a phrase that falls lightly from the lips of progressives. For more than 80 years, a lot of Americans — and too many pundits, professors, and politicians — have been led astray by that one-off phenomenon: the Great Depression. FDR and his sycophants and their successors created and perpetuated the myth that an activist government saved America from ruin and totalitarianism. The truth of the matter is that FDR’s policies prolonged the Great Depression by several years, and ushered in soft despotism, which is just “friendly” fascism. And all of that happened at the behest of people of above-average intelligence and above-average incomes.

Progressivism is the seed-bed of eugenics, and still promotes eugenics through abortion on demand (mainly to rid the world of black babies). My beneficial version of eugenics would be the sterilization of everyone with an IQ above 125 or top-40-percent income who claims to be progressive [emphasis added].

Enough said.

What’s in a Name?

A lot, especially if it’s the name of a U.S. Navy ship. Take the aircraft carrier, for instance, which has been the Navy’s capital ship since World War II. The first aircraft carrier in the U.S. fleet was the USS Langley, commissioned in 1922. Including escort carriers, which were smaller than the relatively small carriers of World War II, a total of 154 carriers have been commissioned and put into service in the U.S. Navy. (During World War II, some escort carriers were transferred to the Royal Navy upon commissioning.)

As far as I am able to tell, not one of the 82 escort carriers was named for a person. Of the 72 “regular” carriers, which includes 10 designated as light aircraft carriers, none was named for a person until CVB-42, the Franklin D. Roosevelt, was commissioned in 1945, several months after the death of its namesake. The next such naming came in 1947, with the commissioning of the Wright, named for Wilbur and Orville Wright, the aviation pioneers. There was a hiatus of 8 years, until the commissioning of the Forrestal in 1955, a ship named for the late James Forrestal, the first secretary of defense.

The dam burst in 1968, with the commissioning of John F. Kennedy. That carrier and the 11 commissioned since have been named for persons, only one of whom, Admiral of the Fleet Chester W. Nimitz, was a renowned naval person. In addition to Kennedy, the namesakes include former U.S. presidents (Eisenhower, T. Roosevelt, Lincoln, Washington, Truman, Reagan, Bush 41, and Ford), Carl Vinson (a long-serving chairman of the House Armed Services Committee), and John C. Stennis (a long-serving chairman of the Senate Armed Services Committee). Reagan and Bush were honored while still living (though Reagan may have been unaware of the honor because of the advanced state of his Alzheimer’s disease).

All but the Kennedy are on active service. And the Kennedy, which was decommissioned in 2007, is due to be replaced by a namesake next year. But that may be the end of it. Wisdom may have prevailed before the Navy becomes embroiled in nasty, needless controversies over the prospect of naming a carrier after Lyndon Johnson, Richard Nixon, Jimmy Carter, Bill Clinton, George W. Bush, Barack Obama, or Donald Trump.

The carrier after Kennedy (II) will be named Enterprise — the third carrier to be thus named. Perhaps future carriers will take the dashing names of those that I remember well from my days as a young defense analyst: Bon Homme Richard (a.k.a. Bonny Dick), Kearsarge, Oriskany, Princeton, Shangri-La, Lake Champlain, Tarawa, Midway, Coral Sea, Valley Forge, Saipan, Saratoga, Ranger, Independence, Kitty Hawk, Constellation, Enterprise (II), and America.

And while we’re at it, perhaps the likes of Admiral William McRaven (USN ret.) will do their duty, become apolitical, and shut up.

“Endorsed” by Victor Davis Hanson

Not really. But here’s what he said on October 20 in “Why Do They Hate Him So?“:

The Left detests Trump for a lot of reasons besides winning the 2016 election and aborting the progressive project. But mostly they hate his guts because he is trying and often succeeding to restore a conservative America at a time when his opponents thought that the mere idea was not just impossible but unhinged.

And that is absolutely unforgivable.

Here’s what I said on October 11 in “Understanding the ‘Resistance’: The Enemies Within“:

Why such a hysterical and persistent reaction to the outcome of the 2016 election? (The morally corrupt, all-out effort to block the confirmation of Justice Kavanaugh was a loud echo of that reaction.) Because the election of 2016 had promised to be the election to end all elections — the election that might have all-but-assured the ascendancy of the left in America, with the Supreme Court as a strategic high ground.

But Trump — through his budget priorities, deregulatory efforts, and selection of constitutionalist judges — has made a good start on undoing Obama’s great leap forward in the left’s century-long march toward its vision of Utopia. The left cannot allow this to continue, for if Trump succeeds (and a second term might cement his success), its vile work could be undone.

VDH and LV, the dream team.

Conservatism vs. Libertarianism

Returning to the subject of political ideologies, I take up a post that had languished in my drafts folder for these past 18 months. It begins by quoting an unintentionally prescient piece by Michael Warren Davis: “The Max Bootification of the American Right” (The American Conservative, April 13, 2018). It’s unintentionally prescient because Davis boots Boot out of conservatism at about the same time that Boot was declaring publicly that he was no longer a conservative.

By way of introduction, Davis takes issue with

an article from the Spring 2012 issue of the Intercollegiate Review called “The Pillars of Modern American Conservatism” by Alfred S. Regnery. Like the [Intercollegiate Studies Institute] itself, it was excellent on the main. But it suffers from the grave (albeit common) sin of believing there is such a thing as “modern” conservatism, which can be distinguished from historic conservatism….

The trouble with “modern” conservatism … is that historic conservatism didn’t fail. It has not been tried and found wanting, as Chesterton would say; it has been found difficult and not tried….

The genius of fusionists (what is generally meant by “modern” conservatives) like William F. Buckley and Frank S. Meyer was joining the intellectual sophistication of traditionalism with the political credibility of libertarianism. The greatest traditionalists and libertarians of that age—Russell Kirk and Friedrich Hayek, respectively—protested vehemently against this fusion, insisting that their two schools were different species and could not intermarry. It was inevitable that “modern” conservatism would prioritize the first principles of one movement over the other. That is to say, this new conservatism would be either fundamentally traditionalist or fundamentally libertarian. It could not be both.

Regnery’s article proves that the latter came to pass. “Modern” conservatism is in fact not conservatism at all: it is a kind of libertarianism, albeit with an anti-progressive instinct.

Consider the subheadings: “The first pillar of conservatism,” Regnery writes, “is liberty, or freedom… The second pillar of conservative philosophy is tradition and order.” This is an inversion of the hierarchy put forward in What Is Conservatism?, a collection of essays edited by Meyer and published by the ISI in 1964. According to Meyer’s table of contents, essays with an “emphasis on tradition and authority” (Kirk, Willmoore Kendall) rank higher than those with an “emphasis on freedom” (M. Stanton Evans, Wilhelm Röpke, Hayek).

The ordering is no coincidence. This question of priorities became one of the principal logjams between the Kirkians and Hayekians. As Kirk explained in “Libertarians: Chirping Sectaries,” published in the Fall 1981 issue of Modern Age:

In any society, order is the first need of all. Liberty and justice may be established only after order is tolerably secure. But the libertarians give primacy to an abstract liberty. Conservatives, knowing that “liberty inheres in some sensible object,” are aware that true freedom can be found only within the framework of a social order, such as the constitutional order of these United States. In exalting an absolute and indefinable “liberty” at the expense of order, the libertarians imperil the very freedoms they praise.

This seems rather straightforward in terms of domestic policy, but we should consider its implications for foreign policy, too. The triumph of the “emphasis on freedom” is responsible for the disastrous interventionist tendencies that have plagued all modern Republican administrations.

We again turn to Kirk in his essay for What Is Conservatism? titled “Prescription, Authority, and Ordered Freedom.” Here he warned:

To impose the American constitution on all the world would not render all the world happy; to the contrary, our constitution would work in few lands and would make many men miserable in short order. States, like men, must find their own paths to order and justice and freedom; and usually those paths are ancient and winding ways, and their signposts are Authority, Tradition, Prescription.

That is why traditionalists oppose regime change in the Middle East. Freedom may follow tyranny only if (as in the Revolutions of 1989) the people themselves desire it and are capable of maintaining the machinery of a free society. If the public is not especially interested in self-government, they will succumb either to a new despot or a stronger neighboring country. We have seen both of these scenarios play out in post-Ba’athist Iraq, with the rise of ISIS and the expansion of Iranian hegemony.

It is also why traditionalist conservatives are tarred as pro-Putin by liberals and “modern” conservatives. If Putin is indeed a neo-Tsarist, we may hope to see Russia follow C.S. Lewis’s maxim: “A sum can be put right: but only by going back till you find the error and working it afresh from that point, never by simply going on.” Communism is the error, and while Putinism is by no means the solution, we may hope (though not blindly) that it represents a return to the pre-communist order. Those are, if not optimal conditions for true liberty to flourish, at least the best we can reasonably expect.

More important, however, is that we recognize the absurdity of “modern” conservatives’ hopes that Russia would have transitioned from the Soviet Union to a carbon copy of 1980s Britain. We do the Russian people a disservice by holding President Putin to the example of some mythical Tsarina Thatcherova. That is simply not the “ancient and winding way” Providence has laid out for them.

Such an unhealthy devotion to abstract liberty is embodied in Max Boot, the Washington Post’s new conservative [sic] columnist. Consider the opening lines of his essay “I Would Vote for a (Sane) Donald Trump,” published last year in Foreign Policy:

I am socially liberal: I am pro-LGBTQ rights, pro-abortion rights, pro-immigration. I am fiscally conservative: I think we need to reduce the deficit and get entitlement spending under control… I am pro-free trade: I think we should be concluding new trade treaties rather than pulling out of old ones. I am strong on defense: I think we need to beef up our military to cope with multiple enemies. And I am very much in favor of America acting as a world leader: I believe it is in our own self-interest to promote and defend freedom and free markets as we have been doing in one form or another since at least 1898.

Boot has no respect for Authority, Tradition, and Prescription—not in this country, and not in those manifold countries he would have us invade. His politics are purely propositional: freedom is the greatest (perhaps the sole) virtue, and can be achieved equally by all men in all ages. Neither God nor history nor the diverse and delicate fibers that comprise a nation’s social order have any bearing on his ideologically tainted worldview.

Boot, of course, was hired by the Post to rubber-stamp the progressive agenda with the seal of Principled Conservatism™. Even he can’t possibly labor under the delusion that Jeff Bezos hired him to threaten Washington’s liberal establishment. Yet his conclusions follow logically from the pillars of “modern” conservatism.

Two choices lie before us, then. One is to restore a conservatism of Authority, Tradition, and Prescription. The other is to stand by and watch the Bootification of the American Right. Pray that we choose correctly, before it’s too late to undo the damage that’s already been done.

Boot was to have been the Post’s answer to David Brooks, the nominal conservative at The New York Times, about whom I have often written. Boot, however, has declared himself a person of the left, whereas Brooks calls himself a “moderate”, which is another way of saying wishy-washy. Both of them give aid and comfort to the left. They are Tweedledum and Tweedle-dumber, as a wag once observed (inaccurately) of Richard Nixon and Hubert Humphrey (opponents in the 1968 presidential election).

Returning to the main point of this post, which is the difference between conservatism and libertarianism, I will offer a view that is consistent with Davis’s, but expressed somewhat differently. This is from “Political Ideologies”:

There is an essential difference between conservatism and libertarianism. Conservatives value voluntary social institutions not just because they embed accumulated wisdom. Conservatives value voluntary social institutions because they bind people in mutual trust and respect, which foster mutual forbearance and breed social comity in the face of provocations. Adherence to long-standing social norms helps to preserve the wisdom embedded in them while also signalling allegiance to the community that gave rise to the norms.

Libertarians, on the other hand, following the lead of their intellectual progenitor, John Stuart Mill, are anxious to throw off what they perceive as social “oppression”. The root of libertarianism is Mill’s “harm principle”, which I have exposed for the fraud that it is (e.g., here and here)….

There’s more. Libertarianism, as it is usually explained and presented, lacks an essential ingredient: morality. Yes, libertarians espouse a superficially plausible version of morality — the harm principle, quoted above by Scott Yenor. But the harm principle is empty rhetoric. Harm must be defined, and its definition must arise from social norms. The alternative, which libertarians — and “liberals” — obviously embrace, is that they are uniquely endowed with knowledge of what is “right”, which therefore should be enforced by the state. Not the least of their sins against social comity is the legalization of abortion and same-sex “marriage” (detailed arguments at the links).

Liberty is not an abstraction. It is the scope of action that is allowed by long-standing, voluntarily evolved social norms. It is that restrained scope of action which enables people to coexist willingly, peacefully, and cooperatively for their mutual benefit. That is liberty, and it is served by conservatism, not by amoral, socially destructive libertarianism.

I rest my case.

Homelessness

It has long been my contention that homelessness is encouraged by programs to aid the homeless. It’s a fact of life: If you offer people a chance to get something for doing nothing, some of them will take your offer. (The subsidization of unemployment with welfare payments, food stamps, etc., is among the reasons that the real unemployment rate is markedly higher than the official rate.)

Recently, after I had mentioned my hypothesis to a correspondent, Francis Menton posted “The More Public Money Spent to Solve ‘Homelessness,’ the More Homelessness There Is”, at his blog, Manhattan Contrarian. Menton observes that the budget for homeless services in San Francisco

has gone from about $155 million annually in the 2011-12 fiscal year, to $271 million annually in San Francisco’s most recent 2018-19 spending plan.

[T]he $271 million per year would place San Francisco right near the top of the heap in per capita spending by a municipality to solve the homelessness problem. With a population of about 900,000, $271 million would come to about $300 per capita per year. By comparison, champion spender New York City, with a population close to ten times that of San Francisco, is up to spending some $3.2 billion annually on the homeless, which would be about $375 per capita….

So surely, with all this spending, homelessness in San Francisco must have at least begun its inevitable rapid decline? No, I’m sorry. Once again, it is the opposite. According to a piece in the City Journal by Erica Sandberg on October 10, the official count of homeless in San Francisco is now 9,780. That represents an increase of at least 30% just since 2017.
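Menton’s per-capita arithmetic is easy to check. Here is a quick sanity check in Python; San Francisco’s population is taken from the quote, while New York’s 8.6 million is my own round assumption (“close to ten times that of San Francisco”):

    # Quick check of the per-capita spending figures quoted above.
    budgets = {
        "San Francisco": (271_000_000, 900_000),      # population from the quote
        "New York City": (3_200_000_000, 8_600_000),  # population is my assumption
    }

    for city, (annual_spending, population) in budgets.items():
        print(f"{city}: ${annual_spending / population:,.0f} per capita per year")

This prints about $301 for San Francisco and $372 for New York, close to Menton’s round figures of $300 and $375.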

There’s more. It comes from The Economist, a magazine that was founded in the era of classical liberalism but which has gone over to the dark side: modern “liberalism”. In case you don’t know the difference, see “Political Ideologies”.

In “Homelessness Is Declining in America” (available with a limited-use free subscription), the real story is buried. The fake story is the nationwide decline of homelessness since 2009, which is unsurprising given that 2009 marked the nadir of the Great Recession.

The real story is that despite the nationwide decline of homelessness, its incidence has risen in major cities, where reigning Democrats are bent on solving the problem by throwing money at it; the trend is shown in a graph well down the page of The Economist’s article.

Further, The Economist acknowledges the phenomenon discussed by Menton:

Despite significant public efforts—such as a surcharge on sales tax directed entirely towards homeless services and a $1.2bn bond issue to pay for affordable housing—the problem of homelessness is worsening in Los Angeles. It has emerged as the greatest liability for Eric Garcetti, the mayor, and may have hindered his ambitions to run for president. After spending hundreds of millions, the city was surprised to learn in July that the number of homeless people had increased by 12% from the previous year (city officials point out that this was less than in many other parts of California). Though it can be found everywhere, homelessness, unlike other social pathologies, is not a growing national problem. Rather it is an acute and worsening condition in America’s biggest, most successful cities.

Every year in January, America’s Department of Housing and Urban Development mobilises thousands of volunteers to walk the streets and count the unsheltered homeless. Along with data provided by homeless shelters, these create an annual census of types of homeless residents. Advocates think that the methodology produces a significant undercount, but they are the best statistics available (and much higher quality than those of other developed countries). Since 2009 they show a 12% decline nationally, but increases of 18% in San Francisco, 35% in Seattle, 50% in Los Angeles and 59% in New York. [These figures seem to be drawn from HUD reports that can be found here and here.]

The Economist tries to minimize the scope of the problem by addressing “myths”:

The first is that the typical homeless person has lived on the street for years, while dealing with addiction, mental illness, or both. In fact, only 35% of the homeless have no shelter, and only one-third of those are classified as chronically homeless. The overwhelming majority of America’s homeless are in some sort of temporary shelter paid for by charities or government. This skews public perceptions of the problem. Most imagine the epicentre of the American homeless epidemic to be San Francisco—where there are 6,900 homeless people, of whom 4,400 live outdoors—instead of New York, where there are 79,000 homeless, of whom just 3,700 are unsheltered.

The “mythical” perception about the “typical homeless person” is a straw man, which seems designed to distract attention from the fact that homelessness is on the rise in big cities. Further, there is the attempt to distinguish between sheltered and unsheltered homeless persons. But sheltering is part of the problem, in that the availability of shelters makes it easier to be homeless. (More about that, below.)

The second myth is that rising homelessness in cities is the result of migration, either in search of better weather or benefits. Homelessness is a home-grown problem. About 70% of the homeless in San Francisco previously lived in the city; 75% of those living on the streets of Los Angeles, in places like Skid Row, come from the surrounding area. Though comparable data do not exist for Hawaii—which has one of the highest homelessness rates in the country—a majority of the homeless are ethnic Hawaiians and Pacific Islanders, suggesting that the problem is largely local.

The fact that homelessness is mainly a home-grown problem is consistent with the hypothesis that spending by big-city governments helps to promote it. The Economist doesn’t try to rebut that idea, but mentions in a sneering way a report by the Council of Economic Advisers “suggesting that spending on shelters would incentivise homelessness.” Well, I found the report (“The State of Homelessness in America“), and it cites evidence from actual research (as opposed to The Economist‘s hand-waving) to support what should be obvious to anyone who thinks about it: Sheltering incentivizes homelessness.

The Economist isn’t through, however:

All this obscures the chief culprit, however, which is the cost of housing. Even among the poor—of which there are officially 38m in America—homelessness is relatively rare, affecting roughly one in 70 people. What pushes some poor people into homelessness, and not others, remains obscure. So too are the reasons for the sharp racial disparities in homelessness; roughly 40% of the homeless are black, compared with 13% of the population. But remarkably tight correlations exist with rent increases.

An analysis by Chris Glynn and Emily Fox, two statisticians, predicts that a 10% increase in rents in a high-cost city like New York would result in an 8% increase in the number of homeless residents. Wherever homelessness appears out of control in America—whether in Honolulu, Seattle or Washington, DC—high housing costs almost surely lurk. Fixing this means dealing with a lack of supply, created by over-burdensome zoning regulations and an unwillingness among Democratic leaders to overcome entrenched local interests.
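Taken at face value, the Glynn-Fox figure amounts to a rent elasticity of homelessness of about 0.8. Here is a minimal sketch of that reading, assuming a constant elasticity (my simplification for illustration, not their statistical model):

    # Treat the quoted relationship as a constant elasticity of ~0.8:
    # a 10% rent increase implies roughly an 8% rise in homelessness.
    def projected_homeless(current: int, rent_increase: float,
                           elasticity: float = 0.8) -> float:
        return current * (1 + elasticity * rent_increase)

    # New York, using the ~79,000 count quoted earlier and a 10% rent rise:
    print(round(projected_homeless(79_000, 0.10)))  # 85320, an 8% increase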

Ah, yes, “affordable housing” is always the answer if you’re a leftist. But it isn’t. Housing costs are high and bound to get higher because population continues to grow and businesses continue to grow and hire. Most of the population and business growth occurs in big cities. And if not in city cores, then in the satellite cities and developed areas that revolve around the cores. What this means is that there is a limited amount of land on which housing, offices, and factories can be built, so that the value of the land rises as the demand for it rises. Even if the supply of construction materials and labor were to rise with demand, the price of housing would continue to rise.

The only real “solution” is for governments to dictate across-the-board restrictions on lot size, building-unit size, and the elaborateness of materials used. That isn’t an issue for “entrenched local interests”, it’s an issue for anyone who believes that government shouldn’t tell him that he must live in a middle-income home when he can afford (and enjoy) something more luxurious, or that he must squeeze his highly paid employees into barren lofts.

Thus “affordable housing” in practice means subsidization. If opposition to subsidization is an “entrenched local interest”, it’s of a piece with opposition to across-the-board restrictions. Subsidization requires people who earn money to give it to people who earn little or nothing, thus blunting everyone’s incentive to earn more. Nobody promised anybody a rose garden — at least not until the welfare state came along in the 1930s. And, despite that, my father and grandfathers held menial jobs during the Great Depression and paid for their own housing, such as it was. If people are different now, it’s because of the welfare state.

Finally, homelessness is also encouraged by “enlightened” policies that allow (or don’t discourage) loitering, camping, and panhandling. I happen to live in Austin, where the homeless have been encouraged to do all of those things, to the detriment of public health and safety. I hope that Governor Abbott follows through on his commitment to rid public spaces of homeless encampments.

More Unsettled Science

Now hear this:

We’re getting something wrong about the universe.

It might be something small: a measurement issue that makes certain stars look closer or farther away than they are, something astrophysicists could fix with a few tweaks to how they measure distances across space. It might be something big: an error — or series of errors — in cosmology, or our understanding of the universe’s origin and evolution. If that’s the case, our entire history of space and time may be messed up. But whatever the issue is, it’s making key observations of the universe disagree with each other: Measured one way, the universe appears to be expanding at a certain rate; measured another way, the universe appears to be expanding at a different rate. And, as a new paper shows, those discrepancies have gotten larger in recent years, even as the measurements have gotten more precise….

The two most famous measurements work very differently from one another. The first relies on the Cosmic Microwave Background (CMB): the microwave radiation leftover from the first moments after the Big Bang. Cosmologists have built theoretical models of the entire history of the universe on a CMB foundation — models they’re very confident in, and that would require an all-new physics to break. And taken together, Mack said, they produce a reasonably precise number for the Hubble constant, or H0, which governs how fast the universe is currently expanding.

The second measurement uses supernovas and flashing stars in nearby galaxies, known as Cepheids. By gauging how far those galaxies are from our own, and how fast they’re moving away from us, astronomers have gotten what they believe is a very precise measurement of the Hubble constant. And that method offers a different H0.

It’s possible that the CMB model is just wrong in some way, and that’s leading to some sort of systematic error in how physicists are understanding the universe….

It’s [also] possible … that the supernovas-Cepheid calculation is just wrong. Maybe physicists are measuring distances in our local universe wrong, and that’s leading to a miscalculation. It’s hard to imagine what that miscalculation would be, though…. Lots of astrophysicists have measured local distances from scratch and have come up with similar results. One possibility … is just that we live in a weird chunk of the universe where there are fewer galaxies and less gravity, so our neighborhood is expanding faster than the universe as a whole….

Coming measurements might clarify the contradiction — either explaining it away or heightening it, suggesting a new field of physics is necessary. The Large Synoptic Survey Telescope, scheduled to come online in 2020, should find hundreds of millions of supernovas, which should vastly improve the datasets astrophysicists are using to measure distances between galaxies. Eventually, … gravitational wave studies will get good enough to constrain the expansion of the universe as well, which should add another level of precision to cosmology. Down the road, … physicists might even develop instruments sensitive enough to watch objects expand away from one another in real time.

But for the moment cosmologists are still waiting and wondering why their measurements of the universe don’t make sense together.

Here’s a very rough analogy to the problem described above:

  • A car traveling at a steady speed on a highway passes two markers that are separated by a measured distance (expressed in miles). Dividing the distance between the markers by the time of travel between the markers (expressed in hours) gives the speed of the car in miles per hour.
  • The speed of the same car is estimated by a carefully calibrated radar gun, one that has been tested on many cars under conditions like those in which it is used on the car in question.
  • The two methods yield different results. They are so different that there is no overlap between the normal ranges of uncertainty for the two methods. (A numeric sketch of this point follows the list.)
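To make the “no overlap” point concrete, here is a minimal numeric sketch. The values are round numbers close to published estimates, supplied here as my own illustration rather than taken from the article: CMB-based fits give roughly H0 = 67.4 ± 0.5 km/s/Mpc, while Cepheid-calibrated supernova measurements give roughly H0 = 74.0 ± 1.4 km/s/Mpc.

    # Compare the uncertainty ranges of two H0 estimates (km/s/Mpc).
    # The figures are illustrative assumptions, not the article's data.
    def interval(center: float, sigma: float, k: float = 2.0):
        """Range of k standard errors around a central estimate."""
        return center - k * sigma, center + k * sigma

    cmb = interval(67.4, 0.5)      # roughly (66.4, 68.4)
    cepheid = interval(74.0, 1.4)  # roughly (71.2, 76.8)

    overlap = cmb[1] >= cepheid[0] and cepheid[1] >= cmb[0]
    print(cmb, cepheid, overlap)   # the ranges do not overlap

Even stretched to two standard errors, the two ranges fail to touch; that is the sense in which the measurements disagree.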

The problem is really much more complicated than that. In the everyday world of cars traveling on highways, relativistic effects are unimportant and can be ignored. In the universe where objects are moving away from each other at a vastly greater speed — a speed that seems to increase constantly — relativistic effects are crucial. By relativistic effects I mean the interdependence of distance, time, and speed — none of which is an absolute, and all of which depend on each other (maybe).

If the relativistic effects involved in measuring cosmological phenomena are well understood, they shouldn’t account for the disparate estimates of the Hubble constant (H0). This raises a possibility that isn’t mentioned in the article quoted above, namely, that the relativistic effects aren’t well understood or have been misestimated.

There are other possibilities; for example:

  • The basic cosmological assumption of a Big Bang and spatially uniform expansion is wrong.
  • The speed of light (and/or other supposed constants) isn’t invariant.
  • There is an “unknown unknown” that may never be identified, let alone quantified.

Whatever the case, this is a useful reminder that science is never settled.

Political Ideologies

I have just published a new page, “Political Ideologies”. Here’s the introduction:

Political ideologies proceed in a circle. Beginning arbitrarily with conservatism and moving clockwise, there are roughly the following broad types of ideology: conservatism, anti-statism (libertarianism), and statism. Statism is roughly divided into left-statism (“liberalism” or “progressivism”, left-populism) and right-statism (faux conservatism, right-populism). Left-statism and right-statism are distinguishable by their stated goals and constituencies.

By statism, I mean the idea that government should do more than merely defend the people from force and fraud. Conservatism and libertarianism are both anti-statist, but there is a subtle and crucial difference between them, which I will explain.

Not everyone has a coherent ideology of a kind that I discuss below. Far from it. There is much vacillation between left-statism and right-statism. And there is what I call the squishy center of the electorate, which is easily swayed by promises and strongly influenced by bandwagon effects. In general, there is what one writer calls clientelism:

the distribution of resources by political power through an agreement in which politicians – the patrons – make this allocation dependent on the political support of the beneficiaries – their clients. Clientelism emerges at the intersection of political power with social and economic activity.

Politicians themselves are prone to stating ideological positions to which they don’t adhere, out of moral cowardice and a strong preference for power over principle. Republicans have been especially noteworthy in this respect. Democrats simply try to do what they promise to do — increase the power of government (albeit at vast but unmentioned economic and social cost).

In what follows, I will ignore the squishy center and the politics of expediency. I will focus on the various ideologies, the contrasts between them, and the populist allure of left-statism and right-statism. Because the two statisms are so much alike under the skin, I will start with conservatism and work around the circle to them. Conservatism gets more attention than the other ideologies because it is intellectually richer.

Go here for the rest.

Leninthink and Left-think

The following passages from Gary Saul Morson’s “Leninthink” (The New Criterion, October 2019) speak volumes about today’s brand of leftism:

In [Lenin’s] view, Marx’s greatest contribution was not the idea of the class struggle but “the dictatorship of the proletariat,” and as far back as 1906 Lenin had defined dictatorship as “nothing other than power which is totally unlimited by any laws, totally unrestrained by absolutely any rules, and based directly on force.”

*   *   *

For us, the word “politics” means a process of give and take, but for Lenin it’s we take, and you give. From this it follows that one must take maximum advantage of one’s position. If the enemy is weak enough to be destroyed, and one stops simply at one’s initial demands, one is objectively helping the enemy, which makes one a traitor.

*   *   *

If there is one sort of person Lenin truly hated more than any other, it is—to use some of his more printable adjectives—the squishy, squeamish, spineless, dull-witted liberal reformer.

*   *   *

If by law one means a code that binds the state as well as the individual, specifies what is and is not permitted, and eliminates arbitrariness, then Lenin entirely rejected law as “bourgeois.”…  Recall that he defined the dictatorship of the proletariat as rule based entirely on force absolutely unrestrained by any law.

*   *   *

Lenin’s language, no less than his ethics, served as a model, taught in Soviet schools and recommended in books with titles like Lenin’s Language and On Lenin’s Polemical Art. In Lenin’s view, a true revolutionary did not establish the correctness of his beliefs by appealing to evidence or logic, as if there were some standards of truthfulness above social classes. Rather, one engaged in “blackening an opponent’s mug so well it takes him ages to get it clean again.” Nikolay Valentinov, a Bolshevik who knew Lenin well before becoming disillusioned, reports him saying: “There is only one answer to revisionism: smash its face in!”

*   *   *

No concessions, compromises, exceptions, or acts of leniency; everything must be totally uniform, absolutely the same, unqualifiedly unqualified.

*   *   *

Critics objected that Lenin argued by mere assertion. He disproved a position simply by showing it contradicted what he believed. In his attack on the epistemology of Ernst Mach and Richard Avenarius, for instance, every argument contrary to dialectical materialism is rejected for that reason alone. Valentinov, who saw Lenin frequently when he was crafting this treatise, reports that Lenin at most glanced through their works for a few hours. It was easy enough to attribute to them views they did not hold, associate them with disreputable people they had never heard of, or ascribe political purposes they had never imagined. These were Lenin’s usual techniques, and he made no bones about it.

Opponents objected that Lenin lied without compunction, and it is easy to find quotations in which he says—as he did to the Bolshevik leader Karl Radek—“Who told you a historian has to establish the truth?” Yes, we are contradicting what we said before, he told Radek, and when it is useful to reverse positions again, we will.

*   *   *

Lenin did not just invent a new kind of party, he also laid the basis for what would come to be known in official parlance as “partiinost’,” literally Partyness, in the sense of Party-mindedness….

… The true Party member cares for nothing but the Party. It is his family, his community, his church. And according to Marxism-Leninism, everything it did was guaranteed to be correct.

*   *   *

[The prominent Bolshevik Yuri] Pyatakov grasped Lenin’s idea that coercion is not a last resort but the first principle of Party action. Changing human nature, producing boundless prosperity, overcoming death itself: all these miracles could be achieved because the Party was the first organization ever to pursue coercion without limits.

*   *   *

Many former Communists describe their belated recognition that experienced Party members do not seem to believe what they profess…. It gradually dawned on [Richard Wright] that the Party takes stances not because it cares about them—although it may—but because it is useful for the Party to do so.

Doing so may help recruit new members, as its stance on race had gotten Wright to join. But after a while a shrewd member learned, without having been explicitly told, that loyalty belonged not to an issue, not even to justice broadly conceived, but to the Party itself. Issues would be raised or dismissed as needed.

*   *   *

I remarked to one colleague, who called herself a Marxist-Leninist, that it only made things worse when she told obvious falsehoods in departmental meetings. Surely, such unprincipled behavior must bring discredit to your own position, I pleaded.

Her reply brought me back to my childhood [as a son of a party member]. I quote it word-for-word: “You stick to your principles, and I’ll stick to mine.” From a Leninist perspective, a liberal, a Christian, or any type of idealist only ties his hands by refraining from doing whatever works. She meant: we Leninists will win because we know better than to do that.

In the end, leftism is about power — unchallenged power to do whatever it is that must be done.

Obama vs. Trump, in One Easy Lesson

The lesson is buried in yesterday’s long post about the modern presidency, where it appears in graphical form (red means bad; green means good).

The Modern Presidency: From TR to DJT

This is a revision and expansion of a post that I published at my old blog late in 2007. The didactic style of this post reflects its original purpose, which was to give my grandchildren some insights into American history that aren’t found in standard textbooks. Readers who consider themselves already well-versed in the history of American politics should nevertheless scan this post for its occasionally provocative observations.

Theodore Roosevelt Jr. (1858-1919) was elected Vice President as a Republican in 1900, when William McKinley was elected to a second term as President. Roosevelt became President when McKinley was assassinated in September 1901. Roosevelt was re-elected President in 1904, with 56 percent of the “national” popular vote. (I mention popular-vote percentages here and throughout this post because they are a gauge of the general popularity of presidential candidates, though an inaccurate gauge if a strong third-party candidate emerges to distort the usual two-party dominance of the popular vote. There is, in fact, no such thing as a national popular vote. Rather, it is the vote in each State which determines the distribution of that State’s electoral votes between the various candidates. The electoral votes of all States are officially tallied about a month after the general election, and the president-elect is the candidate with the most electoral votes. I have more to say about electoral votes in several of the entries that follow this one.)

Theodore Roosevelt (also known as TR) served almost two full terms as President, from September 14, 1901, to March 4, 1909. (Before 1937, a President’s term of office began on March 4 of the year following his election to office.)

Roosevelt was an “activist” President. Roosevelt used what he called the “bully pulpit” of the presidency to gain popular support for programs that exceeded the limits set in the Constitution. Roosevelt was especially willing to use the power of government to regulate business and to break up companies that had become successful by offering products that consumers wanted. Roosevelt was typical of politicians who inherited a lot of money and didn’t understand how successful businesses provided jobs and useful products for less-wealthy Americans.

Roosevelt was more like the Democrat Presidents of the Twentieth Century. He did not like the “weak” government envisioned by the authors of the Constitution. The authors of the Constitution designed a government that would allow people to decide how to live their own lives (as long as they didn’t hurt other people) and to run their own businesses as they wished to (as long as they didn’t cheat other people). The authors of the Constitution thought government should exist only to protect people from criminals and foreign enemies.

William Howard Taft (1857-1930), a close friend of Theodore Roosevelt, served as President from March 4, 1909, to March 4, 1913. Taft ran for the presidency as a Republican in 1908 with Roosevelt’s support. But Taft didn’t carry out Roosevelt’s anti-business agenda aggressively enough to suit Roosevelt. So, in 1912, when Taft ran for re-election as a Republican, Roosevelt ran for election as a Progressive (a newly formed political party). Many Republican voters decided to vote for Roosevelt instead of Taft. The result was that a Democrat, Woodrow Wilson, won the most electoral votes. Although Taft was defeated for re-election, he later became Chief Justice of the United States, making him the only person ever to have served as head of the executive and judicial branches of the U.S. Government.

Thomas Woodrow Wilson (1856-1924) served as President from March 4, 1913, to March 4, 1921. (Wilson didn’t use his first name, and was known officially as Woodrow Wilson.) Wilson is the only President to have earned the degree of doctor of philosophy. Wilson’s field of study was political science, and he had many ideas about how to make government “better”. But “better” government, to Wilson, was “strong” government of the kind favored by Theodore Roosevelt. In fact, it was government by executive decree rather than according to the Constitution’s rules for law-making, in which Congress plays the central role.

Wilson was re-elected in 1916 because he promised to keep the United States out of World War I, which had begun in 1914. But Wilson changed his mind in 1917 and asked Congress to declare war on Germany. After the war, Wilson tried to get the United States to join the League of Nations, an international organization that was supposed to prevent future wars by having nations assemble to discuss their differences. The U.S. Senate, whose approval is required for American membership in such international organizations, refused to approve membership in the League of Nations. The League did not succeed in preventing future wars because wars are started by leaders who don’t want to discuss their differences with other nations.

Warren Gamaliel Harding (1865-1923), a Republican, was elected in 1920 and inaugurated on March 4, 1921. Harding asked voters to reject the kind of government favored by Democrats, and voters gave Harding what is known as a “landslide” victory; he received 60 percent of the votes cast in the 1920 election for president, one of the highest percentages ever recorded. Harding’s administration was about to become involved in a major scandal when Harding died suddenly on August 3, 1923, while he was on a trip to the West Coast. The exact cause of Harding’s death is unknown, but he may have had a stroke when he learned of the impending scandal, which involved Albert Fall, Secretary of the Interior. Fall had secretly allowed some of his business associates to lease government land for oil-drilling, in return for personal loans.

There were a few other scandals, but Harding probably had nothing to do with any of them. Because of the scandals, most historians say that they consider Harding to have been a poor President. But that isn’t the real reason for their dislike of Harding. Most historians, like most college professors, favor “strong” government. Historians don’t like Harding because he didn’t use the power of government to interfere in the nation’s economy. An important result of Harding’s policy (called laissez-faire, or “hands off”) was high employment and increasing prosperity during the 1920s.

John Calvin Coolidge (1872-1933), who was Harding’s Vice President, became President upon Harding’s death in 1923. (Coolidge didn’t use his first name, and was known as Calvin.) Coolidge was elected President in 1924. He served as President from August 3, 1923, to March 4, 1929. Coolidge continued Harding’s hands-off policy of not interfering in the economy, and people continued to become more prosperous as businesses grew, hired more people, and paid higher wages. Coolidge was known as “Silent Cal” because he was a man of few words. He said only what was necessary for him to say, and he meant what he said. That was in keeping with his approach to the presidency. He was not the “activist” that reporters and historians like to see in the presidency; he simply did the job required of him by the Constitution, which was to execute the laws of the United States. Coolidge chose not to run for re-election in 1928, even though he was quite popular.

Herbert Clark Hoover (1874-1964), a Republican who had been Secretary of Commerce under Coolidge, was elected to the presidency in 1928. He served as President from March 4, 1929, to March 4, 1933.

Hoover won 58 percent of the popular vote, an endorsement of the hands-off policy of Harding and Coolidge. Hoover’s administration is known mostly for the huge drop in the price of stocks (shares of corporations, which are bought and sold in places known as stock exchanges), and for the Great Depression that was caused partly by the “Crash” — as it became known. The rate of unemployment (the percentage of American workers without jobs) rose from 3 percent just before the Crash to 25 percent by 1933, at the depth of the Great Depression.

The Crash was rooted in speculation. The prices of shares in businesses (called stocks) began to rise sharply in the late 1920s. That caused many persons to borrow money in order to buy stocks, in the hope that the price of stocks would continue to rise. If the price of stocks continued to rise, buyers could sell their stocks at a profit and repay the money they had borrowed. But when stock prices got very high in the fall of 1929, some buyers began to worry that prices would fall, so they began to sell their stocks. That drove down the price of stocks, and caused more buyers to sell in the hope of getting out of the stock market before prices fell further. But prices went down so quickly that almost everyone who owned stocks lost money. Prices of stocks kept going down. By 1933, many stocks had become worthless and most stocks were selling for only a small fraction of the prices at which they had sold before the Crash.

Because so many people had borrowed money to buy stocks, they went broke when stock prices dropped. When they went broke, they were unable to pay their other debts. That had a ripple effect throughout the economy. As people went broke they spent less money and were unable to pay their debts. Banks had less money to lend. Because people were buying less from businesses, and because businesses couldn’t get loans to stay in business, many businesses closed and people lost their jobs. Then the people who lost their jobs had less money to spend, and so more people lost their jobs.

The effects of the Great Depression were felt in other countries because Americans couldn’t afford to buy as much as they used to from other countries. Also, Congress passed a law known as the Smoot-Hawley Tariff Act, which President Hoover signed. The Smoot-Hawley Act raised tariffs (taxes) on items imported into the United States, which meant that Americans bought even less from foreign countries. Foreign countries passed similar laws, which meant that foreigners began to buy less from Americans, which put more Americans out of work.

The economy would have recovered quickly, as it had done in the past when stock prices fell and unemployment increased. But the actions of government — raising tariffs and making loans harder to get — only made things worse. What could have been a brief recession turned into the Great Depression. People were frightened. They blamed President Hoover for their problems, although President Hoover didn’t cause the Crash. Hoover ran for re-election in 1932, but he lost to Franklin Delano Roosevelt, a Democrat.

Franklin Delano Roosevelt (1882-1945), known as FDR, served as President from March 4, 1933 until his death on April 12, 1945, just a month before V-E Day. FDR was elected to the presidency in 1932, 1936, 1940, and 1944 — the only person elected more than twice. Roosevelt was a very popular President because he served during the Depression and World War II, when most Americans — having lost faith in themselves — sought reassurance that “someone was in charge”. FDR was not universally popular; his share of the popular vote rose from 57 percent in 1932 to 61 percent in 1936, but then dropped to 55 percent in 1940 and 54 percent in 1944. Americans were coming to understand what FDR’s opponents knew at the time, and what objective historians have said since:

FDR’s program to end the Great Depression was known as the New Deal. It consisted of welfare programs, which put people to work on government projects instead of making useful things. It also consisted of higher taxes and other restrictions on business, which discouraged people from starting and investing in businesses, which is the cure for unemployment.

Roosevelt did try to face up to the growing threat from Germany and Japan. However, he wasn’t able to do much to prepare America’s defenses because of strong isolationist and anti-war feelings in the country. Those feelings were the result of America’s involvement in World War I. (Similar feelings in Great Britain kept that country from preparing for war with Germany, which encouraged Hitler’s belief that he could easily conquer Europe.)

When America went to war after Japan’s attack on Pearl Harbor, Roosevelt proved to be an able and inspiring commander-in-chief. But toward the end of the war his health was failing and he was influenced by close aides who were pro-communist and sympathetic to the Soviet Union (Union of Soviet Socialist Republics, or USSR). Roosevelt allowed Soviet forces to claim Eastern Europe, including half of Germany. Roosevelt also encouraged the formation of the United Nations, where the Soviet Union (now the Russian Federation) has had a strong voice because it was made a permanent member of the Security Council, the policy-making body of the UN. As a member of the Security Council, Russia can obstruct actions proposed by the United States. (In any event, the UN has long since become a hotbed of anti-American, left-wing sentiment.)

Roosevelt’s appeasement of the USSR caused Josef Stalin (the Soviet dictator) to believe that the U.S. had weak leaders who would not challenge the USSR’s efforts to spread Communism. The result was the Cold War, which lasted for 45 years. During the Cold War the USSR developed nuclear weapons, built large military forces, kept a tight rein on countries behind the Iron Curtain (in Eastern Europe), and expanded its influence to other parts of the world.

Stalin’s belief in the weakness of U.S. leaders was largely correct, until Ronald Reagan became President. As I will discuss, Reagan’s policies led to the end of the Cold War.

Harry S Truman (1884-1972), who was Vice President in FDR’s fourth term, became President upon FDR’s death. Truman was re-elected in 1948, so he served as President from April 12, 1945 until January 20, 1953 — almost two full terms.

Truman made one right decision during his presidency. He approved the dropping of atomic bombs on Japan. Although hundreds of thousands of Japanese were killed by the bombs, the Japanese soon surrendered. If the Japanese hadn’t surrendered then, U.S. forces would have invaded Japan and millions of American and Japanese lives would have been lost in the battles that followed the invasion.

Truman ordered drastic reductions in the defense budget because he thought that Stalin was an ally of the United States. (Truman, like FDR, had advisers who were Communists.) Truman changed his mind about defense budgets, and about Stalin, when Communist North Korea attacked South Korea in 1950. The attack on South Korea came after Truman’s Secretary of State (the man responsible for relations with other countries) made a speech about countries that the United States would defend. South Korea was not one of those countries.

When South Korea was invaded, Truman asked General of the Army Douglas MacArthur to lead the defense of South Korea. MacArthur planned and executed the amphibious landing at Inchon, which turned the war in favor of South Korea and its allies. The allied forces then succeeded in pushing the front line far into North Korea. Communist China then entered the war on the side of North Korea. MacArthur wanted to counterattack Communist Chinese bases and supply lines in Manchuria, but Truman wouldn’t allow that. Truman then “fired” MacArthur because MacArthur spoke publicly about his disagreement with Truman’s decision. The Chinese Communists pushed allied forces back and the Korean War ended in a deadlock, just about where it had begun, near the 38th parallel.

In the meantime, Communist spies had stolen the secret plans for making atomic bombs. They were able to do that because Truman refused to hear the truth about Communist spies who were working inside the government. By the time Truman left office the Soviet Union had manufactured nuclear weapons, had strengthened its grip on Eastern Europe, and was beginning to expand its influence into the Third World (the nations of Africa and the Middle East).

Truman was very unpopular by 1952. As a result he chose not to run for re-election, even though he could have done so. (The Twenty-Second Amendment to the Constitution, which limits a President to two elected terms, and at most ten years in office, was adopted while Truman was President, but it didn’t apply to him.)

Dwight David Eisenhower (1890-1969), a Republican, served as President from January 20, 1953 to January 20, 1961. Eisenhower (also known by his nickname, “Ike”) received 55 percent of the popular vote in 1952 and 57 percent in 1956; his Democrat opponent in both elections was Adlai Stevenson. The Republican Party chose Eisenhower as a candidate mainly because he had become famous as a general during World War II. Republican leaders thought that by nominating Eisenhower they could end the Democrats’ twenty-year hold on the presidency. The Republican leaders were right about that, but in choosing Eisenhower as a candidate they rejected the Republican Party’s traditional stand in favor of small government.

Eisenhower was a “moderate” Republican. He was not a “big spender” but he did not try to undo all of the new government programs that had been started by FDR and Truman. Traditional Republicans eventually fought back and, in 1964, nominated a small-government candidate named Barry Goldwater. I will discuss him when I get to President Lyndon B. Johnson.

Eisenhower was a popular President, and he was a good manager, but he gave the impression of being “laid back” and not “in charge” of things. The news media had led Americans to believe that “activist” Presidents are better than laissez-faire Presidents, and so there was by 1960 a lot of talk about “getting the country moving again” — as if it were the job of the President to “run” the country instead of executing laws duly enacted in accordance with the Constitution.

John Fitzgerald Kennedy (1917-1963), a Democrat, was elected in 1960 to succeed President Eisenhower. Kennedy, who became known as JFK, served from January 20, 1961, until November 22, 1963, when he was assassinated in Dallas, Texas.

One reason that Kennedy won the election of 1960 (with 50 percent of the popular vote) was his image of “vigorous youth” (he was 27 years younger than Eisenhower). In fact, JFK had been in bad health for most of his life. He seemed to be healthy only because he used a lot of medications. Those medications probably impaired his judgment and would have caused him to die at a relatively early age if he hadn’t been assassinated.

Late in Eisenhower’s administration a Communist named Fidel Castro had taken over Cuba, which is only 90 miles south of Florida. The Central Intelligence Agency then began to work with anti-Communist exiles from Cuba. The exiles were going to attempt an invasion of Cuba at a place called the Bay of Pigs. In addition to providing the necessary military equipment, the U.S. was also going to provide air support during the invasion.

JFK succeeded Eisenhower before the invasion took place, in April 1961. JFK approved changes in the invasion plan that resulted in the failure of the invasion. The most important change was to discontinue air support for the invading forces. The exiles were defeated, and Castro remained firmly in control of Cuba for decades afterward.

The failed invasion caused Castro to turn to the USSR for military and economic assistance. In exchange for that assistance, Castro agreed to allow the USSR to install medium-range ballistic missiles in Cuba. That led to the so-called Cuban Missile Crisis in 1962. Many historians give Kennedy credit for resolving the crisis and avoiding a nuclear war with the USSR. The Russians withdrew their missiles from Cuba, but JFK had to agree to withdraw American missiles from bases in Turkey.

The myth that Kennedy had stood up to the Russians made him more popular in the U.S. His major accomplishment, which Democrats today like to ignore, was to initiate tax cuts, which became law after his assassination. The Kennedy tax cuts helped to make America more prosperous during the 1960s by giving people more money to spend, and by encouraging businesses to expand and create jobs.

The assassination of JFK on November 22, 1963, in Dallas was a shocking event. It also led many Americans to believe that JFK would have become a great President if he had lived and been re-elected to a second term. There is little evidence that JFK would have become a great President. His record in Cuba suggests that he would not have done a good job of defending the country.

Lyndon Baines Johnson (1908-1973), also known as LBJ, was Kennedy’s Vice President and became President upon Kennedy’s assassination. LBJ was re-elected in 1964; he served as President from November 22, 1963 to January 20, 1969. LBJ’s Republican opponent in 1964 was Barry Goldwater, who was an old-style Republican conservative, in favor of limited government and a strong defense. LBJ portrayed Goldwater as a threat to America’s prosperity and safety, when it was LBJ who was the real threat. Americans were still in shock about JFK’s assassination, and so they rallied around LBJ, who won 61 percent of the popular vote.

LBJ is known mainly for two things: his “Great Society” program and the war in Vietnam. The Great Society program was an expansion of FDR’s New Deal. It included such things as the creation of Medicare, which is medical care for retired persons that is paid for by taxes. Medicare is an example of a “welfare” program. Welfare programs take money from people who earn it and give money to people who don’t earn it. The Great Society also included many other welfare programs, such as more benefits for persons who are unemployed. The stated purpose of the expansion of welfare programs under the Great Society was to end poverty in America, but that didn’t happen. The reason it didn’t happen is that when people receive welfare they don’t work as hard to take care of themselves and their families, and they don’t save enough money for their retirement. Welfare actually makes people worse off in the long run.

America’s involvement in Vietnam began in the 1950s, when Eisenhower was President. South Vietnam was under attack by Communist guerrillas, who were sponsored by North Vietnam. Small numbers of U.S. forces were sent to South Vietnam to train and advise South Vietnamese forces. More U.S. advisers were sent by JFK, but within a few years after LBJ became President he had turned the war into an American-led defense of South Vietnam against Communist guerrillas and regular North Vietnamese forces. LBJ decided that it was important for the U.S. to defeat a Communist country and stop Communism from spreading in Southeast Asia.

However, LBJ was never willing to commit enough forces in order to win the war. He allowed air attacks on North Vietnam, for example, but he wouldn’t invade North Vietnam because he was afraid that the Chinese Communists might enter the war. In other words, like Truman in Korea, LBJ was unwilling to do what it would take to win the war decisively. Progress was slow and there were a lot of American casualties from the fighting in South Vietnam. American newspapers and TV began to focus attention on the casualties and portray the war as a losing effort. That led a lot of Americans to turn against the war, and college students began to protest the war (because they didn’t want to be drafted). Attention shifted from the war to the protests, giving the world the impression that America had lost its resolve. And it had.

LBJ had become so unpopular because of the war in Vietnam that he decided not to run for President in 1968. Most of the candidates for President campaigned by saying that they would end the war. In effect, the United States had announced to North Vietnam that it would not fight the war to win. The inevitable outcome was the withdrawal of U.S. forces from Vietnam, which finally happened in 1973, under LBJ’s successor, Richard Nixon. South Vietnam was left on its own, and it fell to North Vietnam in 1975.

Richard Milhous Nixon (1913-1994) was a Republican. He won the election of 1968 by beating the Democrat candidate, Hubert H. Humphrey (who had been LBJ’s Vice President), and a third-party candidate, George C. Wallace. Nixon and Humphrey each received 43 percent of the popular vote; Wallace received 14 percent. If Wallace had not been a candidate, most of the votes cast for him probably would have been cast for Nixon.

Even though Nixon received less than half of the popular vote, he won the election because he received a majority of electoral votes. Electoral votes are awarded to the winner of each State’s popular vote. Nixon won a lot more States than Humphrey and Wallace, so Nixon became President.
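Because the winner-take-all mechanics are what allow a candidate to win the presidency without a popular-vote majority, a toy illustration may help. The states and vote totals below are entirely made up, not 1968 data:

    # Winner-take-all electoral voting with made-up states and numbers.
    states = {
        # state: (candidate_A_votes, candidate_B_votes, electoral_votes)
        "State 1": (510_000, 490_000, 10),
        "State 2": (520_000, 480_000, 9),
        "State 3": (300_000, 700_000, 8),
    }

    electoral = {"A": 0, "B": 0}
    popular = {"A": 0, "B": 0}

    for a_votes, b_votes, ev in states.values():
        popular["A"] += a_votes
        popular["B"] += b_votes
        winner = "A" if a_votes > b_votes else "B"
        electoral[winner] += ev  # the state winner takes all of its electors

    print(popular)    # B leads the combined popular vote, 1,670,000 to 1,330,000
    print(electoral)  # A nevertheless wins the electoral count, 19 to 8

The point is only the mechanism: the states, not the national total, decide the outcome.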

Nixon won re-election in 1972, with 61 percent of the popular vote, by beating a Democrat (George McGovern) who would have expanded LBJ’s Great Society and cut America’s armed forces even more than they were cut after the Vietnam War ended. Nixon’s victory was more a repudiation of McGovern than it was an endorsement of Nixon. His second term ended in disgrace when he resigned the presidency on August 9, 1974.

Nixon called himself a conservative, but he did nothing during his presidency to curb the power of government. He did not cut back on the Great Society. He spent a lot of time on foreign policy. But Nixon’s diplomatic efforts did nothing to make the USSR and Communist China friendlier to the United States. Nixon had shown that he was essentially a weak President by allowing U.S. forces to withdraw from Vietnam. Dictatorial rulers do not respect countries that display weakness.

Nixon was the first (and only) President who resigned from office. He resigned because the House of Representatives was ready to impeach him. An impeachment is like a criminal indictment; it is a set of charges against the holder of a public office. If Nixon had been impeached by the House of Representatives, he would have been tried by the Senate. If two-thirds of the Senators had voted to convict him he would have been removed from office. Nixon knew that he would be impeached and convicted, so he resigned.

The main charge against Nixon was that he ordered his staff to cover up his involvement in a crime that happened in 1972, when Nixon was running for re-election. The crime was a break-in at the headquarters of the Democratic Party, located in the Watergate Building in Washington, D.C.; thus the episode became known as the Watergate Scandal.

The purpose of the break-in was to obtain documents that might help Nixon’s re-election effort. The men who participated in the break-in were hired by aides to Nixon. Details about the break-in and Nixon’s involvement were revealed as a result of investigations by Congress, which were helped by reporters who were doing their own investigative work.

But there is good reason to believe that Nixon was unjustly forced from office by the concerted efforts of the news media (most of which had long been biased against Nixon), Democrats in Congress, and many Republicans who were anxious to rid themselves of Nixon, who was a magnet for controversy.

Gerald Rudolph Ford (born Leslie King Jr.) (1913 – 2007), who was Nixon’s Vice President at the time Nixon resigned, became President on August 9, 1974 and served until January 20, 1977. As Vice President, Ford had succeeded Spiro T. Agnew, who resigned on October 10, 1973, because he had been taking bribes while he was Governor of Maryland (the job he had before becoming Vice President).

Ford became the first Vice President chosen in accordance with the Twenty-Fifth Amendment to the Constitution. That amendment spells out procedures for filling vacancies in the presidency and vice presidency. When Vice President Agnew resigned, President Nixon nominated Ford as Vice President, and the nomination was approved by majority votes of the House and Senate. Then, when Ford became President, he nominated Nelson Rockefeller to fill the vice presidency, and Rockefeller was confirmed as Vice President in the same way.

Ford ran for election in his own right in 1976, but he was defeated by James Earl Carter, mainly because of the Watergate Scandal. Ford was not involved in the scandal, but voters often cast votes for silly reasons. Carter’s election was a rejection of Richard Nixon, who had left office two years earlier, not a vote of confidence in Carter.

James Earl (“Jimmy”) Carter Jr. (1924 – ), a Democrat who had been Governor of Georgia, received only 50 percent of the popular vote. He was defeated for re-election in 1980, so he served as President from January 20, 1977 to January 20, 1981.

Carter was an ineffective President who failed at the most important duty of a President, which is to protect Americans from foreign enemies. His failure came late in his term of office, during the Iran Hostage Crisis. The Shah of Iran had ruled the country for 38 years. He was overthrown in 1979 by a group of Muslim clerics (religious men) who disliked the Shah’s pro-American policies. In November 1979 a group of students loyal to the new Muslim government of Iran invaded the American embassy in Tehran (Iran’s capital city) and took 66 hostages. Carter approved rescue efforts, but they were poorly planned. The hostages were still captive at the time of the presidential election in 1980. Carter lost the election largely because of his feeble rescue efforts.

In recent years Carter has become an outspoken critic of America’s foreign policy. Carter is sympathetic to America’s enemies and he opposes strong military action in defense of America.

Ronald Wilson Reagan (1911-2004), a Republican, succeeded Jimmy Carter as President. Reagan won 51 percent of the popular vote in 1980. Reagan would have received more votes, but a former Republican (John Anderson) ran as a third-party candidate and took 7 percent of the popular vote. Reagan was re-elected in 1984 with 59 percent of the popular vote. He served as President from January 20, 1981, until January 20, 1989.

Reagan had two goals as President: to reduce the size of government and to increase America’s military strength. He was unable to reduce the size of government because, for most of his eight years in office, Democrats were in control of Congress. But Reagan was able to get Congress to approve large reductions in income-tax rates. Those reductions led to more spending on consumer goods and more investment in the creation of new businesses. As a result, Americans had more jobs and higher incomes.

Reagan succeeded in rebuilding America’s military strength. He knew that the only way to defeat the USSR, without going to war, was to show the USSR that the United States was stronger. A lot of people in the United States opposed spending more on military forces; they thought that it would cause the USSR to spend more. They also thought that a war between the U.S. and USSR would result. Reagan knew better. He knew that the USSR could not afford to keep up with the United States. Reagan was right. Not long after the end of his presidency the countries of Eastern Europe saw that the USSR was really a weak country, and they began to break away from Soviet control. Residents of Berlin demolished the Berlin Wall, which the Soviet-backed East German regime had erected in 1961 to keep East Berliners from crossing over into West Berlin. East Germany was freed from Communist rule, and it reunited with West Germany. The USSR collapsed, and many of the countries that had been part of the USSR became independent. We owe the end of the Soviet Union and its influence to President Reagan’s determination to defeat the threat that it posed.

George Herbert Walker Bush (1924 – 2018), a Republican, was Reagan’s Vice President. He won 54 percent of the popular vote when he defeated his Democrat opponent, Michael Dukakis, in the election of 1988. Bush lost the election of 1992. He served as President from January 20, 1989 to January 20, 1993.

The main event of Bush’s presidency was the Gulf War of 1990-1991. Iraq, whose ruler was Saddam Hussein, invaded the small neighboring country of Kuwait. Kuwait produces and exports a lot of oil. The occupation of Kuwait by Iraq meant that Saddam Hussein might have been able to control the amount of oil shipped to other countries, including Europe and the United States. If Hussein had been allowed to control Kuwait, he might have moved on to Saudi Arabia, which produces much more oil than Kuwait. President Bush asked Congress to approve military action against Iraq. Congress approved the action, although most Democrats voted against giving President Bush authority to defend Kuwait. The war ended in a quick defeat for Iraq’s armed forces. But President Bush decided not to allow U.S. forces to finish the job and end Saddam Hussein’s reign as ruler of Iraq.

Bush’s other major blunder was to raise taxes, which helped to cause a recession. The country was recovering from the recession in 1992, when Bush ran for re-election, but his opponents were able to convince voters that Bush hadn’t done enough to end the recession. In spite of his quick (but incomplete) victory in the Persian Gulf War, Bush lost his bid for re-election because voters were concerned about the state of the economy.

William Jefferson Clinton (born William Jefferson Blythe III) (1946 – ), a Democrat, defeated George H.W. Bush in the 1992 election by gaining a majority of the electoral vote. But Clinton won only 43 percent of the popular vote. Bush won 37 percent, and 19 percent went to H. Ross Perot, a third-party candidate who received many votes that probably would have been cast for Bush.

Clinton’s presidency got off to a bad start when he sent to Congress a proposal that would have put health care under government control. Congress rejected the plan, and a year later (in 1994) voters went to the polls in large numbers to elect Republican majorities to the House and Senate.

Clinton was able to win re-election in 1996, but he received only 49 percent of the popular vote. He was re-elected mainly because fewer Americans were out of work and incomes were rising. This economic “boom” was a continuation of the recovery that began under President Reagan. Clinton got credit for the “boom” of the 1990s, which occurred in spite of tax increases passed by Congress while it was still controlled by Democrats.

Clinton was perceived as a “moderate” Democrat because he tried to balance the government’s budget; that is, he tried not to spend more money than the government was receiving in taxes. He was eventually able to balance the budget, but only because he cut defense spending. In addition, Clinton made several bad decisions about defense issues. In 1993 he withdrew American troops from Somalia, instead of continuing with the military mission there, after some troops were captured and killed by natives. In 1994 he signed an agreement with North Korea that was supposed to keep North Korea from developing nuclear weapons, but the North Koreans, having fooled Clinton, continued to work on building them. By 1998 Clinton knew that al Qaeda had become a major threat when terrorists bombed two U.S. embassies in Africa, but he failed to go to war against al Qaeda. Only after terrorists struck a Navy ship, the USS Cole, in 2000 did Clinton declare terrorism to be a major threat. By then, his term of office was almost over.

Clinton was the second President to be impeached. The House of Representatives impeached him in 1998. He was charged with perjury (lying under oath) committed when he was the defendant (the person being sued) in a lawsuit. The Senate didn’t convict Clinton because every Democrat senator refused to vote for conviction, in spite of overwhelming evidence that Clinton was guilty. The day before Clinton left office he acknowledged his guilt by agreeing to a five-year suspension of his law license. A federal judge later found Clinton guilty of contempt of court for his misleading testimony and fined him $90,000.

Clinton was involved in other scandals during his presidency, but he remains popular with many people because he is good at giving the false impression that he is a nice, humble person.

Clinton’s scandals had a greater effect on his Vice President, Al Gore, than they had on Clinton himself. Gore ran for President as the nominee of the Democrat Party in 2000. His main opponent was George W. Bush, a Republican. A third-party candidate named Ralph Nader also received a lot of votes. The election of 2000 was the closest presidential election since 1876. Bush and Gore each won about 48 percent of the popular vote (Gore’s percentage was slightly higher than Bush’s); Nader won 3 percent. The winner of the election was decided by the outcome of the vote in Florida. That outcome was the subject of legal proceedings for six weeks, and it finally had to be decided by the U.S. Supreme Court.

Initial returns in Florida gave that State’s electoral votes to Bush, which meant that he would become President. But the Supreme Court of Florida decided that election officials should violate Florida’s election laws and keep recounting the ballots in certain counties. Those counties were selected because they had more Democrats than Republicans, and so it was likely that recounts would favor Gore, the Democrat. The case finally went to the U.S. Supreme Court, which decided that the Florida Supreme Court was wrong. The U.S. Supreme Court ordered an end to the recounts, and Bush was declared the winner of Florida’s electoral votes.

George Walker Bush (1946 – ), a Republican, was the second son of a President to become President. (The first was John Quincy Adams, the sixth President, whose father, John Adams, was the second President. Also, Benjamin Harrison, the 23rd President, was the grandson of William Henry Harrison, the ninth President.) Bush won re-election in 2004, with 51 percent of the popular vote. He served as President from January 20, 2001, to January 20, 2009.

President Bush’s major accomplishment before September 11, 2001, was to get Congress to cut taxes. The tax cuts were necessary because the economy had been in a recession since 2000. The tax cuts gave people more money to spend and encouraged businesses to expand and create new jobs.

The terrorist attacks on September 11, 2001, caused President Bush to give most of his time and attention to the War on Terror. The invasion of Afghanistan, late in 2001, was part of a larger campaign to disrupt terrorist activities. Afghanistan was ruled by the Taliban, a group that gave support and shelter to al Qaeda terrorists. The U.S. quickly defeated the Taliban and destroyed al Qaeda bases in Afghanistan.

The invasion of Iraq, which took place in 2003, was also intended to combat al Qaeda, but in a different way. Iraq, under Saddam Hussein, had been an enemy of the U.S. since the Persian Gulf War of 1990-1991. Hussein was trying to acquire deadly weapons to use against the U.S. and its allies. Hussein was also giving money to terrorists and sheltering them in Iraq. The defeat of Hussein, which came quickly after the invasion of Iraq, was intended to establish a stable, friendly government in the Middle East.

The invasion of Iraq produced some of the intended results, but there was much unrest there because of long-standing animosity between Sunni Muslims and Shi’a Muslims. There was also much defeatist talk about Iraq — especially by Democrats and the media. That defeatist talk helped to encourage those who were creating unrest in Iraq. It gave them hope that the U.S. would abandon Iraq, just as it abandoned Vietnam more than 30 years earlier. The country had become almost uncontrollable by the time Bush authorized a military “surge” — enough additional troops to quell the unrest.

However, Bush, like his father, failed to take a strategically decisive course of action. He should have ended the pretense of “nation-building”, beefed up U.S. military presence, and installed a compliant Iraqi government. That would have created a U.S. stronghold in the Middle East and stifled Iran’s moves toward regional hegemony, just as the presence of U.S. forces in Europe for decades after World War II kept the USSR from seizing new territory and eventually wore it down.

With Iraq as a U.S. base of operations, it would have been easier to quell Afghanistan and to launch preemptive strikes on Iran’s nuclear-weapons program while it was still in its early stages.

But the early failures in Iraq — and the futility of the Afghan operation (also done on the cheap) — meant that Bush had no political backing for bolder military measures. Further, the end of his second term was blighted by a financial crisis that led to a stock-market crash, the failure of some major financial firms, the bailout of some others, and thence to the Great Recession.

The election of 2008 coincided with the economic downturn, and it was no surprise that the Democrat candidate handily beat the feckless Republican (in-name-only) candidate, John Sidney McCain III.

Barack Hussein Obama II (1961 – ) was the Democrat who defeated McCain. Obama, like most of his predecessors, was a professional politician, but most of his political experience was as a “community organizer” (i.e., rabble-rouser and shakedown artist) in Chicago. He was still serving in his first major office (as U.S. Senator from Illinois) when he vaulted ahead of Hillary Rodham Clinton and seized the Democrat nomination for the presidency. He served as President from January 20, 2009, until January 20, 2017.

Obama’s ascendancy was owed in large part to the perception of him as youthful and energetic. He was careful to seem moderate in his campaign rhetoric, though those in the know (party leaders and activists) were well aware of his strong left-wing leanings, which were revealed in his Senate votes and positions. Clinton, by contrast, was perceived as middle-of-the-road, but only because the road had shifted well to the left over the years. It was she, for example, who propounded the health-care nationalization scheme known as HillaryCare. The scheme was defeated in Congress, but it was responsible in large part for the massive swing of House seats in 1994, which returned the House to GOP control for the first time in 40 years.

Obama’s election was due also to a healthy dose of white “guilt”. Here was an opportunity for many voters to “prove” (and to brag about) their lack of racism. And so, given the experience of Iraq, the onset of the Great Recession, and a me-too Republican candidate, they did the easy thing by voting for Obama, and enjoyed the feel-good sensation that went with it.

At any rate, Obama served two terms (the second was secured by defeating Willard Mitt Romney, another feckless RINO). His presidency throughout both terms was marked by disastrous policies; for example:

  • Obamacare, which drastically raised health-care costs and insurance premiums and added millions of freeloaders to Medicaid
  • encouragement of illegal immigration, which imposes heavy burdens on middle-class taxpayers and is intended to swell the rolls of Democrat voters through amnesty schemes
  • increases in marginal tax rates for individuals and businesses
  • issuance of economically stultifying regulations at an unprecedented pace
  • nomination of dozens of left-wing judges and two left-wing Supreme Court Justices, partly to ensure “empathic” (leftist) rulings rather than rulings in accordance with the Constitution
  • sharp reductions in defense spending
  • meddling in Libya, which through Hillary Clinton’s negligence cost the lives of American diplomats
  • Clinton’s use of a private e-mail server, in which Obama was complicit, and which resulted in the compromise of sensitive, classified information
  • a drastic military draw-down in Iraq, with immediately dire consequences (and a just-in-time reversal by Obama)
  • persistent anti-white and anti-American rhetoric (the latter especially on foreign soil and at the UN)
  • persistent anti-business rhetoric that, together with tax increases and regulatory excesses, killed the recovery from the Great Recession and put the U.S. firmly on the road to economic stagnation.

It should therefore have been a simple matter for voters to reject Obama’s inevitable successor: Hillary Clinton. But the American public has been indoctrinated in leftism for decades by public schools, the mainstream media, and a plethora of TV shows and movies, with the result that Clinton acquired nearly 3 million more popular votes, nationwide, than did her Republican opponent. The foresight of the Framers of the Constitution proved providential: her opponent carefully chose his battlegrounds and won handily in the electoral college. Thus …

Donald John Trump (1946 – ) succeeded Obama and was inaugurated as President on January 20, 2017. He is only in the third year of his presidency, but has accomplished much despite a “resistance” movement that began as soon as his election was assured in the early-morning hours of November 9, 2016. (The “resistance”, which I discuss here, is a continuation of political and social trends that are rooted in the 1960s.)

These are among Trump’s accomplishments, many of them the result of successful collaboration with Congress, both houses of which were under Republican control for the first two years of Trump’s presidency (the Senate remains under GOP control):

  • the end of Obamacare’s requirement to buy some form of health insurance or pay a “tax”, which penalized the healthy and forced many to do something that they would otherwise not do
  • discouragement of illegal immigration through tougher enforcement (against a huge, left-wing financed influx of illegals)
  • decreases in marginal tax rates for individuals and businesses
  • the repeal of many economically stultifying regulations and a drastic slowdown in the issuance of regulations
  • nomination of dozens of conservative judges and two conservative Supreme Court Justices
  • sharp increases in defense spending
  • the beginning of the end of foreign adventures that are unrelated to the interests of Americans (e.g., the drawdown in Syria)
  • relative stability in Iraq
  • pro-American rhetoric on foreign soil and at the UN
  • persistent pro-business rhetoric that, together with tax-rate cuts and regulatory reform, is helping to buoy the U.S. economy despite slowdowns elsewhere and Trump’s “trade war”, which is really aimed at creating a level playing field for American companies and workers.

This story will be continued.

More about Modeling and Science

This post is based on a paper that I wrote 38 years ago. The subject then was the bankruptcy of warfare models, which shows through in parts of this post. I am trying here to generalize the message to encompass all complex, synthetic models (defined below). For ease of future reference, I have created a page that includes links to this post and the many that are listed at the bottom.

THE METAPHYSICS OF MODELING

Alfred North Whitehead said in Science and the Modern World (1925) that “the certainty of mathematics depends on its complete abstract generality” (p. 25). The attraction of mathematical models is their apparent certainty. But a model is only a representation of reality, and its fidelity to reality must be tested rather than assumed. And even if a model seems faithful to reality, its predictive power is another thing altogether. We are living in an era when models that purport to reflect reality are given credence despite their lack of predictive power. Ironically, those who dare point this out are called anti-scientific and science-deniers.

To begin at the beginning, I am concerned here with what I will call complex, synthetic models of abstract variables like GDP and “global” temperature. These are open-ended, mathematical models that estimate changes in the variable of interest by attempting to account for many contributing factors (parameters) and describing mathematically the interactions between those factors. I call such models complex because they have many “moving parts” — dozens or hundreds of sub-models — each of which is a model in itself. I call them synthetic because the estimated changes in the variables of interest depend greatly on the selection of sub-models, the depictions of their interactions, and the values assigned to the constituent parameters of the sub-models. That is to say, compared with a model of the human circulatory system or an internal combustion engine, a synthetic model of GDP or “global” temperature rests on incomplete knowledge of the components of the systems in question and the interactions among those components.

Modelers seem ignorant of or unwilling to acknowledge what should be a basic tenet of scientific inquiry: the complete dependence of logical systems (such as mathematical models) on the underlying axioms (assumptions) of those systems. Kurt Gödel addressed this dependence in his incompleteness theorems:

Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic….

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.

There is the view that Gödel’s theorems aren’t applicable in fields outside of mathematical logic. But any quest for certainty about the physical world necessarily uses mathematical logic (which includes statistics).

This doesn’t mean that the results of computational exercises are useless. It simply means that they are only as good as the assumptions that underlie them; for example, assumptions about relationships between parameters, assumptions about the values of the parameters, and assumptions as to whether the correct parameters have been chosen (and properly defined) in the first place.

There is nothing new in that, certainly nothing that requires Gödel’s theorems by way of proof. It has long been understood that a logical argument may be valid — the conclusion follows from the premises — but untrue if the premises (axioms) are untrue. But it bears repeating — and repeating.
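To make the point concrete, here is a trivial sketch of my own (not anyone’s actual model): a perfectly valid compound-growth computation whose conclusion is entirely at the mercy of its assumed premise. The function and the growth rates are invented for illustration.

```python
# A valid computation built on an assumed premise: the arithmetic is
# impeccable, but the conclusion is only as good as the assumed growth
# rate, which plays the role of an axiom in this tiny "model".

def gdp_multiple(years: int, assumed_growth: float) -> float:
    """Compound an index of GDP forward under an assumed annual growth rate."""
    return (1.0 + assumed_growth) ** years

# Identical logic, different axioms, very different conclusions.
for rate in (0.01, 0.02, 0.03):
    print(f"assumed growth {rate:.0%}: GDP is {gdp_multiple(30, rate):.2f}x after 30 years")
```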

REAL MODELERS AT WORK

There have been mathematical models of one kind and another for centuries, but formal models weren’t used much outside the “hard sciences” until the development of microeconomic theory in the 19th century. Then came F.W. Lanchester, who during World War I devised what became known as Lanchester’s laws (or Lanchester’s equations), which are

mathematical formulae for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two [opponents’] strengths A and B as a function of time, with the function depending only on A and B.

Lanchester’s equations are nothing more than abstractions that must be given a semblance of reality by the user, who is required to make myriad assumptions (explicit and implicit) about the factors that determine the “strengths” of A and B, including but not limited to the relative killing power of various weapons, the effectiveness of opponents’ defenses, the importance of the speed and range of movement of various weapons, intelligence about the location of enemy forces, and commanders’ decisions about when, where, and how to engage the enemy. It should be evident that the predictive value of the equations, when thus fleshed out, is limited to small, discrete engagements, such as brief bouts of aerial combat between two (or a few) opposing aircraft. Alternatively — and in practice — the values are selected so as to yield results that mirror what actually happened (in the “replication” of a historical battle) or what “should” happen (given the preferences of the analyst’s client).
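To see how little the equations supply on their own, here is a minimal sketch of the “square law” form. Everything of substance in it (the attrition coefficients, the starting strengths) is an assumption of mine, invented for illustration; the equations contribute only the bookkeeping.

```python
# Lanchester's "square law": each side's losses are proportional to the
# other side's remaining strength.
#   dA/dt = -b * B
#   dB/dt = -a * A
# The coefficients a and b stand in for everything the modeler must guess:
# killing power, defenses, mobility, intelligence, command decisions.

def fight(A: float, B: float, a: float, b: float, dt: float = 0.001):
    """Integrate the square law with simultaneous Euler steps until one side is gone."""
    while A > 0 and B > 0:
        A, B = A - b * B * dt, B - a * A * dt
    return max(A, 0.0), max(B, 0.0)

# Equal per-unit effectiveness, but A has twice B's numbers.
A_left, B_left = fight(A=2000, B=1000, a=0.05, b=0.05)
print(f"A survivors: {A_left:.0f}, B survivors: {B_left:.0f}")
# The square law's invariant (a*A^2 - b*B^2 = constant) predicts that A
# wins with about sqrt(2000**2 - 1000**2) = 1732 survivors.
```

The tidy result depends entirely on the assumed coefficients, which is precisely the point: the model’s “predictions” are manufactured by its inputs.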

More complex (and realistic) mathematical modeling (also known as operations research) had seen limited use in industry and government before World War II. Faith in the explanatory power of mathematical models was burnished by their use during the war, where such models seemed to be of aid in the design of more effective tactics and weapons.

But the foundation of that success wasn’t the mathematical character of the models. Rather, it was the fact that the models were tested against reality. Philip M. Morse and George E. Kimball put it well in Methods of Operations Research (1946):

Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. [Op cit., p. 10]

A mathematical model doesn’t represent scientific knowledge unless its predictions can be and have been tested. Even then, a valid model can represent only a narrow slice of reality. The expansion of a model beyond that narrow slice requires the addition of parameters whose interactions may not be well understood and whose values will be uncertain.

Morse and Kimball accordingly urged “hemibel thinking”:

Having obtained the constants of the operations under study … we compare the value of the constants obtained in actual operations with the optimum theoretical value, if this can be computed. If the actual value is within a hemibel ( … a factor of 3) of the theoretical value, then it is extremely unlikely that any improvement in the details of the operation will result in significant improvement. [When] there is a wide gap between the actual and theoretical results … a hint as to the possible means of improvement can usually be obtained by a crude sorting of the operational data to see whether changes in personnel, equipment, or tactics produce a significant change in the constants. [Op cit., p. 38]

Should we really attach little significance to differences of less than a hemibel? Consider a five-parameter model involving the conditional probabilities of detecting, shooting at, hitting, and killing an opponent — and surviving, in the first place, to do any of these things. Such a model can easily yield a cumulative error of a hemibel (or greater), given a twenty-five percent error in the value of each parameter. (Mathematically, 1.25^5 = 3.05; alternatively, 0.75^5 = 0.24, or about one-fourth.)
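The arithmetic, spelled out:

```python
# A 25-percent error in each of five multiplied parameters (probability of
# detecting, shooting at, hitting, and killing an opponent, and of surviving
# to do so) compounds to roughly a hemibel, i.e., a factor of 3.
print(1.25 ** 5)   # 3.0517578125 -- errors compounding on the high side
print(0.75 ** 5)   # 0.2373046875 -- on the low side, about one-fourth
```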

ANTI-SCIENTIFIC MODELING

What does this say about complex, synthetic models such as those of economic activity or “climate change”? Any such model rests on the modeler’s assumptions as to the parameters that should be included, their values (and the degree of uncertainty surrounding them), and the interactions among them. The interactions must be modeled based on further assumptions. And so assumptions and uncertainties — and errors — multiply apace.

But the prideful modeler (I have yet to meet a humble one) will claim validity if his model has been fine-tuned to replicate the past (e.g., changes in GDP, “global” temperature anomalies). But the model is useless unless it predicts the future consistently and with great accuracy, where “great” means accurately enough to validly represent the effects of public-policy choices (e.g., setting the federal funds rate, investing in CO2 abatement technology).

Macroeconomic Modeling: A Case Study

In macroeconomics, for example, there is Professor Ray Fair, who teaches macroeconomic theory, econometrics, and macroeconometric modeling at Yale University. He has been plying his trade at prestigious universities since 1968, first at Princeton, then at MIT, and since 1974 at Yale. Professor Fair has since 1983 been forecasting changes in real GDP — not decades ahead, just four quarters (one year) ahead. He has made 141 such forecasts, the earliest of which covers the four quarters ending with the second quarter of 1984, and the most recent of which covers the four quarters ending with the second quarter of 2019. The forecasts are based on a model that Professor Fair has revised many times over the years. (The current model is here; his forecasting track record is here.) How has he done? Here’s how:

1. The median absolute error of his forecasts is 31 percent.

2. The mean absolute error of his forecasts is 69 percent.

3. His forecasts are rather systematically biased: too high when real, four-quarter GDP growth is less than 3 percent; too low when real, four-quarter GDP growth is greater than 3 percent.

4. His forecasts have grown generally worse — not better — with time, though his most recent forecasts are somewhat better, if still far from the mark.

Thus:


This and the next two graphs were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing, as noted in the caption.
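For readers who want to replicate such statistics, this is the sort of computation involved. The numbers below are illustrative stand-ins of my own, not Prof. Fair’s data, which are in Table 4 at his website.

```python
# Median and mean absolute percentage errors of growth forecasts, computed
# from (predicted, actual) pairs of four-quarter real GDP growth rates.
import statistics

# Hypothetical forecast/outcome pairs, in percent (stand-ins for Table 4):
predicted = [3.1, 2.4, 4.0, 1.8, 2.9]
actual    = [2.0, 2.6, 2.5, 3.0, 1.0]

errors = [abs(p - a) / abs(a) * 100 for p, a in zip(predicted, actual)]
print(f"median absolute error: {statistics.median(errors):.0f} percent")
print(f"mean absolute error:   {statistics.mean(errors):.0f} percent")
```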

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

It fails a crucial test, in that it doesn’t reflect the downward trend in economic growth:

General Circulation Models (GCMs) and “Climate Change”

As for climate models, Dr. Tim Ball writes about a

fascinating 2006 paper by Essex, McKitrick, and Andresen, “Does a Global Temperature Exist?”. Their introduction sets the scene:

It arises from projecting a sampling of the fluctuating temperature field of the Earth onto a single number (e.g. [3], [4]) at discrete monthly or annual intervals. Proponents claim that this statistic represents a measurement of the annual global temperature to an accuracy of ±0.05 °C (see [5]). Moreover, they presume that small changes in it, up or down, have direct and unequivocal physical meaning.

The word “sampling” is important because, statistically, a sample has to be representative of a population. There is no way that a sampling of the “fluctuating temperature field of the Earth,” is possible….

… The reality is we have fewer stations now than in 1960 as NASA GISS explain (Figure 1a, # of stations and 1b, Coverage)….

Not only that, but the accuracy is terrible. US stations are supposedly the best in the world but as Anthony Watts’s project showed, only 7.9% of them achieve better than a 1°C accuracy. Look at the quote above. It says the temperature statistic is accurate to ±0.05°C. In fact, for most of the 406 years in which instrumental measures of temperature have been available (since 1612), they were incapable of yielding measurements better than 0.5°C.

The coverage numbers (1b) are meaningless because there are only weather stations for about 15% of the Earth’s surface. There are virtually no stations for

  • 70% of the world that is oceans,
  • 20% of the land surface that are mountains,
  • 20% of the land surface that is forest,
  • 19% of the land surface that is desert and,
  • 19% of the land surface that is grassland.

The result is we have inadequate measures in terms of the equipment and how it fits the historic record, combined with a wholly inadequate spatial sample. The inadequacies are acknowledged by the creation of the claim by NASA GISS and all promoters of anthropogenic global warming (AGW) that a station is representative of a 1200 km radius region.

I plotted an illustrative example on a map of North America (Figure 2).


Figure 2

Notice that the claim for the station in eastern North America includes the subarctic climate of southern James Bay and the subtropical climate of the Carolinas.

However, it doesn’t end there because this is only a meaningless temperature measured in a Stevenson Screen between 1.25 m and 2 m above the surface….

The Stevenson Screen data [are] inadequate for any meaningful analysis or as the basis of a mathematical computer model in this one sliver of the atmosphere, but there [are] even less [data] as you go down or up. The models create a surface grid that becomes cubes as you move up. The number of squares in the grid varies with the naïve belief that a smaller grid improves the models. It would if there [were] adequate data, but that doesn’t exist. The number of cubes is determined by the number of layers used. Again, theoretically, more layers would yield better results, but it doesn’t matter because there are virtually no spatial or temporal data….

So far, I have talked about the inadequacy of the temperature measurements in light of the two- and three-dimensional complexities of the atmosphere and oceans. However, one source identifies the most important variables for the models used as the basis for energy and environmental policies across the world.

Sophisticated models, like Coupled General Circulation Models, combine many processes to portray the entire climate system. The most important components of these models are the atmosphere (including air temperature, moisture and precipitation levels, and storms); the oceans (measurements such as ocean temperature, salinity levels, and circulation patterns); terrestrial processes (including carbon absorption, forests, and storage of soil moisture); and the cryosphere (both sea ice and glaciers on land). A successful climate model must not only accurately represent all of these individual components, but also show how they interact with each other.

The last line is critical and yet impossible. The temperature data [are] the best we have, and yet [they are] completely inadequate in every way. Pick any of the variables listed, and you find there [are] virtually no data. The answer to the question, “what are we really measuring,” is virtually nothing, and what we measure is not relevant to anything related to the dynamics of the atmosphere or oceans.

I am especially struck by Dr. Ball’s observation that the surface-temperature record applies to about 15 percent of Earth’s surface. Not only that, but as suggested by Dr. Ball’s figure 2, that 15 percent is poorly sampled.
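Some back-of-the-envelope arithmetic of my own shows how much work the 1200-kilometer assumption does. The sketch below simply takes the claim at face value (one station “represents” a circle of radius 1200 km) and compares that circle with Earth’s surface area.

```python
# Taking the NASA GISS claim at face value: one station "represents" a
# circle of radius 1200 km. How much of Earth's surface is that?
import math

earth_surface = 4 * math.pi * 6371**2     # Earth's surface area, ~5.1e8 km^2
station_circle = math.pi * 1200**2        # one station's claimed reach, ~4.5e6 km^2

print(f"share per station: {station_circle / earth_surface:.1%}")
print(f"stations to blanket the globe: {earth_surface / station_circle:.0f}")
# ~0.9% per station; ~113 non-overlapping circles would cover the planet.
# Whether one thermometer can stand in for a region spanning subarctic and
# subtropical climates is another question entirely.
```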

And yet the proponents of CO2-forced “climate change” rely heavily on that flawed temperature record because it is the only one that goes back far enough to “prove” the modelers’ underlying assumption, namely, that it is anthropogenic CO2 emissions which have caused the rise in “global” temperatures. See, for example, Dr. Roy Spencer’s “The Faith Component of Global Warming Predictions”, wherein Dr. Spencer points out that the modelers

have only demonstrated what they assumed from the outset. It is circular reasoning. A tautology. Evidence that nature also causes global energy imbalances is abundant: e.g., the strong warming before the 1940s; the Little Ice Age; the Medieval Warm Period. This is why many climate scientists try to purge these events from the historical record, to make it look like only humans can cause climate change.

In fact the models deal in temperature anomalies, that is, departures from a 30-year average. The anomalies — which range from -1.41 to +1.68 degrees C — are so small relative to the errors and uncertainties inherent in the compilation, estimation, and model-driven adjustments of the temperature record, that they must fail Morse and Kimball’s hemibel test. (The model-driven adjustments are, as Dr. Spencer suggests, downward adjustments of historical temperature data for consistency with the models which “prove” that CO2 emissions induce a certain rate of warming. More circular reasoning.)

They also fail, and fail miserably, the acid test of predicting future temperatures with accuracy. This failure has been pointed out many times. Dr. John Christy, for example, has testified to that effect before Congress (e.g., this briefing). Defenders of the “climate change” faith have attacked Dr. Christy’s methods and findings, but the rebuttals to one such attack merely underscore the validity of Dr. Christy’s work.

This is from “Manufacturing Alarm: Dana Nuccitelli’s Critique of John Christy’s Climate Science Testimony”, by Marlo Lewis Jr.:

Christy’s testimony argues that the state-of-the-art models informing agency analyses of climate change “have a strong tendency to over-warm the atmosphere relative to actual observations.” To illustrate the point, Christy provides a chart comparing 102 climate model simulations of temperature change in the global mid-troposphere to observations from two independent satellite datasets and four independent weather balloon data sets….

To sum up, Christy presents an honest, apples-to-apples comparison of modeled and observed temperatures in the bulk atmosphere (0-50,000 feet). Climate models significantly overshoot observations in the lower troposphere, not just in the layer above it. Christy is not “manufacturing doubt” about the accuracy of climate models. Rather, Nuccitelli is manufacturing alarm by denying the models’ growing inconsistency with the real world.

And this is from Christopher Monckton of Brenchley’s “The Guardian’s Dana Nuccitelli Uses Pseudo-Science to Libel Dr. John Christy“:

One Dana Nuccitelli, a co-author of the 2013 paper that found 0.5% consensus to the effect that recent global warming was mostly manmade and reported it as 97.1%, leading Queensland police to inform a Brisbane citizen who had complained to them that a “deception” had been perpetrated, has published an article in the British newspaper The Guardian making numerous inaccurate assertions calculated to libel Dr John Christy of the University of Alabama in connection with his now-famous chart showing the ever-growing discrepancy between models’ wild predictions and the slow, harmless, unexciting rise in global temperature since 1979….

… In fact, as Mr Nuccitelli knows full well (for his own data file of 11,944 climate science papers shows it), the “consensus” is only 0.5%. But that is by the bye: the main point here is that it is the trends on the predictions compared with those on the observational data that matter, and, on all 73 models, the trends are higher than those on the real-world data….

[T]he temperature profile [of the oceans] at different strata shows little or no warming at the surface and an increasing warming rate with depth, raising the possibility that, contrary to Mr Nuccitelli’s theory that the atmosphere is warming the ocean, the ocean is instead being warmed from below, perhaps by some increase in the largely unmonitored magmatic intrusions into the abyssal strata from the 3.5 million subsea volcanoes and vents most of which Man has never visited or studied, particularly at the mid-ocean tectonic divergence boundaries, notably the highly active boundary in the eastern equatorial Pacific. [That possibility is among many which aren’t considered by GCMs.]

How good a job are the models really doing in their attempts to predict global temperatures? Here are a few more examples:

Mr Nuccitelli’s scientifically illiterate attempts to challenge Dr Christy’s graph are accordingly misconceived, inaccurate and misleading.

I have omitted the bulk of both pieces because this post is already longer than needed to make my point. I urge you to follow the links and read the pieces for yourself.

Finally, I must quote a brief but telling passage from a post by Pat Frank, “Why Roy Spencer’s Criticism is Wrong”:

[H]ere’s NASA on clouds and resolution: “A doubling in atmospheric carbon dioxide (CO2), predicted to take place in the next 50 to 100 years, is expected to change the radiation balance at the surface by only about 2 percent. … If a 2 percent change is that important, then a climate model to be useful must be accurate to something like 0.25%. Thus today’s models must be improved by about a hundredfold in accuracy, a very challenging task.”

Frank’s very long post substantiates what I say here about the errors and uncertainties in GCMs — and the multiplicative effect of those errors and uncertainties. I urge you to read it. It is telling that “climate skeptics” like Spencer and Frank will argue openly, whereas “true believers” work clandestinely to present a united front to the public. It’s science vs. anti-science.

CONCLUSION

In the end, complex, synthetic models can be defended only by resorting to the claim that they are “scientific”, which is a farcical claim when models consistently fail to yield accurate predictions. It is a claim based on a need to believe in the models — or, rather, what they purport to prove. It is, in other words, false certainty, which is the enemy of truth.

Newton said it best:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

Just as Newton’s self-doubt was not an attack on science, neither have I essayed an attack on science or modeling — only on the abuses of both that are too often found in the company of complex, synthetic models. It is too easily forgotten that the practice of science (of which modeling is a tool) is in fact an art, not a science. With this art we may portray vividly the few pebbles and shells of truth that we have grasped; we can but vaguely sketch the ocean of truth whose horizons are beyond our reach.


Related pages and posts:

Climate Change
Modeling and Science

Modeling Is Not Science
Modeling, Science, and Physics Envy
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
The Limits of Science (II)
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Rationalism, Empiricism, and Scientific Knowledge
Ty Cobb and the State of Science
Is Science Self-Correcting?
Mathematical Economics
Words Fail Us
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Hurricane Hysteria
Deduction, Induction, and Knowledge
A (Long) Footnote about Science
The Balderdash Chronicles
Analytical and Scientific Arrogance
The Pretence of Knowledge
Wildfires and “Climate Change”
Why I Don’t Believe in “Climate Change”
Modeling Is Not Science: Another Demonstration
Ad-Hoc Hypothesizing and Data Mining
Analysis vs. Reality

Understanding the “Resistance”: The Enemies Within

There have been, since the 1960s, significant changes in the culture of America. Those changes have been led by a complex consisting of the management of big corporations (especially but not exclusively Big Tech), the crypto-authoritarians of academia, the “news” and “entertainment” media, and affluent adherents of “hip” urban culture on the two Left Coasts. The changes include but are far from limited to the breakdown of long-standing, civilizing, and uniting norms. These are notably (but far from exclusively) traditional marriage and family formation, religious observance, self-reliance, gender identity, respect for the law (including immigration law), pride in America’s history, and adherence to the rules of politics even when you are on the losing end of an election.

Most of the changes haven’t occurred through cultural diffusion, trial and error, and general acceptance of what seems to be a change for the better. No, most of the changes have been foisted on the public at large through legislative, executive, and judicial “activism” by the disciples of radical guru Saul Alinsky (e.g., Barack Obama), who got their start in the anti-war riots of the 1960s and 1970s. They and their successors then cloaked themselves in respectability (e.g., by obtaining Ivy League degrees) to infiltrate and subvert the established order.

How were those disciples bred? Through the public-education system, the universities, and the mass media. The upside-down norms of the new order became gospel to the disciples. Thus the Constitution is bad, free markets are bad, freedom of association (for thee) is bad, self-defense (for thee) is bad, defense of the country must not get in the way of “social justice”, socialism and socialized medicine (for thee) are good, a long list of “victims” of “society” must be elevated, compensated, and celebrated regardless of their criminality and lack of ability.

And the disciples of the new dispensation must do whatever it takes to achieve their aims. Even if it means tearing up long-accepted rules, from those inculcated through religion to those written in the Constitution. Even if it means aggression that goes beyond strident expression of views to the suppression of unwelcome views, “outing” those who don’t subscribe to those views, and assaulting perceived enemies — physically and verbally.

All of this is the product of a no-longer-stealthy revolution fomented by a vast, left-wing conspiracy. One aspect of this movement has been the unrelenting attempt to subvert the 2016 election and reverse its outcome. Thus the fraud known as Spygate (a.k.a. Russiagate) and the renewed drive to impeach Trump, engineered with the help of a former Biden staffer.

Why such a hysterical and persistent reaction to the outcome of the 2016 election? (The morally corrupt, all-out effort to block the confirmation of Justice Kavanaugh was a loud echo of that reaction.) Because the election of 2016 had promised to be the election to end all elections — the election that might have all-but-assured the ascendancy of the left in America, with the Supreme Court as a strategic high ground.

But Trump — through his budget priorities, deregulatory efforts, and selection of constitutionalist judges — has made a good start on undoing Obama’s great leap forward in the left’s century-long march toward its vision of Utopia. The left cannot allow this to continue, for if Trump succeeds (and a second term might cement his success), its vile work could be undone.

There has been, in short, a secession — not of States (though some of them act that way), but of a broad and powerful alliance, many of whose members serve in government. They constitute a foreign presence in the midst of “real Americans”.

They are barbarians inside the gate, and must be thought of as enemies.

Expressing Certainty (or Uncertainty)

I have waged war on the misuse of probability for a long time. As I say in the post at the link:

A probability is a statement about a very large number of like events, each of which has an unpredictable (random) outcome. Probability, properly understood, says nothing about the outcome of an individual event. It certainly says nothing about what will happen next.

From a later post:

It is a logical fallacy to ascribe a probability to a single event. A probability represents the observed or computed average value of a very large number of like events. A single event cannot possess that average value. A single event has a finite number of discrete and mutually exclusive outcomes. Those outcomes will not “average out” — only one of them will obtain, like Schrödinger’s cat.

To say that the outcomes will average out — which is what a probability implies — is tantamount to saying that Jack Sprat and his wife were neither skinny nor fat because their body-mass indices averaged to a normal value. It is tantamount to saying that one can’t drown by walking across a pond with an average depth of 1 foot, when that average conceals the existence of a 100-foot-deep hole.

But what about hedge words that imply “probability” without saying it: certain, uncertain, likely, unlikely, confident, not confident, sure, unsure, and the like? I admit to using such words, which are common in discussions about possible future events and the causes of past events. But what do I, and presumably others, mean by them?

Hedge words are statements about the validity of hypotheses about phenomena or causal relationships. There are two ways of looking at such hypotheses, frequentist and Bayesian:

While for the frequentist, a hypothesis is a proposition (which must be either true or false) so that the frequentist probability of a hypothesis is either 0 or 1, in Bayesian statistics, the probability that can be assigned to a hypothesis can also be in a range from 0 to 1 if the truth value is uncertain.

Further, as discussed above, there is no such thing as the probability of a single event. For example, the Mafia either did or didn’t have JFK killed, and that’s all there is to say about that. One might claim to be “certain” that the Mafia had JFK killed, but one can be certain only if one is in possession of incontrovertible evidence to that effect. But that certainty isn’t a probability, which can refer only to the frequency with which many events of the same kind have occurred and can be expected to occur.

A Bayesian view about the “probability” of the Mafia having JFK killed is nonsensical. Even if a Bayesian is certain, based on incontrovertible evidence, that the Mafia had JFK killed, there is no probability attached to the occurrence. It simply happened, and that’s that.

Lacking such evidence, a Bayesian (or an unwitting “man on the street”) might say “I believe there’s a 50-50 chance that the Mafia had JFK killed”. Does that mean (1) there’s some evidence to support the hypothesis, but it isn’t conclusive, or (2) that the speaker would bet X amount of money, at even odds, that if incontrovertible evidence ever surfaces it will prove that the Mafia had JFK killed? In the first case, attaching a 50-percent probability to the hypothesis is nonsensical; how does the existence of some evidence translate into a statement about the probability of a one-off event that either occurred or didn’t occur? In the second case, the speaker’s willingness to bet on the occurrence of an event at certain odds tells us something about the speaker’s preference for risk-taking but nothing at all about whether or not the event occurred.

What about the familiar use of “probability” (a.k.a., “chance”) in weather forecasts? Here’s my take:

[W]hen you read or hear a statement like “the probability of rain tomorrow is 80 percent”, you should mentally translate it into language like this:

X guesses that Y will (or will not) happen at time Z, and the “probability” that he attaches to his guess indicates his degree of confidence in it.

The guess may be well-informed by systematic observation of relevant events, but it remains a guess, as most Americans have learned and relearned over the years when rain has failed to materialize or has spoiled an outdoor event that was supposed to be rain-free.

Further, it is true that some things happen more often than other things, but only one thing will happen at a given time and place.

[A] clever analyst could concoct a probability of a person’s being shot by writing an equation that includes such variables as his size, the speed with which he walks, the number of shooters, their rate of fire, and the distance across the shooting range.

What would the probability estimate mean? It would mean that if a very large number of persons walked across the shooting range under identical conditions, approximately S percent of them would be shot. But the clever analyst cannot specify which of the walkers would be among the S percent.

Here’s another way to look at it. One person wearing head-to-toe bullet-proof armor could walk across the range a large number of times and expect to be hit by a bullet on S percent of his crossings. But the hardy soul wouldn’t know on which of the crossings he would be hit.

Suppose the hardy soul became a foolhardy one and made a bet that he could cross the range without being hit. Further, suppose that S is estimated to be 0.75; that is, 75 percent of a string of walkers would be hit, or a single (bullet-proof) walker would be hit on 75 percent of his crossings. Knowing the value of S, the foolhardy fellow offers to pay out $1 million if he crosses the range unscathed — one time — and claim $4 million (for himself or his estate) if he is shot. That’s an even-money bet, isn’t it?

No it isn’t….

The bet should be understood for what it is, an either-or proposition. The foolhardy walker will either lose $1 million or win $4 million. The bettor (or bettors) who take the other side of the bet will either win $1 million or lose $4 million.

As anyone with elementary reading and reasoning skills should be able to tell, those possible outcomes are not the same as the outcome that would obtain (approximately) if the foolhardy fellow could walk across the shooting range 1,000 times. If he could, he would come very close to breaking even, as would those who bet against him.
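A simple simulation drives the point home. The sketch below assumes the hit rate S = 0.75 posited above: each crossing has exactly one of two outcomes, and only the average over many crossings approaches S.

```python
# One crossing yields a binary outcome; the "probability" S describes only
# the long-run frequency over many crossings.
import random

random.seed(1)
S = 0.75  # assumed probability of being hit on any one crossing

print("hit on a single crossing?", random.random() < S)  # True or False, never "75%"

crossings = [random.random() < S for _ in range(100_000)]
print(f"hit rate over {len(crossings):,} crossings: {sum(crossings) / len(crossings):.3f}")
# ~0.750 -- an ensemble property, not a property of any one crossing.
```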

I omitted from the preceding quotation a sentence in which I used “more likely”:

If a person walks across a shooting range where live ammunition is being used, he is more likely to be killed than if he walks across the same patch of ground when no one is shooting.

Inasmuch as “more likely” is a hedge word, I seem to have contradicted my own position about the probability of a single event, such as being shot while walking across a shooting range. In that context, however, “more likely” means that something could happen (getting shot) that wouldn’t happen in a different situation. That’s not really a probabilistic statement. It’s a statement about opportunity; thus:

  • Crossing a firing range generates many opportunities to be shot.
  • Going into a crime-ridden neighborhood certainly generates some opportunities to be shot, but their number and frequency depend on many variables: which neighborhood, where in the neighborhood, the time of day, who else is present, etc.
  • Sitting by oneself, unarmed, in a heavy-gauge steel enclosure generates no opportunities to be shot.

The “chance” of being shot is, in turn, “more likely”, “likely”, and “unlikely” — or a similar ordinal pattern that uses “certain”, “confident”, “sure”, etc. But the ordinal pattern, in any case, can never (logically) include statements like “completely certain”, “completely confident”, etc.

An ordinal pattern is logically valid only if it conveys the relative number of opportunities to attain a given kind of outcome — being shot, in the example under discussion.

Ordinal statements about different types of outcome are meaningless. Consider, for example, the claim that the probability that the Mafia had JFK killed is higher than (or lower than or the same as) the probability that the moon is made of green cheese. First, and to repeat myself for the nth time, the phenomena in question are one-of-a-kind and do not lend themselves to statements about their probability, nor even about the frequency of opportunities for the occurrence of the phenomena. Second, the use of “probability” is just a hifalutin way of saying that the Mafia could have had a hand in the killing of JFK, whereas it is known (based on ample scientific evidence, including eye-witness accounts) that the Moon isn’t made of green cheese. So the ordinal statement is just a cheap rhetorical trick that is meant to (somehow) support the subjective belief that the Mafia “must” have had a hand in the killing of JFK.

Similarly, it is meaningless to say that the “average person” is “more certain” of being killed in an auto accident than in a plane crash, even though one may have many opportunities to die in an auto accident or a plane crash. There is no “average person”; the incidence of auto travel and plane travel varies enormously from person to person; and the conditions that conduce to fatalities in auto travel and plane travel vary just as enormously.

Other examples abound. Be on the lookout for them, and avoid emulating them.