Baseball and Lesser Sports

The American League’s Greatest Hitters: III

This post supersedes “The American League’s Greatest Hitters: Part II” and “The American League’s Greatest Hitters.” Here, I build on “Bigger, Stronger, and Faster — but Not Quicker?” which assesses the long-term trend (or lack thereof) in batting skill.

Specifically, I derived ballpark factors (BF) for each AL team for each season from 1901 through 2016. For example, the fabled New York Yankees of 1927 hit 1.03 times as well at home as on the road. Given a schedule evenly divided between home and road games, this means that batting averages for the Yankees of 1927 were inflated by 1.015 relative to batting averages for players on other teams.

The BA of a 1927 Yankee — as adjusted by the method described in “Bigger, Stronger…” — should therefore be multiplied by a BF of 0.985 (1/1.015) to obtain that Yankee’s “true” BA for that season. (This is a player-season-specific adjustment, in addition to the long-term trend adjustment applied in “Bigger, Stronger…,” which captures a gradual and general decline in home-park advantage.)
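The arithmetic of that adjustment can be sketched in a few lines. This is a minimal sketch assuming the evenly split home/road schedule described above; the function name is mine, not a term from the post.

```python
def ballpark_factor(home_road_ratio):
    """Deflator for a team's batting averages, assuming an evenly
    split home/road schedule.

    home_road_ratio: how much better the team hit at home than on
    the road (e.g., 1.03 for the 1927 Yankees).
    """
    # Only the home half of the schedule is inflated by the ratio,
    # so the season-wide inflation is the average of the ratio and 1.
    season_inflation = (home_road_ratio + 1.0) / 2.0
    return 1.0 / season_inflation

# 1927 Yankees: 1.03 home/road ratio -> 1.015 inflation -> BF of 0.985.
print(round(ballpark_factor(1.03), 3))  # 0.985
```

A team with no home-park advantage (ratio of 1.0) gets a BF of exactly 1, so its averages pass through unchanged.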

I made those adjustments for 147 players who had at least 5,000 plate appearances in the AL and an official batting average (BA) of at least .285 in those plate appearances. Here are the adjustments, plotted against the middle year of each player’s AL career:


When all is said and done, there are only 43 qualifying players with an adjusted career BA of .300 or higher:


Here’s a graph of the relationship between adjusted career BA and middle-of-career year:


The curved line approximates the trend, which is downward until about the mid-1970s, then slightly upward. But there’s a lot of variation around that trend, and one player — Ty Cobb at .360 — clearly stands alone as the dominant AL hitter of all time.

Michael Schell, in Baseball’s All-Time Best Hitters, ranks Cobb second behind Tony Gwynn, who spent his career (1982-2001) in the National League (NL), and much closer to Rod Carew, who played only in the AL (1967-1985). Schell’s adjusted BA for Cobb is .340, as opposed to .332 for Carew, an advantage of .008 for Cobb. I have Cobb at .360 and Carew at .338, an advantage of .022 for Cobb. The difference in our relative assessments of Cobb and Carew is typical; Schell’s analysis is biased (intentionally or not) toward recent and contemporary players and against players of the pre-World War II era.

Here’s how Schell’s adjusted BAs stack up against mine, for 32 leading hitters rated by both of us:


Schell’s bias toward recent and contemporary players is most evident in his standard-deviation (SD) adjustment:

In his book Full House, Stephen Jay Gould, an evolutionary biologist [who imported his ideological biases into his work]…. Gould imagines [emphasis added] that there is a “wall” of human ability. The best players at the turn of the [20th] century may have been close to the “wall,” [but] many of their peers were not. Over time, progressively better hitters replace the weakest hitters. As a result, the best current hitters do not stand out as much from their peers.

Gould and I believe that the reduction in the standard deviation [of BA within a season] demonstrates that there has been an improvement in the overall quality of major league baseball today compared to nineteenth-century and early twentieth-century play. [pp. 94-95]

Thus Schell applies an SD adjustment that slashes the BAs of the better hitters of the early part of the 20th century simply because the SDs of that era were higher than the SDs after World War II. The SD adjustment is seriously flawed, for several reasons:

1. There may be a “wall” of human ability, or it may truly be imaginary. Even if there is such a wall, we have no idea how close Ty Cobb, Tony Gwynn, and other great hitters have been to it. That is to say, there’s no a priori reason (contra Schell’s implicit assumption) that Cobb couldn’t have been closer to the wall than Gwynn.

2. It can’t be assumed that reaction time — an important component of human ability, and certainly of hitting ability — has improved with time. In fact, there’s a plausible hypothesis to the contrary, which is stated in “Bigger, Stronger…” and examined there, albeit inconclusively.

3. Schell’s discussion of relative hitting skill implies, wrongly, that one player’s higher BA comes at the expense of other players. Not so. BA is a measure of the ability of a hitter to hit safely given the quality of pitching and other conditions (examined in detail in “Bigger, Stronger…”). It may be the case that weaker hitters were gradually replaced by better ones, but that doesn’t detract from the achievements of a better hitter like Ty Cobb, who racked up his hits at the expense of opposing pitchers, not other batters.

4. Schell asserts that early AL hitters were inferior to their NL counterparts, thus further justifying an SD adjustment that is especially punitive toward early AL hitters (e.g., Cobb). However, early AL hitters were demonstrably inferior to their NL counterparts only in the first two years of the AL’s existence, and well before the arrival of Cobb, Joe Jackson, Tris Speaker, Harry Heilmann, Babe Ruth, George Sisler, Lou Gehrig, and other AL greats of the pre-World War II era. Thus:


There seems to have been a bit of backsliding between 1905 and 1910, but the sample size for those years is too small to be meaningful. On the other hand, after 1910, hitters enjoyed no clear advantage by moving from NL to AL (or vice versa). The data for 1903 through 1940, taken altogether, suggest parity between the two leagues during that span.

One more bit of admittedly sketchy evidence:

  • Cobb hit as well as Heilmann during Cobb’s final nine seasons as a regular player (1919-1927), a span that includes the years in which the younger Heilmann won batting titles with averages of .394, .403, .398, and .393.
  • In that same span, Heilmann outhit Ruth, who was the same age as Heilmann.
  • Ruth kept pace with the younger Gehrig during 1925-1932.
  • In 1936-1938, Gehrig kept pace with the younger Joe DiMaggio, even though Gehrig’s BA dropped markedly in 1938 with the onset of the disease that was to kill him.
  • The DiMaggio of 1938-1941 was the equal of the younger Ted Williams, even though the final year of the span saw Williams hit .406.
  • Williams’s final three years as a regular, 1956-1958, overlapped some of the prime seasons of Mickey Mantle, who was 13 years Williams’s junior. Williams easily outhit Mantle during those years, and claimed two batting titles to Mantle’s one.

I see nothing in the preceding recitation to suggest that the great hitters of the years 1901-1940 were inferior to the great hitters of the post-WWII era. In fact, it points in the opposite direction. This might be taken as indirect confirmation of the hypothesis that reaction times have slowed. Or it might have something to do with the emergence of football and basketball as “serious” professional sports after WWII, an emergence that could well have led potentially great hitters to forsake baseball for another sport. Yet another possibility is that post-war prosperity and educational opportunities drew some potentially great hitters into non-athletic trades and professions. In other words, unlike Schell, I remain open to the possibility that there may have been a real, if slight, decline in hitting talent after WWII — a decline that was gradually reversed because of the eventual effectiveness of integration (especially of Latin American players) and the explosion of salaries with the onset of free agency.

Finally, in “Bigger, Stronger…” I account for the cross-temporal variation in BA by applying a general algorithm and then accounting for 14 discrete factors, including the ones applied by Schell. As a result, I practically eliminate the effects of the peculiar conditions that cause BA to be inflated in some eras relative to other eras. (See figure 7 of “Bigger, Stronger…” and the accompanying discussion.) Even after taking all of those factors into account, Cobb still stands out as the best AL hitter of all time — by a wide margin.

And given Cobb’s statistical dominance over his contemporaries in the NL, he still stands as the greatest hitter in the history of the major leagues.

Not Just for Baseball Fans

I have substantially revised “Bigger, Stronger, and Faster — But Not Quicker?” I set out to test Dr. Michael Woodley’s hypothesis that reaction times have slowed since the Victorian era:

It seems to me that if Woodley’s hypothesis has merit, it ought to be confirmed by the course of major-league batting averages over the decades. Other things being equal, quicker reaction times ought to produce higher batting averages. Of course, there’s a lot to hold equal, given the many changes in equipment, playing conditions, player conditioning, “style” of the game (e.g., greater emphasis on home runs), and other key variables over the course of more than a century.

I conclude that my analysis

says nothing definitive about reaction times, even though it sheds a lot of light on the relative hitting prowess of American League batters over the past 116 years. (I’ll have more to say about that in a future post.)

It’s been great fun but it was just one of those things.

Sandwiched between those statements you’ll find much statistical meat (about baseball) to chew on.

Bigger, Stronger, and Faster — but Not Quicker?


There’s some controversial IQ research which suggests that reaction times have slowed and people are getting dumber, not smarter. Here’s Dr. James Thompson’s summary of the hypothesis:

We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.

Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with say, your grandparents.

Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.

It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”.  This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.

So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong? [“The Victorians Were Cleverer Than Us!” Psychological Comments, April 29, 2013]

Thompson discusses this and other relevant research in many posts, which you can find by searching his blog for Victorians and Woodley. I’m not going to venture my unqualified opinion of Woodley’s hypothesis, but I am going to offer some (perhaps) relevant analysis based on — you guessed it — baseball statistics.

It seems to me that if Woodley’s hypothesis has merit, it ought to be confirmed by the course of major-league batting averages over the decades. Other things being equal, quicker reaction times ought to produce higher batting averages. Of course, there’s a lot to hold equal, given the many changes in equipment, playing conditions, player conditioning, “style” of the game (e.g., greater emphasis on home runs), and other key variables over the course of more than a century.

Undaunted, I used the Play Index search tool to obtain single-season batting statistics for “regular” American League (AL) players from 1901 through 2016. My definition of a regular player is one who had at least 3 plate appearances (PAs) per scheduled game in a season. That’s a minimum of 420 PAs in a season from 1901 through 1903, when the AL played a 140-game schedule; 462 PAs in the 154-game seasons from 1904 through 1960; and 486 PAs in the 162-game seasons from 1961 through 2016. I found 6,603 qualifying player-seasons, and a long string of batting statistics for each of them: the batter’s age, his batting average, his number of at-bats, his number of PAs, etc.
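The “regular player” cutoffs follow mechanically from the schedule lengths just given. A sketch (the function name is mine; the post treats each span’s schedule length as uniform):

```python
def regular_pa_threshold(year):
    """Minimum plate appearances for a 'regular' AL player: 3 PAs per
    scheduled game, using the schedule length for that span of years.
    (The post treats 1904-1960 uniformly as 154-game seasons.)"""
    if 1901 <= year <= 1903:
        games = 140
    elif 1904 <= year <= 1960:
        games = 154
    elif 1961 <= year <= 2016:
        games = 162
    else:
        raise ValueError("year outside the 1901-2016 study period")
    return 3 * games

print(regular_pa_threshold(1901), regular_pa_threshold(1927),
      regular_pa_threshold(2016))  # 420 462 486
```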

The raw record of batting averages looks like this, fitted with a 6th-order polynomial to trace the shifts over time:


Everything else being the same, the best fit would be a straight line that rises gradually, falls gradually, or has no slope. The undulation reflects the fact that everything hasn’t stayed the same. For example: major-league baseball wasn’t integrated until 1947, and integration was only token for a couple of decades after that; night games weren’t played until 1935, and didn’t become common until after World War II; a lot of regular players went to war, and those who replaced them were (obviously) of inferior quality — and hitting suffered more than pitching; the “deadball” era ended after the 1919 season and averages soared in the 1920s and 1930s; and fielders’ gloves became larger and larger.

The list goes on, but you get the drift. Playing conditions and the talent pool have changed so much over the decades that it’s hard to pin down just what caused batting averages to undulate rather than move in a relatively straight line. It’s unlikely that batters became a lot better, only to get worse, then better again, and then worse again, and so on.

Something else has been going on — a lot of somethings, in fact. And the 6th-order polynomial captures them in an undifferentiated way. What remains to be explained are the differences between official BA and the estimates yielded by the 6th-order polynomial. Those differences are the stage 1 residuals displayed in this graph:


There’s a lot of variability in the residuals, despite the straight, horizontal regression line through them. That line, by the way, represents a 6th-order polynomial fit, not a linear one. So the application of the equation shown in figure 1 does an excellent job of de-trending the data.
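The de-trending step can be sketched with numpy. The series below is a synthetic stand-in of my own devising (the real analysis fits the 6,603 player-season averages); the point is only the mechanics: fit a 6th-order polynomial, keep the residuals.

```python
import numpy as np

# Illustrative stand-in for the season-by-season batting averages.
rng = np.random.default_rng(0)
years = np.arange(1901, 2017)
ba = (0.28 + 0.02 * np.sin((years - 1901) / 18.0)
      + rng.normal(0.0, 0.005, years.size))

# Stage 1: fit a 6th-order polynomial and keep the residuals.
x = (years - years.mean()) / years.std()   # scale to keep the fit well-conditioned
coeffs = np.polyfit(x, ba, deg=6)
trend = np.polyval(coeffs, x)
stage1_residuals = ba - trend

# The residuals are de-trended: their mean is (numerically) zero.
print(abs(stage1_residuals.mean()) < 1e-9)  # True
```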

The variability of the stage 1 residuals has two causes: (1) general changes in the game and (2) the performance of individual players, given those changes. If the effects of the general changes can be estimated, the remaining, unexplained variability should represent the “true” performance of individual batters.

In stage 2, I considered 16 variables in an effort to isolate the general changes. I began by finding the correlations between each of the 16 candidate variables and the stage 1 residuals. I then estimated a regression equation with stage 1 residuals as the dependent variable and the most highly correlated variable as the explanatory variable. I next found the correlations between the residuals of that regression equation and the remaining 15 variables. I introduced the most highly correlated variable into a new regression equation, as a second explanatory variable. I continued this process until I had a regression equation with 16 explanatory variables. I chose to use the 13th equation, which was the last one to introduce a variable with a highly significant p-value (less than 0.01). Along the way, because of collinearity among the variables, the p-values of a few others became high, but I kept them in the equation because they contributed to its overall explanatory power.
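The stepwise procedure just described can be sketched as follows. This is my reconstruction of the method, not the author’s code, run on toy data; the real analysis would pass the stage 1 residuals and the 16 candidate variables.

```python
import numpy as np

def stepwise_order(y, candidates):
    """Greedy forward selection: at each step, add the candidate most
    correlated with the current residuals, then re-fit by least squares."""
    chosen_cols, remaining, order = [], dict(candidates), []
    residuals = y - y.mean()
    while remaining:
        # Pick the remaining variable most correlated with the residuals.
        name = max(remaining,
                   key=lambda k: abs(np.corrcoef(remaining[k], residuals)[0, 1]))
        order.append(name)
        chosen_cols.append(remaining.pop(name))
        X = np.column_stack([np.ones(len(y))] + chosen_cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals = y - X @ beta
    return order

# Toy data: y depends strongly on x1, weakly on x2, not at all on x3,
# so x1 should enter the equation first.
rng = np.random.default_rng(1)
x1, x2, x3 = rng.normal(size=(3, 500))
y = 2.0 * x1 + 0.5 * x2 + rng.normal(0.0, 0.1, 500)
print(stepwise_order(y, {"x1": x1, "x2": x2, "x3": x3}))
```

In the post, the process was stopped at the 13th step, the last one to add a variable with a p-value below 0.01; the sketch above simply runs to exhaustion.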

Here’s the 13-variable equation (REV13), with each coefficient given to 3 significant figures:

R1 = 1.22 – 0.0185WW – 0.0270DB – 1.26FA + 0.00500DR + 0.00106PT + 0.00197Pa + 0.00191LT – 0.000122Ba – 0.00000765TR + 0.000816DH – 0.206IP + 0.0153BL – 0.000215CG


R1 = stage 1 residuals

WW = World War II years (1 for 1942-1945, 0 for all other years)

DB = “deadball” era (1 for 1901-1919, 0 thereafter)

FA = league fielding average for the season

DR = prevalence of performance-enhancing drugs (1 for 1994-2007, 0 for all other seasons)

PT = number of pitchers per team

Pa = average age of league’s pitchers for the season

LT = fraction of teams with stadiums equipped with lights for night baseball

Ba = batter’s age for the season (not a general condition but one that can explain the variability of a batter’s performance)

TR = maximum distance traveled between cities for the season

DH = designated-hitter rule in effect (0 for 1901-1972, 1 for 1973-2016)

IP = average number of innings pitched per pitcher per game (counting all pitchers in the league during a season)

BL = fraction of teams with at least 1 black player

CG = average number of complete games pitched by each team during the season

The r-squared for the equation is 0.035, which seems rather low, but isn’t surprising given the apparent randomness of the dependent variable. Moreover, with 6,603 observations, the equation is extremely significant: the p-value of its F-test is 1.99E-43.
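That combination of a small r-squared and a huge sample is easy to check with the standard F-statistic formula. This is my back-of-the-envelope arithmetic, not output from the author’s regression:

```python
# F statistic for a regression with k regressors and n observations:
# F = (R^2 / k) / ((1 - R^2) / (n - k - 1))
r2, n, k = 0.035, 6603, 13
f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))
print(round(f_stat, 1))  # 18.4
```

An F of about 18 with 13 and 6,589 degrees of freedom is far out in the tail of the F distribution, which is why the p-value is so tiny despite the modest r-squared.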

A positive coefficient means that the variable increases the value of the stage 1 residuals. That is, it causes batting averages to rise, other things being equal. A negative coefficient means the opposite, of course. Do the signs of the coefficients seem intuitively right, and if not, why are they the opposite of what might be expected? I’ll take them one at a time:

World War II (WW)

A lot of the game’s best batters were in uniform in 1942-1945. That left regular positions open to older, weaker batters, some of whom wouldn’t otherwise have been regulars or even in the major leagues. The negative coefficient on this variable captures the war’s effect on hitting, which suffered despite the fact that a lot of the game’s best pitchers also went to war.

Deadball era (DB)

The so-called deadball era lasted from the beginning of major-league baseball in 1871 through 1919 (roughly). It was called the deadball era because the ball stayed in play for a long time (often for an entire game), so that it lost much of its resilience and became hard to see because it accumulated dirt and scuffs. Those difficulties (for batters) were compounded by the spitball, the use of which was officially curtailed beginning with the 1920 season. (See this and this.) As figure 1 indicates, batting averages rose markedly after 1919, so the negative coefficient on DB is unsurprising.

Performance-enhancing drugs (DR)

Their rampant use seems to have begun in the early 1990s and trailed off in the late 2000s. I assigned a dummy variable of 1 to all seasons from 1994 through 2007 in an effort to capture the effect of PEDs. The coefficient suggests that the effect was (on balance) positive.

Number of pitchers per AL team (PT)

This variable, surprisingly, has a positive coefficient. One would expect the use of more pitchers to cause BA to drop. PT may be a complementary variable, one that’s meaningless without the inclusion of related variable(s). (See IP.)

Average age of AL pitchers (Pa)

The stage 1 residuals rise with Pa until Pa = 27.4, then they begin to drop. This variable represents the difference between 27.4 and the average age of AL pitchers during a particular season. The coefficient is multiplied by 27.4 minus average age; that is, by a positive number for ages lower than 27.4, by zero for age 27.4, and by a negative number for ages above 27.4. The positive coefficient suggests that, other things being equal, pitchers younger than 27.4 give up hits at a higher rate than pitchers older than 27.4. I’m agnostic on the issue.

Night baseball, that is, baseball played under lights (LT)

It has long been thought that batting is more difficult under artificial lighting than in sunlight. This variable measures the fraction of AL teams equipped with lights, but it doesn’t measure the rise in night games as a fraction of all games. I know from observation that that fraction continued to rise even after all AL stadiums were equipped with lights. The positive coefficient on LT suggests that it’s a complementary variable. It’s very highly correlated with BL, for example.

Batter’s age (Ba)

The stage 1 residuals don’t vary with Ba until Ba = 37, whereupon the residuals begin to drop. The coefficient is multiplied by 37 minus the batter’s age; that is, by a positive number for ages lower than 37, by zero for age 37, and by a negative number for ages above 37. The very small negative coefficient probably picks up the effects of batters who were good enough to have long careers and hit for high averages at relatively advanced ages (e.g., Ty Cobb and Ted Williams). Their longevity causes them to be “overrepresented” in the sample.
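As I read those two paragraphs, both age variables enter the equation as a signed distance from a breakpoint. A sketch of that transformation (`age_term` is my name for it, not the author’s):

```python
def age_term(age, breakpoint):
    """Signed distance from the breakpoint: positive below it,
    zero at it, negative above it. The regression coefficient
    multiplies this value."""
    return breakpoint - age

# Pitcher age (Pa), breakpoint 27.4:
print(age_term(24.0, 27.4) > 0)   # True: young staffs get a positive term
print(age_term(27.4, 27.4))       # 0.0 at the breakpoint
# Batter's age (Ba), breakpoint 37:
print(age_term(40, 37) < 0)       # True: the term turns negative past 37
```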

Maximum distance traveled by AL teams (TR)

Does travel affect play? Probably, but the mode and speed of travel (airplane vs. train) probably also affects it. The tiny negative coefficient on this variable — which is highly correlated with several others — is meaningless, except insofar as it combines with all the other variables to account for the stage 1 residuals. TR is highly correlated with the number of teams (expansion), which suggests that expansion has had little effect on hitting.

Designated-hitter rule (DH)

The small positive coefficient on this variable suggests that the DH is a slightly better hitter, on average, than other regular position players.

Innings pitched per AL pitcher per game (IP)

This variable reflects the long-term trend toward the use of more pitchers in a game, which means that batters more often face rested pitchers who come at them with a different delivery and repertoire of pitches than their predecessors. IP has dropped steadily over the decades, presumably exerting a negative effect on BA. This is reflected in the rather large, negative coefficient on the variable, which means that it’s prudent to treat this variable as a complement to PT (discussed above) and CG (discussed below), both of which have counterintuitive signs.

Integration (BL)

I chose this variable to approximate the effect of the influx of black players (including non-white Hispanics) since 1947. BL measures only the fraction of AL teams that had at least one black player for each full season. It begins at 0.25 in 1948 (the Indians and Browns signed Larry Doby and Hank Thompson during the 1947 season) and rises to 1 in 1960, following the signing of Pumpsie Green by the Red Sox during the 1959 season. The positive coefficient on this variable is consistent with the hypothesis that segregation had prevented the use of players superior to many of the whites who occupied roster slots because of their color.

Complete games per AL team (CG)

A higher rate of complete games should mean that starting pitchers stay in games longer, on average, and therefore give up more hits, on average. The negative coefficient seems to contradict that hypothesis. But there are other, related variables (PT and IP), so this one should be thought of as a complementary variable.

Despite all of that fancy footwork, the equation accounts for only a small portion of the stage 1 residuals:


What’s left over — the stage 2 residuals — is (or should be) a good measure of comparative hitting ability, everything else being more or less the same. One thing that’s not the same, and not yet accounted for is the long-term trend in home-park advantage, which has slightly (and gradually) weakened. Here’s a graph of the inverse of the trend, normalized to the overall mean value of home-park advantage:


To get a true picture of a player’s single-season batting average, it’s just a matter of adding the stage 2 residual for that player’s season to the baseline batting average for the entire sample of 6,603 single-season performances, with the residual adjusted by the equation given in figure 4. The baseline is .280, which is the overall average for 1901-2016, from which individual performances diverge. Thus, for example, the stage 2 residual for Jimmy Williams’s 1901 season, adjusted for the long-term trend shown in figure 4, is .022. Adding that residual to .280 results in an adjusted (true) BA of .302, which is 15 points (.015) lower than Williams’s official BA of .317 in 1901.
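The worked example in that paragraph reduces to two lines of arithmetic (the numbers are taken straight from the text):

```python
baseline = 0.280                  # overall AL average, 1901-2016
trend_adjusted_residual = 0.022   # Jimmy Williams's 1901 stage 2 residual,
                                  # after the figure-4 trend adjustment
adjusted_ba = baseline + trend_adjusted_residual
official_ba = 0.317

print(round(adjusted_ba, 3))                # 0.302
print(round(official_ba - adjusted_ba, 3))  # 0.015, the inflation removed
```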

Here are the changes from official to adjusted averages, by year:


Unsurprisingly, the pattern is roughly a mirror image of the 6th-order polynomial regression line in figure 1.

Here’s how the adjusted batting averages (vertical axis) correlate with the official batting averages (horizontal axis):


The red line represents the correlation between official and adjusted BA. The dotted gray line represents a perfect correlation. The actual correlation is very high (r = 0.93), and has a slightly lower slope than a perfect correlation. High averages tend to be adjusted downward and low averages tend to be adjusted upward. The gap separates the highly inflated averages of the 1920s and 1930s (lower right) from the less-inflated averages of most other decades (upper left).

Here’s a time-series view of the official and adjusted averages:


The wavy, bold line is the 6th-order polynomial fit from figure 1. The lighter, almost-flat line is a 6th-order polynomial fit to the adjusted values. The flatness is a good sign that most of the general changes in game conditions have been accounted for, and that what’s left (the gray plot points) is a good approximation of “real” batting averages.

What about reaction times? Have they improved or deteriorated since 1901? The results are inconclusive. Year (YR) doesn’t enter the stage 2 analysis until step 15, and it’s statistically insignificant (p-value = 0.65). Moreover, with the introduction of another variable in step 16, the sign of the coefficient on YR flips from slightly positive to slightly negative.

In sum, this analysis says nothing definitive about reaction times, even though it sheds a lot of light on the relative hitting prowess of American League batters over the past 116 years. (I’ll have more to say about that in a future post.)

It’s been great fun but it was just one of those things.

Pennant Droughts, Post-Season Play, and Seven-Game World Series


Everyone in the universe knows that the Chicago Cubs beat the Cleveland Indians to win the 2016 World Series. The Cubs got into the Series by ending what had been the longest pennant drought of the 16 old-line franchises in the National and American Leagues. The mini-bears had gone 71 years since winning the NL championship in 1945. And before last night, the Cubs had last won a Series in 1908, a “mere” 108 years ago.

Here are the most recent league championships and World Series wins by the other old-line National League teams: Atlanta (formerly Boston and Milwaukee) Braves — 1999, 1995; Cincinnati Reds — 1990, 1990; Los Angeles (formerly Brooklyn) Dodgers — 1988, 1988; Philadelphia Phillies — 2009, 2008; Pittsburgh Pirates — 1979, 1979; San Francisco (formerly New York) Giants — 2014, 2014; and St. Louis Cardinals — 2013, 2011.

The American League lineup looks like this: Baltimore Orioles (formerly Milwaukee Brewers and St. Louis Browns) — 1983, 1983; Boston Red Sox — 2013, 2013; Chicago White Sox — 2005, 2005; Cleveland Indians — 2016 (previously 1997), 1948; Detroit Tigers — 2012, 1984; Minnesota Twins (formerly Washington Senators) — 1991, 1991; New York Yankees — 2009, 2009; and Oakland (formerly Philadelphia and Kansas City) Athletics — 1990, 1989.

What about the expansion franchises, of which there are 14? I’ll lump them because two of them (Milwaukee and Houston) have switched leagues since their inception. Here they are, in this format: Team (year of creation) — year of last league championship, year of last WS victory:

Arizona Diamondbacks (1998) — 2001, 2001

Colorado Rockies (1993) — 2007, never

Houston Astros (1962) — 2005, never

Kansas City Royals (1969) — 2015, 2015

Los Angeles Angels (1961) — 2002, 2002

Miami Marlins (1993) — 2003, 2003

Milwaukee Brewers (1969, as Seattle Pilots) — 1982, never

New York Mets (1962) — 2015, 1986

San Diego Padres (1969) — 1998, never

Seattle Mariners (1977) — never, never

Tampa Bay Rays (1998) — 2008, never

Texas Rangers (1961, as expansion Washington Senators) — 2011, never

Toronto Blue Jays (1977) — 1993, 1993

Washington Nationals (1969, as Montreal Expos) — never, never


The first 65 World Series (1903 and 1905-1968) were contests between the best teams in the National and American Leagues. The winner of a season-ending Series was therefore widely regarded as the best team in baseball for that season (except by the fans of the losing team and other soreheads). The advent of divisional play in 1969 meant that the Series could include a team that wasn’t the best in its league. From 1969 through 1993, when participation in the Series was decided by a single postseason playoff between division winners (1981 excepted), the leagues’ best teams met in only 10 of 24 series. The advent of three-tiered postseason play in 1995 and four-tiered postseason play in 2012 has only made matters worse.

By the numbers:

  • Postseason play originally consisted of a World Series (period) involving 1/8 of major-league teams — the best in each league. Postseason play now involves 1/3 of major-league teams and 7 postseason series (3 in each league plus the inter-league World Series).
  • Only 3 of the 22 Series from 1995 through 2016 have featured the best teams of both leagues, as measured by W-L record.
  • Of the 22 Series from 1995 through 2016, only 7 were won by the best team in a league.
  • Of the same 22 Series, 11 (50 percent) were won by the better of the two teams, as measured by W-L record. Of the 65 Series played before 1969, 35 were won by the team with the better W-L record and 2 involved teams with the same W-L record. So before 1969 the team with the better W-L record won 35 of 63 times, or 56 percent. That’s not significantly different from the result for the 22 Series played in 1995-2016, but the teams in the earlier era were each league’s best, which is no longer true.
  • From 1995 through 2016, a league’s best team (based on W-L record) appeared in a Series only 15 of 44 possible times — 6 times for the NL (pure luck), 9 times for the AL (little better than pure luck). (A random draw among teams qualifying for post-season play would have resulted in the selection of each league’s best team about 6 times out of 22.)
  • Division winners have opposed each other in only 11 of the 22 Series from 1995 through 2016.
  • Wild-card teams have appeared in 10 of those Series, with all-wild-card Series in 2002 and 2014.
  • Wild-card teams have occupied more than one-fourth of the slots in the 1995-2016 Series — 12 slots out of 44.
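The fractions cited in the bullets above check out, assuming 16 teams in the pre-expansion era and 30 teams today with 10 postseason slots:

```python
from fractions import Fraction

world_series_share_then = Fraction(2, 16)  # 2 Series teams out of 16
postseason_share_now = Fraction(10, 30)    # 10 postseason teams out of 30
wild_card_series_slots = Fraction(12, 44)  # wild-card slots in 1995-2016 Series

print(world_series_share_then == Fraction(1, 8))  # True
print(postseason_share_now == Fraction(1, 3))     # True
print(wild_card_series_slots > Fraction(1, 4))    # True
```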

The winner of the World Series used to be its league’s best team over the course of the entire season, and the winner had to beat the best team in the other league. Now, the winner of the World Series usually can claim nothing more than having won the most postseason games — 11 or 12 out of as many as 19 or 20. Why not eliminate the 162-game regular season, select the postseason contestants at random, and go straight to postseason play?

Here are the World Series pairings for 1995-2016 (National League teams listed first; + indicates winner of World Series):

1995 –
Atlanta Braves (division winner; .625 W-L, best record in NL)+
Cleveland Indians (division winner; .694 W-L, best record in AL)

1996 –
Atlanta Braves (division winner; .593, best in NL)
New York Yankees (division winner; .568, second-best in AL)+

1997 –
Florida Marlins (wild-card team; .568, second-best in NL)+
Cleveland Indians (division winner; .534, fourth-best in AL)

1998 –
San Diego Padres (division winner; .605, third-best in NL)
New York Yankees (division winner; .704, best in AL)+

1999 –
Atlanta Braves (division winner; .636, best in NL)
New York Yankees (division winner; .605, best in AL)+

2000 –
New York Mets (wild-card team; .580, fourth-best in NL)
New York Yankees (division winner; .540, fifth-best in AL)+

2001 –
Arizona Diamondbacks (division winner; .568, fourth-best in NL)+
New York Yankees (division winner; .594, third-best in AL)

2002 –
San Francisco Giants (wild-card team; .590, fourth-best in NL)
Anaheim Angels (wild-card team; .611, third-best in AL)+

2003 –
Florida Marlins (wild-card team; .562, third-best in NL)+
New York Yankees (division winner; .623, best in AL)

2004 –
St. Louis Cardinals (division winner; .648, best in NL)
Boston Red Sox (wild-card team; .605, second-best in AL)+

2005 –
Houston Astros (wild-card team; .549, third-best in NL)
Chicago White Sox (division winner; .611, best in AL)+

2006 –
St. Louis Cardinals (division winner; .516, fifth-best in NL)+
Detroit Tigers (wild-card team; .586, third-best in AL)

2007 –
Colorado Rockies (wild-card team; .552, second-best in NL)
Boston Red Sox (division winner; .593, tied for best in AL)+

2008 –
Philadelphia Phillies (division winner; .568, second-best in NL)+
Tampa Bay Rays (division winner; .599, second-best in AL)

2009 –
Philadelphia Phillies (division winner; .574, second-best in NL)
New York Yankees (division winner; .636, best in AL)+

2010 —
San Francisco Giants (division winner; .568, second-best in NL)+
Texas Rangers (division winner; .556, fourth-best in AL)

2011 —
St. Louis Cardinals (wild-card team; .556, fourth-best in NL)+
Texas Rangers (division winner; .593, second-best in AL)

2012 —
San Francisco Giants (division winner; .580, third-best in NL)+
Detroit Tigers (division winner; .543, seventh-best in AL)

2013 —
St. Louis Cardinals (division winner; .599, best in NL)
Boston Red Sox (division winner; .599, best in AL)+

2014 —
San Francisco Giants (wild-card team; .543, fourth-best in NL)+
Kansas City Royals (wild-card team; .549, fourth-best in AL)

2015 —
New York Mets (division winner; .556, fifth-best in NL)
Kansas City Royals (division winner; .586, best in AL)+

2016 —
Chicago Cubs (division winner; .640, best in NL)+
Cleveland Indians (division winner; .584, second-best in AL)


The seven-game World Series holds the promise of high drama. That promise is fulfilled if the Series stretches to a seventh game and that game goes down to the wire. Here's what has happened in the deciding games of the seven-game Series played to date:

1909 – Pittsburgh (NL) 8 – Detroit (AL) 0

1912 – Boston (AL) 3 – New York (NL) 2 (10 innings)

1925 – Pittsburgh (NL) 9 – Washington (AL) 7

1926 – St. Louis (NL) 3 – New York (AL) 2

1931 – St. Louis (NL) 4 – Philadelphia (AL) 2

1934 – St. Louis (NL) 11 – Detroit (AL) 0

1940 – Cincinnati (NL) 2 – Detroit (AL) 1

1945 – Detroit (AL) 9 – Chicago (NL) 3

1947 – New York (AL) 5 – Brooklyn (NL) 2

1955 – Brooklyn (NL) 2 – New York (AL) 0

1956 – New York (AL) 9 – Brooklyn (NL) 0

1957 – Milwaukee (NL) 5 – New York (AL) 0

1958 – New York (AL) 6 – Milwaukee (NL) 2

1960 – Pittsburgh (NL) 10 – New York (AL) 9 (decided by Bill Mazeroski’s home run in the bottom of the 9th)

1965 – Los Angeles (NL) 2 – Minnesota (AL) 0

1967 – St. Louis (NL) 7 – Boston (AL) 2

1968 – Detroit (AL) 4 – St. Louis (NL) 1

1971 – Pittsburgh (NL) 2 – Baltimore (AL) 1

1972 – Oakland (AL) 3 – Cincinnati (NL) 2

1973 – Oakland (AL) 5 – New York (NL) 2

1975 – Cincinnati (NL) 4 – Boston (AL) 3

1979 – Pittsburgh (NL) 4 – Baltimore (AL) 1

1982 – St. Louis (NL) 6 – Milwaukee (AL) 3

1985 – Kansas City (AL) 11 – St. Louis (NL) 0

1986 – New York (NL) 8 – Boston (AL) 5

1987 – Minnesota (AL) 4 – St. Louis (NL) 2

1991 – Minnesota (AL) 1 – Atlanta (NL) 0 (10 innings)

1997 – Florida (NL) 3 – Cleveland (AL) 2 (11 innings)

2001 – Arizona (NL) 3 – New York (AL) 2 (decided in the bottom of the 9th)

2002 – Anaheim (AL) 4 – San Francisco (NL) 1

2011 – St. Louis Cardinals (NL) 6 – Texas Rangers (AL) 2

2014 – San Francisco Giants (NL) 3 – Kansas City Royals (AL) 2 (no scoring after the 4th inning)

2016 – Chicago Cubs (NL) 8 – Cleveland Indians (AL) 7 (decided in the 10th inning)

Summary statistics:

33 seven-game Series (29 percent of the 112 Series played, including 4 in a best-of-nine format, none of which lasted 9 games)

17 Series decided by 1 or 2 runs

12 of those 17 Series decided by 1 run (6 of them in extra innings or in the winning team’s last at-bat)

4 consecutive seven-game Series in 1955-58, all involving the New York Yankees (about 20 percent of the Yankees’ Series — 8 of 41 — went to seven games)

Does the World Series deliver high drama? Seldom. In fact, only about 10 percent of the time (12 of 112 decided by 1 run in game 7). The other 90 percent of the time it’s merely an excuse to fill seats and sell advertising, inasmuch as it’s seldom a contest between both leagues’ best teams.

A Drought Endeth

Tonight the Chicago Cubs beat the Los Angeles Dodgers to become champions of the National League for 2016. The Cubs thus ended the longest pennant drought of the 16 old-line franchises in the National and American Leagues, having last made a World Series appearance 71 years ago in 1945. The Cubs last won the World Series 108 years ago in 1908, another ignominious record for an old-line team.

Here are the most recent league championships and World Series wins by the other old-line National League teams: Atlanta (formerly Boston and Milwaukee) Braves — 1999, 1995; Cincinnati Reds — 1990, 1990; Los Angeles (formerly Brooklyn) Dodgers — 1988, 1988; Philadelphia Phillies — 2009, 2008; Pittsburgh Pirates — 1979, 1979; San Francisco (formerly New York) Giants — 2014, 2014; and St. Louis Cardinals — 2013, 2011.

The American League lineup looks like this: Baltimore Orioles (formerly Milwaukee Brewers and St. Louis Browns) — 1983, 1983; Boston Red Sox — 2013, 2013; Chicago White Sox — 2005, 2005; Cleveland Indians — 2016 (previously 1997), 1948; Detroit Tigers — 2012, 1984; Minnesota Twins (formerly Washington Senators) — 1991, 1991; New York Yankees — 2009, 2009; and Oakland (formerly Philadelphia and Kansas City) Athletics — 1990, 1989.

Facts about Hall-of-Fame Hitters

In this post, I look at the batting records of the 136 Hall-of-Fame position players who accrued most or all of their playing time between 1901 and 2015. With the exception of a bulge in the .340-.345 range, the frequency distribution of lifetime averages for those 136 players looks like a rather ragged normal distribution:

Distribution of HOF lifetime BA

That’s Ty Cobb (.366) at the left, all by himself (1 person = 0.7 percent of the 136 players considered here). To Cobb’s right, also by himself, is Rogers Hornsby (.358). The next solo slot to the right of Hornsby’s belongs to Ed Delahanty (.346). The bulge between .340 and .345 is occupied by Tris Speaker, Billy Hamilton, Ted Williams, Babe Ruth, Harry Heilmann, Bill Terry, Willie Keeler, George Sisler, and Lou Gehrig. At the other end, in the anchor slot, is Ray Schalk (.253); to his left, in the next slot, are Harmon Killebrew (.256) and Rabbit Maranville (.258). The group in the .260-.265 column comprises Gary Carter, Joe Tinker, Luis Aparicio, Ozzie Smith, Reggie Jackson, and Bill Mazeroski.

Players with relatively low batting averages — Schalk, Killebrew, etc. — are in the Hall of Fame because of their prowess as fielders or home-run hitters. Many of the high-average players were also great fielders or home-run hitters (or both). In any event, for your perusal here’s the complete list of 136 position players under consideration in this post:

Lifetime BA of 136 HOFers

For the next exercise, I normalized the Hall of Famers’ single-season averages, as discussed here. I included only those seasons in which a player qualified for that year’s batting championship by playing in enough games, compiling enough plate appearances, or attaining enough at-bats (the criteria have varied).

For the years 1901-2015, the Hall-of-Famers considered here compiled 1,771 seasons in which they qualified for the batting title. (That’s 13 percent of the 13,463 batting-championship-qualifying seasons compiled by all major leaguers in 1901-2015.) Plotting the Hall-of-Famers’ normalized single-season averages against age, I got this:

HOF batters - normalized BA by age

The r-squared value of the polynomial fit, though low, is statistically significant (p<.01). The equation yields the following information:

HOF batters - changes in computed mean BA

The green curve traces the difference between the mean batting average at a given age and the mean batting average at the mean peak age, which is 28.3. For example, by the equation, the average Hall of Famer batted .2887 at age 19, and .3057 at age 28.3 — a rise of .0170 over 9.3 years.

The black line traces the change in the mean batting average from age to age; the change is positive, though declining, from ages 20 through 28, then negative (and still declining) through the rest of the average Hall of Famer’s career.

The red line represents the change in the rate of change, which is constant at -.00044 points a year.

In tabular form:

HOF batters - mean BA stats vs age

Finally, I should note that the combined lifetime batting average of the 136 players is .302, as against the 1901-2015 average of .262 for all players. In other words, the Hall of Famers hit safely in 30.2 percent of at-bats; all players hit safely in 26.2 percent of at-bats. What’s the big deal about 4 percentage points?

To find out, I consulted “Back to Baseball,” in which I found the significant determinants of run-scoring. In the years 1901-1919 (the “dead ball” era), a 4 percentage-point (.040) rise in batting average meant, on average, an increase in runs scored per 9 innings of 1.18. That’s a significant jump in offensive output, given that the average number of runs scored per 9 innings was 3.97 in 1901-1919.

For 1920-2015, a rise in batting average of 4 percentage points meant, on average, an increase in runs scored per 9 innings of 1.03, as against an average of 4.51 runs scored per 9 innings. That’s also significant, and it doesn’t include the effect of extra-base hits, which Hall of Famers produced at a greater rate than other players.
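That arithmetic is just the batting-average coefficient from each era's regression (29.39 for 1901-1919 and 25.81 for 1920-2015, as reported in "Back to Baseball") multiplied by the .040 difference; a quick check:

```python
# Marginal effect of the Hall of Famers' .040 edge in batting average
# on runs scored per 9 innings, using the BA coefficients from the
# regression equations described in "Back to Baseball."
BA_COEF_1901_1919 = 29.39  # Equation 5 (dead-ball era)
BA_COEF_1920_2015 = 25.81  # Equation 2
DELTA_BA = 0.040           # 4 percentage points of batting average

print(round(BA_COEF_1901_1919 * DELTA_BA, 2))  # 1.18 runs per 9 innings
print(round(BA_COEF_1920_2015 * DELTA_BA, 2))  # 1.03 runs per 9 innings
```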

So Hall of Famers, on the whole, certainly made greater offensive contributions than other players, and some of them were peerless in the field. But do all Hall of Famers really belong in the Hall? No, but that’s the subject of another post.

Baseball’s Greatest 40-and-Older Hitters

Drawing on the Play Index, I discovered the following bests for major-league hitters aged 40 and older:

Most games played — Pete Rose, 732

Most games in starting lineup — Pete Rose, 643

Most plate appearances — Pete Rose, 2955

Most at-bats — Pete Rose, 2574

Most runs — Sam Rice, 327

Most hits — Pete Rose, 699

Most doubles — Sam Rice, 95

Most triples — Honus Wagner, 36

Most home runs — Carlton Fisk, 72

Most runs batted in — Carlton Fisk, 282

Most stolen bases — Rickey Henderson, 109

Most times caught stealing — Rickey Henderson, 34

Most times walked — Pete Rose, 320

Most times struck out — Julio Franco, 336

Highest batting average — Ty Cobb, .343*

Highest on-base percentage — Barry Bonds, .464*

Highest slugging percentage — Barry Bonds, .561*

Highest on-base-plus-slugging percentage (OPS) — Barry Bonds, 1.025*

Most sacrifice hits (bunts) — Honus Wagner, 45

Also of note:

Babe Ruth had only 6 home runs as a 40-year-old in his final (partial) season, as a member of the Boston Braves.

Ted Williams is remembered as a great “old” player, and he was. But his 40-and-over record (compiled in 1959-60) is almost matched by that of his great contemporary, Stan Musial (whose 40-and-older record was compiled in 1961-63):

Williams vs. Musial 40 and older
* In each case, this excludes players with small numbers of plate appearances (always fewer than 20). Also, David Ortiz has a slugging average of .652 and an OPS of 1.067 for the 2016 season (his first as a 40-year-old), but the season isn’t over.

Griffey and Piazza: True Hall-of-Famers or Not?

Ken Griffey Jr. and Mike Piazza have just been voted into baseball’s Hall of Fame.

Griffey belongs there. Follow this link and you’ll see, in the first table, that he’s number 45 on the list of offensive players whom I consider deserving of the honor.

Piazza doesn’t belong there. He falls short of the 8,000 plate appearances that I would require as proof of excellence over a sustained span. Piazza would be a true Hall of Famer if I relaxed the standard to 7,500 plate appearances, but what’s the point of having standards if they can be relaxed just to reward popularity (or mediocrity)?


Back to Baseball

In “Does Velocity Matter?” I diagnosed the factors that account for defensive success or failure, as measured by runs allowed per nine innings of play. There’s a long list of significant variables: hits, home runs, walks, errors, wild pitches, hit batsmen, and pitchers’ ages. (Follow the link for the whole story.)

What about offensive success or failure? It turns out to depend on fewer key variables, though there is a distinct difference between the “dead ball” era of 1901-1919 and the subsequent years of 1920-2015. Drawing on the available statistics, I developed several regression equations and found three of particular interest:

  • Equation 1 covers the entire span from 1901 through 2015. It’s fairly good for 1920-2015, but poor for 1901-1919.
  • Equation 2 covers 1920-2015, and is better than Equation 1 for those years. I also used it to backcast scoring in 1901-1919 — and there it’s worse than Equation 1.
  • Equation 5 gives the best results for 1901-1919. I also used it to forecast scoring in 1920-2015, and it’s terrible for those years.

This graph shows the accuracy of each equation:

Estimation errors as a percentage of runs scored

Unsurprising conclusion: Offense was a much different thing in 1901-1919 than in subsequent years. And it was a simpler thing. Here’s Equation 5, for 1901-1919:

RS9 = -5.94 + BA(29.39) + E9(0.96) + BB9(0.27)

Where 9 stands for “per 9 innings” and
RS = runs scored
BA = batting average
E = errors committed
BB = walks

The adjusted r-squared of the equation is 0.971; the f-value is 2.19E-12 (a very small probability that the equation arises from chance). The p-values of the constant and the first two explanatory variables are well below 0.001; the p-value of the third explanatory variable is 0.01.

In short, the name of the offensive game in 1901-1919 was getting on base. Not so the game in subsequent years. Here’s Equation 2, for 1920-2015:

RS9 = -4.47 + BA(25.81) + XBH(0.82) + BB9(0.30) + SB9(-0.21) + SH9(-0.13)

Where 9, RS, BA, and BB are defined as above and
XBH = extra-base hits
SB = stolen bases
SH = sacrifice hits (i.e., sacrifice bunts)

The adjusted r-squared of the equation is 0.974; the f-value is 4.73E-71 (an exceedingly small probability that the equation arises from chance). The p-values of the constant and the first four explanatory variables are well below 0.001; the p-value of the fifth explanatory variable is 0.03.

In other words: get on base, wait for the long ball, and don’t make outs by trying to steal or bunt the runner(s) along.
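The two equations are easy to encode as plain functions for experimentation; a minimal sketch (coefficients exactly as reported above; the sample inputs are hypothetical, chosen to be roughly dead-ball-era league averages, not any particular season's line):

```python
def rs9_equation5(ba, e9, bb9):
    """Equation 5 (1901-1919): runs scored per 9 innings from batting
    average, errors per 9 innings, and walks per 9 innings."""
    return -5.94 + 29.39 * ba + 0.96 * e9 + 0.27 * bb9

def rs9_equation2(ba, xbh9, bb9, sb9, sh9):
    """Equation 2 (1920-2015): adds extra-base hits, stolen bases, and
    sacrifice hits (all per 9 innings) to the mix."""
    return (-4.47 + 25.81 * ba + 0.82 * xbh9 + 0.30 * bb9
            - 0.21 * sb9 - 0.13 * sh9)

# Hypothetical dead-ball-era inputs: a .250 batting average with 2
# errors and 2.5 walks per 9 innings comes out near 4 runs per 9.
print(round(rs9_equation5(0.250, 2.0, 2.5), 2))  # 4.0
```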

Does Velocity Matter?

I came across some breathless prose about the rising trend in the velocity of pitches. (I’m speaking of baseball, in case you didn’t know. Now’s your chance to stop reading.) The trend, such as it is, dates to 2007, when the characteristics of large samples of pitches began to be recorded. (The statistics are available here.) What does the trend look like? The number of pitchers in the samples varies from 77 to 94 per season. I computed three trends for the velocity of fastballs: one for the top 50 pitchers in each season, one for the top 75 pitchers in each season, and one for each season’s full sample:

Pitching velocity trends

Assuming that the trend is real, what difference does it make to the outcome of play? To answer that question I looked at the determinants of runs allowed per 9 innings of play from 1901 through 2015. I winnowed the statistics to obtain three equations with explanatory variables that pass the sniff test:*

  • Equation 5 covers the post-World War II era (1946-2015). I used it for backcast estimates of runs allowed in each season from 1901 through 1945.
  • Equation 7 covers the entire span from 1901 through 2015.
  • Equation 8 covers the pre-war era (1901-1940). I used it to forecast estimates of runs allowed in each season from 1941 through 2015.

This graph shows the accuracy of each equation:

Estimation errors as percentage of runs allowed

Equation 7, though it spans vastly different baseball eras, is as good as or better than Equations 5 and 8, which are tailored to their eras. Here’s Equation 7:

RA9 = -5.01 + H9(0.67) + HR9(0.73) + BB9(0.32) + E9(0.60) + WP9(0.69) + HBP9(0.51) + PAge(0.03)

Where 9 stands for “per 9 innings” and
RA = runs allowed
H = hits allowed
HR = home runs allowed
BB = bases on balls allowed
E = errors committed
WP = wild pitches
HBP = batters hit by pitches
PAge = average age of pitchers

The adjusted r-squared of the equation is 0.988; the f-value is 7.95E-102 (a microscopically small probability that the equation arises from chance). See the first footnote regarding the p-values of the explanatory variables.
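Equation 7 can likewise be expressed as a function; a sketch with made-up inputs (illustrative values, not any particular season's line):

```python
def ra9_equation7(h9, hr9, bb9, e9, wp9, hbp9, p_age):
    """Equation 7 (1901-2015): runs allowed per 9 innings from hits,
    home runs, walks, errors, wild pitches, and hit batsmen (all per
    9 innings), plus the average age of the pitching staff."""
    return (-5.01 + 0.67 * h9 + 0.73 * hr9 + 0.32 * bb9 + 0.60 * e9
            + 0.69 * wp9 + 0.51 * hbp9 + 0.03 * p_age)

# Illustrative inputs: 9 hits, 1 home run, 3 walks, 0.7 errors,
# 0.3 wild pitches, and 0.3 hit batsmen per 9 innings, with a
# pitching staff averaging 28 years of age.
print(round(ra9_equation7(9.0, 1.0, 3.0, 0.7, 0.3, 0.3, 28), 2))  # 4.33
```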

What does this have to do with velocity? Let’s say that velocity increased by 1 mile an hour between 2007 and 2015 (see chart above). The correlations for 2007-2015 between velocity and the six pitcher-related variables (H, HR, BB, WP, HBP, and PAge), though based on small samples, are all moderately strong to very strong (r-squared values 0.32 to 0.83). The combined effects of an increase in velocity of 1 mile an hour on those six variables yield an estimated decrease in RA9 of 0.74. The actual decrease from 2007 to 2015, 0.56, is close enough that I’m inclined to give a lot of credit to the rise in velocity.**

What about the long haul? Pitchers have been getting bigger and stronger — and probably faster — for decades. The problem is that a lot of other things have been changing for decades: the baseball, gloves, ballparks, the introduction of night games, improvements in lighting, an influx of black and Latin players, variations in the size of the talent pool relative to the number of major-league teams, the greater use of relief pitchers generally and closers in particular, the size and strength of batters, the use of performance-enhancing drugs, and so on. Though I would credit the drop in RA9 to a rise in velocity over a brief span of years — during which the use of PEDs probably declined dramatically — I won’t venture a conclusion about the long haul.
* I looked for equations whose explanatory variables have intuitively correct signs (e.g., runs allowed should be positively related to walks) and low p-values (i.e., a low probability of inclusion by chance). The p-values for the variables in Equation 5 are all below 0.01; for Equation 7 they are all below 0.001. In the case of Equation 8, I accepted two variables with p-values greater than 0.01 but less than 0.10.

** It’s also suggestive that the relationship between velocity and the equation 7 residuals for 2007-2015 is weak and statistically insignificant. This could mean that the effects of velocity are adequately reflected in the coefficients on the pitcher-related variables.

The Yankees vs. the Rest of the American League

No one doubts that the Yankees have been the dominant team in the history of the American League. Just how dominant? An obvious measure is the percentage of league championships won by the Yankees: 35 percent (40 of 114) in the 1901-2015 seasons (no champion was declared for the 1994 season, which ended before post-season play could begin). More compellingly, the Yankees won 45 percent of the championships from their first in 1921 through their most recent in 2009, and 43 percent of all championships since 1921.

Of course, not all championships are created equal. From 1901 through 1960 there were 8 teams in the American League, and the team with the best winning percentage in a season was the league’s champion for that season. The same arrangement prevailed during 1961-1968, when there were 10 teams in the league. After that there were two 6-team divisions (1969-1976), two 7-team divisions (1977-1993), three divisions of 5, 5, and 4 teams (1994-2012), and three 5-team divisions (2013 to the present).

Since the creation of divisions, league champions have been determined by post-season playoffs involving division champions and, since 1994, wild-card teams (one such team in 1994-2011 and two such teams since 2012). Post-season playoffs often result in the awarding of the league championship to a team that didn’t have the best record in that season. (See this post, for example.) A division championship, on the other hand, is (by definition) awarded to the team with the division’s best record in that season.

Here’s how I’ve dealt with this mess:

The team with the league’s best record in 1901-1960 gets credit for 1 championship (pennant).

The team with the league’s best record in 1961-1968 gets credit for 1.25 pennants because the league had 1.25 times (10/8) as many teams in 1961-1968 as in 1901-1960.

Similarly, the team with the best record in its division from 1969 through 2015 gets credit for the number of teams in its division divided by 8. Thus a division winner in the era of 6-team divisions gets credit for 0.750 (6/8) pennant; a division winner in the era of 7-team divisions gets credit for 0.875 (7/8) pennant; and a division winner in the era of 4-team and 5-team divisions gets credit for 0.500 (4/8) or 0.625 (5/8) pennant.

The Yankees, for example, won 25 pennants in 1901-1960, each valued at 1; 4 pennants in 1961-1968, each valued at 1.25; a division championship in 1969-1976, valued at 0.75; and 14 division championships in 1977-2015, each valued at 0.625. That adds up to 43 pennant-equivalents in 115 seasons (1994 counts under this method), or 0.374 pennant-equivalents per season of the Yankees’ existence (including 1901-1902, when the predecessor franchise was located in Baltimore, and the several seasons when the team was known as the New York Highlanders).
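The crediting scheme reduces to a one-line weighting: each first-place finish is worth (teams in the league or division)/8. A minimal sketch, using a hypothetical franchise rather than a re-tally of any actual team's record:

```python
def pennant_equivalents(finishes):
    """finishes: (count, group_size) pairs, where group_size is the
    number of teams in the league (pre-1969) or the division that the
    team finished first in. Each finish is worth group_size / 8."""
    return sum(count * size / 8 for count, size in finishes)

# Hypothetical franchise: 3 pennants in the 8-team era, 2 in the
# 10-team era, and 4 titles in a 5-team division.
print(pennant_equivalents([(3, 8), (2, 10), (4, 5)]))  # 8.0
```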

I computed the same ratio for the other American League teams, including the Brewers — who entered the league in 1969 (as the Seattle Pilots) and moved to the National League after the 1997 season — and the Astros — who moved from the National League in 2013. Here’s how the 16 franchises stack up:

Pennant-equivalents per season

The Red Sox, despite the second-best overall W-L record, have the fifth-best pennant-equivalents/season record; they have had the misfortune of playing in the same league and same division as the Yankees for 115 years. The Athletics, on the other hand, escaped the shadow of the Yankees in 1969 — when divisional play began — and have made the most of it.

There are many other stories behind the numbers. Ask, and I will tell them.

Mister Hockey, R.I.P.

Gordie Howe, the greatest goal-scorer in NHL history, has died at the age of 88. It was my privilege to watch Howe in action during the several years when I lived within range of WXYZ, the Detroit TV station whose play-by-play announcer was Budd Lynch.

Why do I say that Howe was the greatest goal-scorer? Wayne Gretzky scored more NHL career goals than Howe, and had nine seasons in which his goal-scoring surpassed Howe’s best season. I explained my reasoning 10 years ago. The rest of this post is a corrected version of the original.

Wayne (The Great One) Gretzky holds the all-time goal-scoring record for major-league hockey:

  • 894 goals in 1,487 regular-season games in the National Hockey League (1979-80 season through 1998-99 season)
  • Another 46 regular-season goals in the 80 games he played in the World Hockey Association (1978-79 season)
  • A total of 940 goals in 1,567 games, or 0.600 goals per game.

The raw numbers suggest that Gretzky far surpassed Gordie (Mr. Hockey) Howe, who finished his much longer career with the following numbers:

  • 801 regular-season goals in 1,767 NHL games (1946-47 through 1970-71 and 1979-80)
  • Another 174 goals in 419 games in the WHA (1973-74 through 1978-79)
  • A total of 975 goals in 2,186 games, or 0.446 goals per game.

That makes Gretzky the greater goal scorer, eh? Not so fast. Comparing Gretzky’s raw numbers with those of Howe is like comparing Pete Rose’s total base hits (4,256) with Ty Cobb’s (4,189), without mentioning that Rose compiled his hits in far more at-bats (14,053) than Cobb (11,434). Thus Cobb’s lifetime average of .366 far surpasses Rose’s average of .303. Moreover, Cobb compiled his higher average in an era when batting averages were generally lower than they were in Rose’s era.
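The Rose-Cobb comparison is a two-line check, using the career hits and at-bats given above:

```python
# Lifetime batting average = career hits / career at-bats.
cobb_ba = 4189 / 11434  # Cobb: fewer at-bats, more hits per at-bat
rose_ba = 4256 / 14053  # Rose: more hits overall, but far more at-bats
print(round(cobb_ba, 3))  # 0.366
print(round(rose_ba, 3))  # 0.303
```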

Similarly, Howe scored most of his goals in an era when the average team scored between 2.5 and 3 goals a game; Gretzky scored most of his goals in an era when the average team scored between 3.5 and 4 goals a game. The right way to compare Gretzky’s and Howe’s goal-scoring prowess is to compare the number of goals each scored in a season to the average output of a team in that season. The following graph does just that.

Howe vs. Gretzky
Sources: Howe’s season-by-season statistics; Gretzky’s season-by-season statistics; gateway to NHL and WHA season-by-season league statistics.

Gretzky got off to a faster start than Howe, but Howe had the better record from age 24 onward. Gretzky played 20 NHL seasons, the first ending when he was 19 years old and the last ending when he was 38 years old. Over the comparable 20-season span, Howe scored 4.3 percent more adjusted goals than did Gretzky. Moreover, Howe’s adjusted-goal total for his entire NHL-WHA career (32 seasons) exceeds Gretzky’s (21 seasons) by 43 percent. Howe not only lasted longer, but his decline was more gradual than Gretzky’s.
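The adjustment behind the graph can be sketched as dividing a player's goal total by the average team's scoring rate in the same season. The sample figures below are illustrative, not Howe's or Gretzky's actual season lines:

```python
def era_adjusted_goals(goals, league_avg_team_goals_per_game):
    """Express a season's goal total in units of 'average team games
    of scoring', so seasons from high- and low-scoring eras can be
    compared on the same footing."""
    return goals / league_avg_team_goals_per_game

# 49 goals when teams averaged 2.8 goals a game outranks 62 goals
# when teams averaged 3.9 goals a game:
print(round(era_adjusted_goals(49, 2.8), 1))  # 17.5
print(round(era_adjusted_goals(62, 3.9), 1))  # 15.9
```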

And Howe — unlike Gretzky — played an all-around game. He didn’t need an enforcer for protection; he was his own enforcer, as many an opponent learned the hard way.

Gordie Howe was not only Mister Hockey, he was also Mister Goal Scorer. “No doot aboot it.”

With Gordie Howe and Ty Cobb, Detroit’s major-league hockey and baseball franchises can claim the greatest players of their respective sports.

A Rather Normal Distribution

I found a rather normal distribution from the real world — if you consider major-league baseball to be part of the real world. In a recent post I explained how I normalized batting statistics for the 1901-2015 seasons, and displayed the top-25 single-season batting averages, slugging percentages, and on-base-plus-slugging percentages after normalization.

I have since discovered that the normalized single-season batting averages for 14,067 player-seasons bear a strong resemblance to a textbook normal distribution:

Distribution of normalized single-season batting averages

How close is this to a textbook normal distribution? Rather close, as measured by the percentage of observations that are within 1, 2, 3, and 4 standard deviations from the mean:

Distribution of normalized single-season batting averages_table
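The textbook benchmarks being compared against can be computed from the normal distribution's error function; a quick reference (the 14,067-value data set itself isn't reproduced here):

```python
from math import erf, sqrt

def pct_within_k_sd(k):
    """Percentage of a normal distribution lying within k standard
    deviations of the mean."""
    return 100 * erf(k / sqrt(2))

for k in (1, 2, 3, 4):
    print(k, round(pct_within_k_sd(k), 2))  # 68.27, 95.45, 99.73, 99.99
```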

Ty Cobb not only compiled the highest single-season average (4.53 SD above the mean) but also 5 of the 12 single-season averages more than 4 SD above the mean:

Ty Cobb's normalized single-season batting_SD from mean

Cobb’s superlative performances in the 13-season span from 1907 through 1919 resulted in 12 American League batting championships. (The number is unofficially reduced to 11 because later research found that Cobb actually lost the 1910 title by a whisker — .3834 to Napoleon Lajoie’s .3841.)

Cobb’s normalized batting average for his worst full season (1924) is better than 70 percent of the 14,067 batting averages compiled by full-time players in the 115 years from 1901 through 2015. And getting on base was only part of what made Cobb the greatest player of all time.

Baseball’s Greatest and Worst Teams

When talk turns to the greatest baseball team of all time, most baseball fans will nominate the 1927 New York Yankees. Not only did that team post a won-lost record of 110-44, for a W-L percentage of .714, but its roster featured several future Hall-of-Famers: Babe Ruth, Lou Gehrig, Herb Pennock, Waite Hoyt, Earle Combs, and Tony Lazzeri. As it turns out, the 1927 Yankees didn’t have the best record in “modern” baseball, that is, since the formation of the American League in 1901. Here are the ten best seasons (all above .700), ranked by W-L percentage:

Team Year G W L W-L%
Cubs 1906 155 116 36 .763
Pirates 1902 142 103 36 .741
Pirates 1909 154 110 42 .724
Indians 1954 156 111 43 .721
Mariners 2001 162 116 46 .716
Yankees 1927 155 110 44 .714
Yankees 1998 162 114 48 .704
Cubs 1907 155 107 45 .704
Athletics 1931 153 107 45 .704
Yankees 1939 152 106 45 .702

And here are the 20 worst seasons, all below .300:

Team Year G W L W-L%
Phillies 1945 154 46 108 .299
Browns 1937 156 46 108 .299
Phillies 1939 152 45 106 .298
Browns 1911 152 45 107 .296
Braves 1909 155 45 108 .294
Braves 1911 156 44 107 .291
Athletics 1915 154 43 109 .283
Phillies 1928 152 43 109 .283
Red Sox 1932 154 43 111 .279
Browns 1939 156 43 111 .279
Phillies 1941 155 43 111 .279
Phillies 1942 151 42 109 .278
Senators 1909 156 42 110 .276
Pirates 1952 155 42 112 .273
Tigers 2003 162 43 119 .265
Athletics 1919 140 36 104 .257
Senators 1904 157 38 113 .252
Mets 1962 161 40 120 .250
Braves 1935 153 38 115 .248
Athletics 1916 154 36 117 .235

But it takes more than a season, or even a few of them, to prove a team’s worth. The following graphs depict the best records in the American and National Leagues over nine-year spans:

Centered nine-year W-L record, best AL

Centered nine-year W-L record, best NL

For sustained excellence over a long span of years, the Yankees are the clear winners. Moreover, the Yankees’ best nine-year records are centered on 1935 and 1939. In the nine seasons centered on 1935 — namely 1931-1939 — the Yankees compiled a W-L percentage of .645. In those nine seasons, the Yankees won five American League championships and as many World Series. The Yankees compiled a barely higher W-L percentage of .646 in the nine seasons centered on 1939 — 1935-1943. But in those nine seasons, the Yankees won the American League championship seven times — 1936, 1937, 1938, 1939, 1941, 1942, and 1943 — and the World Series six times (losing to the Cardinals in 1942).

Measured by league championships, the Yankees compiled better nine-year streaks, winning eight pennants in 1949-1957, 1950-1958, and 1955-1963. But for sheer, overall greatness, I’ll vote for the Yankees of the 1930s and early 1940s. Babe Ruth graced the Yankees through 1934, and the 1939 team (to pick one) included future Hall-of-Famers Bill Dickey, Joe Gordon, Joe DiMaggio, Lou Gehrig (in his truncated final season), Red Ruffing, and Lefty Gomez.

Here are the corresponding worst nine-year records in the two leagues:

Centered nine-year W-L record, worst AL

Centered nine-year W-L record, worst NL

The Phillies — what a team! The Phillies, Pirates, and Cubs should have been demoted to Class D leagues.

What’s most interesting about the four graphs is the general decline in the records of the best teams and the general rise in the records of the worst teams. That’s a subject for another post.

Great (Batting) Performances

The normal values of batting average (BA), slugging percentage (SLG), and on-base plus slugging (OPS) have fluctuated over time:

Average major league batting statistics, 1901-2015
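For reference, the three statistics are computed from raw counting stats by the standard definitions. A minimal sketch (the function and parameter names are my own):

```python
def batting_stats(h, ab, bb, hbp, sf, doubles, triples, hr):
    """Standard definitions:
    BA  = H / AB
    SLG = total bases / AB, where TB = 1B + 2*2B + 3*3B + 4*HR
    OBP = (H + BB + HBP) / (AB + BB + HBP + SF)
    OPS = OBP + SLG"""
    singles = h - doubles - triples - hr
    total_bases = singles + 2 * doubles + 3 * triples + 4 * hr
    ba = h / ab
    slg = total_bases / ab
    obp = (h + bb + hbp) / (ab + bb + hbp + sf)
    return ba, slg, obp + slg
```

For example, a batter with 150 hits (30 doubles, 5 triples, 20 home runs) in 500 at-bats has a .300 BA and a .500 SLG under these definitions.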

In sum, no two seasons are alike, and some are vastly different from others. To level the playing field (pun intended), I did the following:

  • Compiled single-season BA, SLG, and OPS data for all full-time batters (those with enough times at bat in a season to qualify for the batting title) from 1901 through 2015 — a total of 14,067 player-seasons. (Source: the Play Index.)
  • Normalized (“normed”) each season’s batting statistics to account for inter-seasonal differences. For example, a batter whose BA in 1901 was .272 — the overall average for that year — is credited with the same average as a batter whose BA in 1902 was .267 — the overall average for that year.
  • Ranked the normed values of BA, SLG, and OPS for those 14,067 player-seasons.
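The norming and ranking steps can be sketched as follows. This is a simplification: for brevity, each season’s average here is taken over the listed batters, whereas the post uses the overall average for the year; the function name is my own.

```python
from collections import defaultdict

def norm_batting(player_seasons):
    """player_seasons: list of (player, year, ba) tuples. Divides each BA
    by that season's average BA, so 1.0 marks an average season and the
    normed values can be ranked across eras."""
    by_year = defaultdict(list)
    for _, year, ba in player_seasons:
        by_year[year].append(ba)
    season_avg = {year: sum(bas) / len(bas) for year, bas in by_year.items()}
    normed = [(player, year, ba / season_avg[year])
              for player, year, ba in player_seasons]
    # Rank best-first by normed value, as in the top-25 lists
    return sorted(normed, key=lambda t: t[2], reverse=True)
```

Under this scheme, a .272 BA in a season that averaged .272 and a .267 BA in a season that averaged .267 both norm to exactly 1.0, which is the point of the exercise.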

I then sorted the rankings to find the top 25 player-seasons in each category:

Top-25 single-season offensive records

I present all three statistics because they represent different aspects of offensive prowess. BA was the most important of the three statistics until the advent of the “lively ball” era in 1919. Accordingly, the BA list is dominated by seasons played before that era, when the name of the game was “small ball.” The SLG and OPS lists are of course dominated by seasons played in the lively ball era.

Several seasons compiled by Barry Bonds and Mark McGwire showed up in the top-25 lists that I presented in an earlier post. I have expunged those seasons because of the dubious nature of Bonds’s and McGwire’s achievements.

The preceding two paragraphs lead to the question of the commensurability (or lack thereof) of cross-temporal statistics. This is from the earlier post:

There are many variations in the conditions of play that have resulted in significant changes in offensive statistics. Among those changes are the use of cleaner and more tightly wound baseballs, the advent of night baseball, better lighting for night games, bigger gloves, lighter bats, bigger and stronger players, the expansion of the major leagues in fits and starts, the size of the strike zone, the height of the pitching mound, and — last but far from least in this list — the integration of black and Hispanic players into major league baseball. In addition to these structural variations, there are others that militate against the commensurability of statistics over time; for example, the rise and decline of each player’s skills, the skills of teammates (which can boost or depress a player’s performance), the characteristics of a player’s home ballpark (where players generally play half their games), and the skills of the opposing players who are encountered over the course of a career.

Despite all of these obstacles to commensurability, the urge to evaluate the relative performance of players from different teams, leagues, seasons, and eras is irrepressible. The internet is rife with such evaluations; the Society for American Baseball Research (SABR) revels in them; many books offer them (e.g., this one); and I have succumbed to the urge more than once.

It is one thing to have fun with numbers. It is quite another thing to ascribe meanings to them that they cannot support.

And yet, it seems right that the top 25 seasons include so many compiled by Ty Cobb, Babe Ruth, and their great contemporaries Jimmie Foxx, Lou Gehrig, Rogers Hornsby, Shoeless Joe Jackson, Nap Lajoie, Tris Speaker, George Sisler, and Honus Wagner. The lists also signify the greatness of the later players who join them: Hank Aaron, George Brett, Rod Carew, Roberto Clemente, Mickey Mantle, Willie McCovey, Stan Musial, Frank Thomas, and Ted Williams.

Cobb’s dominance of the BA leaderboard merits special attention: he holds 9 of the top 19 slots on the BA list, an artifact of his reign as the American League’s leading hitter in 12 of the 13 seasons from 1907 through 1919. But there was more to Cobb than just “hitting it where they ain’t”; he was far more than a hitting machine, and probably the most exciting ballplayer of all time.

Charles Leerhsen offers chapter and verse about Cobb’s prowess in his book Ty Cobb: A Terrible Beauty. Here are excerpts from Leerhsen’s speech “Who Was Ty Cobb? The History We Know That’s Wrong,” which is based on his book:

When Cobb made it to first—which he did more often than anyone else; he had three seasons in which he batted over .400—the fun had just begun. He understood the rhythms of the game and he constantly fooled around with them, keeping everyone nervous and off balance. The sportswriters called it “psychological baseball.” His stated intention was to be a “mental hazard for the opposition,” and he did this by hopping around in the batter’s box—constantly changing his stance as the pitcher released the ball—and then, when he got on base, hopping around some more, chattering, making false starts, limping around and feigning injury, and running when it was least expected. He still holds the record for stealing home, doing so 54 times. He once stole second, third, and home on three consecutive pitches, and another time turned a tap back to the pitcher into an inside-the-park home run.

“The greatness of Ty Cobb was something that had to be seen,” George Sisler said, “and to see him was to remember him forever.” Cobb often admitted that he was not a natural, the way Shoeless Joe Jackson was; he worked hard to turn himself into a ballplayer. He had nine styles of slides in his repertoire: the hook, the fade-away, the straight-ahead, the short or swoop slide (“which I invented because of my small ankles”), the head-first, the Chicago slide (referred to by him but never explained), the first-base slide, the home-plate slide, and the cuttle-fish slide—so named because he purposely sprayed dirt with his spikes the way squid-like creatures squirt ink. Coming in, he would watch the infielder’s eyes to determine which slide to employ.

There’s a lot more in the book, which I urge you to read — especially if you’re a baseball fan who appreciates snappy prose and documented statements (as opposed to the myths that have grown up around Cobb).

Cobb’s unparalleled greatness was still fresh in the minds of baseball people in 1936, when the first inductees to baseball’s Hall of Fame were elected. It was Cobb — not Babe Ruth — who received the most votes among the five players selected for membership in the Hall.

Yankee-Killers (and Victims)

The New York Yankees won 40 American League championships in the 89 years from 1921 through 2009, and compiled a .585 record over that span (including inter-league games and games against American League expansion franchises). Here’s how Yankees teams fared against traditional rivals in that span:

Yankees vs. traditional rivals

The Yankees’ traditional rivals won 40 league championships altogether during the 89-year span. The Athletics led the way with 9, followed by the Tigers and Orioles/Browns with 7 each, the Red Sox and Twins/Senators with 6 each, the Indians with 3, and the White Sox with 2. Expansion franchises won the other 9 championships.

Fittingly, the Tigers’ Frank Lary was the leading Yankee-killer among pitchers who had at least 20 decisions against the Yankees during 1921-2009:

Pitchers vs. Yankees

All statistics presented in this post are derived from the Play Index (a subscription service).

*     *     *

Some related posts:

The Hall of Fame Reconsidered

Baseball Trivia for the Fourth of July

All-Time Hitter-Friendly Ballparks

May the Best Team Lose

Competitiveness in Major-League Baseball

Baseball: The King of Team Sports

The End of a Dynasty (the Yankees’ dynasty, that is; updated here)

What Makes a Winning Team?

The American League’s Greatest Hitters: Part II (with a link to Part I)

The Winningest Managers

Ty Cobb and the State of Science

This post was inspired by “Layman’s Guide to Understanding Scientific Research” at bluebird of bitterness.

The thing about history is that it’s chock full of lies. Well, a lot of the lies are just incomplete statements of the truth. Think of history as an artificially smooth surface, where gaps in knowledge have been filled by assumptions and guesses, and where facts that don’t match the surrounding terrain have been sanded down. Charles Leerhsen offers an excellent example of the lies that became “history” in his essay “Who Was Ty Cobb? The History We Know That’s Wrong.” (I’m now reading the book on which the essay is based, and it tells the same tale, at length.)

Science is much like history in its illusory certainty. Stand back from things far enough and you see a smooth, mathematical relationship. Look closer, however, and you find rough patches. A classic example is found in physics, where the big picture of general relativity doesn’t mesh with the small picture of quantum mechanics.

Science is based on guesses, also known as hypotheses. The guesses are usually informed by observation, but they are guesses nonetheless. Even when a guess has been lent credence by tests and observations, it only becomes a theory — a working model of a limited aspect of physical reality. A theory is never proven; it can only be disproved.

Science, in other words, is never “settled.” Napoleon is supposed to have said “What is history but a fable agreed upon?” It seems, increasingly, that so-called scientific facts are nothing but a fable that some agree upon because they wish to use those “facts” as a weapon with which to advance their careers and political agendas. Or they simply wish to align themselves with the majority, just as Barack Obama’s popularity soared (for a few months) after he was re-elected.

*     *     *

Related reading:

Wikipedia, “Replication Crisis”

John P. A. Ioannidis, “Why Most Published Research Findings Are False,” PLOS Medicine, August 30, 2005

Liberty Corner, “Science’s Anti-Scientific Bent,” April 12, 2006

Politics & Prosperity, “Modeling Is Not Science,” April 8, 2009

Politics & Prosperity, “Physics Envy,” May 26, 2010

Politics & Prosperity, “Demystifying Science,” October 5, 2011 (also see the long list of related posts at the bottom)

Politics & Prosperity, “The Science Is Settled,” May 25, 2014

Politics & Prosperity, “The Limits of Science, Illustrated by Scientists,” July 28, 2014

Steven E. Koonin, “Climate Science Is Not Settled,” September 19, 2014

Joel Achenbach, “No, Science’s Reproducibility Problem Is Not Limited to Psychology,” The Washington Post, August 28, 2015

William A. Wilson, “Scientific Regress,” First Things, May 2016

Jonah Goldberg, “Who Are the Real Deniers of Science?,” May 20, 2016

Steven Hayward, “The Crisis of Scientific Credibility,” Power Line, May 25, 2016

There’s a lot more here.

A Summing Up

I started blogging in the late 1990s with a home page that I dubbed Liberty Corner (reconstructed here). I maintained the home page until 2000. When the urge to resume blogging became irresistible in 2004, I created the Blogspot version of Liberty Corner, where I blogged until May 2008.

My weariness with “serious” blogging led to the creation of Americana, Etc., “A blog about baseball, history, humor, language, literature, movies, music, nature, nostalgia, philosophy, psychology, and other (mostly) apolitical subjects.” I began that blog in July 2008 and posted there sporadically until September 2013.

But I couldn’t resist commenting on political, economic, and social issues, so I established Politics & Prosperity in February 2009. My substantive outpourings ebbed and flowed until August 2015, when I hit a wall.

Now, almost two decades and more than 3,000 posts since my blogging debut, I am taking another rest from blogging — perhaps a permanent rest.

Instead of writing a valedictory essay, I chose what I consider to be the best of my blogging, and assigned each of my choices to one of fifteen broad topics. (Many of the selections belong under more than one heading, but I avoided repetition for the sake of brevity.) You may jump directly to any of the fifteen topics by clicking on one of these links:

Posts are listed in chronological order under each heading. If you are looking for a post on a particular subject, begin with the more recent posts and work your way backward in time, by moving up the list or using the “related posts” links that are included in most of my posts.

Your explorations may lead you to posts that no longer represent my views. This is especially the case with respect to John Stuart Mill’s “harm principle,” which figures prominently in my early dissertations on libertarianism, but which I have come to see as shallow and lacking in prescriptive power. Thus my belief that true libertarianism is traditional conservatism. (For more, see “On Liberty and Libertarianism” in the sidebar and many of the posts under “X. Libertarianism and Other Political Philosophies.”)

The following list of “bests” comprises about 700 entries, which is less than a fourth of my blogging output. I also commend to you my “Not-So-Random Thoughts” series — I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, and XVI — and “The Tenor of the Times.”

I. The Academy, Intellectuals, and the Left
Like a Fish in Water
Why So Few Free-Market Economists?
Academic Bias
Intellectuals and Capitalism
“Intellectuals and Society”: A Review
The Left’s Agenda
We, the Children of the Enlightenment
The Left and Its Delusions
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
The Culture War
Ruminations on the Left in America
The Euphemism Conquers All
Defending the Offensive


II. Affirmative Action, Race, and Immigration
Affirmative Action: A Modest Proposal
After the Bell Curve
A Footnote . . .
Schelling and Segregation
Illogic from the Pro-Immigration Camp
Affirmative Action: Two Views from the Academy, Revisited
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
Evolution and Race
“Wading” into Race, Culture, and IQ
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Nature, Nurture, and Inequality


III. Americana, Etc.: Movies, Music, Nature, Nostalgia, Sports, and Trivia
Speaking of Modern Art
Making Sense about Classical Music
An Addendum about Classical Music
My Views on Classical Music, Vindicated
But It’s Not Music
Mister Hockey
Testing for Steroids
Explaining a Team’s W-L Record
The American League’s Greatest Hitters
The American League’s Greatest Hitters: Part II
Conducting, Baseball, and Longevity
Who Shot JFK, and Why?
The Passing of Red Brick Schoolhouses and a Way of Life
Baseball: The King of Team Sports
May the Best Team Lose
All-Time Hitter-Friendly Ballparks (With Particular Attention to Tiger Stadium)
A Trip to the Movies
Another Trip to the Movies
The Hall of Fame Reconsidered
Facts about Presidents (a reference page)


IV. The Constitution and the Rule of Law
Unintended Irony from a Few Framers
Social Security Is Unconstitutional
What Is the Living Constitution?
The Legality of Teaching Intelligent Design
The Legality of Teaching Intelligent Design: Part II
Law, Liberty, and Abortion
An Answer to Judicial Supremacy?
Final (?) Words about Preemption and the Constitution
More Final (?) Words about Preemption and the Constitution
Who Are the Parties to the Constitutional Contract?
The Slippery Slope of Constitutional Revisionism
The Ruinous Despotism of Democracy
How to Think about Secession
A New, New Constitution
Secession Redux
A Declaration of Independence
First Principles
The Constitution: Original Meaning, Corruption, and Restoration
The Unconstitutionality of the Individual Mandate
Does the Power to Tax Give Congress Unlimited Power?
Does Congress Have the Power to Regulate Inactivity?
Substantive Due Process and the Limits of Privacy
The Southern Secession Reconsidered
Abortion and the Fourteenth Amendment
Obamacare: Neither Necessary nor Proper
Privacy Is Not Sacred
Our Perfect, Perfect Constitution
Reclaiming Liberty throughout the Land
Obamacare, Slopes, Ratchets, and the Death-Spiral of Liberty
Another Thought or Two about the Obamacare Decision
Secession for All Seasons
Restoring Constitutional Government: The Way Ahead
“We the People” and Big Government
How Libertarians Ought to Think about the Constitution
Abortion Rights and Gun Rights
The States and the Constitution
Getting “Equal Protection” Right
How to Protect Property Rights and Freedom of Association and Expression
The Principles of Actionable Harm
Judicial Supremacy: Judicial Tyranny
Does the Power to Tax Give Congress Unlimited Power? (II)
The Beginning of the End of Liberty in America
Substantive Due Process, Liberty of Contract, and States’ “Police Power”
U.S. Supreme Court: Lines of Succession (a reference page)


V. Economics: Principles and Issues
Economics: A Survey (a reference page that gives an organized tour of relevant posts, many of which are also listed below)
Fear of the Free Market — Part I
Fear of the Free Market — Part II
Fear of the Free Market — Part III
Trade Deficit Hysteria
Why We Deserve What We Earn
Who Decides Who’s Deserving?
The Main Causes of Prosperity
That Mythical, Magical Social Security Trust Fund
Social Security, Myth and Reality
Nonsense and Sense about Social Security
More about Social Security
Social Security Privatization and the Stock Market
Oh, That Mythical Trust Fund!
The Real Meaning of the National Debt
Socialist Calculation and the Turing Test
Social Security: The Permanent Solution
The Social Welfare Function
Libertarian Paternalism
A Libertarian Paternalist’s Dream World
Talk Is Cheap
Giving Back to the Community
The Short Answer to Libertarian Paternalism
Second-Guessing, Paternalism, Parentalism, and Choice
Another Thought about Libertarian Paternalism
Why Government Spending Is Inherently Inflationary
Ten Commandments of Economics
More Commandments of Economics
Capitalism, Liberty, and Christianity
Risk and Regulation
Back-Door Paternalism
Liberty, General Welfare, and the State
Another Voice Against the New Paternalism
Monopoly and the General Welfare
The Causes of Economic Growth
Slippery Paternalists
The Importance of Deficits
It’s the Spending, Stupid!
There’s More to Income than Money
Science, Axioms, and Economics
Mathematical Economics
The Last(?) Word about Income Inequality
Why “Net Neutrality” Is a Bad Idea
The Feds and “Libertarian Paternalism”
The Anti-Phillips Curve
Status, Spite, Envy, and Income Redistribution
Economics: The Dismal (Non) Science
A Further Note about “Libertarian” Paternalism
Apropos Paternalism
Where’s My Nobel?
Toward a Capital Theory of Value
The Laffer Curve, “Fiscal Responsibility,” and Economic Growth
Stability Isn’t Everything
Income and Diminishing Marginal Utility
What Happened to Personal Responsibility?
The Causes of Economic Growth
Economic Growth since WWII
A Short Course in Economics
Addendum to a Short Course in Economics
Monopoly: Private Is Better than Public
The “Big Five” and Economic Performance
Does the Minimum Wage Increase Unemployment?
Rationing and Health Care
The Perils of Nannyism: The Case of Obamacare
More about the Perils of Obamacare
Health-Care Reform: The Short of It
Toward a Risk-Free Economy
Enough of “Social Welfare”
A True Flat Tax
The Case of the Purblind Economist
How the Great Depression Ended
Why Outsourcing Is Good: A Simple Lesson for “Liberal” Yuppies
Microeconomics and Macroeconomics
The Illusion of Prosperity and Stability
The Deficit Commission’s Deficit of Understanding
“Buy Local”
“Net Neutrality”
The Bowles-Simpson Report
The Bowles-Simpson Band-Aid
Competition Shouldn’t Be a Dirty Word
Subjective Value: A Proof by Example
The Stagnation Thesis
Taxing the Rich
More about Taxing the Rich
Money, Credit, and Economic Fluctuations
A Keynesian Fantasy Land
“Tax Expenditures” Are Not Expenditures
The Keynesian Fallacy and Regime Uncertainty
Does “Pent Up” Demand Explain the Post-War Recovery?
Creative Destruction, Reification, and Social Welfare
What Free-Rider Problem?
Why the “Stimulus” Failed to Stimulate
The Arrogance of (Some) Economists
The “Jobs Speech” That Obama Should Have Given
Say’s Law, Government, and Unemployment
Regime Uncertainty and the Great Recession
Regulation as Wishful Thinking
Extreme Economism
We Owe It to Ourselves
In Defense of the 1%
Lay My (Regulatory) Burden Down
Irrational Rationality
The Burden of Government
Economic Growth Since World War II
The Rationing Fallacy
Government in Macroeconomic Perspective
Keynesianism: Upside-Down Economics in the Collectivist Cause
How High Should Taxes Be?
The 80-20 Rule, Illustrated
Economic Horror Stories: The Great “Demancipation” and Economic Stagnation
Baseball Statistics and the Consumer Price Index
Why Are Interest Rates So Low?
Vulgar Keynesianism and Capitalism
America’s Financial Crisis Is Now
“Ensuring America’s Freedom of Movement”: A Review
“Social Insurance” Isn’t Insurance — Nor Is Obamacare
The Keynesian Multiplier: Phony Math
The True Multiplier
Discounting in the Public Sector
Some Inconvenient Facts about Income Inequality
Mass (Economic) Hysteria: Income Inequality and Related Themes
Social Accounting: A Tool of Social Engineering
Playing the Social Security Trust Fund Shell Game
Income Inequality and Economic Growth
A Case for Redistribution, Not Made
McCloskey on Piketty
The Rahn Curve Revisited
The Slow-Motion Collapse of the Economy
Nature, Nurture, and Inequality
Understanding Investment Bubbles
The Real Burden of Government
Diminishing Marginal Utility and the Redistributive Urge
Capitalism, Competition, Prosperity, and Happiness
Further Thoughts about the Keynesian Multiplier


VI. Humor, Satire, and Wry Commentary
Political Parlance
Some Management Tips
Ten-Plus Commandments of Liberalism, er, Progressivism
To Pay or Not to Pay
The Ghost of Impeachments Past Presents “The Trials of William Jefferson Whatsit”
Getting It Perfect
His Life As a Victim
Bah, Humbug!
PC Madness
The Seven Faces of Blogging
Trans-Gendered Names
More Names
Stuff White (Liberal Yuppie) People Like
Driving and Politics
“Men’s Health”
I’ve Got a Little List
Driving and Politics (2)
A Sideways Glance at Military Strategy
A Sideways Glance at the Cabinet
A Sideways Glance at Politicians’ Memoirs
The Madness Continues


VII. Infamous Thinkers and Political Correctness
Sunstein at the Volokh Conspiracy
More from Sunstein
Cass Sunstein’s Truly Dangerous Mind
An (Imaginary) Interview with Cass Sunstein
Professor Krugman Flunks Economics
Peter Singer’s Fallacy
Slippery Sunstein
Sunstein and Executive Power
Nock Reconsidered
In Defense of Ann Coulter
Goodbye, Mr. Pitts
Our Miss Brooks
How to Combat Beauty-ism
The Politically Correct Cancer: Another Weapon in the War on Straight White Males
Asymmetrical (Ideological) Warfare
Social Justice
Peter Presumes to Preach
More Social Justice
Luck-Egalitarianism and Moral Luck
Empathy Is Overrated
In Defense of Wal-Mart
An Economist’s Special Pleading: Affirmative Action for the Ugly
Another Entry in the Sunstein Saga
Obesity and Statism (Richard Posner)
Obama’s Big Lie
The Sunstein Effect Is Alive and Well in the White House
Political Correctness vs. Civility
IQ, Political Correctness, and America’s Present Condition
Sorkin’s Left-Wing Propaganda Machine
Baseball or Soccer? David Brooks Misunderstands Life
Sunstein the Fatuous
Good Riddance
The Gaystapo at Work
The Gaystapo and Islam
The Perpetual Nudger


VIII. Intelligence and Psychology
Conservatism, Libertarianism, and “The Authoritarian Personality”
The F Scale, Revisited
The Psychologist Who Played God
Intelligence, Personality, Politics, and Happiness
Intelligence as a Dirty Word
Intelligence and Intuition
Nonsense about Presidents, IQ, and War
IQ, Political Correctness, and America’s Present Condition
Greed, Conscience, and Big Government
Privilege, Power, and Hypocrisy


IX. Justice
I’ll Never Understand the Insanity Defense
Does Capital Punishment Deter Homicide?
Libertarian Twaddle about the Death Penalty
A Crime Is a Crime
Crime and Punishment
Abortion and Crime
Saving the Innocent?
Saving the Innocent?: Part II
A Useful Precedent
More on Abortion and Crime
More Punishment Means Less Crime
More About Crime and Punishment
More Punishment Means Less Crime: A Footnote
Clear Thinking about the Death Penalty
Let the Punishment Fit the Crime
Cell Phones and Driving: Liberty vs. Life
Another Argument for the Death Penalty
Less Punishment Means More Crime
Crime, Explained
Clear Thinking about the Death Penalty
What Is Justice?
Myopic Moaning about the War on Drugs
Saving the Innocent
Why Stop at the Death Penalty?
A Case for Perpetual Copyrights and Patents
The Least Evil Option
Legislating Morality
Legislating Morality (II)
Round Up the Usual Suspects
Left-Libertarians, Obama, and the Zimmerman Case
Free Will, Crime, and Punishment
Stop, Frisk, and Save Lives
Poverty, Crime, and Big Government
Crime Revisited
A Cop-Free World?


X. Libertarianism and Other Political Philosophies
The Roots of Statism in the United States
Libertarian-Conservatives Are from the Earth, Liberals Are from the Moon
Modern Utilitarianism
The State of Nature
Libertarianism and Conservatism
Judeo-Christian Values and Liberty
Redefining Altruism
Fundamentalist Libertarians, Anarcho-Capitalists, and Self-Defense
Where Do You Draw the Line?
Moral Issues
A Paradox for Libertarians
A Non-Paradox for Libertarians
Religion and Liberty
Science, Evolution, Religion, and Liberty
Whose Incompetence Do You Trust?
Enough of Altruism
Thoughts That Liberals Should Be Thinking
More Thoughts That Liberals Should Be Thinking
The Corporation and the State
Libertarianism and Preemptive War: Part II
Anarchy: An Empty Concept
The Paradox of Libertarianism
Privacy: Variations on the Theme of Liberty
The Fatal Naïveté of Anarcho-Libertarianism
Liberty as a Social Construct
This Is Objectivism?
Social Norms and Liberty (a reference page)
Social Norms and Liberty (a followup post)
A Footnote about Liberty and the Social Compact
The Adolescent Rebellion Syndrome
Liberty and Federalism
Finding Liberty
Nock Reconsidered
The Harm Principle
Footnotes to “The Harm Principle”
The Harm Principle, Again
Rights and Cosmic Justice
Liberty, Human Nature, and the State
Idiotarian Libertarians and the Non-Aggression Principle
Slopes, Ratchets, and the Death Spiral of Liberty
Positive Rights and Cosmic Justice: Part I
Positive Rights and Cosmic Justice: Part II
The Case against Genetic Engineering
Positive Rights and Cosmic Justice: Part III
A Critique of Extreme Libertarianism
Libertarian Whining about Cell Phones and Driving
The Golden Rule, for Libertarians
Positive Rights and Cosmic Justice: Part IV
Anarchistic Balderdash
Compare and Contrast
Irrationality, Suboptimality, and Voting
Wrong, Wrong, Wrong
The Political Case for Traditional Morality
Compare and Contrast, Again
Pascal’s Wager, Morality, and the State
The Fear of Consequentialism
Optimality, Liberty, and the Golden Rule
The People’s Romance
Objectivism: Tautologies in Search of Reality
Morality and Consequentialism
On Liberty
Greed, Cosmic Justice, and Social Welfare
Positive Rights and Cosmic Justice
Fascism with a “Friendly” Face
Democracy and Liberty
The Interest-Group Paradox
Inventing “Liberalism”
Civil Society and Homosexual “Marriage”
What Is Conservatism?
Utilitarianism vs. Liberty
Fascism and the Future of America
The Indivisibility of Economic and Social Liberty
Law and Liberty
Negative Rights
Negative Rights, Social Norms, and the Constitution
Tocqueville’s Prescience
Accountants of the Soul
Invoking Hitler
The Unreality of Objectivism
“Natural Rights” and Consequentialism
Rawls Meets Bentham
The Left
Our Enemy, the State
Pseudo-Libertarian Sophistry vs. True Libertarianism
What Are “Natural Rights”?
The Golden Rule and the State
Libertarian Conservative or Conservative Libertarian?
Bounded Liberty: A Thought Experiment
Evolution, Human Nature, and “Natural Rights”
More Pseudo-Libertarianism
The Meaning of Liberty
Positive Liberty vs. Liberty
On Self-Ownership and Desert
Understanding Hayek
Corporations, Unions, and the State
Facets of Liberty
Burkean Libertarianism
Rights: Source, Applicability, How Held
What Is Libertarianism?
Nature Is Unfair
True Libertarianism, One More Time
Human Nature, Liberty, and Rationalism
Utilitarianism and Psychopathy
A Declaration and Defense of My Prejudices about Governance
Libertarianism and Morality
Libertarianism and Morality: A Footnote
What Is Bleeding-Heart Libertarianism?
Liberty, Negative Rights, and Bleeding Hearts
Cato, the Kochs, and a Fluke
Why Conservatism Works
A Man for No Seasons
Bleeding-Heart Libertarians = Left-Statists
Not Guilty of Libertarian Purism
Liberty and Society
Tolerance on the Left
The Eclipse of “Old America”
Genetic Kinship and Society
Liberty as a Social Construct: Moral Relativism?
Defending Liberty against (Pseudo) Libertarians
The Fallacy of the Reverse-Mussolini Fallacy
Defining Liberty
Getting It Almost Right
The Social Animal and the “Social Contract”
The Futile Search for “Natural Rights”
The Pseudo-Libertarian Temperament
Parsing Political Philosophy (II)
Modern Liberalism as Wishful Thinking
Getting Liberty Wrong
Romanticizing the State
Libertarianism and the State
Egoism and Altruism
My View of Libertarianism
Sober Reflections on “Charlie Hebdo”
“The Great Debate”: Not So Great
No Wonder Liberty Is Disappearing
The Principles of Actionable Harm
More About Social Norms and Liberty


XI. Politics, Politicians, and the Consequences of Government
Starving the Beast
Torture and Morality
Starving the Beast, Updated
Starving the Beast: Readings
Presidential Legacies
The Rational Voter?
FDR and Fascism
The “Southern Strategy”
An FDR Reader
The “Southern Strategy”: A Postscript
The Modern Presidency: A Tour of American History
Politicizing Economic Growth
The End of Slavery in the United States
I Want My Country Back
What Happened to the Permanent Democrat Majority?
More about the Permanent Democrat Majority
Undermining the Free Society
Government Failure: An Example
The Public-School Swindle
PolitiFact Whiffs on Social Security
The Destruction of Society in the Name of “Society”
About Democracy
Externalities and Statism
Taxes: Theft or Duty?
Society and the State
Don’t Use the “S” Word When the “F” Word Will Do
The Capitalist Paradox Meets the Interest-Group Paradox
Is Taxation Slavery?
A Contrarian View of Universal Suffrage
The Hidden Tragedy of the Assassination of Lincoln
America: Past, Present, and Future
IQ, Political Correctness, and America’s Present Condition
Progressive Taxation Is Alive and Well in the U.S. of A.
“Social Insurance” Isn’t Insurance — Nor Is Obamacare
“We the People” and Big Government
The Culture War
The Fall and Rise of American Empire
O Tempora O Mores!
Presidential Treason
A Home of One’s Own
The Criminality and Psychopathy of Statism
Surrender? Hell No!
Social Accounting: A Tool of Social Engineering
Playing the Social Security Trust Fund Shell Game
Two-Percent Tyranny
A Sideways Glance at Public “Education”
Greed, Conscience, and Big Government
The Many-Sided Curse of Very Old Age
The Slow-Motion Collapse of the Economy
How to Eradicate the Welfare State, and How Not to Do It
“Blue Wall” Hype
Does Obama Love America?
Obamanomics in Action
Democracy, Human Nature, and the Future of America
1963: The Year Zero


XII. Science, Religion, and Philosophy
Same Old Story, Same Old Song and Dance
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
Free Will: A Proof by Example?
Science in Politics, Politics in Science
Evolution and Religion
Science, Evolution, Religion, and Liberty
What’s Wrong with Game Theory
Is “Nothing” Possible?
Pseudo-Science in the Service of Political Correctness
Science’s Anti-Scientific Bent
Science, Axioms, and Economics
The Purpose-Driven Life
The Tenth Dimension
The Universe . . . Four Possibilities
Atheism, Religion, and Science Redux
“Warmism”: The Myth of Anthropogenic Global Warming
More Evidence against Anthropogenic Global Warming
Yet More Evidence against Anthropogenic Global Warming
Pascal’s Wager, Morality, and the State
Achilles and the Tortoise: A False Paradox
The Greatest Mystery
Modeling Is Not Science
Freedom of Will and Political Action
Fooled by Non-Randomness
Randomness Is Over-Rated
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
Beware the Rare Event
Landsburg Is Half-Right
What Is Truth?
The Improbability of Us
Wrong Again
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
Evolution and the Golden Rule
A Digression about Special Relativity
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
Temporal and Spatial Agreement
In Defense of Subjectivism
The Atheism of the Gaps
The Ideal as a False and Dangerous Standard
Demystifying Science
Religion on the Left
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
Scientism, Evolution, and the Meaning of Life
Luck and Baseball, One More Time
Are the Natural Numbers Supernatural?
The Candle Problem: Balderdash Masquerading as Science
Mysteries: Sacred and Profane
More about Luck and Baseball
Combinatorial Play
Something from Nothing?
Pseudoscience, “Moneyball,” and Luck
Something or Nothing
Understanding the Monty Hall Problem
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
The Fallacy of Human Progress
The Glory of the Human Mind
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
AGW: The Death Knell
Mind, Cosmos, and Consciousness
The Limits of Science (II)
Not Over the Hill
The Pretence of Knowledge
“The Science Is Settled”
The Compleat Monty Hall Problem
“Settled Science” and the Monty Hall Problem
Evolution, Culture, and “Diversity”
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?


XIII. Self-Ownership (abortion, euthanasia, marriage, and other aspects of the human condition)
Feminist Balderdash
Libertarianism, Marriage, and the True Meaning of Family Values
Law, Liberty, and Abortion
Privacy, Autonomy, and Responsibility
Parenting, Religion, Culture, and Liberty
The Case against Genetic Engineering
A “Person” or a “Life”?
A Wrong-Headed Take on Abortion
In Defense of Marriage
Crimes against Humanity
Abortion and Logic
The Myth That Same-Sex “Marriage” Causes No Harm
Abortion, Doublethink, and Left-Wing Blather
Abortion, “Gay Rights,” and Liberty
Dan Quayle Was (Almost) Right
The Most Disgusting Thing I’ve Read Today
Posner the Fatuous
Marriage: Privatize It and Revitalize It


XIV. War and Peace
Getting It Wrong: Civil Libertarians and the War on Terror (A Case Study)
Libertarian Nay-Saying on Foreign and Defense Policy, Revisited
Right On! For Libertarian Hawks Only
Understanding Libertarian Hawks
Defense, Anarcho-Capitalist Style
The Illogic of Knee-Jerk Civil Liberties Advocates
Getting It All Wrong about the Risk of Terrorism
Conservative Revisionism, Conservative Backlash, or Conservative Righteousness?
But Wouldn’t Warlords Take Over?
Sorting Out the Libertarian Hawks and Doves
Shall We All Hang Separately?
September 11: A Remembrance
September 11: A Postscript for “Peace Lovers”
Give Me Liberty or Give Me Non-Aggression?
NSA “Eavesdropping”: The Last Word (from Me)
Riots, Culture, and the Final Showdown
Thomas Woods and War
In Which I Reply to the Executive Editor of The New York Times
“Peace for Our Time”
Taking on Torture
Conspiracy Theorists’ Cousins
September 11: Five Years On
How to View Defense Spending
The Best Defense . . .
A Skewed Perspective on Terrorism
Not Enough Boots: The Why of It
Here We Go Again
“The War”: Final Grade
Torture, Revisited
Waterboarding, Torture, and Defense
Liberalism and Sovereignty
The Media, the Left, and War
Getting It Wrong and Right about Iran
The McNamara Legacy: A Personal Perspective
The “Predator War” and Self-Defense
The National Psyche and Foreign Wars
A Moralist’s Moral Blindness
A Grand Strategy for the United States
The Folly of Pacifism
Rating America’s Wars
Transnationalism and National Defense
The Next 9/11?
The Folly of Pacifism, Again
September 20, 2001: Hillary Clinton Signals the End of “Unity”
Patience as a Tool of Strategy
The War on Terror, As It Should Have Been Fought
The Cuban Missile Crisis, Revisited
Preemptive War
Preemptive War and Iran
Some Thoughts and Questions about Preemptive War
Defense as an Investment in Liberty and Prosperity
Riots, Culture, and the Final Showdown (revisited)
The Barbarians Within and the State of the Union
The World Turned Upside Down
Utilitarianism and Torture
Defense Spending: One More Time
Walking the Tightrope Reluctantly
The President’s Power to Kill Enemy Combatants


XV. Writing and Language
“Hopefully” Arrives
Hopefully, This Post Will Be Widely Read
Why Prescriptivism?
A Guide to the Pronunciation of General American English
On Writing (a comprehensive essay about writing, which covers some of the material presented in other posts in this section)


The Hall of Fame Reconsidered

Several years ago I wrote some posts (e.g., here and here) about the criteria for membership in baseball’s Hall of Fame, and named some players who should and shouldn’t be in the Hall. A few days ago I published an updated version of my picks. I’ve since deleted that post because, on reflection, I find my criteria too narrow. I offer instead:

  • broad standards of accomplishment that sweep up most members of the Hall who have been elected as players
  • ranked lists of players who qualify for consideration as Hall of Famers, based on those standards.

These are the broad standards of accomplishment for batters:

  • at least 8,000 plate appearances (PA) — a number large enough to indicate that a player was good enough to have attained a long career in the majors, and
  • a batting average of at least .250 — a low cutoff point that allows the consideration of mediocre hitters who might have other outstanding attributes (e.g., base-stealing, fielding).

I rank retired batters who meet those criteria by career wins above average (WAA) per career PA. WAA for a season is a measure of a player’s total offensive and defensive contribution, relative to other players in the same season. (WAA therefore normalizes cross-temporal differences in batting averages, the frequency of home runs, the emphasis on base-stealing, and the quality of fielders’ gloves, for example.) Because career WAA is partly a measure of longevity rather than skill, I divide by career PA to arrive at a normalized measure of average performance over the span of a player’s career.
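The ranking computation is simple enough to sketch in a few lines. This is a minimal illustration, not the actual computation: the career figures below are invented (the real WAA and PA numbers come from the Play Index), and the point is only to show the WAA/PA normalization and the scaling of the top ratio to 100.

```python
# Minimal sketch of the batter-ranking method: divide career WAA by
# career PA, then scale the ratios so the best value equals 100.
# The career figures below are invented for illustration.

def rank_batters(batters):
    """batters: list of (name, career_waa, career_pa) tuples.
    Returns (name, index) pairs sorted best to worst, with the top
    batter's WAA/PA ratio indexed to 100."""
    ratios = {name: waa / pa for name, waa, pa in batters}
    top = max(ratios.values())
    return sorted(
        ((name, 100 * r / top) for name, r in ratios.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

sample = [
    ("Ruth", 126.0, 10626),      # hypothetical figures
    ("Sewell", 24.0, 8333),      # hypothetical figures
    ("Mazeroski", -2.0, 8379),   # hypothetical figures
]
ranked = rank_batters(sample)
```

Note that a long career at a modest WAA rate and a short career at a high rate can produce the same career WAA; dividing by PA is what separates the two.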

These are the broad standards of accomplishment for pitchers:

  • at least 3,000 innings pitched, or
  • appearances in at least 1,000 games (to accommodate short-inning relievers with long careers).

I rank retired pitchers who meet these criteria by career ERA+. This is an adjusted earned run average (ERA) that accounts for differences in ballparks and cross-temporal differences in pitching conditions (the resilience of the baseball, batters’ skill, field conditions, etc.). Some points to bear in mind:

  • My criteria are broad but nevertheless slanted toward players who enjoyed long careers. Some present Hall of Famers with short careers are excluded (e.g., Ralph Kiner, Sandy Koufax). However great their careers might have been, they didn’t prove themselves over the long haul, so I’m disinclined to include them in my Hall of Fame.
  • I drew on the Play Index for the statistics on which the lists are based. The Play Index doesn’t cover years before 1900. That doesn’t bother me because the “modern game” really began in the early 1900s (see here, here, and here). The high batting averages and numbers of games won in the late 1800s can’t be compared with performances in the 20th and 21st centuries.
  • Similarly, players whose careers were spent mainly or entirely in the Negro Leagues are excluded because their accomplishments — however great — can’t be calibrated with the accomplishments of players in the major leagues.
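For concreteness, the idea behind ERA+ can be sketched as follows. The park adjustment shown here is a simplification of what Baseball-Reference actually computes, and the figures are hypothetical; the point is only that 100 represents a league-average pitcher and that higher is better.

```python
# Rough sketch of ERA+: a pitcher's ERA compared with the league's,
# with the league baseline scaled by a ballpark factor.  (The actual
# Baseball-Reference adjustment is more involved; this is the gist.)

def era_plus(player_era, league_era, park_factor=1.0):
    """Return an ERA+-style index: 100 is league average, higher is
    better.  park_factor > 1 denotes a hitter-friendly park, which
    raises the baseline a pitcher is measured against."""
    return 100 * (league_era * park_factor) / player_era

# A 3.00 ERA in a league averaging 4.50, in a neutral park:
result = era_plus(3.00, 4.50)  # 150.0, i.e., half again better than average
```

Because the league baseline moves with the era, a 3.00 ERA in a high-offense league earns a higher ERA+ than the same 3.00 ERA in a dead-ball league.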

In the following lists of rankings, each eligible player is assigned an ordinal rank, which is based on the adjacent index number. For batters, the index number represents career WAA/PA, where the highest value (Babe Ruth’s) is equal to 100. For pitchers, the index number represents career ERA+, where the highest value (Mariano Rivera’s) is equal to 100. The lists are coded as follows:

  • Blue — elected to the Hall of Fame. (N.B. Joe Torre is a member of the Hall of Fame, but he was elected as a manager, not as a player.)
  • Red — retired more than 5 seasons ago but not yet elected.
  • Bold (with asterisk) — retired fewer than 5 seasons ago.

Now, at last, the lists (commentary follows):

[Table: Hall of Fame candidates (batters)]

If Bill Mazeroski is in the Hall of Fame, why not everyone who outranks him? (Barry Bonds, Sammy Sosa, and some others excepted, of course. Note that Mark McGwire didn’t make the list; he had only 7,660 PA.) There are plenty of players with more impressive credentials than Mazeroski, whose main claim to fame is a World-Series-winning home run in 1960. Mazeroski is reputed to have been an excellent second baseman, but WAA accounts for fielding prowess — and other things. Maz’s excellence as a fielder still leaves him at number 194 on my list of 234 eligible batters.

Here’s the list of eligible pitchers:

[Table: Hall of Fame candidates (pitchers)]

If Rube Marquard — 111th-ranked of 122 eligible pitchers — is worthy of the Hall, why not all of those pitchers who outrank him? (Roger Clemens excepted, of course.) Where would I draw the line? My Hall of Fame would include the first 100 on the list of batters and the first 33 on the list of pitchers (abusers of PEDs excepted) — and never more than 100 batters and 33 pitchers. Open-ended membership means low standards. I’ll have none of it.

As of today, the top-100 batters would include everyone from Babe Ruth through Joe Sewell (number 103 on the list in the first table). I exclude Barry Bonds (number 3), Manny Ramirez (number 61), and Sammy Sosa (number 99). The top-33 pitchers would include everyone from Mariano Rivera through Eddie Plank (number 34 on the list in the second table). I exclude Roger Clemens (number 5).

My purge would eliminate 109 of the players who are now official members of the Hall of Fame, and many more players who are likely to be elected. The following tables list the current members whom I would purge (blue), and the current non-members (red and bold) who would miss the cut:

[Table: Hall of Fame batters not in top 100]

[Table: Hall of Fame pitchers not in top 33]

Sic transit gloria mundi.


Baseball Trivia for the 4th of July

It was a “fact” — back in the 1950s when I became a serious fan of baseball — that the team that led its league on the 4th of July usually won the league championship. (That was in the days before divisional play made it possible for less-than-best teams to win league championships and World Series.)

How true was the truism? I consulted the Play Index to find out. Here’s a season-by-season list of teams that had the best record on the 4th of July and at season’s end:

[Table: Teams with the best record on the 4th of July and at the end of the season]

It’s obvious that the team with the best record on the 4th of July hasn’t “usually” had the best record at the end of the season — if “usually” means “almost all of the time.” In fact, for 1901-1950, the truism was true only 64 percent of the time in the American League and 60 percent of the time in the National League. The numbers for 1901-2014: American League, 60 percent; National League, 55 percent.

There are, however, two eras in which the team with the best record on the 4th of July “usually” had the best record at season’s end — where “usually” is defined by a statistical test.* Applying that test, I found that

  • from 1901 through 1928 the best National League team on the 4th of July usually led the league at the end of the season (i.e., 75 percent of the time); and
  • from 1923 through 1958 the best American League team on the 4th of July usually led the league at the end of the season (i.e., 83 percent of the time).

I was a fan of the Detroit Tigers in the 1950s, and therefore more interested in the American League than the National League. So, when I became a fan it was true (of the American League) that the best team on the 4th of July usually led the league at the end of the season.

It’s no longer true. And even if it has happened 55 to 60 percent of the time in the past 114 years, don’t bet your shirt that it will happen in a particular season.

*     *     *

Related post: May the Best Team Lose

* The event E occurs when a team has the league’s best record on the 4th of July and at the end of the season. E “usually” occurs during a defined set of years if its frequency of occurrence during those years is significantly different from its frequency of occurrence in other years. “Significantly,” in this case, means that a t-test yields a probability of less than 0.01 that the difference in frequencies occurred by chance.
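The footnoted test can be sketched as follows. This is a minimal illustration, not the actual computation behind the post: it runs Welch’s two-sample t-test on 0/1 season indicators (1 if event E occurred that season), substitutes a large-sample normal approximation for the t distribution, and the season counts are hypothetical.

```python
import math

# Sketch of the footnoted test: compare the frequency of event E inside
# a candidate era with its frequency in all other seasons, via Welch's
# two-sample t-test on 0/1 indicators.  A normal approximation to the
# two-sided p-value is adequate for samples of this size.

def era_vs_rest_ttest(flags_era, flags_rest):
    """Each argument is a list of 0/1 flags, one per season
    (1 = the 4th-of-July leader also finished first)."""
    n1, n2 = len(flags_era), len(flags_rest)
    m1, m2 = sum(flags_era) / n1, sum(flags_rest) / n2
    v1 = sum((x - m1) ** 2 for x in flags_era) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in flags_rest) / (n2 - 1)
    t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
    p = math.erfc(abs(t) / math.sqrt(2))  # two-sided, large-sample
    return t, p

# Hypothetical counts: E in 30 of 36 era seasons vs. 44 of 78 others.
t_stat, p_value = era_vs_rest_ttest([1] * 30 + [0] * 6,
                                    [1] * 44 + [0] * 34)
```

With these invented counts (83 percent in the era versus 56 percent outside it) the difference clears the 0.01 threshold; with rates closer together, it would not.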