Baseball Roundup: Pennant Droughts, Post-Season Play, and Seven-Game World Series

The occasion for this post is the end of the 2019 World Series, which was unique in one way: It is the only Series in which the road team won every game. I begin with a discussion of pennant droughts — the number of years that the 30 teams in MLB have gone without winning a league championship or a World Series. Next is a dissection of post-season play, which has devolved into something like a game of chance rather than a contest between the best teams of each league. I close with a recounting and analysis of the classic World Series — the 38 that have gone to seven games.

PENNANT DROUGHTS

Everyone in the universe knows that when the Chicago Cubs won the National League championship in 2016, that feat ended what had been the longest pennant drought of the 16 old-line franchises in the National and American Leagues. The mini-bears had gone 71 years since winning the NL championship in 1945. The Cubs went on to win the 2016 World Series; their previous win had occurred 108 years earlier, in 1908.

Here are the most recent league championships and World Series wins by the other old-line National League teams: Atlanta (formerly Boston and Milwaukee) Braves — 1999, 1995; Cincinnati Reds — 1990, 1990; Los Angeles (formerly Brooklyn) Dodgers — 2018, 1988; Philadelphia Phillies — 2009, 2008; Pittsburgh Pirates — 1979, 1979; San Francisco (formerly New York) Giants — 2014, 2014; and St. Louis Cardinals — 2013, 2011.

The American League lineup looks like this: Baltimore Orioles (formerly Milwaukee Brewers and St. Louis Browns) — 1983, 1983; Boston Red Sox — 2018, 2018; Chicago White Sox — 2005, 2005; Cleveland Indians — 2016 (previously 1997), 1948; Detroit Tigers — 2012, 1984; Minnesota Twins (formerly Washington Senators) — 1991, 1991; New York Yankees — 2009, 2009; and Oakland (formerly Philadelphia and Kansas City) Athletics — 1990, 1989.

What about the expansion franchises, of which there are 14? I won’t separate them by league because two of them (Milwaukee and Houston) have switched leagues since their inception. Here they are, in this format: Team (year of creation) — year of last league championship, year of last WS victory:

Arizona Diamondbacks (1998) — 2001, 2001

Colorado Rockies (1993) — 2007, never

Houston Astros (1962) — 2019, 2017

Kansas City Royals (1969) — 2015, 2015

Los Angeles Angels (1961) — 2002, 2002

Miami Marlins (1993) — 2003, 2003

Milwaukee Brewers (1969, as Seattle Pilots) — 1982, never

New York Mets (1962) — 2015, 1986

San Diego Padres (1969) — 1998, never

Seattle Mariners (1977) — never, never

Tampa Bay Rays (1998) — 2008, never

Texas Rangers (1961, as expansion Washington Senators) — 2011, never

Toronto Blue Jays (1977) — 1993, 1993

Washington Nationals (1969, as Montreal Expos) — 2019, 2019

POST-SEASON PLAY — OR, MAY THE BEST TEAM LOSE

The first 65 World Series (1903 and 1905-1968) were contests between the best teams in the National and American Leagues. The winner of a season-ending Series was therefore widely regarded as the best team in baseball for that season (except by the fans of the losing team and other soreheads).

The advent of divisional play in 1969 meant that the Series could include a team that wasn’t the best in its league. From 1969 through 1993, when participation in the Series was decided by a single postseason playoff between division winners (1981 excepted), the leagues’ best teams met in only 10 of 24 series.

The advent of three-tiered postseason play in 1995 and four-tiered postseason play in 2012 has only made matters worse.

By the numbers:

  • Postseason play originally consisted of a World Series (period) involving 1/8 of major-league teams — the best in each league. Postseason play now involves 1/3 of major-league teams and 7 postseason playoffs (3 in each league plus the inter-league World Series).
  • Only 4 of the 25 Series from 1995 through 2019 featured the best teams of both leagues, as measured by W-L record.
  • Of the 25 Series from 1995 through 2019, only 9 were won by the best team in a league.
  • Of the same 25 Series, 12 (48 percent) were won by the better of the two teams, as measured by W-L record. Of the 65 Series played before 1969, 35 were won by the team with the better W-L record and 2 involved teams with the same W-L record. So before 1969 the team with the better W-L record won 35/63 of the time, for an overall average of 56 percent. That’s not significantly different from the result for the 25 Series played in 1995-2019, but the teams in the earlier era were always their league’s best, which is no longer true.
  • From 1995 through 2019, a league’s best team (based on W-L record) appeared in a Series only 18 of 50 possible times — 7 times for the NL, 11 times for the AL. A random draw among teams qualifying for post-season play would have resulted in the selection of each league’s best team about 9 times.
  • Division winners opposed each other in just over half (13/25) of the Series from 1995 through 2019.
  • Wild-card teams appeared in 11 of those Series, with all-wild-card Series in 2002 and 2014.
  • Wild-card teams occupied just over 1/4 of the slots in the 1995-2019 Series — 13 out of 50.

The winner of the World Series used to be its league’s best team over the course of the entire season, and the winner had to beat the best team in the other league. Now, the winner of the World Series usually can claim nothing more than having won the most postseason games. Why not eliminate the 162-game regular season, select the postseason contestants at random, and go straight to postseason play?

Here are the World Series pairings for 1995-2019 (National League teams listed first; + indicates winner of World Series):

1995 –
Atlanta Braves (division winner; .625 W-L, best record in NL)+
Cleveland Indians (division winner; .694 W-L, best record in AL)

1996 –
Atlanta Braves (division winner; .593, best in NL)
New York Yankees (division winner; .568, 2nd-best in AL)+

1997 –
Florida Marlins (wild-card team; .568, 2nd-best in NL)+
Cleveland Indians (division winner; .534, 4th-best in AL)

1998 –
San Diego Padres (division winner; .605, 3rd-best in NL)
New York Yankees (division winner; .704, best in AL)+

1999 –
Atlanta Braves (division winner; .636, best in NL)
New York Yankees (division winner; .605, best in AL)+

2000 –
New York Mets (wild-card team; .580, 4th-best in NL)
New York Yankees (division winner; .540, 5th-best in AL)+

2001 –
Arizona Diamondbacks (division winner; .568, 4th-best in NL)+
New York Yankees (division winner; .594, 3rd-best in AL)

2002 –
San Francisco Giants (wild-card team; .590, 4th-best in NL)
Anaheim Angels (wild-card team; .611, 3rd-best in AL)+

2003 –
Florida Marlins (wild-card team; .562, 3rd-best in NL)+
New York Yankees (division winner; .623, best in AL)

2004 –
St. Louis Cardinals (division winner; .648, best in NL)
Boston Red Sox (wild-card team; .605, 2nd-best in AL)+

2005 –
Houston Astros (wild-card team; .549, 3rd-best in NL)
Chicago White Sox (division winner; .611, best in AL)+

2006 –
St. Louis Cardinals (division winner; .516, 5th-best in NL)+
Detroit Tigers (wild-card team; .586, 3rd-best in AL)

2007 –
Colorado Rockies (wild-card team; .552, 2nd-best in NL)
Boston Red Sox (division winner; .593, tied for best in AL)+

2008 –
Philadelphia Phillies (division winner; .568, 2nd-best in NL)+
Tampa Bay Rays (division winner; .599, 2nd-best in AL)

2009 –
Philadelphia Phillies (division winner; .574, 2nd-best in NL)
New York Yankees (division winner; .636, best in AL)+

2010 —
San Francisco Giants (division winner; .568, 2nd-best in NL)+
Texas Rangers (division winner; .556, 4th-best in AL)

2011 —
St. Louis Cardinals (wild-card team; .556, 4th-best in NL)+
Texas Rangers (division winner; .593, 2nd-best in AL)

2012 —
San Francisco Giants (division winner; .580, 3rd-best in NL)+
Detroit Tigers (division winner; .543, 7th-best in AL)

2013 —
St. Louis Cardinals (division winner; .599, best in NL)
Boston Red Sox (division winner; .599, best in AL)+

2014 —
San Francisco Giants (wild-card team; .543, 4th-best in NL)+
Kansas City Royals (wild-card team; .549, 4th-best in AL)

2015 —
New York Mets (division winner; .556, 5th-best in NL)
Kansas City Royals (division winner; .586, best in AL)+

2016 —
Chicago Cubs (division winner; .640, best in NL)+
Cleveland Indians (division winner; .584, 2nd-best in AL)

2017 —
Los Angeles Dodgers (division winner; .642, best in NL)
Houston Astros (division winner; .623, best in AL)+

2018 —
Los Angeles Dodgers (division winner; .564, 3rd-best in NL)
Boston Red Sox (division winner; .667, best in AL)+

2019 —
Washington Nationals (wild-card team; .574, 3rd-best in NL)+
Houston Astros (division winner; .660, best in AL)

THE SEVEN-GAME WORLD SERIES

The seven-game World Series holds the promise of high drama. That promise is fulfilled if the Series stretches to a seventh game and that game goes down to the wire. Courtesy of Baseball-Reference.com, here are the scores of the deciding games of every seven-game Series:

1909 – Pittsburgh (NL) 8 – Detroit (AL) 0

1912 – Boston (AL) 3 – New York (NL) 2 (10 innings)

1924 – Washington (AL) 4 – New York (NL) 3 (12 innings)

1925 – Pittsburgh (NL) 9 – Washington (AL) 7

1926 – St. Louis (NL) 3 – New York (AL) 2

1931 – St. Louis (NL) 4 – Philadelphia (AL) 2

1934 – St. Louis (NL) 11 – Detroit (AL) 0

1940 – Cincinnati (NL) 2 – Detroit (AL) 1

1945 – Detroit (AL) 9 – Chicago (NL) 3

1946 – St. Louis (NL) 4 – Boston (AL) 3

1947 – New York (AL) 5 – Brooklyn (NL) 2

1955 – Brooklyn (NL) 2 – New York (AL) 0

1956 – New York (AL) 9 – Brooklyn (NL) 0

1957 – Milwaukee (NL) 5 – New York (AL) 0

1958 – New York (AL) 6 – Milwaukee (NL) 2

1960 – Pittsburgh (NL) 10 – New York (AL) 9 (decided by Bill Mazeroski’s home run in the bottom of the 9th)

1964 – St. Louis (NL) 7 – New York (AL) 5

1965 – Los Angeles (NL) 2 – Minnesota (AL) 0

1967 – St. Louis (NL) 7 – Boston (AL) 2

1968 – Detroit (AL) 4 – St. Louis (NL) 1

1971 – Pittsburgh (NL) 2 – Baltimore (AL) 1

1972 – Oakland (AL) 3 – Cincinnati (NL) 2

1973 – Oakland (AL) 5 – New York (NL) 2

1975 – Cincinnati (NL) 4 – Boston (AL) 3

1979 – Pittsburgh (NL) 4 – Baltimore (AL) 1

1982 – St. Louis (NL) 6 – Milwaukee (AL) 3

1985 – Kansas City (AL) 11 – St. Louis (NL) 0

1986 – New York (NL) 8 – Boston (AL) 5

1987 – Minnesota (AL) 4 – St. Louis (NL) 2

1991 – Minnesota (AL) 1 – Atlanta (NL) 0 (10 innings)

1997 – Florida (NL) 3 – Cleveland (AL) 2 (11 innings)

2001 – Arizona (NL) 3 – New York (AL) 2 (decided in the bottom of the 9th)

2002 – Anaheim (AL) 4 – San Francisco (NL) 1

2011 – St. Louis (NL) 6 – Texas (AL) 2

2014 – San Francisco (NL) 3 – Kansas City (AL) 2

2016 – Chicago (NL) 8 – Cleveland (AL) 7 (10 innings)

2017 – Houston (AL) 5 – Los Angeles (NL) 1

2019 – Washington (NL) 6 – Houston (AL) 2

Summary statistics:

34 percent (38) of the 111 best-of-seven Series have gone to the limit of seven games (another four Series were played in a best-of-nine format, but none went to nine games).

20 of the 38 Series were decided by 1 or 2 runs.

14 of those Series were decided by 1 run (7 times in extra innings or the winning team’s last at-bat).

20 of the 38 Series were won by the team that was behind after five games.

6 of the 38 Series were won by the team that was behind after four games.

There were 4 consecutive seven-game Series in 1955-58, all involving the New York Yankees (by the list above, 9 of the Yankees’ 40 Series appearances went to seven games).

Does the World Series deliver high drama? If a seven-game Series is high drama, the World Series has delivered about 1/3 of the time. If high drama means a seven-game Series in which the final game was decided by 1 run, the World Series has delivered about 1/8 of the time. If high drama means a seven-game Series in which the final game was decided by only 1 run in extra innings or the winning team’s final at-bat, the World Series has delivered only about 1/16 of the time.

The rest of the time the World Series is merely an excuse to fill seats and sell advertising, inasmuch as it’s seldom a contest between the best team in each league.

Competitiveness in Major-League Baseball

Only 30 days and fewer than 30 games per team remain in major-league baseball’s regular season. There are all-but-certain winners in three of six divisions: the New York Yankees, American League (AL) East; Houston Astros, AL West; and Los Angeles Dodgers, National League (NL) West.

The Boston Red Sox, last year’s AL East and World Series champions, probably won’t make it to the AL wild-card playoff game. The Milwaukee Brewers, last year’s NL Central champs, are in the same boat. The doormat of AL Central, the Detroit Tigers, are handily winning the race to the bottom, with this year’s worst record in the major leagues.

Anecdotes, however, won’t settle the question whether major-league baseball is becoming more or less competitive. Numbers won’t settle the question, either, but they might shed some light on the matter. Consider this graph, which I will explain and discuss below:


Based on statistics for the National League and American League compiled at Baseball-Reference.com.

Though the NL began play in 1876, I have analyzed its record from 1901 through 2018, for parallelism with the AL, which began play in 1901. The rough similarity of the two time series lends weight to the analysis that I will offer shortly.

First, what do the numbers mean? The deviation between a team’s won-lost (W-L) record and the average for the league is simply

Dt = Rt – Rl, where Rt is the team’s record and Rl is the league’s record in a given season.

If the team’s record is .600 and the league’s record is .500 (as it always was until the onset of interleague play in 1997), then Dt = .100. And if a team’s record is .400 and the league’s record is .500, then Dt = -.100. Given that wins and losses cancel each other, the mean deviation for all teams in a league would be zero, or very near zero, which wouldn’t tell us much about the spread around the league average. So I use the absolute values of Dt and average them. In the case of teams with deviations of .100 and -.100, the absolute values of the deviations would be .100 and .100, yielding a mean of .100. In a more closely contested season, the deviations for the two teams might be .050 and -.050, yielding a mean absolute deviation of .050.

The smaller the mean absolute deviation, the more competitive the league in that season. Season-by-season plots of the means are rather jagged, obscuring long-term trends. I therefore used centered five-year averages of mean absolute deviations.
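For concreteness, here is a minimal Python sketch of that computation; the team records in it are made up for illustration, not taken from the actual Baseball-Reference data:

```python
# Sketch: mean absolute deviation of team W-L records from the league average,
# then centered five-year averages to smooth the season-by-season series.
# The team records below are made up for illustration.

def mean_abs_deviation(records):
    """Average of the absolute deviations of team W-L records from the league average."""
    league_avg = sum(records) / len(records)
    return sum(abs(r - league_avg) for r in records) / len(records)

def centered_five_year_avg(series):
    """Centered five-year moving average; the first and last two seasons are dropped."""
    years = sorted(series)
    return {years[i]: sum(series[y] for y in years[i - 2:i + 3]) / 5
            for i in range(2, len(years) - 2)}

# Hypothetical eight-team league, two seasons.
league = {
    1930: [0.602, 0.565, 0.558, 0.532, 0.481, 0.460, 0.435, 0.367],
    1931: [0.656, 0.614, 0.545, 0.513, 0.487, 0.455, 0.409, 0.321],
}
mad_by_season = {year: mean_abs_deviation(recs) for year, recs in league.items()}
print(mad_by_season)  # smaller values indicate a more competitive league
# With a full 1901-2018 series, centered_five_year_avg(mad_by_season) would smooth the plot.
```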

Both leagues generally became more competitive from the early 1900s until around 1970. Since then, the AL has experienced two less-competitive periods: the late 1970s (when the New York Yankees re-emerged as a dominant team), and the early 2000s (when the Yankees were enjoying another era of dominance that began in the late 1990s). (The Yankees’ earlier periods of dominance show up as local peaks in the black line centered at 1930 and the early 1950s.)

The NL line highlights the dominance of the Chicago Cubs in the early 1900s and the recent dominance of the Chicago Cubs, Los Angeles Dodgers, and St. Louis Cardinals.

What explains the long-term movement toward greater competitiveness in both leagues? Here’s my hypothesis:

Integration, which began in the late 1940s, eventually expanded the pool of baseball talent by opening the door not only to American blacks but also to black and white Hispanics from Latin America. (There were a few non-black Hispanic players in major-league ball before integration, but they were notable freckles on the game’s pale complexion.) Integration of blacks and Latins continued for decades after the last major-league team was nominally integrated in the late 1950s.

Meanwhile, the minor leagues were dwindling — from highs of 59 leagues and 448 teams in 1949 to 15 leagues and 176 teams in 2017. Players who might otherwise have stayed in the minor leagues have been promoted to the major leagues more often than in the past.

That tendency was magnified by expansion of the major leagues. The AL expanded first, in 1961 (2 teams), and the NL followed suit in 1962 (2 teams). Further expansion in 1969 (2 teams in each league), 1977 (2 teams in the AL), 1993 (2 teams in the NL), and 1998 (1 team in each league) brought the number of major-league teams to 30.

While there are now 88 percent more major-league teams than there were in 1949, there are far fewer teams in major-league and minor-league ball, combined. Meanwhile, the population of the United States has more than doubled, and that source of talent has been augmented significantly by the recruitment of players from Latin America.

Further, free agency, which began in the mid-1970s, allowed weaker teams to attract high-quality players by offering them more money than stronger teams found it wise to offer, given roster limitations. Each team may carry only 25 players on its active roster until the final month of the season. Therefore, no matter how much money a team’s owner has, the limit on the size of his team’s roster constrains his ability to sign the best players available for every position. So the richer pool of talent is spread more evenly across teams.

What’s Wrong With Baseball?

A short list:

Long commercial breaks.

Too many players on a team, especially pitchers.

Too many pitching changes.

Small playing fields.

Butterfly-net gloves.

Loud music (sic) and exploding scoreboards.

Too many young kids in the stands.

Roaming concessionaires clogging aisles and blocking views.

Hairy players.

Interviews with players (especially hairy ones).

Interviews with managers and coaches.

More than one announcer in the booth.

Night games.

Small strike zones.

Competitiveness in Major-League Baseball (III)

UPDATED 10/04/17

I first looked at this 10 years ago. I took a second look 3 years ago. This is an updated version of the 3-year-old post, which draws on the 10-year-old post.

Yesterday marked the final regular-season games of the 2014 season of major league baseball (MLB). In observance of that event, I’m shifting from politics to competitiveness in MLB. What follows is merely trivia and speculation. If you like baseball, you might enjoy it. If you don’t like baseball, I hope that you don’t think there’s a better team sport. There isn’t one.

Here’s how I compute competitiveness for each league and each season:

INDEX OF COMPETITIVENESS = AVEDEV/AVERAGE; where

AVEDEV = the average of the absolute value of deviations from the average number of games won by a league’s teams in a given season, and

AVERAGE =  the average number of games won by a league’s teams in a given season.

For example, if the average number of wins is 81, and the average of the absolute value of deviations from 81 is 8, the index of competitiveness is 0.1 (rounded to the nearest 0.1). If the average number of wins is 81 and the average of the absolute value of deviations from 81 is 16, the index of competitiveness is 0.2.  The lower the number, the more competitive the league.
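Here is a minimal Python sketch of the index, using a made-up set of season win totals rather than actual league data:

```python
# Sketch: INDEX OF COMPETITIVENESS = AVEDEV / AVERAGE, computed from one league's
# win totals for a single season. The win totals below are made up for illustration.

def index_of_competitiveness(wins):
    average = sum(wins) / len(wins)                            # AVERAGE
    avedev = sum(abs(w - average) for w in wins) / len(wins)   # AVEDEV
    return avedev / average

# Hypothetical 15-team league whose average team wins 81 games.
wins = [100, 95, 92, 90, 88, 85, 83, 81, 79, 77, 74, 72, 70, 68, 61]
print(round(index_of_competitiveness(wins), 2))  # 0.11 here; lower = more competitive
```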

With some smoothing, here’s how the numbers look over the long haul:


Based on statistics for the National League and American League compiled at Baseball-Reference.com.

The National League grew steadily more competitive from 1940 to 1987, and has slipped only a bit since then. The American League’s sharp climb began in 1951, peaked in 1989, slipped until 2006, and has since risen to the NL’s level. In any event, there’s no doubt that both leagues are — and in recent decades have been — more competitive than they were in the early to middle decades of the 20th century. Why?

My hypothesis: integration compounded by expansion, with an admixture of free agency and limits on the size of rosters.

Let’s start with integration. The rising competitiveness of the NL after 1940 might have been a temporary thing, but it continued when NL teams (led by the Brooklyn Dodgers) began to integrate by adding Jackie Robinson in 1947. The Cleveland Indians of the AL followed suit, by adding Larry Doby later in the same season. By the late 1950s, all major league teams (then 16) had integrated, though the NL seems to have integrated faster. The more rapid integration of the NL could explain its earlier ascent to competitiveness. Integration was followed in short order by expansion: The AL began to expand in 1961 and the NL began to expand in 1962.

How did expansion and integration combine to make the leagues more competitive? Several years ago, I opined:

[G]iven the additional competition for talent [following] expansion, teams [became] more willing to recruit players from among the black and Hispanic populations of the U.S. and Latin America. That is to say, teams [came] to draw more heavily on sources of talent that they had (to a large extent) neglected before expansion.

Further, free agency, which began in the mid-1970s,

made baseball more competitive by enabling less successful teams to attract high-quality players by offering them more money than other, more successful, teams. Money can, in some (many?) cases, compensate a player for the loss of psychic satisfaction of playing on a team that, on its record, is likely to be successful.

Finally,

[t]he competitive ramifications of expansion and free agency [are] reinforced by the limited size of team rosters (e.g., each team may carry only 25 players until September 1). No matter how much money an owner has, the limit on the size of his team’s roster constrains his ability to sign all (even a small fraction) of the best players.

It’s not an elegant hypothesis, but it’s my own (as far as I know). I offer it for discussion.

UPDATE

Another way of looking at the degree of competitiveness is to look at the percentage of teams in W-L brackets. I chose these six: .700+, .600-.699, .500-.599, .400-.499, .300-.399, and <.300. The following graphs give season-by-season percentages for the two leagues:

Here’s how to interpret the graphs, taking the right-hand bar (2017) in the American League graph as an example:

  • No team had a W-L record of .700 or better.
  • About 13 percent (2 teams) had records of .600-.699; the same percentage, of course, had records of .600 or better because there were no teams in the top bracket.
  • Only one-third (5 teams) had records of .500 or better, including one-fifth (3 teams) with records of .500-.599.
  • Fully 93 percent of teams (14) had records of .400 or better, including 9 teams with records of .400-.499.
  • One team (7 percent) had a record of .300-.399.
  • No teams went below .300.
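For readers who want to reproduce the bracket percentages, here is a minimal Python sketch; the W-L records in it are made up for illustration, not taken from any actual season:

```python
# Sketch: share of a league's teams in each W-L bracket for one season.
# The W-L records below are made up for illustration.
from collections import Counter

BRACKETS = [(0.700, ".700+"), (0.600, ".600-.699"), (0.500, ".500-.599"),
            (0.400, ".400-.499"), (0.300, ".300-.399"), (0.0, "<.300")]

def bracket(wl_pct):
    for floor, label in BRACKETS:
        if wl_pct >= floor:
            return label

records = [0.636, 0.605, 0.568, 0.540, 0.500, 0.494, 0.475, 0.463,
           0.451, 0.438, 0.420, 0.407, 0.395, 0.350, 0.290]
counts = Counter(bracket(r) for r in records)
for _, label in BRACKETS:
    share = 100 * counts.get(label, 0) / len(records)
    print(f"{label:<10}{share:5.1f} percent of teams")
```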

If your idea of competitiveness is balance — with half the teams at .500 or better — you will be glad to see that in a majority of years half the teams have had records of .500 or better. However, the National League has failed to meet that standard in most seasons since 1983. The American League, by contrast, met or exceeded that standard in every season from 2000 through 2016, before decisively breaking the streak in 2017.

Below are the same two graphs, overlaid with annual indices of competitiveness. (Reminder: lower numbers = greater competitiveness.)

I prefer the index of competitiveness, which integrates the rather jumbled impression made by the bar graphs. What does it all mean? I’ve offered my thoughts. Please add yours.

A Baseball Note: The 2017 Astros vs. the 1951 Dodgers

If you were following baseball in 1951 (as I was), you’ll remember how that season’s Brooklyn Dodgers blew a big lead, wound up tied with the New York Giants at the end of the regular season, and lost a 3-game playoff to the Giants on Bobby Thomson’s “shot heard ’round the world” in the bottom of the 9th inning of the final playoff game.

On August 11, 1951, the Dodgers took a doubleheader from the Boston Braves and gained their largest lead over the Giants — 13 games. The Dodgers at that point had a W-L record of 70-36 (.660), and would top out at .667 two games later. But their W-L record for the rest of the regular season was only .522. So the Giants caught them and went on to win what is arguably the most dramatic playoff in the history of professional sports.

The 2017 Astros peaked earlier than the 1951 Dodgers, attaining a season-high W-L record of .682 on July 5, and leading the second-place team in the AL West by 18 games on July 28. The Astros’ lead has dropped to 12 games, and the team’s W-L record since the July 5 peak is only .438.

The Los Angeles Angels might be this year’s version of the 1951 Giants. The Angels have come from 19 games behind the Astros on July 28, to trail by 12. In that span, the Angels have gone 11-4 (.733).

Hold onto your hats.

The American League’s Greatest Hitters: III

This post supersedes “The American League’s Greatest Hitters: Part II” and “The American League’s Greatest Hitters.” Here, I build on “Bigger, Stronger, and Faster — but Not Quicker?” which assesses the long-term trend (or lack thereof) in batting skill.

Specifically, I derived ballpark factors (BF) for each AL team for each season from 1901 through 2016. For example, the fabled New York Yankees of 1927 hit 1.03 times as well at home as on the road. Given a schedule evenly divided between home and road games, this means that batting averages for the Yankees of 1927 were inflated by 1.015 relative to batting averages for players on other teams.

The BA of a 1927 Yankee — as adjusted by the method described in “Bigger, Stronger…” — should therefore be multiplied by a BF of 0.985 (1/1.015) to obtain that Yankee’s “true” BA for that season. (This is a player-season-specific adjustment, in addition to the long-term trend adjustment applied in “Bigger, Stronger…,” which captures a gradual and general decline in home-park advantage.)
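Here is a minimal Python sketch of that ballpark-factor arithmetic, using the 1927 Yankees figures quoted above; the .356 batting average in the example is hypothetical:

```python
# Sketch: turn a home/road batting ratio into a ballpark factor (BF) and apply it
# to a player's batting average, following the 1927 Yankees example in the text.
# Assumes a schedule evenly divided between home and road games.

def ballpark_factor(home_road_ratio):
    """A team that hits 1.03 times as well at home as on the road has its overall
    BA inflated by (1 + 1.03) / 2 = 1.015, so BF = 1 / 1.015 = 0.985."""
    inflation = (1.0 + home_road_ratio) / 2.0
    return 1.0 / inflation

def adjusted_ba(raw_ba, home_road_ratio):
    return raw_ba * ballpark_factor(home_road_ratio)

print(round(ballpark_factor(1.03), 3))     # 0.985
print(round(adjusted_ba(0.356, 1.03), 3))  # a hypothetical .356 hitter in that park
```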

I made those adjustments for 147 players who had at least 5,000 plate appearances in the AL and an official batting average (BA) of at least .285 in those plate appearances. Here are the adjustments, plotted against the middle year of each player’s AL career:

[Graph: batting-average adjustments for the top 147 AL hitters, plotted against mid-career year]

When all is said and done, there are only 43 qualifying players with an adjusted career BA of .300 or higher:

[Table: the 43 qualifying players with an adjusted career BA of .300 or higher]

Here’s a graph of the relationship between adjusted career BA and middle-of-career year:

[Graph: adjusted career BA vs. middle-of-career year, top 43 AL hitters]

The curved line approximates the trend, which is downward until about the mid-1970s, then slightly upward. But there’s a lot of variation around that trend, and one player — Ty Cobb at .360 — clearly stands alone as the dominant AL hitter of all time.

Michael Schell, in Baseball’s All-Time Best Hitters, ranks Cobb second behind Tony Gwynn, who spent his career (1982-2001) in the National League (NL), and much closer to Rod Carew, who played only in the AL (1967-1985). Schell’s adjusted BA for Cobb is .340, as opposed to .332 for Carew, an advantage of .008 for Cobb. I have Cobb at .360 and Carew at .338, an advantage of .022 for Cobb. The difference in our relative assessments of Cobb and Carew is typical; Schell’s analysis is biased (intentionally or not) toward recent and contemporary players and against players of the pre-World War II era.

Here’s how Schell’s adjusted BAs stack up against mine, for 32 leading hitters rated by both of us:

[Table: Schell’s adjusted BAs vs. mine, 32 leading hitters]

Schell’s bias toward recent and contemporary players is most evident in his standard-deviation (SD) adjustment:

In his book Full House, Stephen Jay Gould, an evolutionary biologist [who imported his ideological biases into his work]…. Gould imagines [emphasis added] that there is a “wall” of human ability. The best players at the turn of the [20th] century may have been close to the “wall,” many of their peers were not. Over time, progressively better hitters replace the weakest hitters. As a result, the best current hitters do not stand out as much from their peers.

Gould and I believe that the reduction in the standard deviation [of BA within a season] demonstrates that there has been an improvement in the overall quality of major league baseball today compared to nineteenth-century and early twentieth-century play. [pp. 94-95]

Thus Schell’s SD adjustment slashes the BAs of the better hitters of the early part of the 20th century because the SDs of that era were higher than the SDs after World War II. The SD adjustment is seriously flawed for several reasons:

1. There may be a “wall” of human ability, or it may truly be imaginary. Even if there is such a wall, we have no idea how close Ty Cobb, Tony Gwynn, and other great hitters have been to it. That is to say, there’s no a priori reason (contra Schell’s implicit assumption) that Cobb couldn’t have been closer to the wall than Gwynn.

2. It can’t be assumed that reaction time — an important component of human ability, and certainly of hitting ability — has improved with time. In fact, there’s a plausible hypothesis to the contrary, which is stated in “Bigger, Stronger…” and examined there, albeit inconclusively.

3. Schell’s discussion of relative hitting skill implies, wrongly, that one player’s higher BA comes at the expense of other players. Not so. BA is a measure of the ability of a hitter to hit safely given the quality of pitching and other conditions (examined in detail in “Bigger, Stronger…”). It may be the case that weaker hitters were gradually replaced by better ones, but that doesn’t detract from the achievements of a better hitter like Ty Cobb, who racked up his hits at the expense of opposing pitchers, not other batters.

4. Schell asserts that early AL hitters were inferior to their NL counterparts, thus further justifying an SD adjustment that is especially punitive toward early AL hitters (e.g., Cobb). However, early AL hitters were demonstrably inferior to their NL counterparts only in the first two years of the AL’s existence, and well before the arrival of Cobb, Joe Jackson, Tris Speaker, Harry Heilmann, Babe Ruth, George Sisler, Lou Gehrig, and other AL greats of the pre-World War II era. Thus:

[Table: single-season change in BA following a league switch]

There seems to have been a bit of backsliding between 1905 and 1910, but the sample size for those years is too small to be meaningful. On the other hand, after 1910, hitters enjoyed no clear advantage by moving from NL to AL (or vice versa). The data for 1903 through 1940, taken altogether, suggest parity between the two leagues during that span.

One more bit of admittedly sketchy evidence:

  • Cobb hit as well as Heilmann during Cobb’s final nine seasons as a regular player (1919-1927), which span includes the years in which the younger Heilmann won batting titles with averages of .394, .403, .398, and .393.
  • In that same span, Heilmann outhit Ruth, who was the same age as Heilmann.
  • Ruth kept pace with the younger Gehrig during 1925-1932.
  • In 1936-1938, Gehrig kept pace with the younger Joe DiMaggio, even though Gehrig’s BA dropped markedly in 1938 with the onset of the disease that was to kill him.
  • The DiMaggio of 1938-1941 was the equal of the younger Ted Williams, even though the final year of the span saw Williams hit .406.
  • Williams’s final three years as a regular, 1956-1958, overlapped some of the prime seasons of Mickey Mantle, who was 13 years Williams’s junior. Williams easily outhit Mantle during those years, and claimed two batting titles to Mantle’s one.

I see nothing in the preceding recitation to suggest that the great hitters of the years 1901-1940 were inferior to the great hitters of the post-WWII era. In fact, it points in the opposite direction. This might be taken as indirect confirmation of the hypothesis that reaction times have slowed. Or it might have something to do with the emergence of football and basketball as “serious” professional sports after WWII, an emergence that could well have led potentially great hitters to forsake baseball for another sport. Yet another possibility is that post-war prosperity and educational opportunities drew some potentially great hitters into non-athletic trades and professions. In other words, unlike Schell, I remain open to the possibility that there may have been a real, if slight, decline in hitting talent after WWII — a decline that was gradually reversed because of the eventual effectiveness of integration (especially of Latin American players) and the explosion of salaries with the onset of free agency.

Finally, in “Bigger, Stronger…” I account for the cross-temporal variation in BA by applying a general algorithm and then accounting for 14 discrete factors, including the ones applied by Schell. As a result, I practically eliminate the effects of the peculiar conditions that cause BA to be inflated in some eras relative to other eras. (See figure 7 of “Bigger, Stronger…” and the accompanying discussion.) Even after taking all of those factors into account, Cobb still stands out as the best AL hitter of all time — by a wide margin.

And given Cobb’s statistical dominance over his contemporaries in the NL, he still stands as the greatest hitter in the history of the major leagues.

Back to Baseball

In “Does Velocity Matter?” I diagnosed the factors that account for defensive success or failure, as measured by runs allowed per nine innings of play. There’s a long list of significant variables: hits, home runs, walks, errors, wild pitches, hit batsmen, and pitchers’ ages. (Follow the link for the whole story.)

What about offensive success or failure? It turns out that it depends on fewer key variables, though there is a distinct difference between the “dead ball” era of 1901-1919 and the subsequent years of 1920-2015. Drawing on statistics available at Baseball-Reference.com, I developed several regression equations and found three of particular interest:

  • Equation 1 covers the entire span from 1901 through 2015. It’s fairly good for 1920-2015, but poor for 1901-1919.
  • Equation 2 covers 1920-2015, and is better than Equation 1 for those years. I also used it to backcast scoring in 1901-1919 — and it’s worse than Equation 1 for those years.
  • Equation 5 gives the best results for 1901-1919. I also used it to forecast scoring in 1920-2015, and it’s terrible for those years.

This graph shows the accuracy of each equation:

Estimation errors as a percentage of runs scored

Unsurprising conclusion: Offense was a much different thing in 1901-1919 than in subsequent years. And it was a simpler thing. Here’s Equation 5, for 1901-1919:

RS9 = -5.94 + BA(29.39) + E9(0.96) + BB9(0.27)

Where 9 stands for “per 9 innings” and
RS = runs scored
BA = batting average
E9 = errors committed
BB = walks

The adjusted r-squared of the equation is 0.971; the f-value is 2.19E-12 (a very small probability that the equation arises from chance). The p-values of the constant and the first two explanatory variables are well below 0.001; the p-value of the third explanatory variable is 0.01.
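A sketch of how such an equation can be estimated in Python with statsmodels; the CSV file and column names are placeholders, not the actual data set:

```python
# Sketch: OLS estimate of RS9 as a function of BA, E9, and BB9, in the spirit of
# Equation 5 for 1901-1919. The CSV file and column names are placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("league_offense_1901_1919.csv")   # hypothetical season-level data
X = sm.add_constant(df[["BA", "E9", "BB9"]])
model = sm.OLS(df["RS9"], X).fit()

print(model.params)        # constant and coefficients (cf. -5.94, 29.39, 0.96, 0.27)
print(model.rsquared_adj)  # adjusted r-squared
print(model.f_pvalue)      # probability that the fit arises from chance
print(model.pvalues)       # p-values of the individual coefficients
```

The same recipe applies to Equation 2 for 1920-2015, with XBH, SB9, and SH9 added to the list of explanatory variables.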

In short, the name of the offensive game in 1901-1919 was getting on base. Not so the game in subsequent years. Here’s Equation 2, for 1920-2015:

RS9 = -4.47 + BA(25.81) + XBH(0.82) + BB9(0.30) + SB9(-0.21) + SH9(-0.13)

Where 9, RS, BA, and BB are defined as above and
XBH = extra-base hits
SB = stolen bases
SH = sacrifice hits (i.e., sacrifice bunts)

The adjusted r-squared of the equation is 0.974; the f-value is 4.73E-71 (an exceedingly small probability that the equation arises from chance). The p-values of the constant and the first four explanatory variables are well below 0.001; the p-value of the fifth explanatory variable is 0.03.

In other words, get on base, wait for the long ball, and don’t make outs by trying to steal or bunt the runner(s) along.

A Rather Normal Distribution

I found a rather normal distribution from the real world — if you consider major-league baseball to be part of the real world. In a recent post I explained how I normalized batting statistics for the 1901-2015 seasons, and displayed the top-25 single-season batting averages, slugging percentages, and on-base-plus-slugging percentages after normalization.

I have since discovered that the normalized single-season batting averages for 14,067 player-seasons bear a strong resemblance to a textbook normal distribution:

[Graph: distribution of normalized single-season batting averages]

How close is this to a textbook normal distribution? Rather close, as measured by the percentage of observations that are within 1, 2, 3, and 4 standard deviations from the mean:

[Table: distribution of normalized single-season batting averages vs. a textbook normal distribution]
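A Python sketch of that comparison; the data here are randomly generated stand-ins for the actual normalized averages:

```python
# Sketch: compare the share of observations within k standard deviations of the mean
# against the coverage of a textbook normal distribution. The data are randomly
# generated stand-ins for the 14,067 normalized single-season batting averages.
import math
import random
import statistics

random.seed(1)
normed_ba = [random.gauss(0.280, 0.030) for _ in range(14067)]  # hypothetical stand-in

def coverage(values, k):
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return sum(1 for v in values if abs(v - mean) <= k * sd) / len(values)

for k in (1, 2, 3, 4):
    normal = math.erf(k / math.sqrt(2))  # normal probability of falling within k SD
    print(f"within {k} SD: observed {coverage(normed_ba, k):.4f} vs. normal {normal:.4f}")
```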

Ty Cobb not only compiled the highest single-season average (4.53 SD above the mean) but also 5 of the 12 single-season averages that are more than 4 SD above the mean:

[Table: Ty Cobb’s normalized single-season batting averages, in SD from the mean]

Cobb’s superlative performances in the 13-season span from 1907 through 1919 resulted in 12 American League batting championships. (That number has been unofficially reduced to 11 because it was later found that Cobb actually lost the 1910 title by a whisker — .3834 to Napoleon Lajoie’s .3841.)

Cobb’s normalized batting average for his worst full season (1924) is better than 70 percent of the 14,067 batting averages compiled by full-time players in the 115 years from 1901 through 2015. And getting on base was only part of what made Cobb the greatest player of all time.

Baseball’s Greatest and Worst Teams

When talk turns to the greatest baseball team of all time, most baseball fans will nominate the 1927 New York Yankees. Not only did that team post a won-lost record of 110-44, for a W-L percentage of .714, but its roster featured several future Hall-of-Famers: Babe Ruth, Lou Gehrig, Herb Pennock, Waite Hoyt, Earle Combs, and Tony Lazzeri. As it turns out, the 1927 Yankees didn’t have the best record in “modern” baseball, that is, since the formation of the American League in 1901. Here are the ten best seasons (all above .700), ranked by W-L percentage:

Team Year G W L W-L%
Cubs 1906 155 116 36 .763
Pirates 1902 142 103 36 .741
Pirates 1909 154 110 42 .724
Indians 1954 156 111 43 .721
Mariners 2001 162 116 46 .716
Yankees 1927 155 110 44 .714
Yankees 1998 162 114 48 .704
Cubs 1907 155 107 45 .704
Athletics 1931 153 107 45 .704
Yankees 1939 152 106 45 .702

And here are the 20 worst seasons, all below .300:

Team Year G W L W-L%
Phillies 1945 154 46 108 .299
Browns 1937 156 46 108 .299
Phillies 1939 152 45 106 .298
Browns 1911 152 45 107 .296
Braves 1909 155 45 108 .294
Braves 1911 156 44 107 .291
Athletics 1915 154 43 109 .283
Phillies 1928 152 43 109 .283
Red Sox 1932 154 43 111 .279
Browns 1939 156 43 111 .279
Phillies 1941 155 43 111 .279
Phillies 1942 151 42 109 .278
Senators 1909 156 42 110 .276
Pirates 1952 155 42 112 .273
Tigers 2003 162 43 119 .265
Athletics 1919 140 36 104 .257
Senators 1904 157 38 113 .252
Mets 1962 161 40 120 .250
Braves 1935 153 38 115 .248
Athletics 1916 154 36 117 .235

But it takes more than a season, or even a few of them, to prove a team’s worth. The following graphs depict the best records in the American and National Leagues over nine-year spans:

Centered nine-year W-L record, best AL

Centered nine-year W-L record, best NL

For sustained excellence over a long span of years, the Yankees are the clear winners. Moreover, the Yankees’ best nine-year records are centered on 1935 and 1939. In the nine seasons centered on 1935 — namely 1931-1939 — the Yankees compiled a W-L percentage of .645. In those nine seasons, the Yankees won five American League championships and as many World Series. The Yankees compiled a barely higher W-L percentage of .646 in the nine seasons centered on 1939 — 1935-1943. But in those nine seasons, the Yankees won the American League championship seven times — 1936, 1937, 1938, 1939, 1941, 1942, and 1943 — and the World Series six times (losing to the Cardinals in 1942).

Measured by league championships, the Yankees compiled better nine-year streaks, winning eight pennants in 1949-1957, 1950-1958, and 1955-1963. But for sheer, overall greatness, I’ll vote for the Yankees of the 1930s and early 1940s. Babe Ruth graced the Yankees through 1934, and the 1939 team (to pick one) included future Hall-of-Famers Bill Dickey, Joe Gordon, Joe DiMaggio, Lou Gehrig (in his truncated final season), Red Ruffing, and Lefty Gomez.

Here are the corresponding worst nine-year records in the two leagues:

Centered nine-year W-L record, worst AL

Centered nine-year W-L record, worst NL

The Phillies — what a team! The Phillies, Pirates, and Cubs should have been demoted to Class D leagues.

What’s most interesting about the four graphs is the general decline in the records of the best teams and the general rise in the records of the worst teams. That’s a subject for another post.

Great (Batting) Performances

The normal values of batting average (BA), slugging percentage (SLG), and on-base plus slugging (OPS) have fluctuated over time:

[Graph: average major-league batting statistics, 1901-2015]

In sum, no two seasons are alike, and some are vastly different from others. To level the playing field (pun intended), I did the following:

  • Compiled single-season BA, SLG, and OPS data for all full-time batters (those with enough times at bat in a season to qualify for the batting title) from 1901 through 2015 — a total of 14,067 player-seasons. (Source: the Play Index at Baseball-Reference.com.)
  • Normalized (“normed”) each season’s batting statistics to account for inter-seasonal differences. For example, a batter whose BA in 1901 was .272 — the overall average for that year — is credited with the same average as a batter whose BA in 1902 was .267 — the overall average for that year.
  • Ranked the normed values of BA, SLG, and OPS for those 14,067 player-seasons.
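A minimal Python sketch of the norming step, using the 1901 and 1902 overall averages mentioned above and a few made-up player rows:

```python
# Sketch: norm each player-season BA to that season's overall average, so that a
# .272 hitter in 1901 and a .267 hitter in 1902 (each season's overall average)
# receive the same normed value. The player rows are made up for illustration.
import pandas as pd

league_avg = {1901: 0.272, 1902: 0.267}   # overall averages cited in the text

df = pd.DataFrame([
    {"player": "Player A", "year": 1901, "BA": 0.272},
    {"player": "Player B", "year": 1901, "BA": 0.340},
    {"player": "Player C", "year": 1902, "BA": 0.267},
    {"player": "Player D", "year": 1902, "BA": 0.320},
])
df["BA_normed"] = df["BA"] / df["year"].map(league_avg)   # 1.000 = exactly league-average

print(df.sort_values("BA_normed", ascending=False))
```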

I then sorted the rankings to find the top 25 player-seasons in each category:

Top-25 single-season offensive records

I present all three statistics because they represent different aspects of offensive prowess. BA was the most important of the three statistics until the advent of the “lively ball” era in 1919. Accordingly, the BA list is dominated by seasons played before that era, when the name of the game was “small ball.” The SLG and OPS lists are of course dominated by seasons played in the lively ball era.

Several seasons compiled by Barry Bonds and Mark McGwire showed up in the top-25 lists that I presented in an earlier post. I have expunged those seasons because of the dubious nature of Bonds’s and McGwire’s achievements.

The preceding two paragraphs lead to the question of the commensurability (or lack thereof) of cross-temporal statistics. This is from the earlier post:

There are many variations in the conditions of play that have resulted in significant changes in offensive statistics. Among those changes are the use of cleaner and more tightly wound baseballs, the advent of night baseball, better lighting for night games, bigger gloves, lighter bats, bigger and stronger players, the expansion of the major leagues in fits and starts, the size of the strike zone, the height of the pitching mound, and — last but far from least in this list — the integration of black and Hispanic players into major league baseball. In addition to these structural variations, there are others that mitigate against the commensurability of statistics over time; for example, the rise and decline of each player’s skills, the skills of teammates (which can boost or depress a player’s performance), the characteristics of a player’s home ballpark (where players generally play half their games), and the skills of the opposing players who are encountered over the course of a career.

Despite all of these obstacles to commensurability, the urge to evaluate the relative performance of players from different teams, leagues, seasons, and eras is irrepressible. Baseball-Reference.com is rife with such evaluations; the Society for American Baseball Research (SABR) revels in them; many books offer them (e.g., this one); and I have succumbed to the urge more than once.

It is one thing to have fun with numbers. It is quite another thing to ascribe meanings to them that they cannot support.

And yet, it seems right that the top 25 seasons should include so many compiled by Ty Cobb, Babe Ruth, and their great contemporaries Jimmie Foxx, Lou Gehrig, Rogers Hornsby, Shoeless Joe Jackson, Nap Lajoie, Tris Speaker, George Sisler, and Honus Wagner. And it signifies the greatness of the later players who join them on the lists: Hank Aaron, George Brett, Rod Carew, Roberto Clemente, Mickey Mantle, Willie McCovey, Stan Musial, Frank Thomas, and Ted Williams.

Cobb’s dominance of the BA leader-board merits special attention. Cobb holds 9 of the top 19 slots on the BA list. That’s an artifact of his reign as the American League’s leading hitter in 12 of the 13 seasons from 1907 through 1919. But there was more to Cobb than just “hitting it where they ain’t.” Cobb probably was the most exciting ball player of all time, because he was much more than a hitting machine.

Charles Leerhsen offers chapter and verse about Cobb’s prowess in his book Ty Cobb: A Terrible Beauty. Here are excerpts of Leerhsen’s speech “Who Was Ty Cobb? The History We Know That’s Wrong,” which is based on his book:

When Cobb made it to first—which he did more often than anyone else; he had three seasons in which he batted over .400—the fun had just begun. He understood the rhythms of the game and he constantly fooled around with them, keeping everyone nervous and off balance. The sportswriters called it “psychological baseball.” His stated intention was to be a “mental hazard for the opposition,” and he did this by hopping around in the batter’s box—constantly changing his stance as the pitcher released the ball—and then, when he got on base, hopping around some more, chattering, making false starts, limping around and feigning injury, and running when it was least expected. He still holds the record for stealing home, doing so 54 times. He once stole second, third, and home on three consecutive pitches, and another time turned a tap back to the pitcher into an inside-the-park home run.

“The greatness of Ty Cobb was something that had to be seen,” George Sisler said, “and to see him was to remember him forever.” Cobb often admitted that he was not a natural, the way Shoeless Joe Jackson was; he worked hard to turn himself into a ballplayer. He had nine styles of slides in his repertoire: the hook, the fade-away, the straight-ahead, the short or swoop slide (“which I invented because of my small ankles”), the head-first, the Chicago slide (referred to by him but never explained), the first-base slide, the home-plate slide, and the cuttle-fish slide—so named because he purposely sprayed dirt with his spikes the way squid-like creatures squirt ink. Coming in, he would watch the infielder’s eyes to determine which slide to employ.

There’s a lot more in the book, which I urge you to read — especially if you’re a baseball fan who appreciates snappy prose and documented statements (as opposed to the myths that have grown up around Cobb).

Cobb’s unparalleled greatness was still fresh in the minds of baseball people in 1936, when the first inductees to baseball’s Hall of Fame were elected. It was Cobb — not Babe Ruth — who received the most votes among the five players selected for membership in the Hall.

The Hall of Fame Reconsidered

Several years ago I wrote some posts (e.g., here and here) about the criteria for membership in baseball’s Hall of Fame, and named some players who should and shouldn’t be in the Hall. A few days ago I published an updated version of my picks. I’ve since deleted that post because, on reflection, I find my criteria too narrow. I offer instead:

  • broad standards of accomplishment that sweep up most members of the Hall who have been elected as players
  • ranked lists of players who qualify for consideration as Hall of Famers, based on those standards.

These are the broad standards of accomplishment for batters:

  • at least 8,000 plate appearances (PA) — a number large enough to indicate that a player was good enough to have attained a long career in the majors, and
  • a batting average of at least .250 — a low cutoff point that allows the consideration of mediocre hitters who might have other outstanding attributes (e.g., base-stealing, fielding).

I rank retired batters who meet those criteria by career wins above average (WAA) per career PA. WAA for a season is a measure of a player’s total offensive and defensive contribution, relative to other players in the same season. (WAA therefore normalizes cross-temporal differences in batting averages, the frequency of home runs, the emphasis on base-stealing, and the quality of fielders’ gloves, for example.) Because career WAA is partly a measure of longevity rather than skill, I divide by career PA to arrive at a normalized measure of average performance over the span of a player’s career.
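A Python sketch of the filtering and ranking; the CSV file and column names are placeholders, and the scaling of the top value to 100 anticipates the index described below:

```python
# Sketch: apply the broad standards (at least 8,000 PA, career BA of at least .250),
# rank eligible batters by career WAA per career PA, and scale the top value to 100.
# The CSV file and column names are placeholders.
import pandas as pd

batters = pd.read_csv("career_batting_totals.csv")   # hypothetical: one row per player

eligible = batters[(batters["PA"] >= 8000) & (batters["BA"] >= 0.250)].copy()
eligible["WAA_per_PA"] = eligible["WAA"] / eligible["PA"]
eligible["index"] = 100 * eligible["WAA_per_PA"] / eligible["WAA_per_PA"].max()

ranked = eligible.sort_values("index", ascending=False).reset_index(drop=True)
ranked.insert(0, "rank", ranked.index + 1)
print(ranked[["rank", "player", "index"]].head(25))
```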

These are the broad standards of accomplishment for pitchers:

  • at least 3,000 innings pitched, or
  • appearances in at least 1,000 games (to accommodate short-inning relievers with long careers).

I rank retired pitchers who meet these criteria by career ERA+. This is an adjusted earned run average (ERA) that accounts for differences in ballparks and cross-temporal differences in pitching conditions (the resilience of the baseball, batters’ skill, field conditions, etc.). Some points to bear in mind:

  • My criteria are broad but nevertheless slanted toward players who enjoyed long careers. Some present Hall of Famers with short careers are excluded (e.g., Ralph Kiner, Sandy Koufax). However great their careers might have been, they didn’t prove themselves over the long haul, so I’m disinclined to include them in my Hall of Fame.
  • I drew on the Play Index at Baseball-Reference.com for the statistics on which the lists are based. The Play Index doesn’t cover years before 1900. That doesn’t bother me because the “modern game” really began in the early 1900s (see here, here, and here). The high batting averages and numbers of games won in the late 1800s can’t be compared with performances in the 20th and 21st centuries.
  • Similarly, players whose careers were spent mainly or entirely in the Negro Leagues are excluded because their accomplishments — however great — can’t be calibrated with the accomplishments of players in the major leagues.

In the following lists of rankings, each eligible player is assigned an ordinal rank, which is based on the adjacent index number. For batters, the index number represents career WAA/PA, where the highest value (Babe Ruth’s) is equal to 100. For pitchers, the index number represents career ERA+, where the highest value (Mariano Rivera’s) is equal to 100. The lists are coded as follows:

  • Blue — elected to the Hall of Fame. (N.B. Joe Torre is a member of the Hall of Fame, but he was elected as a manager, not as a player.)
  • Red — retired more than 5 seasons but not yet elected
  • Bold (with asterisk) — retired less than 5 seasons.

Now, at last, the lists (commentary follows):

[Table: Hall of Fame candidates (batters)]

If Bill Mazeroski is in the Hall of Fame, why not everyone who outranks him? (Barry Bonds, Sammy Sosa, and some others excepted, of course. Note that Mark McGwire didn’t make the list; he had 7,660 PA.) There are plenty of players with more impressive credentials than Mazeroski, whose main claim to fame is a World-Series-winning home run in 1960. Mazeroski is reputed to have been an excellent second-baseman, but WAA accounts for fielding prowess — and other things. Maz’s excellence as a fielder still leaves him at number 194 on my list of 234 eligible batters.

Here’s the list of eligible pitchers:

[Table: Hall of Fame candidates (pitchers)]

If Rube Marquard — 111th-ranked of 122 eligible pitchers — is worthy of the Hall, why not all of those pitchers who outrank him? (Roger Clemens excepted, of course.) Where would I draw the line? My Hall of Fame would include the first 100 on the list of batters and the first 33 on the list of pitchers (abusers of PEDs excepted) — and never more than 100 batters and 33 pitchers. Open-ended membership means low standards. I’ll have none of it.

As of today, the top-100 batters would include everyone from Babe Ruth through Joe Sewell (number 103 on the list in the first table). I exclude Barry Bonds (number 3), Manny Ramirez (number 61), and Sammy Sosa (number 99). The top-33 pitchers would include everyone from Mariano Rivera through Eddie Plank (number 34 on the list in the second table). I exclude Roger Clemens (number 5).

My purge would eliminate 109 of the players who are now official members of the Hall of Fame, and many more players who are likely to be elected. The following tables list the current members whom I would purge (blue), and the current non-members (red and bold)  who would miss the cut:

Hall of fame batters not in top 100

Hall of fame pitchers not in top 33

Sic transit gloria mundi.


Baseball Trivia for the 4th of July

It was a “fact” — back in the 1950s when I became a serious fan of baseball — that the team that led its league on the 4th of July usually won the league championship. (That was in the days before divisional play made it possible for less-than-best teams to win league championships and World Series.)

How true was the truism? I consulted the Play Index at Baseball-Reference.com to find out. Here’s a season-by-season list of teams that had the best record on the 4th of July and at season’s end:

Teams with best record on 4th of July and end of season

It’s obvious that the team with the best record on the 4th of July hasn’t “usually” had the best record at the end of the season — if “usually” means “almost all of the time.”   In fact, for 1901-1950, the truism was true only 64 percent of the time in the American League and 60 percent of the time in the National League. The numbers for 1901-2014: American League, 60 percent; National League, 55 percent.

There are, however, two eras in which the team with the best record on the 4th of July “usually” had the best record at season’s  end — where “usually” is defined by a statistical test.* Applying that test, I found that

  • from 1901 through 1928 the best National League team on the 4th of July usually led the league at the end of the season (i.e., 75 percent of the time); and
  • from 1923 through 1958 the best American League team on the 4th of July usually led the league at the end of the season (i.e., 83 percent of the time).

I was a fan of the Detroit Tigers in the 1950s, and therefore more interested in the American League than the National League. So, when I became a fan it was true (of the American League) that the best team on the 4th of July usually led the league at the end of the season.

It’s no longer true. And even if it has happened 55 to 60 percent of the time in the past 114 years, don’t bet your shirt that it will happen in a particular season.

*     *     *

Related post: May the Best Team Lose

__________
* The event E occurs when a team has the league’s best record on the 4th of July and at the end of the season. E “usually” occurs during a defined set of years if the frequency of occurrence during that set of years is significantly different from the frequency of occurrence in other years. Significantly, in this case, means that a t-test yields a probability of less than 0.01 that the difference in frequencies occurs by chance.
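A Python sketch of such a test; the 0/1 outcomes are made up for illustration (1 means the team with the best record on the 4th of July also finished with the best record):

```python
# Sketch: t-test of whether event E (league's best record on the 4th of July AND at
# season's end) occurs more often in one era than in the remaining seasons.
# The 0/1 outcomes below are made up for illustration.
from scipy import stats

era_years   = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1]  # era of interest
other_years = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0]  # all other seasons

t_stat, p_value = stats.ttest_ind(era_years, other_years, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.01 meets the post's threshold
```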


May the Best Team Lose

This is an update of a six-season-old post. It includes 2016 post-season play to date. I will update it again after the 2016 World Series.

The first 65 World Series (1903 and 1905-1968) were contests between the best teams in the National and American Leagues. The winner of a season-ending Series was therefore widely regarded as the best team in baseball for that season (except by the fans of the losing team and other soreheads). The advent of divisional play in 1969 meant that the Series could include a team that wasn’t the best in its league. From 1969 through 1993, when participation in the Series was decided by a single postseason playoff between division winners (1981 excepted), the leagues’ best teams met in only 10 of 24 series. The advent of three-tiered postseason play in 1995 and four-tiered postseason play in 2012 has only made matters worse.*

By the numbers:

  • Postseason play originally consisted of a World Series (period) involving 1/8 of major-league teams — the best in each league. Postseason play now involves 1/3 of major-league teams and 7 postseason series (3 in each league plus the inter-league World Series).
  • Only 3 of the 22 Series from 1995 through 2016 have featured the best teams of both leagues, as measured by W-L record.
  • Of the 21 Series from 1995 through 2015, only 6 were won by the best team in a league.
  • Of the same 21 Series, 10 (48 percent) were won by the better of the two teams, as measured by W-L record. Of the 65 Series played before 1969, 35 were won by the team with the better W-L record and 2 involved teams with the same W-L record. So before 1969 the team with the better W-L record won 35/63 of the time for an overall average of 56 percent. That’s not significantly different from the result for the 21 Series played in 1995-2015, but the teams in the earlier era were each league’s best, which is no longer true. . .
  • From 1995 through 2016, a league’s best team (based on W-L record) appeared in a Series only 15 of 44 possible times — 6 times for the NL (pure luck), 9 times for the AL (little better than pure luck). (A random draw among teams qualifying for post-season play would have resulted in the selection of each league’s best team about 6 times out of 22; a rough check of that figure is sketched just after this list.)
  • Division winners have opposed each other in only 11 of the 22 Series from 1995 through 2016.
  • Wild-card teams have appeared in 10 of those Series, with all-wild-card Series in 2002 and 2014.
  • Wild-card teams have occupied more than one-fourth of the slots in the 1995-2016 Series — 12 slots out of 44.
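The random-draw figure mentioned in the list above is just arithmetic. A back-of-the-envelope check, assuming four postseason qualifiers per league from 1995 through 2011 and five per league from 2012 on (the second wild card arrived in 2012):

```python
# Back-of-the-envelope check of the random-draw comparison above.
# Assumption: 4 postseason qualifiers per league in 1995-2011 and 5 per league
# from 2012 on, so a random draw picks the league's best team with probability
# 1/4 or 1/5 in a given season.
seasons_with_4 = len(range(1995, 2012))   # 17 seasons
seasons_with_5 = len(range(2012, 2017))   # 5 seasons

expected = seasons_with_4 * (1 / 4) + seasons_with_5 * (1 / 5)
print(f"Expected best-team appearances per league: {expected:.2f} "
      f"out of {seasons_with_4 + seasons_with_5}")
# About 5.25 out of 22, in the neighborhood of the "about 6 times out of 22"
# figure cited above.
```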

The winner of the World Series used to be a league’s best team over the course of the entire season, and the winner had to beat the best team in the other league. Now, the winner of the World Series usually can claim nothing more than having won the most postseason games — 11 or 12 out of as many as 19 or 20. Why not eliminate the 162-game regular season, select the postseason contestants at random, and go straight to postseason play?

__________
* Here are the World Series pairings for 1995-2016 (there was no Series in 1994; National League teams listed first; + indicates winner of World Series):

1995 –
Atlanta Braves (division winner; .625 W-L, best record in NL)+
Cleveland Indians (division winner; .694 W-L, best record in AL)

1996 –
Atlanta Braves (division winner; .593, best in NL)
New York Yankees (division winner; .568, second-best in AL)+

1997 –
Florida Marlins (wild-card team; .568, second-best in NL)+
Cleveland Indians (division winner; .534, fourth-best in AL)

1998 –
San Diego Padres (division winner; .605, third-best in NL)
New York Yankees (division winner; .704, best in AL)+

1999 –
Atlanta Braves (division winner; .636, best in NL)
New York Yankees (division winner; .605, best in AL)+

2000 –
New York Mets (wild-card team; .580, fourth-best in NL)
New York Yankees (division winner; .540, fifth-best in AL)+

2001 –
Arizona Diamondbacks (division winner; .568, fourth-best in NL)+
New York Yankees (division winner; .594, third-best in AL)

2002 –
San Francisco Giants (wild-card team; .590, fourth-best in NL)
Anaheim Angels (wild-card team; .611, third-best in AL)+

2003 –
Florida Marlins (wild-card team; .562, third-best in NL)+
New York Yankees (division winner; .623, best in AL)

2004 –
St. Louis Cardinals (division winner; .648, best in NL)
Boston Red Sox (wild-card team; .605, second-best in AL)+

2005 –
Houston Astros (wild-card team; .549, third-best in NL)
Chicago White Sox (division winner; .611, best in AL)+

2006 –
St. Louis Cardinals (division winner; .516, fifth-best in NL)+
Detroit Tigers (wild-card team; .586, third-best in AL)

2007 –
Colorado Rockies (wild-card team; .552, second-best in NL)
Boston Red Sox (division winner; .593, tied for best in AL)+

2008 –
Philadelphia Phillies (division winner; .568, second-best in NL)+
Tampa Bay Rays (division winner; .599, second-best in AL)

2009 –
Philadelphia Phillies (division winner; .574, second-best in NL)
New York Yankees (division winner; .636, best in AL)+

2010 —
San Francisco Giants (division winner; .568, second-best in NL)+
Texas Rangers (division winner; .556, fourth-best in AL)

2011 —
St. Louis Cardinals (wild-card team; .556, fourth-best in NL)+
Texas Rangers (division winner; .593, second-best in AL)

2012 —
San Francisco Giants (division winner; .580, third-best in NL)+
Detroit Tigers (division winner; .543, seventh-best in AL)

2013 —
St. Louis Cardinals (division winner; .599, best in NL)
Boston Red Sox (division winner; .599, best in AL)+

2014 —
San Francisco Giants (wild-card team; .543, fourth-best in NL)+
Kansas City Royals (wild-card team; .549, fourth-best in AL)

2015 —
New York Mets (division winner; .556, fifth-best in NL)
Kansas City Royals (division winner; .586, best in AL)+

2016 —
Chicago Cubs (division winner; .640, best in NL)
Cleveland Indians (division winner; .584, second-best in AL)


Competitiveness in Major League Baseball

Yesterday marked the final regular-season games of the 2014 season of major league baseball (MLB). In observance of that event, I’m shifting from politics to competitiveness in MLB. What follows is merely trivia and speculation. If you like baseball, you might enjoy it. If you don’t like baseball, I hope that you don’t think there’s a better team sport. There isn’t one.

Here’s how I compute competitiveness for each league and each season:

INDEX OF COMPETITIVENESS = AVEDEV/AVERAGE; where

AVEDEV = the average of the absolute value of deviations from the average number of games won by a league’s teams in a given season, and

AVERAGE = the average number of games won by a league’s teams in a given season.

For example, if the average number of wins is 81, and the average of the absolute value of deviations from 81 is 8, the index of competitiveness is 0.1 (rounded to the nearest 0.1). If the average number of wins is 81 and the average of the absolute value of deviations from 81 is 16, the index of competitiveness is 0.2. The lower the number, the more competitive the league.
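For the record, here is a minimal sketch of the computation in Python; the win totals shown are illustrative, not taken from Baseball-Reference.

```python
# Minimal sketch of the index of competitiveness defined above:
# INDEX = AVEDEV / AVERAGE, computed from one league's win totals for a season.
from statistics import mean

def competitiveness_index(wins):
    """Lower values indicate a more competitive league."""
    average = mean(wins)
    avedev = mean(abs(w - average) for w in wins)
    return avedev / average

if __name__ == "__main__":
    # Illustrative win totals for an eight-team league (not real data).
    wins = [97, 89, 85, 83, 80, 77, 72, 65]
    print(round(competitiveness_index(wins), 2))  # 0.09, i.e., about 0.1
```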

With some smoothing, here’s how the numbers look over the long haul:

Index of competitiveness
Based on numbers of wins by season and by team, for the National League and American League, as compiled at Baseball-Reference.com.

I drew a separate line for the American League without the Yankees, to show the effect of the Yankees’ dominance from the early 1920s to the early 1960s, and the 10 years or so beginning around 1995.

The National League grew steadily more competitive from 1940 to 1987, and has slipped only a bit since then. The American League’s climb began in 1951, and peaked in 1989; the AL has since slipped a bit more than the NL, but seems to be rebounding. In any event, there’s no doubt that both leagues are — and in recent decades have been — more competitive than they were in the early to middle decades of the 20th century. Why?

My hypothesis: integration compounded by expansion, with an admixture of free agency and limits on the size of rosters.

Let’s start with integration. The rising competitiveness of the NL after 1940 might have been a temporary thing, but it continued when NL teams (led by the Brooklyn Dodgers) began to integrate, by adding Jackie Robinson in 1947. The Cleveland Indians of the AL followed suit, by adding Larry Doby later in the same season. By the late 1950s, all major league teams (then 16) had integrated, though the NL seems to have integrated faster. The more rapid integration of the NL could explain its earlier ascent to competitiveness. Integration was followed in short order by expansion: The AL began to expand in 1961 and the NL began to expand in 1962.

How did expansion and integration combine to make the leagues more competitive? Several years ago, I opined:

[G]iven the additional competition for talent [following] expansion, teams [became] more willing to recruit players from among the black and Hispanic populations of the U.S. and Latin America. That is to say, teams [came] to draw more heavily on sources of talent that they had (to a large extent) neglected before expansion.

Further, free agency, which began in the mid-1970s,

made baseball more competitive by enabling less successful teams to attract high-quality players by offering them more money than other, more successful, teams. Money can, in some (many?) cases, compensate a player for the loss of psychic satisfaction of playing on a team that, on its record, is likely to be successful.

Finally,

[t]he competitive ramifications of expansion and free agency [are] reinforced by the limited size of team rosters (e.g., each team may carry only 25 players from May through August). No matter how much money an owner has, the limit on the size of his team’s roster constrains his ability to sign all (even a small fraction) of the best players.

It’s not an elegant hypothesis, but it’s my own (as far as I know). I offer it for discussion.


*     *     *

Other related posts:
The End of a Dynasty
What Makes a Winning Team
More Lessons from Baseball
Not Over the Hill

Decline

Although I’ve declared baseball the “king of team sports,” I would agree with anyone who says that baseball is past its prime. When was that prime? Arguably, it was the original lively ball era, which by my reckoning extended from 1920 to 1941. The home run had become much more prevalent than in the earlier dead-ball era, but not so prevalent that it dominated offensive strategy. Thus batting averages were high and scoring proceeded at a higher pace than in any of the other eras that I’ve identified.

In 1930, for example, the entire National League batted .303. The Chicago Cubs of that season finished in second place and batted .309 (not the highest team average in the league). The average number of runs scored in a Cubs’ game was 12.0 — a number surpassed only by the lowly Philadelphia Phillies, whose games yielded an average of 13.8 runs, most of them scored by the Phillies’ opponents. Despite the high scoring, the average Cubs game of the 1930 season lasted only 2 hours and 5 minutes. (An estimate that I derived from the sample of 67 Cubs’ games for which times are available, here.)
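That estimate is nothing more than the average of the recorded times. A sketch of the calculation, with placeholder times rather than the actual 67-game sample:

```python
# Sketch of the game-length estimate above: average a sample of recorded game
# times given as "H:MM" strings. The three times below are placeholders, not
# the actual 67-game sample.
def average_game_time(times):
    """Return the mean duration of 'H:MM' strings, formatted as 'H:MM'."""
    minutes = [int(h) * 60 + int(m) for h, m in (t.split(":") for t in times)]
    avg = round(sum(minutes) / len(minutes))
    return f"{avg // 60}:{avg % 60:02d}"

print(average_game_time(["2:05", "2:00", "2:10"]))  # -> 2:05
```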

In sum, baseball’s first lively ball era produced what fans love to see: scoring. A great pitching duel is fine, but a great pitching duel is a rare thing. Too many low-scoring games are the result of failed offensive opportunities, which are marked by a high count of runners left on base. Once runners get on base, what fans want (or at least one team’s fans want) is to see them score.

The game in the first lively ball era was, as I say, dynamic because scoring depended less on the home run than it did in later eras. And the game unfolded at a smart pace. That pace, by the way, was about the same as it had been in the middle of the dead-ball era. (For example, the times recorded for the Cubs’ two games against the Cincinnati Reds on July 4, 1911, are 2:05 and 2:00.)

Baseball has declined since the first lively ball era, not just because the game has become more static but also because it now unfolds at a much slower pace. The average length of a game in 2014 is 3:08 (for games through 07/17/14) — more than an hour longer than the games played by the Cubs in 1930.

Baseball is far from the only cultural phenomenon that has declined from its peak. I have written several times about the decline of art and music, movies, language, and morals and mores: here, here, here, and here. (Each of the foregoing links leads to a post that includes links to related items.)

Baseball is sometimes called a metaphor for life. (It’s a better metaphor than soccer, to be sure.) I now venture to say that the decline of baseball is a metaphor for the decline of art, music, movies, language, and morals and mores.

Indeed, the decline of baseball is a metaphor for the decline of liberty in America, which began in earnest — and perhaps inexorably — during the New Deal, even as the first lively ball era was on the wane.

*     *     *

See also “The Fall and Rise of American Empire.”

Baseball: The King of Team Sports

There are five major team sports: baseball, basketball, football (American style), ice hockey, and soccer (European football). The skills and abilities required to play these sports at the top professional level are several and varied. But, in my opinion — based on experience and spectating — the skills can be ranked hierarchically and across sports. When the ordinal rankings are added, baseball comes out on top by a wide margin; hockey is in the middle; basketball, football, and soccer are effectively tied for least-demanding of skill and ability.

Ranking of sports by skill and ability
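The method behind the ranking is simple: rank the five sports on each skill (1 = most demanding), add up the ranks, and the lowest total is the most demanding sport overall. Here is a sketch with purely illustrative skill categories and rankings, not the ones in the table above:

```python
# Sketch of the ordinal-ranking method described above. For each skill, the
# five sports are ranked (1 = most demanding of that skill), and the ranks are
# summed; the lowest total indicates the most demanding sport overall.
# The skills and rankings below are illustrative placeholders.
ranks_by_skill = {
    "hand-eye coordination": {"baseball": 1, "hockey": 2, "basketball": 3,
                              "football": 4, "soccer": 5},
    "throwing":              {"baseball": 1, "football": 2, "basketball": 3,
                              "hockey": 4, "soccer": 5},
    "endurance":             {"soccer": 1, "hockey": 2, "basketball": 3,
                              "football": 4, "baseball": 5},
}

totals = {}
for skill_ranks in ranks_by_skill.values():
    for sport, rank in skill_ranks.items():
        totals[sport] = totals.get(sport, 0) + rank

for sport, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{sport}: {total}")
```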

Baseball or Soccer? David Brooks Misunderstands Life

David Brooks — who is what passes for a conservative at The New York Times — once again plays useful idiot to the left. Brooks’s latest offering to the collectivist cause is “Baseball or Soccer?” Here are the opening paragraphs of Brooks’s blathering, accompanied by my comments (in brackets):

Baseball is a team sport, but it is basically an accumulation of individual activities. [So is soccer, and so is any team sport. For example, at any moment the ball is kicked by only one member of a team, not by the team as a whole.] Throwing a strike, hitting a line drive or fielding a grounder is primarily an individual achievement. [This short list omits the many ways in which baseball involves teamwork; for example: every pitch involves coordination between pitcher and catcher, and fielders either position themselves according to the pitch that’s coming or are able to anticipate the likely direction of a batted ball; the double play is an excellent and more obvious example of teamwork; so is the pickoff play, from pitcher to baseman or catcher to baseman; the hit and run play is another obvious example of teamwork; on a fly to the outfield, where two fielders are in position to make the catch, the catch is made by the fielder in better position for a throw or with the better throwing arm.] The team that performs the most individual tasks well will probably win the game. [Teamwork consists of the performance of individual tasks, in soccer as well as in baseball.]

Soccer is not like that. [False; see above.] In soccer, almost no task, except the penalty kick and a few others, is intrinsically individual. [False; see above.] Soccer, as Simon Critchley pointed out recently in The New York Review of Books, is a game about occupying and controlling space. [So is American football. And so what?] ….

As Critchley writes, “Soccer is a collective game, a team game, and everyone has to play the part which has been assigned to them, which means they have to understand it spatially, positionally and intelligently and make it effective.” [Hmm… Sounds like every other team sport, except that none of them — soccer included — is “collective.” All of them — soccer included — involve cooperative endeavors of various kinds. The success of those cooperative endeavors depends very much on the skills that individuals bring to them. The real difference between soccer and baseball is that baseball demands a greater range of individual skills, and is played in such a way that some of those skills are on prominent display.] ….

Most of us spend our days thinking we are playing baseball, but we are really playing soccer. [To the extent that any of us think such things, those who think they are playing baseball, rather than soccer, are correct. See the preceding comment.]

At this point, Brooks shifts gears. I’ll quote some relevant passages, then comment at length:

We think we individually choose what career path to take, whom to socialize with, what views to hold. But, in fact, those decisions are shaped by the networks of people around us more than we dare recognize.

This influence happens through at least three avenues. First there is contagion. People absorb memes, ideas and behaviors from each other the way they catch a cold…. The overall environment influences what we think of as normal behavior without being much aware of it. Then there is the structure of your network. There is by now a vast body of research on how differently people behave depending on the structure of the social networks. People with vast numbers of acquaintances have more job opportunities than people with fewer but deeper friendships. Most organizations have structural holes, gaps between two departments or disciplines. If you happen to be in an undeveloped structural hole where you can link two departments, your career is likely to take off.

Innovation is hugely shaped by the structure of an industry at any moment. Individuals in Silicon Valley are creative now because of the fluid structure of failure and recovery….

Finally, there is the power of the extended mind. There is also a developed body of research on how much our very consciousness is shaped by the people around us. Let me simplify it with a classic observation: Each close friend you have brings out a version of yourself that you could not bring out on your own. When your close friend dies, you are not only losing the friend, you are losing the version of your personality that he or she elicited.

Brooks has gone from teamwork — which he gets wrong — to socialization and luck. As with Brooks’s (failed) baseball-soccer analogy, the point is to belittle individual effort by making it seem inconsequential, or less consequential than the “masses” believe it to be.

You may have noticed that Brooks is re-running Obama’s big lie: “If you’ve got a business — you didn’t build that.  Somebody else made that happen.” As I wrote here,

… Obama is trying, not so subtly, to denigrate those who are successful in business (e.g., Mitt Romney) and to make a case for redistributionism. The latter rests on Obama’s (barely concealed) premise that the fruits of a collective enterprise should be shared on some basis other than market valuations of individual contributions….

It is (or should be) obvious that Obama’s agenda is the advancement of collectivist statism. I will credit Obama for the sincerity of his belief in collectivist statism, but his sincerity only underscores how dangerous he is….

Well, yes, everyone is strongly influenced by what has gone before, and by the social and economic milieu in which one finds oneself. Where does that leave us? Here:

  • Social and economic milieu are products of individual acts, including acts that occur in the context of cooperative efforts.
  • It is up to the individual to make the most (or least) of his social and economic inheritance and milieu.
  • Those who make the most (or least) of their background and situation are rightly revered or despised for their individual efforts. Consider, for example, Washington and Lincoln, on the one hand, and Hitler and Stalin, on the other hand.
  • Beneficial cooperation arises from the voluntary choices of individuals. Destructive “cooperation” (collectivism) — the imposition of rules through superior force (usually government) — usually thwarts the individual initiative and ingenuity that underlie scientific and economic progress.

Brooks ends with this:

Once we acknowledge that, in life, we are playing soccer, not baseball, a few things become clear. First, awareness of the landscape of reality is the highest form of wisdom. It’s not raw computational power that matters most; it’s having a sensitive attunement to the widest environment, feeling where the flow of events is going. Genius is in practice perceiving more than the conscious reasoning. [A false distinction between baseball and soccer, followed by false dichotomies.]

Second, predictive models [of what?] will be less useful [than what?]. Baseball is wonderful for sabermetricians. In each at bat there is a limited [but huge] range of possible outcomes. Activities like soccer are not as easily renderable statistically, because the relevant spatial structures are harder to quantify. [B.S. “Sabermetrics” is coming to soccer.] Even the estimable statistician Nate Silver of FiveThirtyEight gave Brazil a 65 percent chance of beating Germany. [An “estimable statistician” would know that such a statement is meaningless; see the discussion of probability here.]

Finally, Critchley notes that soccer is like a 90-minute anxiety dream — one of those frustrating dreams when you’re trying to get somewhere but something is always in the way. This is yet another way soccer is like life. [If you seek a metaphor for life, try blowing a fastball past a fastball hitter; try punching the ball to right when you’re behind in the count; try stealing second, only to have the batter walked intentionally; try to preserve your team’s win with a leaping catch and a throw to home plate; etc., etc., etc.]

The foregoing parade of non sequitur, psychobabble, and outright error simply proves that Brooks doesn’t know what he’s talking about. I hereby demote him from “useful idiot” to plain old “idiot.”

*     *     *

Related posts:
He’s Right, Don’t Listen to Him
Killing Conservatism in Order to Save It
Ten Commandments of Economics
More Commandments of Economics
Three Truths for Central Planners
Columnist, Heal Thyself
Our Miss Brooks
Miss Brooks’s “Grand Bargain”
More Fool He
Dispatches from the Front
David Brooks, Useful Idiot for the Left
“We the People” and Big Government
“Liberalism” and Personal Responsibility

More Lessons from Baseball

Regular readers of this blog will know that I sometimes draw on the game of baseball and its statistics to make points about various subjects — longevity, probability, politics, management, and cosmology, for example. (See the links at the bottom of this post.)

Today’s sermon is about the proper relationship between owners and management. I will address two sets of graphs giving the won-lost (W-L) records of the “old 16” major-league franchises. The “old 16” refers to the 8 franchises in the National League (NL) and the 8 franchises in the American League (AL) in 1901, the first year of the AL’s existence as a major league. Focusing on the “old 16” affords the long view that’s essential in thinking about success in an endeavor, whether it is baseball, business, or empire-building.

The first graph in each set gives the centered 11-year average W-L record for each of the old teams in each league, and for the league’s expansion teams taken as a group. The 11-year averages are based on annual W-L records for 1901-2013. The subsequent graphs in each set give, for each team and group of expansion teams, 11-year averages and annual W-L records. Franchise moves from one city to another are indicated by vertical black lines. The title of each graph indicates the city or cities in which the team has been located and the team’s nickname or nicknames.
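The smoothing, for those who care, is just a centered 11-year moving average of each season’s W-L record. Here is a minimal sketch of one way to compute it (truncating the window near the ends of a series), with a short window and made-up percentages purely for illustration:

```python
# Minimal sketch of a centered moving average like the one used in the graphs
# below. pcts is a franchise's season-by-season W-L percentage in year order;
# the average for a given year spans that year plus the 5 years on either side
# (fewer near the ends of the series, where the window is truncated).
def centered_average(pcts, window=11):
    half = window // 2
    out = []
    for i in range(len(pcts)):
        lo, hi = max(0, i - half), min(len(pcts), i + half + 1)
        out.append(sum(pcts[lo:hi]) / (hi - lo))
    return out

print(centered_average([0.400, 0.450, 0.500, 0.550, 0.600], window=3))
```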

Here are the two sets of graphs:

W-L records of old-8 NL franchises

W-L records of old-8 AL franchises

What strikes me about the first graph in each set is the convergence of W-L records around 1990. My conjecture: The advent of free agency in the 1970s must have enabled convergence. Stability probably helped, too. The AL had been stable since 1977, when it expanded to 14 teams; the NL had been stable since 1969, when it expanded to 12 teams. As the expansion teams matured, some of them became more successful, at the expense of the older teams. This explanation is consistent with the divergence after 1993, with the next round of expansion (there was another in 1998). To be sure, all of this conjecture warrants further analysis. (Here’s an analysis from several years ago that I still like.)

Let’s now dispose of franchise shifts as an explanation for a better record. I observe the following:

The Braves were probably on the upswing when they moved from Boston to Milwaukee in 1953. They were on the downswing at the time of their second move, from Milwaukee to Atlanta in 1966. It took many years and the acquisition of an astute front office and a good farm system to turn the Braves around.

The Dodgers’ move to LA in 1958 didn’t help the team, just the owners’ bank accounts. Ditto the Giants’ move to San Francisco in 1958.

Turning to the AL, the St. Louis Browns became the latter-day Baltimore Orioles in 1954. That move was accompanied by a change in ownership. The team’s later successes seem to have been triggered by the hiring of Paul Richards and Lee MacPhail to guide the team and build its farm system. The Orioles thence became a good-to-great team from the mid-1960s to the early 1980s, with a resurgence in the late 1980s and early 1990s. The team’s subsequent decline seems due to the meddlesome Peter Angelos, who became CEO in 1993.

The Athletics, like the Braves, moved twice. First, in 1955 from Philadelphia to Kansas City, and again in 1968 from Kansas City to Oakland. The first move had no effect until Charles O. Finley took over the team. His ownership carried over to Oakland. Finley may have been the exceptional owner whose personal involvement in the team’s operations helped to make it successful. But the team’s post-Finley record (1981-present) under less-involved owners suggests otherwise. The team’s pre-Kansas City record reflects Connie Mack’s tight-fisted ways. Mack — owner-manager of the A’s from 1901 until 1950 — was evidently a good judge of talent and a skilled field manager, but as an owner he had a penchant for breaking up great teams to rid himself of high-priced talent — with disastrous consequences for the A’s W-L record from the latter 1910s to late 1920s, and from the early 1930s to the end of Mack’s reign.

The Washington Senators were already resurgent under owner Calvin Griffith when the franchise was moved to Minnesota for the 1961 season. The Twins simply won more consistently than they had under the tight-fisted ownership of Clark Griffith, Calvin’s father.

Bottom line: There’s no magic in a move. A team’s success depends on the willingness of owners to spend bucks and to hire good management — and then to get out of the way. (Yes, George Steinbrenner bankrolled a lot of pennant-winning teams during his ownership years, from 1973 to 2010, but the Yankees’ record improved as “The Boss” became a less-intrusive owner from the mid-1990s until his death.)

There are many other stories behind the graphs — just begging to be told, but I’ll leave it at that.

Except to say this: The “owners” of America aren’t “the people,” romantic political pronouncements to the contrary notwithstanding. As government has become more deeply entrenched in the personal and business affairs of Americans, there has emerged a ruling class which effectively “owns” America. It is composed of professional politicians and bureaucrats, who find ample aid and comfort in the arms of left-wing academicians and the media. The “owners’” grip on power is sustained by the votes of the constituencies to which they pander.

Yes, the constituencies include “crony capitalists,” who benefit from regulatory barriers to competition and tax breaks. Though it must be said that they produce things, and would probably do well without the benefits they reap from professional politicians and bureaucrats. Far more powerful are the non-producers, who are granted favors based on their color, gender, age, etc., in return for the tens of millions of votes that they cast to keep the “owners” in power.

Far too many Americans are whiners who grovel at the feet of their “owners,” begging for handouts. Far too few Americans are self-managed winners.

*     *     *

Related posts:

The Hall of Fame and Morality

Jonathan Mahler, in the course of an incoherent article about baseball, makes this observation:

This year, not a single contemporary player was voted into the Hall of Fame because so many eligible players were suspected of steroid use. Never mind that Cooperstown has its share of racists, wife beaters and even a drug dealer. (To say nothing of the spitballers.)

Those few sentences typify the confusion rampant in Mahler’s offering. The use of steroids and other performance-enhancing drugs calls into question the legitimacy of the users’ accomplishments on the field. Racism, wife-beating, and drug-dealing — deplorable as they are — do not cast a shadow on the perpetrators’ performance as baseball players. As for the spitball, it was legal in baseball until 1920, and when it was outlawed its avowed practitioners were allowed to continue using it. (Some modern pitchers have been accused of using it from time to time, but I can’t think of one who used it so much that his career is considered a sham.)

Election to the Hall of Fame isn’t (or shouldn’t be) a moral judgment. If it were, I suspect that the Hall of Fame would be a rather empty place, especially if serial adultery and alcohol abuse were grounds for disqualification.

At the risk of being called a moral agnostic, which I am not, I say this: Election to the Hall of Fame (as a player) should reflect the integrity and excellence of on-field performance. Period.

I do have strong views about the proper qualifications for election to the Hall of Fame (as a player). You can read them here, here, and here. I’ve also analyzed the statistical evidence for indications of the use of performance-enhancing drugs by a few notable players: Barry Bonds and Mark McGwire (both guilty) and Roger Clemens (unproved).