What’s in a Trend?

I sometimes forget myself and use “trend”. Then I see a post like “Trends for Existing Home Sales in the U.S.” and am reminded why “trend” is a bad word. This graphic is the centerpiece of the post:

There was a sort of upward trend from June 2016 until August 2017, but then the trend stopped. So it wasn’t really a trend, was it? (I am here using “trend” in the way that it seems to be used generally, that is, as a direction of movement into the future.)

After a sort of flat period, the trend turned upward again, didn’t it? No, because the trend had been broken, so a new trend began in the early part of 2018. But it was a trend only until August 2018, when it became a different trend — mostly downward for several months.

Is there a flat trend now, or as the author of the piece puts it: “Existing home sales in the U.S. largely continued treading water through August 2019”? Well, that was the trend — temporary pattern is a better descriptor — but it doesn’t mean that the value of existing-home sales will continue to hover around $1.5 trillion.

The moral of the story: The problem with “trend” is that it implies a direction of movement into the future — that the future will look a lot like the past. But a trend is only a trend for as long as it lasts. And who knows how long it will last, that is, when it will stop?

I hope to start a trend toward the disuse of “trend”. My hope is futile.

Beware of Outliers

An outlier, in statistics, is an unusual event or observation, one that stands apart from the normal run of events and can distract the observer from them. Because an outlier is an unusual event, it is more memorable than events of the same kind that occur more frequently.

Take the case of the late Bill Buckner, who was a steady first baseman and good hitter for many years. What is Buckner remembered for? Not his many accomplishments in a long career. No, he is remembered for a fielding error that cost his team (the accursed Red Sox) game 6 of the 1986 World Series, a game that would have clinched the series for the Red Sox had they won it. But they lost it, and went on to lose the deciding 7th game.

Buckner’s bobble was an outlier that erased from the memories of most fans his prowess as a player and the many occasions on which he helped his team to victory. He is remembered, if at all, for the error — though he erred on less than 1 percent of more than 15,000 fielding plays during his career.

I am beginning to think of America’s decisive victory in World War II as an outlier.

To be continued.

The Compleat Monty Hall Problem

Wherein your humble blogger gets to the bottom of the Monty Hall problem, sorts out the conflicting solutions, and declares that the standard solution is the right solution, but not to the Monty Hall problem as it’s usually posed.

THE MONTY HALL PROBLEM AND THE TWO “SOLUTIONS”

The Monty Hall problem, first posed as a statistical puzzle in 1975, has been notorious since 1990, when Marilyn vos Savant wrote about it in Parade. Her solution to the problem, to which I will come, touched off a controversy that has yet to die down. But her solution is now widely accepted as the correct one; I refer to it here as the standard solution.

This is from the Wikipedia entry for the Monty Hall problem:

The Monty Hall problem is a brain teaser, in the form of a probability puzzle (Gruber, Krauss and others), loosely based on the American television game show Let’s Make a Deal and named after its original host, Monty Hall. The problem was originally posed in a letter by Steve Selvin to the American Statistician in 1975 (Selvin 1975a), (Selvin 1975b). It became famous as a question from a reader’s letter quoted in Marilyn vos Savant’s “Ask Marilyn” column in Parade magazine in 1990 (vos Savant 1990a):

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Here’s a complete statement of the problem:

1. A contestant sees three doors. Behind one of the doors is a valuable prize, which I’ll denote as $. Undesirable or worthless items are behind the other two doors; I’ll denote those items as x.

2. The contestant doesn’t know which door conceals $ and which doors conceal x.

3. The contestant chooses a door at random.

4. The host, who knows what’s behind each of the doors, opens one of the doors not chosen by the contestant.

5. The door chosen by the host may not conceal $; it must conceal an x. That is, the host always opens a door to reveal an x.

6. The host then asks the contestant if he wishes to stay with the door he chose initially (“stay”) or switch to the other unopened door (“switch”).

7. The contestant decides whether to stay or switch.

8. The host then opens the door finally chosen by the contestant.

9. If $ is revealed, the contestant wins; if x is revealed the contestant loses.
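
For readers who like to see rules made mechanical, here is a minimal Python sketch of a single game played under rules 1-9. The names (play_game, DOORS) are mine, not part of the problem statement:

```python
import random

DOORS = [1, 2, 3]

def play_game(switch: bool) -> bool:
    """Play one game under rules 1-9; return True if the contestant wins $."""
    prize_door = random.choice(DOORS)      # rules 1-2: $ is behind one door
    initial_choice = random.choice(DOORS)  # rule 3: contestant picks at random
    # Rules 4-5: the host opens a door that is neither the contestant's
    # pick nor the prize door, so it always reveals an x.
    host_opens = random.choice(
        [d for d in DOORS if d != initial_choice and d != prize_door]
    )
    # Rules 6-7: stay, or switch to the one remaining unopened door.
    if switch:
        final_choice = next(
            d for d in DOORS if d != initial_choice and d != host_opens
        )
    else:
        final_choice = initial_choice
    # Rules 8-9: the final door is opened; $ wins, x loses.
    return final_choice == prize_door
```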

One solution (the standard solution) is to switch doors because there’s a 2/3 probability that $ is hidden behind the unopened door that the contestant didn’t choose initially. In vos Savant’s own words:

Yes; you [the contestant] should switch. The first [initially chosen] door has a 1/3 chance of winning, but the second [other unopened] door has a 2/3 chance.

The other solution (the alternative solution) is indifference. Those who propound this solution maintain that there’s an equal chance of finding $ behind either of the doors that remain unopened after the host has opened a door.

As it turns out, the standard solution doesn’t tell a contestant what to do in a particular game. But the standard solution does point to the right strategy for someone who plays or bets on a large number of games.

The alternative solution accurately captures the unpredictability of any particular game. But indifference is only a break-even strategy for a person who plays or bets on a large number of games.

EXPLANATION OF THE STANDARD SOLUTION

The contestant may choose among three doors, and there are three possible ways of arranging the items behind the doors: $ x x; x $ x; and x x $. The result is nine possible ways in which a game may unfold:

Equally likely outcomes

Events 1, 5, and 9 each have two branches. But those branches don’t count as separate events. They’re simply subsets of the same event; when the contestant chooses a door that hides $, the host must choose between the two doors that hide x, but he can’t open both of them. And his choice doesn’t affect the outcome of the event.

It’s evident that switching would pay off with a win in 2/3 of the possible events, whereas staying with the original choice would pay off in only 1/3 of them. The fractions 1/3 and 2/3 are usually referred to as probabilities: a 2/3 probability of winning $ by switching doors, as against a 1/3 probability of winning $ by staying with the initially chosen door.
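
The counting can be verified in a few lines of Python that enumerate the nine equally likely cases shown in the diagram (a sketch of my own, not part of the standard solution):

```python
from itertools import product

# The three arrangements of items crossed with the three initial choices
# give the nine equally likely cases in the diagram.
arrangements = [("$", "x", "x"), ("x", "$", "x"), ("x", "x", "$")]

stay_wins = switch_wins = 0
for doors, choice in product(arrangements, range(3)):
    if doors[choice] == "$":
        stay_wins += 1    # staying wins only when the initial pick hides $
    else:
        switch_wins += 1  # otherwise the host's forced reveal makes switching win

print(f"staying wins in {stay_wins} of 9 cases")      # 3 of 9 = 1/3
print(f"switching wins in {switch_wins} of 9 cases")  # 6 of 9 = 2/3
```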

Accordingly, proponents of the standard solution — who are now legion — advise the individual (theoretical) contestant to switch. The idea is that switching increases one’s chance (probability) of winning.

A CLOSER LOOK AT THE STANDARD SOLUTION

There are three problems with the standard solution:

1. It incorporates a subtle shift in perspective. The Monty Hall problem, as posed, asks what a contestant should do. The standard solution, on the other hand, represents the expected (long-run average) outcome of many events, that is, many plays of the game. For reasons I’ll come to, the outcome of a single game can’t be described by a probability.

2.  Lists of possibilities, such as those in the diagram above, fail to reflect the randomness inherent in real events.

3. Probabilities emerge from many repetitions of the kinds of events listed above. It is meaningless to ascribe a probability to a single event. In the case of the Monty Hall problem, many repetitions of the game will yield probabilities approximating those given in the standard solution, but the outcome of each repetition will be unpredictable. It is therefore meaningless to say that a contestant has a 2/3 chance of winning a game if he switches. A 2/3 chance of winning refers to the expected outcome of many repetitions, where the contestant chooses to switch every time. To put it baldly: How does a person win 2/3 of a game? He either wins or doesn’t win.

Regarding points 2 and 3, I turn to Probability, Statistics and Truth (second revised English edition, 1957), by Richard von Mises:

The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations. (p. 11)

*     *     *

In games of dice, the individual event is a single throw of the dice from the box and the attribute is the observation of the number of points shown by the dice. In the game of “heads or tails”, each toss of the coin is an individual event, and the side of the coin which is uppermost is the attribute. (p. 11)

*     *     *

We must now introduce a new term…. This term is “the collective”, and it denotes a sequence of uniform events or processes which differ by certain observable attributes…. All the throws of dice made in the course of a game [of many throws] form a collective wherein the attribute of the single event is the number of points thrown…. The definition of probability which we shall give is concerned with ‘the probability of encountering a single attribute [e.g., winning $ rather than x ] in a given collective [a series of attempts to win $ rather than x ]’. (pp. 11-12)

*     *     *

[A] collective is a mass phenomenon or a repetitive event, or, simply, a long sequence of observations for which there are sufficient reasons to believe that the relative frequency of the observed attribute would tend to a fixed limit if the observations were indefinitely continued. The limit will be called the probability of the attribute considered within the collective [emphasis in the original]. (p. 15)

*     *     *

The result of each calculation … is always … nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? It is important to note that this statement remains valid also if the calculated probability has one of the two extreme values 1 or 0 [emphasis added]. (p. 33)

To bring the point home, here are the results of 50 runs of the Monty Hall problem, where each result represents (i) a random initial choice between Door 1, Door 2, and Door 3; (ii) a random array of $, x, and x behind the three doors; (iii) the opening of a door (other than the one initially chosen) to reveal an x; and (iv) a decision, in every case, to switch from the initially chosen door to the other unopened door:

Results of 50 games

What’s relevant here isn’t the fraction of times that $ appears, which is 3/5 — slightly less than the theoretical value of 2/3.  Just look at the utter randomness of the results. The first three outcomes yield the “expected” ratio of two wins to one loss, though in the real game show the two winners and one loser would have been different persons. The same goes for any sequence, even the final — highly “improbable” (i.e., random) — string of nine straight wins (which would have accrued to nine different contestants). And who knows what would have happened in games 51, 52, etc.

If a person wants to win 2/3 of the time, he must find a game show that allows him to continue playing the game until he has reached his goal. As I’ve found in my simulations, it could take as many as 10, 20, 70, or 300 games before the cumulative fraction of wins per game converges on 2/3.
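
Readers can rerun the experiment with a short Python sketch (my code, not the original simulation). It plays the game repeatedly, always switching, and reports the cumulative win fraction at the milestones mentioned above; the path to 2/3 will differ on every run, which is the point:

```python
import random

def play_switch_game() -> bool:
    """One game in which the contestant always switches; True = win."""
    prize = random.randrange(3)   # random arrangement of $ and the two x's
    pick = random.randrange(3)    # random initial choice
    opened = random.choice(       # host opens a door hiding an x
        [d for d in range(3) if d != pick and d != prize]
    )
    final = next(d for d in range(3) if d != pick and d != opened)
    return final == prize

wins = 0
for game in range(1, 301):
    wins += play_switch_game()
    if game in (10, 20, 70, 300):
        print(f"after {game:3d} games: cumulative win fraction = {wins / game:.3f}")
```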

That’s what it means to win 2/3 of the time. It’s not possible to win a single game 2/3 of the time, which is the “logic” of the standard solution as it’s usually presented.

WHAT ABOUT THE ALTERNATIVE SOLUTION?

The alternative solution doesn’t offer a winning strategy. In this view of the Monty Hall problem, it doesn’t matter which unopened door a contestant chooses. In effect, the contestant is advised to flip a coin.

As discussed above, the outcome of any particular game is unpredictable, so a coin flip will do just as well as any other way of choosing a door. But randomly selecting an unopened door isn’t a good strategy for repeated plays of the game. Over the long run, random selection means winning about 1/2 of all games, as opposed to 2/3 for the “switch” strategy. (To see that the expected probability of winning through random selection approaches 1/2, return to the earlier diagram; there, you’ll see that $ occurs in 9/18 = 1/2 of the possible outcomes for “stay” and “switch” combined.)
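
A simulation makes the comparison concrete. This sketch (the names and trial count are mine) estimates the long-run win fractions of the three strategies discussed: always stay, always switch, and a coin flip between the two unopened doors:

```python
import random

def play(strategy: str) -> bool:
    """One game under the given strategy; True = win."""
    prize = random.randrange(3)
    pick = random.randrange(3)
    opened = random.choice([d for d in range(3) if d != pick and d != prize])
    other = next(d for d in range(3) if d != pick and d != opened)
    if strategy == "switch":
        final = other
    elif strategy == "random":
        final = random.choice([pick, other])  # the coin flip
    else:
        final = pick                          # stay
    return final == prize

N = 100_000
for strategy in ("stay", "switch", "random"):
    wins = sum(play(strategy) for _ in range(N))
    print(f"{strategy:>6}: {wins / N:.3f}")   # about 1/3, 2/3, and 1/2
```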

Proponents of the alternative solution overlook the importance of the host’s selection of a door to open. His choice isn’t random. Therein lies the secret of the standard solution — as a long-run strategy.

WHY THE STANDARD SOLUTION WORKS IN THE LONG RUN

It’s commonly said by proponents of the standard solution that when the host opens a door, he gives away information that the contestant can use to increase his chance of winning that game. One nonsensical version of this explanation goes like this:

  • There’s a 2/3 probability that $ is behind one of the two doors not chosen initially by the contestant.
  • When the host opens a door to reveal x, that 2/3 “collapses” onto the other door that wasn’t chosen initially. (Ooh … a “collapsing” probability. How exotic. Just like Schrödinger’s cat.)

Of course, the host’s action gives away nothing in the context of a single game, the outcome of which is unpredictable. The host’s action does help in the long run, if you’re in a position to play or bet on a large number of games. Here’s how:

  • The contestant’s initial choice (IC) will be wrong 2/3 of the time. That is, in 2/3 of a large number of games, the $ will be behind one of the other two doors.
  • Because of the rules of the game, the host must open one of those other two doors (HC1 and HC2); he can’t open IC.
  • When IC hides an x (which happens 2/3 of the time), either HC1 or HC2 must conceal the $; the one that doesn’t conceal the $ conceals an x.
  • The rules require the host to open the door that conceals an x.
  • Therefore, about 2/3 of the time the $ will be behind HC1 or HC2, and in those cases it will always be behind the door (HC1 or HC2) that the host doesn’t open.
  • It follows that the contestant, by consistently switching from IC to the remaining unopened door (HC1 or HC2), will win the $ about 2/3 of the time.

The host’s action transforms the probability — the long-run frequency — of choosing the winning door from 1/2 to 2/3. But it does so if and only if the player or bettor always switches from IC to HC1 or HC2 (whichever one remains unopened).

You can visualize the steps outlined above by looking at the earlier diagram of possible outcomes.

That’s all there is. There isn’t any more.

Baseball Statistics and the Consumer Price Index

Faithful readers of this blog will have noticed that I like to invoke baseball when addressing matters far afield from America’s pastime. (See this, this, this, this, this, this, this, this, and this.) It lately occurred to me that baseball statistics, properly understood, illustrate the inherent meaninglessness of the Consumer Price Index (CPI).

What does the CPI purport to measure? The Bureau of Labor Statistics (BLS) — compiler of the index — says that it “is a measure of the average change over time in the prices paid by urban consumers for a market basket of consumer goods and services.” Read that statement carefully. The CPI does not measure the average change in prices of the goods and services purchased by every urban consumer; it measures the prices of a “market basket” of goods and services that is thought to represent the purchases of a “typical” consumer. Further, the composition of that “market basket” is assumed to change, over time, in accordance with the preferences of the “typical” consumer. (There is more about the CPI in the note at the bottom of this post.)

To understand the arbitrariness of the CPI — as regards the construction of the “market basket” and the estimation of the prices of its components — one need read no further than the Bureau’s own list of questions and answers, some of which I have reproduced in the footnote. As a measure of your cost of living — at any time or over time — the CPI is as useful as the statement that the average depth of a swimming pool is 5 feet; a non-swimmer who is 6 feet tall puts himself in danger of drowning if he jumps into the deep end of such a pool.

The BLS nevertheless computes one version of the CPI back to January 1913. If you believe that prices in 1913 can be compared with prices in 2013, you must believe that baseball statistics yield meaningful comparisons of the performance of contemporary players and the players of bygone years. I enjoy making such comparisons, but I do not endorse their validity. As I will discuss later in this post, my reservations about cross-temporal comparisons of baseball statistics apply also to cross-temporal comparisons of prices.

Let us begin our journey into baseball statistics with three popular measures of batting prowess: batting average (BA), slugging percentage (SLG), and on-base plus slugging (OPS). The “normal” values of these statistics have varied widely:

Average major league batting statistics_1901-2012
Source: League Year-by-Year Batting at Baseball-Reference.com.
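
For readers unfamiliar with the three measures, they are simple ratios computed from counting stats. This sketch uses the standard formulas (with a simplified on-base percentage; the official handling of sacrifice flies has varied over the years), and the sample stat line is invented for illustration:

```python
def batting_average(hits: int, at_bats: int) -> float:
    return hits / at_bats

def slugging(singles: int, doubles: int, triples: int, homers: int,
             at_bats: int) -> float:
    total_bases = singles + 2 * doubles + 3 * triples + 4 * homers
    return total_bases / at_bats

def on_base(hits: int, walks: int, hbp: int, at_bats: int) -> float:
    # Simplified OBP: (H + BB + HBP) / (AB + BB + HBP), ignoring sacrifice flies.
    return (hits + walks + hbp) / (at_bats + walks + hbp)

# An invented season: 180 hits (120 singles, 35 doubles, 5 triples, 20 homers)
# in 600 at-bats, with 60 walks and 5 times hit by pitch.
ba = batting_average(180, 600)        # .300
slg = slugging(120, 35, 5, 20, 600)   # 285 total bases / 600 = .475
obp = on_base(180, 60, 5, 600)        # 245 / 665 = .368
print(f"BA {ba:.3f}  SLG {slg:.3f}  OPS {obp + slg:.3f}")
```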

Aside from the upward trends of SLG and OPS, which are unsurprising to anyone with a passing knowledge of baseball’s history, the most striking feature of these statistics is their synchronicity. Players (and fans) of the 1920s and 1930s enjoyed an upsurge in BA, SLG, and OPS that was echoed in the 1980s and 1990s. How can the three statistics rise in lockstep when BA usually suffers with emphasis on the long ball (captured in SLG and OPS)? The three statistics can rise in lockstep only because of changes in the conditions of play that allow batters to hit for a better average while also getting more long hits. By the same token, changes in conditions of play can have the opposite effect of causing offensive statistics to fall, across the board. But given constant conditions of play, there usually is a tradeoff between batting average and long hits. A key point, to which I will return, is the essential incommensurability of statistics gathered under different conditions of play (or economic activity).

There are many variations in the conditions of play that have resulted in significant changes in offensive statistics. Among those changes are the use of cleaner and more tightly wound baseballs, the advent of night baseball, better lighting for night games, bigger gloves, lighter bats, bigger and stronger players, the expansion of the major leagues in fits and starts, the size of the strike zone, the height of the pitching mound, and — last but far from least in this list — the integration of black and Hispanic players into major league baseball. In addition to these structural variations, there are others that militate against the commensurability of statistics over time; for example, the rise and decline of each player’s skills, the skills of teammates (which can boost or depress a player’s performance), the characteristics of a player’s home ballpark (where players generally play half their games), and the skills of the opposing players who are encountered over the course of a career.

Despite all of these obstacles to commensurability, the urge to evaluate the relative performance of players from different teams, leagues, seasons, and eras is irrepressible. Baseball-Reference.com is rife with such evaluations; the Society for American Baseball Research (SABR) revels in them; many books offer them (e.g., this one); and I have succumbed to the urge more than once.

It is one thing to have fun with numbers. It is quite another thing to ascribe meanings to them that they cannot support. Consider the following cross-temporal comparison of baseball statistics:

Top-25 single-season offensive records
Source: Derived from the Play Index at Baseball-Reference.com. (Most baseball fans will recognize all of the names but one: Cy Seymour. His life and career are detailed in this article.)

Take, for example, the players ranked 17-25 in single-season BA. The range of BA for those 9 seasons (.384 to .388) is insignificantly small; it represents a maximum difference of only 4 hits per 1,000 times at bat. Given the vastly different conditions of play — and of the players — what does it mean to say that Rod Carew in 1977 and George Brett in 1980 had essentially the same BA as Honus Wagner in 1905 and 1908? It means nothing. The only thing that is essentially the same is the normalized BA that I concocted to represent those (and other) seasons. Offering normalized BA in evidence is to beg the question. In fact, any cross-temporal comparison of BA (or SLG or OPS) is essentially meaningless.
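
The details of that normalization matter less than the idea, which can be sketched simply: divide a player’s BA by the league average for his season, so that the same raw BA earns a different normalized value in different run-scoring environments. (The league averages below are illustrative, not actual figures.)

```python
def normalized_ba(player_ba: float, league_ba: float) -> float:
    """A player's BA expressed as a multiple of his league's average BA."""
    return player_ba / league_ba

# The same .385 season against two invented league environments:
print(f"{normalized_ba(0.385, 0.248):.2f}")  # low-offense era: 1.55x the league
print(f"{normalized_ba(0.385, 0.271):.2f}")  # high-offense era: 1.42x the league
```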

By the same token, it means nothing to say that prices in 2013 are X times as high as prices in 1913, when — among many other things — consumers in 2013 have access to a vastly richer “market basket” of products and services. Further, the products and services of 2013 that bear a passing resemblance to those of 1913 (e.g., houses, automobiles, telephone service) are demonstrably superior in quality.

So, it is fun to play with numbers, but when it comes to using them to make cross-temporal comparisons — especially over a span of decades — be very wary. Better yet, resist the temptation to make those cross-temporal comparisons, except for the fun of it.
____________
A SELECTION OF QUESTIONS AND ANSWERS ABOUT THE CPI, FROM THIS PAGE AT THE WEBSITE OF THE BUREAU OF LABOR STATISTICS:

Whose buying habits does the CPI reflect?

The CPI reflects spending patterns for each of two population groups: all urban consumers and urban wage earners and clerical workers. The all urban consumer group represents about 87 percent of the total U.S. population. It is based on the expenditures of almost all residents of urban or metropolitan areas, including professionals, the self-employed, the poor, the unemployed, and retired people, as well as urban wage earners and clerical workers. Not included in the CPI are the spending patterns of people living in rural nonmetropolitan areas, farm families, people in the Armed Forces, and those in institutions, such as prisons and mental hospitals. Consumer inflation for all urban consumers is measured by two indexes, namely, the Consumer Price Index for All Urban Consumers (CPI-U) and the Chained Consumer Price Index for All Urban Consumers (C-CPI-U)….

The Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W) is based on the expenditures of households included in the CPI-U definition that also meet two requirements: more than one-half of the household’s income must come from clerical or wage occupations, and at least one of the household’s earners must have been employed for at least 37 weeks during the previous 12 months. The CPI-W population represents about 32 percent of the total U.S. population and is a subset, or part, of the CPI-U population….

Does the CPI measure my experience with price change?

Not necessarily. It is important to understand that BLS bases the market baskets and pricing procedures for the CPI-U and CPI-W populations on the experience of the relevant average household, not of any specific family or individual. It is unlikely that your experience will correspond precisely with either the national indexes or the indexes for specific cities or regions….

How is the CPI market basket determined?

The CPI market basket is developed from detailed expenditure information provided by families and individuals on what they actually bought. For the current CPI, this information was collected from the Consumer Expenditure Surveys for 2007 and 2008. In each of those years, about 7,000 families from around the country provided information each quarter on their spending habits in the interview survey. To collect information on frequently purchased items, such as food and personal care products, another 7,000 families in each of these years kept diaries listing everything they bought during a 2-week period….

What goods and services does the CPI cover?

The CPI represents all goods and services purchased for consumption by the reference population (U or W). BLS has classified all expenditure items into more than 200 categories, arranged into eight major groups. Major groups and examples of categories in each are as follows:

  • FOOD AND BEVERAGES (breakfast cereal, milk, coffee, chicken, wine, full service meals, snacks)
  • HOUSING (rent of primary residence, owners’ equivalent rent, fuel oil, bedroom furniture)
  • APPAREL (men’s shirts and sweaters, women’s dresses, jewelry)
  • TRANSPORTATION (new vehicles, airline fares, gasoline, motor vehicle insurance)
  • MEDICAL CARE (prescription drugs and medical supplies, physicians’ services, eyeglasses and eye care, hospital services)
  • RECREATION (televisions, toys, pets and pet products, sports equipment, admissions);
  • EDUCATION AND COMMUNICATION (college tuition, postage, telephone services, computer software and accessories);
  • OTHER GOODS AND SERVICES (tobacco and smoking products, haircuts and other personal services, funeral expenses)….

For each of the more than 200 item categories, using scientific statistical procedures, the Bureau has chosen samples of several hundred specific items within selected business establishments frequented by consumers to represent the thousands of varieties available in the marketplace. For example, in a given supermarket, the Bureau may choose a plastic bag of golden delicious apples, U.S. extra fancy grade, weighing 4.4 pounds to represent the Apples category….

How do I read or interpret an index?

An index is a tool that simplifies the measurement of movements in a numerical series. Most of the specific CPI indexes have a 1982-84 reference base. That is, BLS sets the average index level (representing the average price level) for the 36-month period covering the years 1982, 1983, and 1984 equal to 100. BLS then measures changes in relation to that figure. An index of 110, for example, means there has been a 10-percent increase in price since the reference period; similarly, an index of 90 means a 10-percent decrease….

Can the CPIs for individual areas be used to compare living costs among the areas?

No, an individual area index measures how much prices have changed over a specific period in that particular area; it does not show whether prices or living costs are higher or lower in that area relative to another. In general, the composition of the market basket and the relative prices of goods and services in the market basket during the expenditure base period vary substantially across areas….

Fooled by Non-Randomness

Nassim Nicholas Taleb, in his best-selling Fooled by Randomness, charges human beings with the commission of many perceptual and logical errors. One reviewer captures the point of the book, which is to

explore luck “disguised and perceived as non-luck (that is, skills).” So many of the successful among us, he argues, are successful due to luck rather than reason. This is true in areas beyond business (e.g. Science, Politics), though it is more obvious in business.

Our inability to recognize the randomness and luck that had to do with making successful people successful is a direct result of our search for pattern. Taleb points to the importance of symbolism in our lives as an example of our unwillingness to accept randomness. We cling to biographies of great people in order to learn how to achieve greatness, and we relentlessly interpret the past in hopes of shaping our future.

Only recently has science produced probability theory, which helps embrace randomness. Though the use of probability theory in practice is almost nonexistent.

Taleb says the confusion between luck and skill is our inability to think critically. We enjoy presenting conjectures as truth and are not equipped to handle probabilities, so we attribute our success to skill rather than luck.

Taleb writes in a style found all too often on best-seller lists: pseudo-academic theorizing “supported” by selective (often anecdotal) evidence. I sometimes enjoy such writing, but only for its entertainment value. Fooled by Randomness leaves me unfooled, for several reasons.

THE FUNDAMENTAL FLAW

The first reason that I am unfooled by Fooled… might be called a meta-reason. Standing back from the book, I am able to perceive its essential defect: According to Taleb, human affairs — especially economic affairs, and particularly the operations of financial markets — are dominated by randomness. But if that is so, only a delusional person can truly claim to understand the conduct of human affairs. Taleb claims to understand the conduct of human affairs. Taleb is therefore either delusional or omniscient.

Given Taleb’s humanity, it is more likely that he is delusional — or simply fooled, but not by randomness. He is fooled because he proceeds from the assumption of randomness instead of exploring the ways and means by which humans are actually capable of shaping events. Taleb gives no more than scant attention to those traits which, in combination, set humans apart from other animals: self-awareness, empathy, forward thinking, imagination, abstraction, intentionality, adaptability, complex communication skills, and sheer brain power. Given those traits (in combination) the world of human affairs cannot be random. Yes, human plans can fail of realization for many reasons, including those attributable to human flaws (conflict, imperfect knowledge, the triumph of hope over experience, etc.). But the failure of human plans is due to those flaws — not to the randomness of human behavior.

What Taleb sees as randomness is something else entirely. The trajectory of human affairs often is unpredictable, but it is not random. For it is possible to find patterns in the conduct of human affairs, as Taleb admits (implicitly) when he discusses such phenomena as survivorship bias, skewness, anchoring, and regression to the mean.

A DISCOURSE ON RANDOMNESS

What Is It?

Taleb, having bloviated for dozens of pages about the failure of humans to recognize randomness, finally gets around to (sort of) defining randomness on pages 168 and 169 (of the 2005 paperback edition):

…Professor Karl Pearson … devised the first test of nonrandomness (it was in reality a test of deviation from normality, which for all intents and purposes, was the same thing). He examined millions of runs of [a roulette wheel] during the month of July 1902. He discovered that, with high degree of statistical significance … the runs were not purely random…. Philosophers of statistics call this the reference case problem to explain that there is no true attainable randomness in practice, only in theory….

…Even the fathers of statistical science forgot that a random series of runs need not exhibit a pattern to look random…. A single random run is bound to exhibit some pattern — if one looks hard enough…. [R]eal randomness does not look random.

The quoted passage illustrates nicely the superficiality of Fooled by Randomness, and (I must assume) the muddledness of Taleb’s thinking:

  • He accepts a definition of randomness which describes the observation of outcomes of mechanical processes (e.g., the turning of a roulette wheel, the throwing of dice) that are designed to yield random outcomes. That is, randomness of the kind cited by Taleb is in fact the result of human intentions.
  • If “there is no true attainable randomness,” why has Taleb written a 200-plus page book about randomness?
  • What can he mean when he says “a random series of runs need not exhibit a pattern to look random”? The only sensible interpretation of that bit of nonsense would be this: It is possible for a random series of runs to contain what looks like a pattern. But remember that the random series of runs to which Taleb refers is random only because humans intended its randomness.
  • It is true enough that “A single random run is bound to exhibit some pattern — if one looks hard enough.” Sure it will. But it remains a single random run of a process that is intended to produce randomness, which is utterly unlike such events as transactions in financial markets.

One of the “fathers of statistical science” mentioned by Taleb (deep in the book’s appendix) is Richard von Mises, who in Probability, Statistics and Truth defines randomness as follows:

First, the relative frequencies of the attributes [e.g. heads and tails] must possess limiting values [i.e., converge on 0.5, in the case of coin tosses]. Second, these limiting values must remain the same in all partial sequences which may be selected from the original one in an arbitrary way. Of course, only such partial sequences can be taken into consideration as can be extended indefinitely, in the same way as the original sequence itself. Examples of this kind are, for instance, the partial sequences formed by all odd members of the original sequence, or by all members for which the place number in the sequence is the square of an integer, or a prime number, or a number selected according to some other rule, whatever it may be. (pp. 24-25 of the 1981 Dover edition, which is based on the author’s 1951 edition)
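
Von Mises’s two conditions (convergence of the relative frequency, and insensitivity to arbitrarily selected partial sequences) can be illustrated with simulated coin tosses. The sketch below is my construction, not his notation:

```python
import random

random.seed(42)
tosses = [random.randrange(2) for _ in range(1_000_000)]  # 1 = heads

def freq(seq):
    """Relative frequency of heads in a sequence of tosses."""
    return sum(seq) / len(seq)

odd_members = tosses[::2]                                     # 1st, 3rd, 5th, ...
square_members = [tosses[i * i - 1] for i in range(1, 1001)]  # 1st, 4th, 9th, ...

print(f"all tosses:      {freq(tosses):.4f}")          # near 0.5
print(f"odd members:     {freq(odd_members):.4f}")     # still near 0.5
print(f"square members:  {freq(square_members):.4f}")  # still near 0.5
```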

Gregory J. Chaitin, writing in Scientific American (“Randomness and Mathematical Proof,” vol. 232, no. 5 (May 1975), pp. 47-52), offers this:

We are now able to describe more precisely the differences between the[se] two series of digits … :

01010101010101010101
01101100110111100010

The first could be specified to a computer by a very simple algorithm, such as “Print 01 ten times.” If the series were extended according to the same rule, the algorithm would have to be only slightly larger; it might be made to read, for example, “Print 01 a million times.” The number of bits in such an algorithm is a small fraction of the number of bits in the series it specifies, and as the series grows larger the size of the program increases at a much slower rate.

For the second series of digits there is no corresponding shortcut. The most economical way to express the series is to write it out in full, and the shortest algorithm for introducing the series into a computer would be “Print 01101100110111100010.” If the series were much larger (but still apparently patternless), the algorithm would have to be expanded to the corresponding size. This “incompressibility” is a property of all random numbers; indeed, we can proceed directly to define randomness in terms of incompressibility: A series of numbers is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the series itself [emphasis added].

This is another way of saying that if you toss a balanced coin 1,000 times the only way to describe the outcome of the tosses is to list the 1,000 outcomes of those tosses. But, again, the thing that is random is the outcome of a process designed for randomness.
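
Chaitin’s test can be approximated with a general-purpose compressor, a rough stand-in for “the smallest algorithm.” In this sketch (my example, not Chaitin’s), the patterned series collapses to a few dozen bytes while the random series stays essentially full length:

```python
import os
import zlib

patterned = b"01" * 10_000          # "Print 01 ten thousand times"
random_series = os.urandom(20_000)  # 20,000 bytes from the OS entropy pool

for name, data in [("patterned", patterned), ("random", random_series)]:
    out = zlib.compress(data, 9)
    print(f"{name:>9}: {len(data):6,d} bytes -> {len(out):6,d} bytes")
```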

Taking Mises and Chaitin’s definitions together, we can define random events as events which are repeatable, convergent on a limiting value, and truly patternless over a large number of repetitions. Evolving economic events (e.g., stock-market trades, economic growth) are not alike (in the way that dice are, for example), they do not converge on limiting values, and they are not patternless, as I will show.

In short, Taleb fails to demonstrate that human affairs in general or financial markets in particular exhibit randomness, properly understood.

Randomness and the Physical World

Nor are we trapped in a random universe. Returning to Mises, I quote from the final chapter of Probability, Statistics and Truth:

We can only sketch here the consequences of these new concepts [e.g., quantum mechanics and Heisenberg’s principle of uncertainty] for our general scientific outlook. First of all, we have no cause to doubt the usefulness of the deterministic theories in large domains of physics. These theories, built on a solid body of experience, lead to results that are well confirmed by observation. By allowing us to predict future physical events, these physical theories have fundamentally changed the conditions of human life. The main part of modern technology, using this word in its broadest sense, is still based on the predictions of classical mechanics and physics. (p. 217)

Even now, almost 60 years on, the field of nanotechnology is beginning to harness quantum mechanical effects in the service of a long list of useful purposes.

The physical world, in other words, is not dominated by randomness, even though its underlying structures must be described probabilistically rather than deterministically.

Summation and Preview

A bit of unpredictability (or “luck”) here and there does not make for a random universe, random lives, or random markets. If a bit of unpredictability here and there dominated our actions, we wouldn’t be here to talk about randomness — and Taleb wouldn’t have been able to marshal his thoughts into a published, marketed, and well-sold book.

Human beings are not “designed” for randomness. Human endeavors can yield unpredictable results, but those results do not arise from random processes; they derive from skill or the lack thereof, knowledge or the lack thereof (including the kinds of self-delusions about which Taleb writes), and conflicting objectives.

An Illustration from Life

To illustrate my position on randomness, I offer the following digression about the game of baseball.

At the professional level, the game’s poorest players seldom rise above the low minor leagues. But even those poorest players are paragons of excellence when compared with the vast majority of American males of about the same age. Did those poorest players get where they were because of luck? Perhaps some of them were in the right place at the right time, and so were signed to minor league contracts. But their luck runs out when they are called upon to perform in more than a few games. What about those players who weren’t in the right place at the right time, and so were overlooked in spite of skills that would have advanced them beyond the rookie leagues? I have no doubt that there have been many such players. But, in the main, professional baseball is stocked with skilled players who are there because they intend to be there, and because baseball clubs intend for them to be there.

Now, most minor leaguers fail to advance to the major leagues, even for the proverbial “cup of coffee” (appearing in a few games at the end of the major-league season, when teams are allowed to expand their rosters following the end of the minor-league season). Does “luck” prevent some minor leaguers from advancement to “the show” (the major leagues)? Of course. Does “luck” result in the advancement of some minor leaguers to “the show”? Of course. But “luck,” in this context, means injury, illness, a slump, a “hot” streak, and the other kinds of unpredictable events that ballplayers are subject to. Are the events random? Yes, in the sense that they are unpredictable, but I daresay that most baseball players do not succumb to bad luck or advance very far or for very long because of good luck. In fact, ballplayers who advance to the major leagues, and then stay there for more than a few seasons, do so because they possess (and apply) greater skill than their minor-league counterparts. And make no mistake, each player’s actions are so closely watched and so extensively quantified that it isn’t hard to tell when a player is ready to be replaced.

It is true that a player may experience “luck” for a while during a season, and sometimes for a whole season. But a player will not be consistently “lucky” for several seasons. The length of his career (barring illness, injury, or voluntary retirement), and his accomplishments during that career, will depend mainly on his inherent skills and his assiduousness in applying those skills.

No one believes that Ty Cobb, Babe Ruth, Ted Williams, Christy Mathewson, Warren Spahn, and the dozens of other baseball players who rank among the truly great were lucky. No one believes that the vast majority of the tens of thousands of minor leaguers who never enjoyed more than the proverbial cup of coffee were unlucky. No one believes that the vast majority of the millions of American males who never made it to the minor leagues were unlucky. Most of them never sought a career in baseball; those who did simply lacked the requisite skills.

In baseball, as in life, “luck” is mainly an excuse and rarely an explanation. We prefer to apply “luck” to outcomes when we don’t like the true explanations for them. In the realm of economic activity and financial markets, one such explanation (to which I will come) is the exogenous imposition of governmental power.

ARE ECONOMIC AND FINANCIAL OUTCOMES TRULY RANDOM?

They Cannot Be, Given Competition

Returning to Taleb’s main theme — the randomness of economic and financial events — I quote this key passage (my comments are in brackets and boldface):

…Most of [Bill] Gates'[s] rivals have an obsessive jealousy of his success. They are maddened by the fact that he managed to win so big while many of them are struggling to make their companies survive. [These are unsupported claims that I include only because they set the stage for what follows.]

Such ideas go against classical economic models, in which results either come from a precise reason (there is no account for uncertainty) or the good guy wins (the good guy is the one who is most skilled and has some technical superiority). [The “good guy” theory would come as a great surprise to “classical” economists, who quite well understood imperfect competition based on product differentiation and monopoly based on (among other things) early entry into a market.] Economists discovered path-dependent effects late in their game [There is no “late” in a “game” that had no distinct beginning and has no pre-ordained end.], then tried to publish wholesale on a topic that would otherwise be bland and obvious. For instance, Brian Arthur, an economist concerned with nonlinearities at the Santa Fe Institute [What kinds of nonlinearities are found at the Santa Fe Institute?], wrote that chance events coupled with positive feedback, rather than technological superiority, will determine economic superiority — not some abstrusely defined edge in a given area of expertise. [It would come as no surprise to economists — even “classical” ones — that many factors aside from technical superiority determine market outcomes.] While early economic models excluded randomness, Arthur explained how “unexpected orders, chance meetings with lawyers, managerial whims … would help determine which ones achieved early sales and, over time, which firms dominated.”

Regarding the final sentence of the quoted passage, I refer back to the example of baseball. A person or a firm may gain an opportunity to succeed because of the kinds of “luck” cited by Brian Arthur, but “good luck” cannot sustain an incompetent performer for very long.  And when “bad luck” happens to competent individuals and firms they are often (perhaps usually) able to overcome it.

While overplaying the role of luck in human affairs, Taleb underplays the role of competition when he denigrates “classical economic models,” in which competition plays a central role. “Luck” cannot forever outrun competition, unless the game is rigged by governmental intervention, namely, the writing of regulations that tend to favor certain competitors (usually market incumbents) over others (usually would-be entrants). The propensity to regulate at the behest of incumbents (who plead “public interest,” of course) is a proof of the power of competition to shape economic outcomes. It is loathed and feared, and yet it leads us in the direction to which classical economic theory points: greater output and lower prices.

Competition is what ensures that (for the most part) the best ballplayers advance to the major leagues. It’s what keeps “monopolists” like Microsoft hopping (unless they have a government-guaranteed monopoly), because even a monopolist (or oligopolist) can face competition, and eventually lose to it — witness the former “Big Three” auto makers, many formerly thriving chain stores (from Kresge’s to Montgomery Ward’s), and numerous other brand names of days gone by. If Microsoft survives and thrives, it will be because it actually offers consumers more value for their money than its rivals do, whether those rivals market products similar to Microsoft’s or entirely new products that might supplant them.

Monopolists and oligopolists cannot survive without constant innovation and attention to their customers’ needs. Why? Because they must compete with the offerors of all the other goods and services upon which consumers might spend their money. There is nothing — not even water — which cannot be produced or delivered in competitive ways. (For more, see this.)

The names of the particular firms that survive the competitive struggle may be unpredictable, but what is predictable is the tendency of competitive forces toward economic efficiency. In other words, the specific outcomes of economic competition may be unpredictable (which is not a bad thing), but the general result — efficiency — is neither unpredictable nor a manifestation of randomness or “luck.”

Taleb, had he broached the subject of competition, would (with his hero George Soros) denigrate it, on the ground that there is no such thing as perfect competition. But the failure of competitive forces to mimic the model of perfect competition does not negate the power of competition, as I have summarized it here. Indeed, the failure of competitive forces to mimic the model of perfect competition is not a failure, for perfect competition is unattainable in practice, and to hold it up as a measure of the effectiveness of market forces is to indulge in the Nirvana fallacy.

In any event, Taleb’s myopia with respect to competition is so complete that he fails to mention it, let alone address its beneficial effects (even when it is less than perfect). And yet Taleb dares to dismiss as a utopist Milton Friedman (p. 272) — the same Milton Friedman who was among the twentieth century’s foremost advocates of the benefits of competition.

Are Financial Markets Random?

Given what I have said thus far, I find it almost incredible that anyone believes in the randomness of financial markets. It is unclear where Taleb stands on the random-walk hypothesis, but it is clear that he believes financial markets to be driven by randomness. Yet, contradictorily, he seems to attack the efficient-markets hypothesis (see pp. 61-62), which is the foundation of the random-walk hypothesis.

What is the random-walk hypothesis? In brief, it is this: Financial markets are so efficient that they instantaneously reflect all information bearing on the prices of financial instruments that is then available to persons buying and selling those instruments. (The qualifier “then available to persons buying and selling those instruments” leaves the door open for [a] insider trading and [b] arbitrage, due to imperfect knowledge on the part of some buyers and/or sellers.) Because information can change rapidly and in unpredictable ways, the prices of financial instruments move randomly. But the random movement is of a very special kind:

If a stock goes up one day, no stock market participant can accurately predict that it will rise again the next. Just as a basketball player with the “hot hand” can miss the next shot, the stock that seems to be on the rise can fall at any time, making it completely random.

And, therefore, changes in stock prices cannot be predicted.

Note, however, the focus on changes. It is that focus which creates the illusion of randomness and unpredictability. It is like hoping to understand the movements of the planets around the sun by looking at the random movements of a particle in a cloud chamber.

When we step back from day-to-day price changes, we are able to see the underlying reality: prices (instead of changes) and price trends (which are the opposite of randomness). This (correct) perspective enables us to see that stock prices (on the whole) are not random, and to identify the factors that influence the broad movements of the stock market.
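
The illusion is easy to reproduce. The sketch below builds a synthetic price series with a small built-in upward drift: the daily changes look like coin flips, while the level climbs steadily. (The data and parameters are invented, for illustration only.)

```python
import random

random.seed(1)
price = 100.0
prices = [price]
for _ in range(2_500):  # roughly ten years of trading days
    # Small daily drift swamped by much larger daily noise.
    price *= 1 + random.gauss(0.0003, 0.01)
    prices.append(price)

changes = [b / a - 1 for a, b in zip(prices, prices[1:])]
up_days = sum(c > 0 for c in changes) / len(changes)
print(f"daily changes were up on {up_days:.1%} of days (a near coin flip)")
print(f"but the level went from {prices[0]:.0f} to {prices[-1]:.0f}")
```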
For one thing, if you look at stock prices correctly, you can see that they vary cyclically. Here is a telling graphic (from “Efficient-market hypothesis” at Wikipedia):

Returns on stocks vs. P/E ratio. Price-earnings ratios as a predictor of twenty-year returns, based upon the plot by Robert Shiller (Figure 10.1, source). The horizontal axis shows the real price-earnings ratio of the S&P Composite Stock Price Index as computed in Irrational Exuberance (inflation-adjusted price divided by the prior ten-year mean of inflation-adjusted earnings). The vertical axis shows the geometric average real annual return on investing in the S&P Composite Stock Price Index, reinvesting dividends, and selling twenty years later. Data from different twenty-year periods are color-coded as shown in the key. See also ten-year returns. Shiller states that this plot “confirms that long-term investors—investors who commit their money to an investment for ten full years—did do well when prices were low relative to earnings at the beginning of the ten years. Long-term investors would be well advised, individually, to lower their exposure to the stock market when it is high, as it has been recently, and get into the market when it is low.” This correlation between price-to-earnings ratios and long-term returns is not explained by the efficient-market hypothesis.

Why should stock prices tend to vary cyclically? Because stock prices generally are driven by economic growth (i.e., changes in GDP), and economic growth is strongly cyclical. (See this post.)

More fundamentally, the economic outcomes reflected in stock prices aren’t random, for they depend mainly on intentional behavior along well-rehearsed lines (i.e., the production and consumption of goods and services in ways that evolve over time). Variations in economic behavior, even when they are unpredictable, have explanations; for example:

  • Innovation and capital investment spur the growth of economic output.
  • Natural disasters slow the growth of economic output (at least temporarily) because they absorb resources that could have gone to investment  (as well as consumption).
  • Governmental interventions (taxation and regulation), if not reversed, dampen growth permanently.

There is nothing in those three statements that hasn’t been understood since the days of Adam Smith. Regarding the third statement, the general slowing of America’s economic growth since the advent of the Progressive Era around 1900 is certainly not due to randomness; it is due to the ever-increasing burden of taxation and regulation imposed on the economy — an entirely predictable result, and certainly not a random one.

In fact, the long-term trend of the stock market (as measured by the S&P 500) is strongly correlated with GDP. And broad swings around that trend can be traced to governmental intervention in the economy. The following graph shows how the S&P 500, reconstructed to 1870, parallels constant-dollar GDP:

Real S&P 500 vs. real GDP

The next graph shows the relationship more clearly.

Real S&P 500 vs. real GDP (2)

The wild swings around the trend line began in the uncertain aftermath of World War I, which saw the imposition of production and price controls. The swings continued with the onset of the Great Depression (which can be traced to governmental action), the advent of the anti-business New Deal, and the imposition of production and price controls on a grand scale during World War II. The next downswing was occasioned by the culmination of the Great Society, the “oil shocks” of the early 1970s, and the raging inflation that was touched off by — you guessed it — government policy. The latest downswing is owed mainly to the financial crisis born of yet more government policy: loose money and easy loans to low-income borrowers.

And so it goes, wildly but predictably enough if you have the faintest sense of history. The moral of the story: Keep your eye on government and a hand on your wallet.

CONCLUSION

There is randomness in economic affairs, but those affairs are not dominated by randomness. They are dominated by intentions, including especially the intentions of the politicians and bureaucrats who run governments. Yet Taleb has no space in his book for the influence of their deeds on economic activity and financial markets.

Taleb is right to disparage those traders (professional and amateur) who are lucky enough to catch upswings, but are unprepared for downswings. And he is right to scoff at their readiness to believe that the current upswing (uniquely) will not be followed by a downswing (“this time it’s different”).

But Taleb is wrong to suggest that traders are fooled by randomness. They are fooled to some extent by false hope, but more profoundly by their inability to perceive the economic damage wrought by government. They are not alone, of course; most of the rest of humanity shares their perceptual failings.

Taleb, in that respect, is only somewhat different from most of the rest of humanity. He is not fooled by false hope, but he is fooled by non-randomness — the non-randomness of government’s decisive influence on economic activity and financial markets. In overlooking that influence, he overlooks the single most powerful explanation for the behavior of markets in the past 90 years.