Landsburg Is Half-Right

*     *     *

God does not play dice with the universe. — Albert Einstein

Einstein, stop telling God what to do. — Niels Bohr

*     *     *

In a post at The Big Questions blog, Steven Landsburg writes:

Richard Dawkins . . . [has] got this God thing all wrong. Here’s some of his latest, from the Wall Street Journal:

Where does [Darwinian evolution] leave God? The kindest thing to say is that it leaves him with nothing to do, and no achievements that might attract our praise, our worship or our fear. Evolution is God’s redundancy notice, his pink slip. But we have to go further. A complex creative intelligence with nothing to do is not just redundant. A divine designer is all but ruled out by the consideration that he must be at least as complex as the entities he was wheeled out to explain. God is not dead. He was never alive in the first place.

But Darwinian evolution can’t replace God, because Darwinian evolution (at best) explains life, and explaining life was never the hard part. The Big Question is not: Why is there life? The Big Question is: Why is there anything?

So far, so good. But Landsburg doesn’t quit when he’s ahead:

Ah, says Dawkins, but there’s no role for God there either:

Making the universe is the one thing no intelligence, however superhuman, could do, because an intelligence is complex — statistically improbable — and therefore had to emerge, by gradual degrees, from simpler beginnings.

That, however, is just wrong. It is not true that all complex things emerge by gradual degrees from simpler beginnings. In fact, the most complex thing I’m aware of is the system of natural numbers (0,1,2,3, and all the rest of them) together with the laws of arithmetic. That system did not emerge, by gradual degrees, from simpler beginnings. . . .

Now I happen to agree with Professor Dawkins that God is unnecessary, but I think he’s got the reason precisely backward. God is unnecessary not because complex things require simple antecedents but because they don’t. That allows the natural numbers to exist with no antecedents at all. . . .

What breathtaking displays of arrogance. Dawkins presumes that the only kind of intelligence that can exist is the kind that comes about through evolution. Landsburg wishes us to believe that complex things can exist on their own, without antecedents, which is why there is no God. (He fudges by saying “God is unnecessary” but we know what he really believes, don’t we?)

Landsburg’s “proof” of the non-existence of God is the existence of natural numbers, a “system [that] did not emerge, by gradual degrees, from simpler beginnings.” Landsburg’s assertion about natural numbers (and the laws of arithmetic) is true only if numbers exist independently of human thought, that is, if they are ideal Platonic forms. But where do ideal Platonic forms come from? And if some complex things don’t require antecedents, how does that rule out the existence of God — who, by definition, embodies all complexity?

Related posts:
Same Old Story, Same Old Song and Dance
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
Evolution and Religion
Science, Evolution, Religion, and Liberty
Science, Logic, and God
The Universe . . . . Four Possibilities
Einstein, Science, and God
Atheism, Religion, and Science Redux
A Non-Believer Defends Religion
Evolution as God?
The Greatest Mystery

Beware the Rare Event

Carl Bialik, “The Numbers Guy” at The Wall Street Journal, notes that

a 1-in-5.2 million shot came through in Bulgaria, as the same six winning numbers turned up in two consecutive drawings. And 18 Bulgarians profited by betting on recent history: They chose the winning combination of numbers from the drawing four days earlier — which hadn’t been selected by anyone the first time around — and split the pot.

The coincidence drew international news coverage and sparked a probe by a government-appointed commission. Bulgarian officials ultimately chalked it up to coincidence. . . .

The general principle . . . is that this would have happened eventually. There are lotteries in dozens of countries, and multiple ones within countries — scores in the U.S. alone. Many of these lotteries have had multiple drawings each week for decades. If there have been, say, a million lottery drawings, then a coincidence as unlikely as this one becomes more of a 1-in-5 yawn. That still means that any one player’s chances of winning the lottery are close to zero.

In short, regardless of less-than-amazing coincidences, it is a rare event to hold the winning number in a lottery.
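
The arithmetic behind that “1-in-5 yawn” is easy to check. Here is a minimal sketch, assuming (for illustration) a million independent drawings, each with a 1-in-5.2-million chance of repeating the previous draw:

```python
# Back-of-the-envelope check of the "1-in-5 yawn" claim. Assumptions
# (illustrative, not from Bialik's article): 1,000,000 independent
# drawings, each with a 1-in-5.2-million chance of exactly repeating
# the previous drawing's winning numbers.
p_single = 1 / 5_200_000
n_drawings = 1_000_000

# Probability that at least one such coincidence occurs somewhere.
p_at_least_one = 1 - (1 - p_single) ** n_drawings
print(f"P(at least one repeat somewhere) = {p_at_least_one:.3f}")  # ~0.175
# Roughly 1 chance in 5 that some lottery, somewhere, produces such a
# "miracle" -- while any one player's chance of winning stays near zero.
```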

People are drawn to coincidences of the kind described by Bialik because the coincidences are rare events, as are celebrity scandals (as opposed to the relatively stable marriages of “ordinary” people), aircraft accidents that take 200 lives (as opposed to myriad uneventful flights), the acts of murderers and other violent criminals (as opposed to the relatively civilized behavior of most people), and so on.

A problem with rare events — “outliers” in the terminology of operations research — is that, despite their rarity, they attract a disproportionate share of public and political attention. They skew our perceptions of normality. A rare but notorious tragedy usually is followed by calls for government to “do something” to prevent future tragedies of similar kinds.

Consider, for example, the National Highway Traffic Safety Administration (NHTSA), the Occupational Safety and Health Administration (OSHA), and the Consumer Product Safety Commission (CPSC). All three agencies were established in 1970-72, in the wave of fear-mongering that followed the publication of Ralph Nader’s Unsafe at Any Speed. All three agencies were inspired by the occurrence of relatively rare events. And those occurrences had been in decline long before the establishment of NHTSA, OSHA, and CPSC.

I introduce in evidence Figures 1 and 2 of “Safety at Any Price?” by W. Kip Viscusi and Ted Gayer (Regulation, Fall 2002, pp. 54-63), which indicate that unintentional injury deaths in the United States had been falling steadily, long before the advent of NHTSA, OSHA, and CPSC.* In 1928, the first year treated by the authors, the annual rate of unintentional injury deaths arising from accidents of all kinds was about 80 per 100,000 persons; that is, about 8/100 of 1 percent of the population died of unintentionally inflicted injuries. By 1960, the rate was only about 5/100 of 1 percent of the population, and by 1990 it was down to 3.5/100 of 1 percent of the population, where it has leveled off.

In other words, the incidence of fatal accidents declined faster before the establishment of NHTSA, OSHA, and CPSC than it has since. NHTSA, OSHA, and CPSC have had no demonstrable effect on the incidence of fatal accidents. Why? Because human beings tend to act responsibly, for the sake of self-preservation. When, on rare occasions, they fail to act responsibly — or their machinery fails them — they can be counted on to learn from their misfortunes and the misfortunes of others. And there is nothing new about learning from experience and applying that learning to improve our material possessions. Just ask a caveman.

What, then, is the role of NHTSA, OSHA, and CPSC? They are like cheerleaders who claim credit for their team’s victories because they cavort on the sidelines for the entertainment of the crowd. Cheerleaders notwithstanding, the team generally does what it was going to do anyway, except when a cheerleader gets too close to the action and obstructs it. Sometimes a cheerleader’s obstruction accidentally benefits the cheerleader’s team; other times, it hurts the team. There are three main differences between the agencies and cheerleaders: cheerleaders (a) aren’t supposed to interfere with the players (and rarely do); (b) provide their services relatively cheaply or free of charge; and (c) are more attractive than most bureaucrats.

The occurrence of a rare event should be an occasion for noting that it is a rare event. It should not be an occasion for the creation of a costly, intrusive, and essentially ineffective regulatory agency or a sheaf of misguided regulations.

__________

* Figure 1 seems to show a general rise in the rate of deaths from motor vehicle accidents between 1945 and 1975. That increase is explained by the growing use of automobiles. As shown in Figure 2 of the Viscusi-Gayer article, death rates per vehicle and per vehicle-mile had been dropping steadily until 1960, when those rates rose slightly for a few years before continuing to decline. For more about the long-term trend in deaths from motor-vehicle accidents, see this post, which also includes a discussion of the Peltzman effect: “the hypothesized tendency of people to react to a safety regulation by increasing other risky behavior, offsetting some or all of the benefit of the regulation.” There is more evidence for the Peltzman effect in this article.

Anthropogenic Global Warming Is Dead, Just Not Buried Yet

I once wrote a very long post in which I presented some of the evidence against the theory of anthropogenic global warming: “‘Warmism’: The Myth of Anthropogenic Global Warming.” Much has been written since then to further undermine the fanatical and destructive belief that humans are the cause of the sharp rise in Earth’s temperature from the mid-1960s to the late 1990s.

Now comes what may be the coup de grace: a post by Steve McIntyre at his blog, Climate Audit. There, McIntyre offers strong evidence that the tree-ring data on which the infamous “hockey stick” is based were, um, selected for the purpose of creating the “hockey stick” effect.

The jury is still out, but my money is on McIntyre.

P.S. There’s more here and here.

Putting Risks in Perspective

According to the Centers for Disease Control, about eight-tenths of one percent of Americans died in 2005 (the most recent year for which CDC has published death rates). That’s about 800 persons (825.9 to be precise) out of every 100,000.

To put that number in perspective, imagine a dozen dozen eggs (i.e., a gross of eggs, for those who still know the numeric meaning of “gross”). Only about one of those eggs is broken in the span of a year, in spite of all of the hazards to which the eggs are exposed.
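
A quick check of the analogy’s arithmetic, using the CDC rate quoted above:

```python
# Translate the CDC death rate into "broken eggs" per gross per year.
death_rate = 825.9 / 100_000  # deaths per person in 2005
gross_of_eggs = 144           # a dozen dozen

print(f"Expected breakage: {death_rate * gross_of_eggs:.2f} eggs per gross")
# About 1.2 of 144 eggs -- "about one," as the analogy has it.
```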

Remember that analogy the next time you read or hear about the “threats” posed by heart disease, cancer, Alzheimer’s, motor-vehicle accidents, firearms, etc., etc., etc. The combined effect of all such “threats” is close to nil; more than 99 percent of Americans survive every year, and more than 70 percent of those who don’t survive are old (age 65 and older). But that’s not the kind of “news” that sells advertising.

(For much more about mortality in the United States, go here.)

The Cell-Phone Scourge

Today’s edition of The New York Times carries an article by Matt Richtel, “U.S. Withheld Data on Risks of Distracted Driving.” Richtel writes (in part):

In 2003, researchers at a federal agency proposed a long-term study of 10,000 drivers to assess the safety risk posed by cellphone use behind the wheel.

They sought the study based on evidence that such multitasking was a serious and growing threat on America’s roadways.

But such an ambitious study never happened. And the researchers’ agency, the National Highway Traffic Safety Administration, decided not to make public hundreds of pages of research and warnings about the use of phones by drivers — in part, officials say, because of concerns about angering Congress.

On Tuesday, the full body of research is being made public for the first time by two consumer advocacy groups, which filed a Freedom of Information Act lawsuit for the documents. The Center for Auto Safety and Public Citizen provided a copy to The New York Times, which is publishing the documents on its Web site. . . .

“We’re looking at a problem that could be as bad as drunk driving, and the government has covered it up,” said Clarence Ditlow, director of the Center for Auto Safety. . . .

The highway safety researchers estimated that cellphone use by drivers caused around 955 fatalities and 240,000 accidents over all in 2002.

The researchers also shelved a draft letter they had prepared for Transportation Secretary Norman Y. Mineta to send, warning states that hands-free laws might not solve the problem.

That letter said that hands-free headsets did not eliminate the serious accident risk. The reason: a cellphone conversation itself, not just holding the phone, takes drivers’ focus off the road, studies showed.

The research mirrors other studies about the dangers of multitasking behind the wheel. Research shows that motorists talking on a phone are four times as likely to crash as other drivers, and are as likely to cause an accident as someone with a .08 blood alcohol content.

The three-person research team based the fatality and accident estimates on studies that quantified the risks of distracted driving, and an assumption that 6 percent of drivers were talking on the phone at a given time. That figure is roughly half what the Transportation Department assumes to be the case now.

More precise data does not exist because most police forces have not collected long-term data connecting cellphones to accidents. That is why the researchers called for the broader study with 10,000 or more drivers.

“We nevertheless have concluded that the use of cellphones while driving has contributed to an increasing number of crashes, injuries and fatalities,” according to a “talking points” memo the researchers compiled in July 2003.

It added: “We therefore recommend that the drivers not use wireless communication devices, including text messaging systems, when driving, except in an emergency.”

It comes as no news to any observant person that using a cell phone of any kind can be a dangerous distraction to a driver. Richtel cites some of the previous work on the subject, work that I have cited in earlier posts about the cell-phone scourge. (See, especially, “Cell Phones and Driving, Once More” and its addendum.)

Richtel’s piece underscores the dangers of driving while using a cell phone. It also — perhaps unwittingly — underscores the misfeasance and malfeasance that are typical of government. I say unwittingly because TNYT has a (selective) bias toward government: nanny-ism = good; defense and justice = bad. In this case, the Times finds government in the wrong because it hasn’t been nanny-ish enough.

The Times to the contrary, government has but one legitimate role: to protect citizens from harm. I have no objection to laws banning cell-phone use by drivers, even though — at first blush — such laws might seem anti-libertarian. So-called libertarians who defend driving-while-cell-phoning are merely indulging in the kind of posturing that I have come to expect from the cosseted solipsists who, unfortunately, have come to dominate — and represent — what now passes for libertarianism.

Such “libertarians” to the contrary, liberty comes with obligations. One of those obligations is the responsibility to act in ways that do not harm others. (Another obligation is to defend liberty, or to pay taxes so that others can defend it on your behalf.) The “right” to drive does not include the “right” to drive while drunk or distracted. In sum, a ban on cell-phone use by drivers is entirely libertarian. As I have said,

for the vast majority of drivers there is no alternative to the use of public streets and highways. Relatively few persons can afford private jets and helicopters for commuting and shopping. And as far as I know there are no private, drunk-drivers-and-cell-phones-banned highways. Yes, there might be a market for those drunk-drivers-and-cell-phones-banned highways, but that’s not the reality of here-and-now.

So, I can avoid the (remote) risk of death by second-hand smoke by avoiding places where people smoke. But I cannot avoid the (less-than-remote) risk of death at the hands of a drunk or cell-phone yakker. Therefore, I say, arrest the drunks, cell-phone users, nail-polishers, newspaper-readers, and others of their ilk on sight; slap them with heavy fines; add jail terms for repeat offenders; and penalize them even more harshly if they take life, cause injury, or inflict property damage.

Randomness Is Over-Rated

In the preceding post (“Fooled by Non-Randomness“), I had much to say about Nassim Nicholas Taleb’s Fooled by Randomness. The short of it is this: Taleb over-rates the role of randomness in financial markets. In fact, his understanding of randomness seems murky.

My aim here is to offer a clearer picture of randomness (or the lack of it), especially as it relates to human behavior. Randomness, as explained in the preceding post, has almost nothing to do with human behavior, which is dominated by intention. Taleb’s misapprehension of randomness leads him to overstate the importance of a thing called survivor(ship) bias, to which I will turn after dealing with randomness.

WHERE IS RANDOMNESS FOUND?

Randomness — true randomness — is to be found mainly in the operation of fair dice, fair roulette wheels, cryptographic pinwheels, and other devices designed expressly for the generation of random values. But what about randomness in human affairs?

What we often call random events in human affairs really are non-random events whose causes we do not and, in some cases, cannot know. Such events are unpredictable, but they are not random. Rolls (throws) of fair dice are a case in point, even though they are commonly considered random events. Dice-rolls are “random” only because it is impossible to perceive the precise conditions of each roll in “real time” — even though knowledge of those conditions would enable a sharp-eyed observer, armed with analyses of myriad recorded throws, to forecast the outcome of each throw with some accuracy.

An observer who lacks such information, and who considers the throws of fair dice to be random events, will see that the total number of pips showing on both dice converges on the following frequency distribution:

Total rolled   Frequency
2 0.028
3 0.056
4 0.083
5 0.111
6 0.139
7 0.167
8 0.139
9 0.111
10 0.083
11 0.056
12 0.028

This frequency distribution is really a shorthand way of writing 28 times out of 1,000; 56 times out of 1,000; etc.
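
The table follows directly from enumerating the 36 equally likely outcomes of a pair of fair dice. A minimal sketch:

```python
from collections import Counter
from itertools import product

# Tally the totals of all 36 equally likely outcomes of two fair dice.
totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in sorted(totals):
    print(f"{total:>2}  {totals[total] / 36:.3f}")
# Reproduces the table above: 2 -> 0.028, ..., 7 -> 0.167, ..., 12 -> 0.028.
```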

Stable frequency distributions, such as the one given above, have useful purposes. In the case of craps, for example, a bettor can minimize his losses to the house (over a long period of time) if he takes the frequency distribution into account in his betting. Even more usefully, perhaps, an observed divergence from the expected frequency distribution (over many rolls of the dice) would indicate bias caused by (a) an unusual and possibly fraudulent condition (e.g., loaded dice) or (b) a player’s special skill in manipulating dice to skew the frequency distribution in a certain direction.

Randomness, then, is found in (a) the results of non-intentional actions, where (b) we lack sufficient knowledge to understand the link between actions and results.

THE REAL WORLD OF HUMAN AFFAIRS: IT IS WHAT IT IS

You will have noticed the beautiful symmetry of the frequency distribution for dice-rolling. Two-thirds of a large number of dice-rolls will have values of 5 through 9. Values 3 and 4 together will comprise about 14 percent of the rolls, as will values 10 and 11 together. Values 2 and 12 each will comprise less than 3 percent of the rolls.

In other words, the frequency distribution for dice-rolls closely resembles a normal distribution (bell curve). The virtue of this regularity is that it makes predictable the outcome of a large number of dice-rolls; and it makes obvious (over many dice-rolls) a rigged game involving dice. A statistically unexpected distribution of dice-rolls would be considered non-random or, more plainly, rigged — that is, intended by the rigging party.
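
Here is a small simulation of that detection idea: roll a fair pair and a (hypothetically) loaded pair many times, and compare the observed frequencies with the expected ones.

```python
import random

random.seed(1)
FACES = [1, 2, 3, 4, 5, 6]

def roll_total(weights):
    """Total of two dice whose faces come up with the given weights."""
    return sum(random.choices(FACES, weights=weights, k=2))

fair = [1, 1, 1, 1, 1, 1]
loaded = [1, 1, 1, 1, 1, 3]  # hypothetical loading: a six comes up 3x as often

for name, weights in (("fair", fair), ("loaded", loaded)):
    rolls = [roll_total(weights) for _ in range(100_000)]
    print(f"{name:>6}: P(7) = {rolls.count(7) / len(rolls):.3f}, "
          f"P(12) = {rolls.count(12) / len(rolls):.3f}")
# Fair dice converge on P(7) = 0.167 and P(12) = 0.028; the loaded pair's
# P(12) jumps roughly fivefold -- the divergence that exposes a rigged game.
```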

To state the underlying point explicitly: It is unreasonable to reduce intentional human behavior to probabilistic formulas. Humans don’t behave like dice, roulette balls, or similar “random” devices. But that is what Taleb (and others) do when they ascribe unusual success in financial markets to “luck.” For example, here is what Taleb says on page 136:

I do not deny that if someone performed better than the crowd in the past, there is a presumption of his ability to do better in the future. But the presumption might be weak, very weak, to the point of being useless in decision making. Why? Because it all depends on two factors: The randomness-content of his profession and the number of [persons in the profession].

What Taleb means is this:

  • Success in a profession where randomness dominates outcomes is likely to have the same kind of distribution as that of an event that is considered random, like rolling dice.
  • That being the case, a certain percentage of the members of the profession will, by chance, seem to have great success.
  • If a profession has relatively few members, then a successful person in that profession is more of a standout than a successful person in a profession with, say, thousands of members.

Let me count the assumptions embedded in Taleb’s argument:

  1. Randomness actually dominates some professions. (In particular, he is thinking of the profession of trading financial instruments: stocks, bonds, derivatives, etc.)
  2. Success in a randomness-dominated profession therefore has almost nothing to do with the relevant skills of a member of that profession, nor with the member’s perspicacity in applying those skills.
  3. It follows that a very successful member of a randomness-dominated profession is probably very successful because of luck.
  4. The probability of stumbling across a very successful member of a randomness-dominated profession depends on the total number of members of the profession, given that the probability of success in the profession follows a stable frequency distribution (as with dice-rolls). (A toy simulation of these assumptions appears below.)
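
To make Taleb’s model concrete, here is a toy simulation of assumptions 1 through 4, with hypothetical parameters. I present it to show what his model implies, not to endorse it:

```python
import random

random.seed(42)

# Taleb's model taken at face value: each year's performance is a coin flip.
n_traders = 10_000  # hypothetical size of the profession
n_years = 10

lucky_stars = sum(
    all(random.random() < 0.5 for _ in range(n_years))  # 10 winning years
    for _ in range(n_traders)
)
print(f"Traders with 10 straight winning years, by luck alone: {lucky_stars}")
# Expectation: 10,000 * (1/2)**10, i.e., about 10 "stars" with no skill at all.
# Per assumption 4, halving n_traders roughly halves the number of "stars."
```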

One of the ways in which Taleb illustrates his thesis is to point to the mutual-fund industry, where far fewer than half of the industry’s actively managed funds manage to match the performance of benchmark indices (e.g., the S&P 500) over periods of 5 years and longer. But broad, long-term movements in financial markets are not random — as I show in the preceding post.

Nor is trading in financial instruments random; traders do not roll dice or flip coins when they make trades. (Well, the vast majority don’t.) That a majority (or even a super-majority) of actively managed funds does less well than an index fund has nothing to do with randomness and everything to do with the distribution of stock-picking skills. The research required to make informed decisions about financial instruments is arduous and expensive — and not every fool can do it well. Moreover, decision-making — even when based on thorough research — is clouded by uncertainty about the future and the variety of events that can affect the prices of financial instruments.

It is therefore unsurprising that the distribution of skills in the financial industry is skewed; there are relatively few professionals who have what it takes to succeed over the long run, and relatively many professionals (or would-be professionals) who compile mediocre-to-awful records.

I say it again: The most successful professionals are not successful because of luck; they are successful because of skill. There is no statistically predetermined percentage of skillful traders; the actual percentage depends on the skills of entrants and their willingness (if skillful) to make a career of it. A relevant analogy is found in the distribution of incomes:

In 2007, all households in the United States earned roughly $7.896 trillion. One half, 49.98%, of all income in the US was earned by households with an income over $100,000, the top twenty percent. Over one quarter, 28.5%, of all income was earned by the top 8%, those households earning more than $150,000 a year. The top 3.65%, with incomes over $200,000, earned 17.5%. Households with annual incomes from $50,000 to $75,000, 18.2% of households, earned 16.5% of all income. Households with annual incomes from $50,000 to $95,000, 28.1% of households, earned 28.8% of all income. The bottom 10.3% earned 1.06% of all income.

The outcomes of human endeavor are skewed because the distribution of human talents is skewed. It would be surprising to find as many as one-half of traders beating the long-run average performance of the various markets in which they operate.

BACK TO BASEBALL

To drive the point home, I return to the example of baseball, which I treated at length in the preceding post. Baseball, like most games, has many “random” elements, which is to say that baseball players cannot always predict accurately such things as the flight of a thrown or batted ball, the course a ball will take when it bounces off grass or an outfield fence, the distance and direction of a throw from the outfield, and so on. But despite the many unpredictable elements of the game, skill dominates outcomes over the course of seasons and careers. Moreover, skill is not distributed in a neat way, say, along a bell curve. A good case in point is the distribution of home runs:

  • There have been 16,884 players and 253,498 home runs in major-league history (1876–present), an average of 15 home runs per player. About 2,700 players have more than 15 home runs; about 14,000 players have fewer than 15 home runs; and about 100 players have exactly 15 home runs. Of the 2,700 players with more than 15 home runs, there are (as of yesterday) 1,006 with 74 or more home runs, and 25 with 500 or more home runs. (I obtained data about the frequency of career home runs with this search tool at Baseball-Reference.com.)
  • The career home-run statistic, in other words, is extremely skewed. About 89 percent of all the men who have played in the major leagues are bunched between 0 and 15 home runs; from there the distribution thins into a long tail that ends with the 0.15 percent of players who have 500 or more home runs. (A toy illustration of such skewness appears after this list.)
  • There may be a standard statistical distribution that seems to describe the incidence of career home runs. But to say that the career home-run statistic matches any kind of distribution is merely to posit an after-the-fact “explanation” of a phenomenon that has one essential explanation: Some hitters are better at hitting home runs than others; those better home-run hitters are more likely to stay in the major leagues long enough to compile a lot of home runs. (Even 74 home runs is a lot, relative to the mean of 15.)
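
The shape just described, with most players bunched near zero and a thin tail stretching out to 700-plus, can be mimicked by a toy skewed distribution. The parameters below are invented for illustration; this is not the actual home-run data:

```python
import random
import statistics

random.seed(7)

# A crude stand-in for career home-run totals: exponentially distributed
# "careers" with a mean of 15, matching the real average but not the real
# shape (the actual distribution is even more sharply skewed).
careers = [random.expovariate(1 / 15) for _ in range(16_884)]

print(f"mean   = {statistics.mean(careers):5.1f}")    # ~15, by construction
print(f"median = {statistics.median(careers):5.1f}")  # ~10.4, well below the mean
print(f"share below the mean = {sum(c < 15 for c in careers) / len(careers):.0%}")
# In a skewed distribution most members fall below the arithmetic mean --
# which is how roughly 14,000 of 16,884 players can be "below average."
```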

And so it is with traders and other active “players” in financial markets. They differ in skill, and their skill differences cannot be arrayed neatly along a bell curve or any other mathematically neat frequency distribution. To adapt a current coinage, they are what they are — nothing more, nothing less.

TALEB’S A PRIORI WORLDVIEW, WITH A BIAS

Taleb, of course, views the situation the other way around. He sees an a priori distribution of “winners” and “losers,” where “winners” are determined mainly by luck, not skill. Moreover, we — the civilians on the sidelines — labor under a false impression of the relative number of “winners” because

it is natural for those who failed to vanish completely. Accordingly, one sees the survivors, and only the survivors, which imparts such a mistaken perception of the odds [favoring success]. (p. 137)

Here, Taleb is playing a variation on a favorite theme: survivor(ship) bias. What is it? Here are three quotations that may help you understand it:

Survivor bias is a prominent form of ex-post selection bias. It exists in data sets that exclude a disproportionate share of non-surviving firms…. (“Accounting Information Free of Selection Bias: A New UK Database 1953-1999”)

Survivorship bias causes performance results to be overstated because accounts that have been terminated, which may have underperformed, are no longer in the database. This is the most documented and best understood source of peer group bias. For example, an unsuccessful management product that was terminated in the past is excluded from current peer groups. This screening out of losers results in an overstatement of past performance. A good illustration of how survivor bias can skew things is the “marathon analogy,” which asks: If only 100 runners out of a 1,000-contestant marathon actually finish, is the 100th the last? Or in the top ten percent? (“Warning! Peer Groups Are Hazardous to Our Wealth“)

It is true that a number of famous successful people have spent 10,000 hours practising. However, it is also true that many people we have never heard of because they weren’t successful also practised for 10,000 hours. And that there are successful people who were very good without practising for 10,000 hours before their breakthrough (the Rolling Stones, say). And Gordon Brown isn’t very good at being Prime Minister despite preparing for 10,000 hours. (“Better Services without Reform? It’s Just a Con“)
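
The mechanics behind these definitions are easy to simulate. A minimal sketch with hypothetical parameters: generate fund returns that are pure noise, drop the “terminated” losers, and the survivors’ average overstates the group’s true performance.

```python
import random
import statistics

random.seed(3)

# 1,000 hypothetical funds whose true mean annual return is 0% (pure noise).
returns = [random.gauss(0.0, 0.10) for _ in range(1_000)]

# Survivorship: funds returning less than -5% are "terminated" and
# disappear from the database.
survivors = [r for r in returns if r >= -0.05]

print(f"true mean, all funds:   {statistics.mean(returns):+.2%}")
print(f"mean of survivors only: {statistics.mean(survivors):+.2%}")
# The survivors' average comes out around +5% even though no fund had any
# skill at all -- the losers simply vanished from the record.
```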

First of all, there are no “odds” favoring success — even in financial markets. Financial “players” do what they can do, and most of them — like baseball players — simply don’t have what it takes for great success. Outcomes are skewed, not because of (fictitious) odds but because talent is distributed unevenly.

CONCLUSION

The real lesson for us spectators is not to assume that the “winners” are merely lucky. No, the real lesson is to seek out those “winners” who have proven their skills over a long period of time, through boom and bust and boom and bust.

Those who do well, over the long run, do not do so merely because they have survived. They have survived because they do well.

Fooled by Non-Randomness

Nassim Nicholas Taleb, in his best-selling Fooled by Randomness, charges human beings with the commission of many perceptual and logical errors. One reviewer captures the point of the book, which is to

explore luck “disguised and perceived as non-luck (that is, skills).” So many of the successful among us, he argues, are successful due to luck rather than reason. This is true in areas beyond business (e.g. Science, Politics), though it is more obvious in business.

Our inability to recognize the randomness and luck that had to do with making successful people successful is a direct result of our search for pattern. Taleb points to the importance of symbolism in our lives as an example of our unwillingness to accept randomness. We cling to biographies of great people in order to learn how to achieve greatness, and we relentlessly interpret the past in hopes of shaping our future.

Only recently has science produced probability theory, which helps embrace randomness. Though the use of probability theory in practice is almost nonexistent.

Taleb says the confusion between luck and skill is our inability to think critically. We enjoy presenting conjectures as truth and are not equipped to handle probabilities, so we attribute our success to skill rather than luck.

Taleb writes in a style found all too often on best-seller lists: pseudo-academic theorizing “supported” by selective (often anecdotal) evidence. I sometimes enjoy such writing, but only for its entertainment value. Fooled by Randomness leaves me unfooled, for several reasons.

THE FUNDAMENTAL FLAW

The first reason that I am unfooled by Fooled… might be called a meta-reason. Standing back from the book, I am able to perceive its essential defect: According to Taleb, human affairs — especially economic affairs, and particularly the operations of financial markets — are dominated by randomness. But if that is so, only a delusional person can truly claim to understand the conduct of human affairs. Taleb claims to understand the conduct of human affairs. Taleb is therefore either delusional or omniscient.

Given Taleb’s humanity, it is more likely that he is delusional — or simply fooled, but not by randomness. He is fooled because he proceeds from the assumption of randomness instead of exploring the ways and means by which humans are actually capable of shaping events. Taleb gives no more than scant attention to those traits which, in combination, set humans apart from other animals: self-awareness, empathy, forward thinking, imagination, abstraction, intentionality, adaptability, complex communication skills, and sheer brain power. Given those traits (in combination), the world of human affairs cannot be random. Yes, human plans can fail of realization for many reasons, including those attributable to human flaws (conflict, imperfect knowledge, the triumph of hope over experience, etc.). But the failure of human plans is due to those flaws — not to the randomness of human behavior.

What Taleb sees as randomness is something else entirely. The trajectory of human affairs often is unpredictable, but it is not random. For it is possible to find patterns in the conduct of human affairs, as Taleb admits (implicitly) when he discusses such phenomena as survivorship bias, skewness, anchoring, and regression to the mean.

A DISCOURSE ON RANDOMNESS

What Is It?

Taleb, having bloviated for dozens of pages about the failure of humans to recognize randomness, finally gets around to (sort of) defining randomness on pages 168 and 169 (of the 2005 paperback edition):

…Professor Karl Pearson … devised the first test of nonrandomness (it was in reality a test of deviation from normality, which for all intents and purposes, was the same thing). He examined millions of runs of [a roulette wheel] during the month of July 1902. He discovered that, with high degree of statistical significance … the runs were not purely random…. Philosophers of statistics call this the reference case problem to explain that there is no true attainable randomness in practice, only in theory….

…Even the fathers of statistical science forgot that a random series of runs need not exhibit a pattern to look random…. A single random run is bound to exhibit some pattern — if one looks hard enough…. [R]eal randomness does not look random.

The quoted passage illustrates nicely the superficiality of Fooled by Randomness, and (I must assume) the muddledness of Taleb’s thinking:

  • He accepts a definition of randomness which describes the observation of outcomes of mechanical processes (e.g., the turning of a roulette wheel, the throwing of dice) that are designed to yield random outcomes. That is, randomness of the kind cited by Taleb is in fact the result of human intentions.
  • If “there is no true attainable randomness,” why has Taleb written a 200-plus page book about randomness?
  • What can he mean when he says “a random series of runs need not exhibit a pattern to look random”? The only sensible interpretation of that bit of nonsense would be this: It is possible for a random series of runs to contain what looks like a pattern. But remember that the random series of runs to which Taleb refers is random only because humans intended its randomness.
  • It is true enough that “A single random run is bound to exhibit some pattern — if one looks hard enough.” Sure it will. But it remains a single random run of a process that is intended to produce randomness, which is utterly unlike such events as transactions in financial markets.

One of the “fathers of statistical science” mentioned by Taleb (deep in the book’s appendix) is Richard von Mises, who in Probability, Statistics and Truth defines randomness as follows:

First, the relative frequencies of the attributes [e.g. heads and tails] must possess limiting values [i.e., converge on 0.5, in the case of coin tosses]. Second, these limiting values must remain the same in all partial sequences which may be selected from the original one in an arbitrary way. Of course, only such partial sequences can be taken into consideration as can be extended indefinitely, in the same way as the original sequence itself. Examples of this kind are, for instance, the partial sequences formed by all odd members of the original sequence, or by all members for which the place number in the sequence is the square of an integer, or a prime number, or a number selected according to some other rule, whatever it may be. (pp. 24-25 of the 1981 Dover edition, which is based on the author’s 1951 edition)

Gregory J. Chaitin, writing in Scientific American (“Randomness and Mathematical Proof,” vol. 232, no. 5 (May 1975), pp. 47-52), offers this:

We are now able to describe more precisely the differences between the[se] two series of digits … :

01010101010101010101
01101100110111100010

The first could be specified to a computer by a very simple algorithm, such as “Print 01 ten times.” If the series were extended according to the same rule, the algorithm would have to be only slightly larger; it might be made to read, for example, “Print 01 a million times.” The number of bits in such an algorithm is a small fraction of the number of bits in the series it specifies, and as the series grows larger the size of the program increases at a much slower rate.

For the second series of digits there is no corresponding shortcut. The most economical way to express the series is to write it out in full, and the shortest algorithm for introducing the series into a computer would be “Print 01101100110111100010.” If the series were much larger (but still apparently patternless), the algorithm would have to be expanded to the corresponding size. This “incompressibility” is a property of all random numbers; indeed, we can proceed directly to define randomness in terms of incompressibility: A series of numbers is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the series itself [emphasis added].

This is another way of saying that if you toss a balanced coin 1,000 times the only way to describe the outcome of the tosses is to list the 1,000 outcomes of those tosses. But, again, the thing that is random is the outcome of a process designed for randomness.
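
Chaitin’s definition can be probed, crudely, with an off-the-shelf compressor. Compressed length is only a rough stand-in for the “smallest algorithm,” but it makes the point:

```python
import os
import zlib

# Compressed length as a crude proxy for Chaitin's "smallest algorithm
# capable of specifying the series."
patterned = bytes(1000)         # a thousand zero bytes: "print 0 a thousand times"
patternless = os.urandom(1000)  # a thousand random bytes

for name, data in (("patterned", patterned), ("patternless", patternless)):
    print(f"{name:>11}: 1000 bytes -> {len(zlib.compress(data))} bytes compressed")
# The patterned series collapses to a handful of bytes; the random series
# does not shrink at all (zlib even adds a little overhead). Incompressible,
# in Chaitin's sense, is what "random" means.
```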

Taking Mises’s and Chaitin’s definitions together, we can define random events as events which are repeatable, convergent on a limiting value, and truly patternless over a large number of repetitions. Evolving economic events (e.g., stock-market trades, economic growth) are not alike (in the way that dice are, for example), they do not converge on limiting values, and they are not patternless, as I will show.

In short, Taleb fails to demonstrate that human affairs in general or financial markets in particular exhibit randomness, properly understood.

Randomness and the Physical World

Nor are we trapped in a random universe. Returning to Mises, I quote from the final chapter of Probability, Statistics and Truth:

We can only sketch here the consequences of these new concepts [e.g., quantum mechanics and Heisenberg’s principle of uncertainty] for our general scientific outlook. First of all, we have no cause to doubt the usefulness of the deterministic theories in large domains of physics. These theories, built on a solid body of experience, lead to results that are well confirmed by observation. By allowing us to predict future physical events, these physical theories have fundamentally changed the conditions of human life. The main part of modern technology, using this word in its broadest sense, is still based on the predictions of classical mechanics and physics. (p. 217)

Even now, almost 60 years on, the field of nanotechnology is beginning to harness quantum mechanical effects in the service of a long list of useful purposes.

The physical world, in other words, is not dominated by randomness, even though its underlying structures must be described probabilistically rather than deterministically.

Summation and Preview

A bit of unpredictability (or “luck”) here and there does not make for a random universe, random lives, or random markets. If a bit of unpredictability here and there dominated our actions, we wouldn’t be here to talk about randomness — and Taleb wouldn’t have been able to marshal his thoughts into a published, marketed, and well-sold book.

Human beings are not “designed” for randomness. Human endeavors can yield unpredictable results, but those results do not arise from random processes; they derive from skill or the lack thereof, knowledge or the lack thereof (including the kinds of self-delusions about which Taleb writes), and conflicting objectives.

An Illustration from Life

To illustrate my position on randomness, I offer the following digression about the game of baseball.

At the professional level, the game’s poorest players seldom rise above the low minor leagues. But even those poorest players are paragons of excellence when compared with the vast majority of American males of about the same age. Did those poorest players get where they were because of luck? Perhaps some of them were in the right place at the right time, and so were signed to minor league contracts. But their luck runs out when they are called upon to perform in more than a few games. What about those players who weren’t in the right place at the right time, and so were overlooked in spite of skills that would have advanced them beyond the rookie leagues? I have no doubt that there have been many such players. But, in the main, professional baseball claims the lion’s share of skilled baseball players, who are there because they intend to be there, and because baseball clubs intend for them to be there.

Now, most minor leaguers fail to advance to the major leagues, even for the proverbial “cup of coffee” (appearing in a few games at the end of the major-league season, when teams are allowed to expand their rosters following the end of the minor-league season). Does “luck” prevent some minor leaguers from advancement to “the show” (the major leagues)? Of course. Does “luck” result in the advancement of some minor leaguers to “the show”? Of course. But “luck,” in this context, means injury, illness, a slump, a “hot” streak, and the other kinds of unpredictable events that ballplayers are subject to. Are the events random? Yes, in the sense that they are unpredictable, but I daresay that most baseball players do not succumb to bad luck or advance very far or for very long because of good luck. In fact, ballplayers who advance to the major leagues, and then stay there for more than a few seasons, do so because they possess (and apply) greater skill than their minor-league counterparts. And make no mistake, each player’s actions are so closely watched and so extensively quantified that it isn’t hard to tell when a player is ready to be replaced.

It is true that a player may experience “luck” for a while during a season, and sometimes for a whole season. But a player will not be consistently “lucky” for several seasons. The length of his career (barring illness, injury, or voluntary retirement), and his accomplishments during that career, will depend mainly on his inherent skills and his assiduousness in applying those skills.

No one believes that Ty Cobb, Babe Ruth, Ted Williams, Christy Mathewson, Warren Spahn, and the dozens of other baseball players who rank among the truly great were lucky. No one believes that the vast majority of the tens of thousands of minor leaguers who never enjoyed more than the proverbial cup of coffee were unlucky. No one believes that the vast majority of the millions of American males who never made it to the minor leagues were unlucky. Most of them never sought a career in baseball; those who did simply lacked the requisite skills.

In baseball, as in life, “luck” is mainly an excuse and rarely an explanation. We prefer to apply “luck” to outcomes when we don’t like the true explanations for them. In the realm of economic activity and financial markets, one such explanation (to which I will come) is the exogenous imposition of governmental power.

ARE ECONOMIC AND FINANCIAL OUTCOMES TRULY RANDOM?

They Cannot Be, Given Competition

Returning to Taleb’s main theme — the randomness of economic and financial events — I quote this key passage (my comments are in brackets and boldface):

…Most of [Bill] Gates'[s] rivals have an obsessive jealousy of his success. They are maddened by the fact that he managed to win so big while many of them are struggling to make their companies survive. [These are unsupported claims that I include only because they set the stage for what follows.]

Such ideas go against classical economic models, in which results either come from a precise reason (there is no account for uncertainty) or the good guy wins (the good guy is the one who is most skilled and has some technical superiority). [The “good guy” theory would come as a great surprise to “classical” economists, who quite well understood imperfect competition based on product differentiation and monopoly based on (among other things) early entry into a market.] Economists discovered path-dependent effects late in their game [There is no “late” in a “game” that had no distinct beginning and has no pre-ordained end.], then tried to publish wholesale on the topic that [would] otherwise be bland and obvious. For instance, Brian Arthur, an economist concerned with nonlinearities at the Santa Fe Institute [What kinds of nonlinearities are found at the Santa Fe Institute?], wrote that chance events coupled with positive feedback other than technological superiority will determine economic superiority — not some abstrusely defined edge in a given area of expertise. [It would come as no surprise to economists — even “classical” ones — that many factors aside from technical superiority determine market outcomes.] While early economic models excluded randomness, Arthur explained how “unexpected orders, chance meetings with lawyers, managerial whims … would help determine which ones achieved early sales and, over time, which firms dominated.”

Regarding the final sentence of the quoted passage, I refer back to the example of baseball. A person or a firm may gain an opportunity to succeed because of the kinds of “luck” cited by Brian Arthur, but “good luck” cannot sustain an incompetent performer for very long. And when “bad luck” happens to competent individuals and firms they are often (perhaps usually) able to overcome it.

While overplaying the role of luck in human affairs, Taleb underplays the role of competition when he denigrates “classical economic models,” in which competition plays a central role. “Luck” cannot forever outrun competition, unless the game is rigged by governmental intervention, namely, the writing of regulations that tend to favor certain competitors (usually market incumbents) over others (usually would-be entrants). The propensity to regulate at the behest of incumbents (who plead “public interest,” of course) is a proof of the power of competition to shape economic outcomes. Competition is loathed and feared, and yet it leads us in the direction to which classical economic theory points: greater output and lower prices.

Competition is what ensures that (for the most part) the best ballplayers advance to the major leagues. It’s what keeps “monopolists” like Microsoft hopping (unless they have a government-guaranteed monopoly), because even a monopolist (or oligopolist) can face competition, and eventually lose to it — witness the former “Big Three” auto makers, many formerly thriving chain stores (from Kresge’s to Montgomery Ward’s), and numerous other brand names of days gone by. If Microsoft survives and thrives, it will be because it offers consumers more value for their money than its competitors do, whether they market products similar to Microsoft’s or entirely new products that would supplant them.

Monopolists and oligopolists cannot survive without constant innovation and attention to their customers’ needs. Why? Because they must compete with the offerors of all the other goods and services upon which consumers might spend their money. There is nothing — not even water — which cannot be produced or delivered in competitive ways. (For more, see this.)

The names of the particular firms that survive the competitive struggle may be unpredictable, but what is predictable is the tendency of competitive forces toward economic efficiency. In other words, the specific outcomes of economic competition may be unpredictable (which is not a bad thing), but the general result — efficiency — is neither unpredictable nor a manifestation of randomness or “luck.”

Taleb, had he broached the subject of competition, would (with his hero George Soros) denigrate it, on the ground that there is no such thing as perfect competition. But the failure of competitive forces to mimic the model of perfect competition does not negate the power of competition, as I have summarized it here. Indeed, that failure is not a failure at all, for perfect competition is unattainable in practice, and to hold it up as a measure of the effectiveness of market forces is to indulge in the Nirvana fallacy.

In any event, Taleb’s myopia with respect to competition is so complete that he fails to mention it, let alone address its beneficial effects (even when it is less than perfect). And yet Taleb dares to dismiss Milton Friedman as a utopist (p. 272) — the same Milton Friedman who was among the twentieth century’s foremost advocates of the benefits of competition.

Are Financial Markets Random?

Given what I have said thus far, I find it almost incredible that anyone believes in the randomness of financial markets. It is unclear where Taleb stands on the random-walk hypothesis, but it is clear that he believes financial markets to be driven by randomness. Yet, contradictorily, he seems to attack the efficient-markets hypothesis (see pp. 61-62), which is the foundation of the random-walk hypothesis.

What is the random-walk hypothesis? In brief, it is this: Financial markets are so efficient that they instantaneously reflect all information bearing on the prices of financial instruments that is then available to persons buying and selling those instruments. (The qualifier “then available to persons buying and selling those instruments” leaves the door open for [a] insider trading and [b] arbitrage, due to imperfect knowledge on the part of some buyers and/or sellers.) Because information can change rapidly and in unpredictable ways, the prices of financial instruments move randomly. But the random movement is of a very special kind:

If a stock goes up one day, no stock market participant can accurately predict that it will rise again the next. Just as a basketball player with the “hot hand” can miss the next shot, the stock that seems to be on the rise can fall at any time, making it completely random.

And, therefore, changes in stock prices cannot be predicted.

Note, however, the focus on changes. It is that focus which creates the illusion of randomness and unpredictability. It is like hoping to understand the movements of the planets around the sun by looking at the random movements of a particle in a cloud chamber.

When we step back from day-to-day price changes, we are able to see the underlying reality: prices (instead of changes) and price trends (which are the opposite of randomness). This (correct) perspective enables us to see that stock prices (on the whole) are not random, and to identify the factors that influence the broad movements of the stock market.
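
The difference between watching changes and watching levels can be illustrated with a toy series; the drift and noise parameters below are invented:

```python
import random
import statistics

random.seed(11)

# A hypothetical "price" series: a small steady drift buried in daily noise.
drift, noise = 0.05, 1.0
prices = [100.0]
for _ in range(2_500):  # roughly ten years of trading days
    prices.append(prices[-1] + drift + random.gauss(0.0, noise))

changes = [b - a for a, b in zip(prices, prices[1:])]

print(f"daily changes: mean {statistics.mean(changes):+.3f}, "
      f"st. dev. {statistics.stdev(changes):.3f}")  # signal swamped by noise
print(f"level: {prices[0]:.0f} at the start, {prices[-1]:.0f} at the end")
# Day-to-day changes look like static; the level shows an unmistakable
# trend. Studying only the changes hides the trend entirely.
```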

For one thing, if you look at stock prices correctly, you can see that they vary cyclically. Here is a telling graphic (from “Efficient-market hypothesis” at Wikipedia):

[Figure: price-earnings ratios as a predictor of twenty-year returns, based on a plot by Robert Shiller (Irrational Exuberance, Figure 10.1). The horizontal axis shows the real price-earnings ratio of the S&P Composite Stock Price Index (inflation-adjusted price divided by the prior ten-year mean of inflation-adjusted earnings); the vertical axis shows the geometric average real annual return from investing in the index, reinvesting dividends, and selling twenty years later. Shiller states that this plot “confirms that long-term investors—investors who commit their money to an investment for ten full years—did do well when prices were low relative to earnings at the beginning of the ten years. Long-term investors would be well advised, individually, to lower their exposure to the stock market when it is high, as it has been recently, and get into the market when it is low.” This correlation between price-earnings ratios and long-term returns is not explained by the efficient-market hypothesis.]

Why should stock prices tend to vary cyclically? Because stock prices generally are driven by economic growth (i.e., changes in GDP), and economic growth is strongly cyclical. (See this post.)

More fundamentally, the economic outcomes reflected in stock prices aren’t random, for they depend mainly on intentional behavior along well-rehearsed lines (i.e., the production and consumption of goods and services in ways that evolve over time). Variations in economic behavior, even when they are unpredictable, have explanations; for example:

  • Innovation and capital investment spur the growth of economic output.
  • Natural disasters slow the growth of economic output (at least temporarily) because they absorb resources that could have gone to investment (as well as consumption).
  • Governmental interventions (taxation and regulation), if not reversed, dampen growth permanently.

There is nothing in those three statements that hasn’t been understood since the days of Adam Smith. Regarding the third statement, the general slowing of America’s economic growth since the advent of the Progressive Era around 1900 is certainly not due to randomness; it is due to the ever-increasing burden of taxation and regulation imposed on the economy — an entirely predictable result, and certainly not a random one.

In fact, the long-term trend of the stock market (as measured by the S&P 500) is strongly correlated with GDP. And broad swings around that trend can be traced to governmental intervention in the economy. The following graph shows how the S&P 500, reconstructed to 1870, parallels constant-dollar GDP:

[Graph: real S&P 500 vs. real GDP, 1870 to the present]

The next graph shows the relationship more clearly.

[Graph: real S&P 500 vs. real GDP, second view]

The wild swings around the trend line began in the uncertain aftermath of World War I, which saw the imposition of production and price controls. The swings continued with the onset of the Great Depression (which can be traced to governmental action), the advent of the anti-business New Deal, and the imposition of production and price controls on a grand scale during World War II. The next downswing was occasioned by the culmination of the Great Society, the “oil shocks” of the early 1970s, and the raging inflation that was touched off by — you guessed it — government policy. The latest downswing is owed mainly to the financial crisis born of yet more government policy: loose money and easy loans to low-income borrowers.

And so it goes, wildly but predictably enough if you have the faintest sense of history. The moral of the story: Keep your eye on government and a hand on your wallet.

CONCLUSION

There is randomness in economic affairs, but economic affairs are not dominated by randomness. They are dominated by intentions, including especially the intentions of the politicians and bureaucrats who run governments. Yet Taleb has no space in his book for the influence of their deeds on economic activity and financial markets.

Taleb is right to disparage those traders (professional and amateur) who are lucky enough to catch upswings, but are unprepared for downswings. And he is right to scoff at their readiness to believe that the current upswing (uniquely) will not be followed by a downswing (“this time it’s different”).

But Taleb is wrong to suggest that traders are fooled by randomness. They are fooled to some extent by false hope, but more profoundly by their inability to perceive the economic damage wrought by government. They are not alone, of course; most of the rest of humanity shares their perceptual failings.

Taleb, in that respect, is only somewhat different from most of the rest of humanity. He is not fooled by false hope, but he is fooled by non-randomness — the non-randomness of government’s decisive influence on economic activity and financial markets. In overlooking that influence, he overlooks the single most powerful explanation for the behavior of markets in the past 90 years.

The “Big Five” and Economic Performance

The “Big Five” doesn’t comprise Honda, Toyota, Ford, GM, and Chrysler (soon to become the “Big Four”: Honda, Toyota, Ford, and GM-Chrysler-Obama Inc.). The “Big Five” refers to the Big Five personality traits: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.

I discussed the Big Five at length here, and touched on them here. Now comes Arnold Kling, with an economic analysis of the Big Five, which draws on Daniel Nettle’s Personality: What Makes You the Way You Are. Kling, in the course of his post, discusses Nettle’s interpretations of the Big Five.

Regarding Openness, Kling quotes Nettle thusly:

Some people are keen on reading and galleries and theatre and music, whilst others are not particularly interested in any of them. This tendency towards greater exploration of all complex recreational practices is uniquely predicted by Openness….

High Openness scorers are strongly drawn to artistic and investigative professions, and will often eschew traditional institutional structure and progression in order to pursue them.

Precisely. For example, a high Openness scorer (93rd percentile) progressed from low-paid analyst (with a BA in economics) to well-paid VP for finance and administration (with nothing more than the same BA in economics), stopping along the way to own and run a business and manage groups of PhDs. The underlying lesson: Education is far less important to material success than intellectual flexibility (high Openness), combined with drive (high Conscientiousness and Neuroticism), and focus (low Extraversion and Agreeableness).

Kling says this about Conscientiousness (self-discipline and will power):

I think that people with low Conscientiousness annoy me more than just about any other type of people.

Me, too (Conscientiousness score: 99th percentile). I find it hard to be around individuals who always put off until tomorrow what they could do in a minute, who never read or return the books and DVDs you lend them, who are always ready to excuse failings (their own and others’), and who then try to cast their lack of organization (and resulting lack of personal accomplishment) as a virtue: “Life is too short to sweat the small stuff.” Yeah, but you never sweat the big stuff, either; look at the state of your house and your bank account. The small stuff and big stuff come in a single package.

According to Kling, “Nettle thinks of Extraversion as something like lust for life, sensation-seeking, and ambition.” More from Nettle:

We should be careful in equating Extraversion with sociability… shyness is most often due to … high Neuroticism and anxiety….

…The introvert is, in a way, aloof from the rewards of the world, which gives him tremendous strength and independence from them.

Right on, says this introvert (Extraversion score: 4th percentile).

Kling says this about Agreeableness:

To be agreeable, you have to be able to “mentalize” (read the feelings of others, which autistic people have trouble doing) and empathize (that is, care about others’ feelings, given that you can read them. Sociopaths can read you, but they don’t mind making you feel bad.)

On average, women are more agreeable than men. That is why Peter Thiel may have been onto something when he said that our country changed when women got the right to vote. If people project their personalities onto politics, and if agreeability goes along with more socialist policies, then giving women the right to vote should make countries more socialist.

Thiel is on to something. Although socialism gained a foothold in the U.S. during TR’s reign (i.e., long before the passage of Amendment XIX to the Constitution), it’s important to note that women were prominent agitators and muck-rakers in the early 1900s. Among other things, women were the driving force behind Prohibition. That failed experiment can now be seen as an extremely socialistic policy; it attempted to dictate a “lifestyle” choice, just as today’s socialists try to dictate “lifestyle” choices about what we smoke, eat, drive, say, etc. — and with too-frequent success. If socialism isn’t a “motherly” attitude, I don’t know what is. (Full disclosure: my Agreeableness score is 4th percentile. Just leave me alone and I’ll live my life quite well, without any help from government, thank you.)

Finally, there’s Neuroticism, about which Nettle says:

There are motivational advantages of Neuroticism. There may be cognitive ones too. It has long been known that, on average, people are over-optimistic about the outcomes of their behaviour, especially once they have a plan… This is well documented in the business world, with its over-optimistic growth plans, and also in military leadership, where it is clear that generals are routinely over-sanguine about their likely progress and under-reflective about the complexities….

…Professional occupations are those that mainly involve thinking, and it is illuminating that Neuroticism tended to be advantageous in these fields and not in, say, sales.

Neuroticism (the obverse of what some test administrators call Emotional Stability) is explained this way by an organization that administers the “Big Five” test:

People low in emotional stability are emotionally reactive. They respond emotionally to events that would not affect most people, and their reactions tend to be more intense than normal. They are more likely to interpret ordinary situations as threatening, and minor frustrations as hopelessly difficult. Their negative emotional reactions tend to persist for unusually long periods of time, which means they are often in a bad mood. These problems in emotional regulation can diminish one’s ability to think clearly, make decisions, and cope effectively with stress.

Take a person who is low in Emotional Stability (my score: 12th percentile), low in Extraversion, but high in Conscientiousness and Openness. Such a person is willing and able to tune out the distractions of the outside world, and to channel his drive and intellectual acumen in productive, creative ways — until he finally says “enough,” and quits the world of work to enjoy the better things in life.

Freedom of Will and Political Action

INTRODUCTION

Without freedom of will, human beings could not make choices, let alone “good” or “bad” ones. Without freedom of will there would be no point in arguing about political philosophies, for — as the song says — whatever will be, will be.

This post examines freedom of will from the vantage point of indeterminacy. Instead of offering a direct proof of freedom of will, I suggest that we might as well believe, and act, as though we possess it, given our inability to plumb the depths of the physical universe or the human psyche.

Why focus on indeterminacy? Think of my argument as a variant of Pascal’s wager, which can be summarized as follows:

Even though the existence of God cannot be determined through reason, a person should wager as though God exists, because one who so lives has everything to gain and nothing to lose.

Whatever its faults — and it has many — Pascal’s wager suggests a way out of an indeterminate situation.

The wager I make in this post is as follows:

  • We cannot discern the deepest physical and psychological truths.
  • Therefore, we cannot say with certainty whether we have freedom of will.
  • We might as well act as if we have freedom of will; if we do not have it, our (illusory) choices cannot make us worse off, but if we do have it, our choices may make us better off.

The critical word in the conclusion is “may.” Our choices may make us better off, but only if they are wise choices. It is “may” that gives weight to our moral and political choices. The wrong ones can make us worse off; the right ones, better off.
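
The logic of the wager can be laid out as a simple decision table. Here is a hedged sketch (in Python); the entries are stipulated for illustration, and merely restate the argument:

```python
# A toy decision table for the free-will wager. If the will is not free,
# "choices" change nothing; if it is free, choices matter -- for better or
# worse, depending on their wisdom.
OUTCOMES = {
    ("act as if free",   "will is free"):     "choices may help or hurt, per their wisdom",
    ("act as if free",   "will is not free"): "no loss: the 'choices' were illusory",
    ("act as if unfree", "will is free"):     "loss: good choices forgone",
    ("act as if unfree", "will is not free"): "no loss, but nothing gained",
}

for (stance, world), result in OUTCOMES.items():
    print(f"{stance:16} | {world:16} | {result}")
```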

PHYSICAL INDETERMINACY

Our Inherent Limitations as Humans

I begin with the anthropic principle, which (as summarized and discussed here),

refers to the idea that the attributes of the universe must be consistent with the requirements of our own existence.

In fact, there is no scientific reason to believe that the universe was created in order that human beings might exist. From a scientific standpoint, we are creatures of the universe, not its raison d’être.

The view that we, the human inhabitants of Earth, have a privileged position is a bias that distorts our observations about the universe. Philosopher-physicist-mathematician Nick Bostrom explains the bias:

[T]here are selection effects that arise not from the limitations of some measuring device but from the fact that all observations require the existence of an appropriately positioned observer. Our data is [sic] filtered not only by limitations in our instrumentation but also by the precondition that somebody be there to “have” the data yielded by the instruments (and to build the instruments in the first place). The biases that occur due to that precondition … we shall call … observation selection effects….

Even trivial selection effects can sometimes easily be overlooked:

It was a good answer that was made by one who when they showed him hanging in a temple a picture of those who had paid their vows as having escaped shipwreck, and would have him say whether he did not now acknowledge the power of the gods,—‘Aye,’ asked he again, ‘but where are they painted that were drowned after their vows?’ And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled, but where they fail, though this happens much oftener, neglect and pass them by. (Bacon 1620)

When even a plain and simple selection effect, such as the one that Francis Bacon comments on in the quoted passage, can escape a mind that is not paying attention, it is perhaps unsurprising that observation selection effects, which tend to be more abstruse, have only quite recently been given a name and become a subject of systematic study.

The term “anthropic principle” … is less than three decades old. There are, however, precursors from much earlier dates. For example, in Hume’s Dialogues Concerning Natural Religion, one can find early expressions of some ideas of anthropic selection effects. Some of the core elements of Kant’s philosophy about how the world of our experience is conditioned on the forms of our sensory and intellectual faculties are not completely unrelated to modern ideas about observation selection effects as important methodological considerations in theory-evaluation, although there are also fundamental differences. In Ludwig Boltzmann’s attempt to give a thermodynamic account of time’s arrow …, we find for perhaps the first time a scientific argument that makes clever use of observation selection effects…. A more successful invocation of observation selection effects was made by R. H. Dicke (Dicke 1961), who used it to explain away some of the “large-number coincidences”, rough order-of-magnitude matches between some seemingly unrelated physical constants and cosmic parameters, that had previously misled such eminent physicists as Eddington and Dirac into a futile quest for an explanation involving bold physical postulations.

The modern era of anthropic reasoning dawned quite recently, with a series of papers by Brandon Carter, another cosmologist. Carter coined the term “anthropic principle” in 1974, clearly intending it to convey some useful guidance about how to reason under observation selection effects….

The term “anthropic” is a misnomer. Reasoning about observation selection effects has nothing in particular to do with homo sapiens, but rather with observers in general…

We humans, as the relevant observers of the physical world, can perceive only those patterns that we are capable of perceiving, given the wiring of our brains and the instruments that we design with the use of our brains. Because of our inherent limitations, the limitations that we impose on our instruments, and the inherent limitations of the instruments themselves, we may never be able to see all that there is to see in the universe, even in that part of the universe which is close at hand.

We may never know, for example, whether physical laws change or remain the same in all places and for all time. We may never know (as a matter of scientific observation) how the universe originated, given that its cause(s) (whether Divine or otherwise) may lie outside the boundaries of the universe.

Implications for the Physical Sciences

It follows that the order which we find in the universe may bear no resemblance to the real order of the universe. It may simply be the case that we are incapable of perceiving certain phenomena and the physical laws that guide them, which — for all we know — may change from place to place and time to time.

A good case in point involves the existence of God, which many doubt and many others deny. The doubters and deniers are unable to perceive the existence of God, whereas many believers claim that they can do so. But the inability of doubters and deniers to perceive the existence of God does not disprove God’s existence, as an honest doubter or denier will admit.

It is trite but true to say that we do not know what we do not know; that is, there are unknown unknowns. Given our limitations as observers, the universe likely contains many unknown unknowns that will never become known unknowns.

Given our limitations, we must make do with our perceptions of the universe. Making do means that we learn what we are able to learn (imperfectly) about the universe and its components, and we then use our imperfect knowledge to our advantage wherever possible. (A crude analogy occurs in baseball, where a batter who doesn’t understand why a curveball curves is nevertheless able to hit one.)

THE INDETERMINACY OF HUMAN BEHAVIOR

The tautologous assumption that individuals act in such a way as to maximize their happiness tells us nothing about political or economic outcomes. (The assumption remains tautologous despite altruism, which is nothing more than another way of enhancing the happiness of altruistic individuals.) We can know nothing about the likely course of political and economic events until we know something about the psychological drives that shape those events. Even if we know something (or a great deal) about psychological drives, can we ever know enough to say that human behavior is (or is not) deterministic? The answer I offer here is “no.”

A Conflict of Visions

Economic and political behavior depends greatly on human psychology. For example, Thomas Sowell, in A Conflict of Visions, posits two opposing visions: the unconstrained vision (I would call idealism) and the constrained vision (which I would call realism). At the end of chapter 2, Sowell summarizes the difference between the two visions:

The dichotomy between constrained and unconstrained visions is based on whether or not inherent limitations of man are among the key elements included in each vision…. These different ways of conceiving man and the world lead not merely to different conclusions but to sharply divergent, often diametrically opposed, conclusions on issues ranging from justice to war.

Thus, in chapter 5, Sowell writes:

The enormous importance of evolved systemic interactions in the constrained vision does not make it a vision of collective choice, for the end results are not chosen at all — the prices, output, employment, and interest rates emerging from competition under laissez-faire economics being the classic example. Judges adhering closely to the written law — avoiding the choosing of results per se — would be the analogue in law. Laissez-faire economics and “black letter” law are essentially frameworks, with the locus of substantive discretion being innumerable individuals.

By contrast,

those in the tradition of the unconstrained vision almost invariably assume that some intellectual and moral pioneers advance far beyond their contemporaries, and in one way or another lead them toward ever-higher levels of understanding and practice. These intellectual and moral pioneers become the surrogate decision-makers, pending the eventual progress of mankind to the point where all can make moral decisions.

Digging Deeper

Sowell’s analysis is enlightening, but not comprehensive. The human psyche has many more facets than political realism and idealism. Consider the “Big Five” personality traits:

In psychology, the “Big Five” personality traits are five broad factors or dimensions of personality developed through lexical analysis. This is the rational and statistical analysis of words related to personality as found in natural-language dictionaries.[1] The traits are also referred to as the “Five Factor Model” (FFM).

The model is considered to be the most comprehensive empirical or data-driven enquiry into personality. The first public mention of the model was in 1933, by L. L. Thurstone in his presidential address to the American Psychological Association. Thurstone’s comments were published in Psychological Review the next year.[2]

The five factors are Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (OCEAN, or CANOE if rearranged). Some disagreement remains about how to interpret the Openness factor, which is sometimes called “Intellect.” [3] Each factor consists of a cluster of more specific traits that correlate together. For example, extraversion includes such related qualities as sociability, excitement seeking, impulsiveness, and positive emotions.

The “Big Five” model is open to criticism, but even assuming its perfection we are left with an unpredictable human psyche. For example, I tested myself (here), with the following results:

Extraversion — 4th percentile
Agreeableness — 4th percentile
Conscientiousness — 99th percentile
Emotional stability — 12th percentile
Openness — 93rd percentile

(NOTE: “Emotional stability” is the obverse of “neuroticism,” which is “a tendency to experience unpleasant emotions easily, such as anger, anxiety, depression, or vulnerability.” My “neuroticism” doesn’t involve anxiety, except to the extent that I am super-conscientious and, therefore, bothered by unfinished business. Nor does it involve depression or vulnerability. But I am easily angered by incompetence, stupidity, and carelessness. There is far too much of that stuff in the world, which explains my low scores on “extraversion” and “agreeableness.” “Openness” measures my intellectual openness, of course, and not my openness to people.)

I daresay that anyone else who happens to have the same scores as mine (which are only transitory) will have arrived at those scores by an entirely different route. That is, he or she probably differs from me on many of the following dimensions: age, race, ethnic/genetic inheritance, income and education of parents and self, location of residence, marital status, number and gender of children (if any), and tastes in food, drink, and entertainment. The list could go on, but the principle should be obvious: There is no accounting for psychological differences, or if there is, the accounting is beyond our ken.

Is everyone with my psychological-genetic-demographic profile a radical-right-minarchist like me? I doubt it very much. But even if that were so, it would be impossible to collect the data to prove it, whereas the (likely) case of a single exception would disprove it.

A Caveat, of Sorts

There is something in the human psyche that seems to drive us toward statism. What that says about human nature is almost trite: Happiness — for many humans — involves neither wealth-maximization nor liberty. It involves attitudes that can be expressed as “safety in numbers,” “going along with the crowd,” and “harm is worse than gain.” And it involves the political manipulation of those attitudes in the service of a drive that is not universal but which can dominate events, namely, the drive to power.

CONCLUSION

The preceding caveat notwithstanding, I have made the case that I set out to make:

We might as well act as if we have freedom of will; if we do not have it, our (illusory) choices cannot make us worse off, but if we do have it, our choices may make us better off.

In fact, the caveat points to the necessity of acting as if we have freedom of will. Only by doing so can we hope to overcome the psychological tendencies that cause us political and economic harm. For those tendencies are just that — tendencies. They are not iron rules of conduct. And they have been overcome before.

Modeling Is Not Science

The title of this post applies, inter alia, to econometric models — especially those that purport to forecast macroeconomic activity — and climate models — especially those that purport to forecast global temperatures. I have elsewhere essayed my assessments of macroeconomic and climate models. (See this and this, for example.) My purpose here is to offer a general warning about models that claim to depict and forecast the behavior of connected sets of phenomena (systems) that are large, complex, and dynamic. I draw, in part, on a paper that I wrote 28 years ago. That paper is about warfare models, but it has general applicability.

HEMIBEL THINKING

Philip M. Morse and George E. Kimball, pioneers in the field of military operations research — the analysis and modeling of military operations — wrote that the

successful application of operations research usually results in improvements by factors of 3 or 10 or more. . . . In our first study of any operation we are looking for these large factors of possible improvement. . . .

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. (Methods of Operations Research, 1946, p. 38)

This is science-speak for the following proposition: In large, complex, and dynamic systems (e.g., war, economy, climate) there is much uncertainty about the relevant parameters, about how to characterize their interactions mathematically, and about their numerical values.

Hemibel thinking assumes great importance in light of the imprecision inherent in models of large, complex, and dynamic systems. Consider, for example, a simple model with only 10 parameters. Even if such a model doesn’t omit crucial parameters or mischaracterize their interactions, its results must be taken with large doses of salt. Simple mathematics tells the cautionary tale: an error of about 12 percent in the value of each parameter can produce a result that is off by a factor of 3 (a hemibel); an error of about 25 percent in the value of each parameter can produce a result that is off by a factor of 10. (Remember, this is a model of a relatively small system.)
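
To see the arithmetic, consider a minimal sketch (in Python) that assumes, purely for illustration, a model whose output is the product of its 10 parameters, so that a fractional error in each parameter compounds multiplicatively:

```python
# How per-parameter errors compound in a hypothetical 10-parameter model
# whose output is the product of its parameters. If every parameter is off
# by the same fraction, in the same direction, the result is off by
# (1 + error) ** 10.

def compounded_error(per_param_error: float, n_params: int = 10) -> float:
    """Worst-case multiplicative error in the model's output."""
    return (1 + per_param_error) ** n_params

print(round(compounded_error(0.12), 1))  # ~3.1 -- off by a hemibel
print(round(compounded_error(0.25), 1))  # ~9.3 -- off by roughly a factor of 10
```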

If you think that models and “data” about such things as macroeconomic activity and climatic conditions cannot be as inaccurate as that, you have no idea how such models are devised or how such data are collected and reported. It would be kind to say that such models are incomplete, inaccurate guesswork. It would be fair to say that all too many of them reflect their developers’ policy biases.

Of course, given a (miraculously) complete model, data errors might (miraculously) be offsetting, but don’t bet on it. It’s not that simple: Some errors will be large and some errors will be small (but which are which?), and the errors may lie in either direction (but in which direction?). In any event, no amount of luck can prevent a modeler from constructing a model whose estimates advance a favored agenda (e.g., massive, indiscriminate government spending; massive, futile, and costly efforts to cool the planet).

NO MODEL IS EVER PROVEN

The construction of a model is only one part of the scientific method. A model means nothing unless it can be tested repeatedly against facts (facts not already employed in the development of the model) and, through such tests, is found to be more accurate than alternative explanations of the same facts. As Morse and Kimball put it,

[t]o be valuable [operations research] must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. (Op. cit., p. 10)

Even after rigorous testing, a model is never proven. It is, at best, a plausible working hypothesis about relations between the phenomena that it encompasses.

A model is never proven for two reasons. First, new facts may be discovered that do not comport with the model. Second, the facts upon which a model is based may be open to a different interpretation, that is, they may support a new model that yields better predictions than its predecessor.
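
Here is a minimal sketch of what testing against facts “not already employed” looks like in practice — a hypothetical straight-line model, fitted to synthetic data and then judged on observations it has never seen:

```python
# Fit a model on early observations, then test it on held-out later ones.
# The data and the straight-line "model" are hypothetical; the point is the
# discipline of confronting a model with facts not used to build it.
import random

random.seed(2)
data = [50 + 0.5 * t + random.gauss(0, 3) for t in range(100)]
train, test = data[:70], data[70:]

# "Develop" the model: a least-squares line through the training data only.
n = len(train)
xbar, ybar = (n - 1) / 2, sum(train) / n
slope = (sum((i - xbar) * (y - ybar) for i, y in enumerate(train))
         / sum((i - xbar) ** 2 for i in range(n)))
intercept = ybar - slope * xbar

# Confront the model with the held-out facts.
errors = [abs(intercept + slope * (70 + i) - y) for i, y in enumerate(test)]
print(sum(errors) / len(errors))  # average out-of-sample prediction error
```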

The fact that a model cannot be proven can be taken as an excuse for action: “We must act on the best information we have.” That excuse — which justifies an entire industry, namely, government-funded analysis — does not fly, as I discuss below.

MODELS LIE WHEN LIARS MODEL

Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.
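
As a concrete (and entirely synthetic) illustration of the second method — focusing on a selected period — here is a sketch in which a trendless series yields a seemingly impressive “trend” once the sample period is chosen after peeking at the data:

```python
# "Mining" a time series by cherry-picking the sample period. The series is
# pure noise around a constant level -- there is no trend to find -- yet a
# searched-for window can be made to show one.
import random

random.seed(1)
series = [100 + random.gauss(0, 5) for _ in range(100)]

def slope(data):
    """Ordinary least-squares trend (change per period)."""
    n = len(data)
    xbar, ybar = (n - 1) / 2, sum(data) / n
    return (sum((i - xbar) * (y - ybar) for i, y in enumerate(data))
            / sum((i - xbar) ** 2 for i in range(n)))

print(slope(series))  # near zero: the honest, full-sample answer
# Now pick the 20-observation window with the steepest apparent trend.
start = max(range(81), key=lambda s: slope(series[s:s + 20]))
print(slope(series[start:start + 20]))  # a spurious "trend," found by search
```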

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed?

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a masterful demonstration of a model that proves what it was designed to prove by the assumptions built into it, see this.

IMPLICATIONS

Government policies can be dangerous and impoverishing things. Despite that, it is hard (if not impossible) to modify and reverse government policies. Consider, for example, the establishment of public schools more than a century ago, the establishment of Social Security more than 70 years ago, and the establishment of Medicare and Medicaid more than 40 years ago. There is plenty of evidence that all four institutions are monumentally expensive failures. But all four institutions have become so entrenched that to call for their abolition is to be thought of as an eccentric, if not an uncaring anti-government zealot. (For the latest about public schools, see this.)

The principal lesson to be drawn from the history of massive government programs is that those who were skeptical of those programs were entirely justified in their skepticism. Informed, articulate skepticism of the kind I counsel here is the best weapon — perhaps the only effective one — in the fight to defend what remains of liberty and property against the depredations of massive government programs.

Skepticism often is met with the claim that such-and-such a model is the “best available” on a subject. But the “best available” model — even if it is the best available one — may be terrible indeed. Relying on the “best available” model for the sake of government action is like sending an army into battle — and likely to defeat — on the basis of rumors about the enemy’s position and strength.

With respect to the economy and the climate, there are too many rumor-mongers (“scientists” with an agenda), too many gullible and compliant generals (politicians), and far too many soldiers available as cannon-fodder (the paying public).

CLOSING THOUGHTS

The average person is so mystified and awed by “science” that he has little if any understanding of its limitations and pitfalls, some of which I have addressed here in the context of modeling. The average person’s mystification and awe are unjustified, given that many so-called scientists exploit the public’s mystification and awe in order to advance personal biases, gain the approval of other scientists (whence “consensus”), and garner funding for research that yields results congenial to its sponsors (e.g., global warming is an artifact of human activity).

Isaac Newton, who must be numbered among the greatest scientists in human history, was not a flawless scientist. (Has there ever been one?) But scientists and non-scientists alike should heed Newton on the subject of scientific humility:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me. (Quoted in Horace Freeland Judson, The Search for Solutions, 1980, p. 5.)


Related reading: Willis Eschenbach, “How Not to Model the Historical Temperature”, Watts Up With That?, March 25, 2018

Math Puzzler

Here is the problem (from Misha Lemeshko, via Eugene Volokh):

8809 = 6
7111 = 0
2172 = 0
6666 = 4
1111 = 0
3213 = 0
7662 = 2
9312 = 1
0000 = 4
2222 = 0
3333 = 0
5555 = 0
8193 = 3
8096 = 5
7777 = 0
9999 = 4
7756 = 1
6855 = 3
9881 = 5
5531 = 0

2581 = ?

I found the general and specific solutions to the problem after pondering it for about 15 minutes. Can you do it?

If you’ve given up, or want to check your answers against mine, scroll down.

Specific solution: 2581 = 2, because…

General solution: The value of a string of numbers comprising the integers 0, 1, 2, 3, 5, 6, 7, 8, 9 is equal to the sum of the values of the integers contained in the string, where the value assigned to each integer is equal to the number of closed curves contained in it. Thus: 0 = 1, 1 = 0, 2 = 0, 3 = 0, 5 = 0, 6 = 1, 7 = 0, 8 = 2, and 9 = 1. Therefore, for example, 0000 = 4 because each integer in the string has 1 closed curve; that is, 1 + 1 + 1 + 1 = 4.

Note that the preceding general solution omits the integer 4. Why? There is no way of determining the value of 4 because it doesn’t occur in Lemeshko’s list of strings. If, however, the value of 4 were known to be 0 (e.g., 8884 = 6, 1114 = 0), the general solution would be as follows: The value of a string of numbers comprising the integers 0 through 9 is equal to the sum of the values of the integers contained in the string, where the value assigned to each integer is equal to the number of closed curves contained in it. Thus: 0 = 1, 1 = 0, 2 = 0, 3 = 0, 4 = 0, 5 = 0, 6 = 1, 7 = 0, 8 = 2, and 9 = 1. Therefore, for example, 4444 = 0 (0 + 0 + 0 + 0 = 0) because 4 (in standard typography) contains a closed area but not a closed curve.

If, however, the value of 4 were known to be 1 (e.g., 8884 = 7, or 1114 = 1), the general solution would be as follows: The value of a string of numbers comprising the integers 0 through 9 is equal to the sum of the values of the integers contained in the string, where the value assigned to each integer is equal to the number of closed areas contained in it. Thus: 0 = 1, 1 = 0, 2 = 0, 3 = 0, 4 = 1, 5 = 0, 6 = 1, 7 = 0, 8 = 2, and 9 = 1. Therefore, for example, 4444 = 4 because each integer in the string has 1 closed area; that is, 1 + 1 + 1 + 1 = 4.
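
For readers who want to check other strings, here is a minimal sketch of the closed-curve rule in Python. (The value assigned to 4 is the assumption discussed above; I use the open-top form, which scores 0.)

```python
# Closed-curve counts for each digit in standard typography. The value of 4
# is an assumption (open-top 4, no closed curve); see the discussion above.
LOOPS = {"0": 1, "1": 0, "2": 0, "3": 0, "4": 0,
         "5": 0, "6": 1, "7": 0, "8": 2, "9": 1}

def value(s: str) -> int:
    """The value of a digit string: the sum of its digits' closed curves."""
    return sum(LOOPS[d] for d in s)

# Spot-check against the given examples, then solve the puzzle.
assert value("8809") == 6 and value("0000") == 4 and value("9881") == 5
print(value("2581"))  # 2
```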

A Person’s Truth

Intellectual truth is what you “know” because the “knowledge” flows from a logical argument (which may be supported, in part, by “facts”). Real truth is what you know from direct knowledge.

Intellectual truth can be useful; often, it is indispensable. If your father tells you that it is dangerous — probably life-threatening — to drive a car into a stone wall at 60 miles an hour, you are well advised to heed your father. You should do so even though he probably doesn’t know of the danger from experience or observation.

Indeed, the horizon of useful intellectual truth is vast and seemingly infinite. It encompasses much (but not all) of science, not to mention technology (applied science), and even folklore (where it represents insights gained by trial and error).

Intellectual truth intersects with real truth in many ways. A good example of an intersection is found in counting, which is the foundation of mathematics. We often count real objects that we can sense for ourselves in order to determine such things as whether there are enough eggs to make a cake, enough clean shirts to last until the next laundry day, etc. The act of counting came long before the development of mathematics as a discipline, yet mathematics tells us (among many things) why counting “works” and how to employ it in a variety of ways ranging from the simple and obvious to the dauntingly complex (e.g., from addition — a form of counting — and multiplication — a form of addition — to such abstruse subjects as number theory).

I use counting as an example because it leads to the moral of this post: Intellectual truth is real truth only where it comports with real truth. Intellectual truth which doesn’t comport with real truth — or which hasn’t yet been found to be consistent with real truth — is mere conjecture.

Science, Evolution, Religion, and Liberty

If a man will begin with certainties, he shall end in doubts, but if he will be content to begin with doubts, he shall end in certainties.

Francis Bacon (1561–1626),
British philosopher, essayist, statesman.
The Advancement of Learning, bk. 1, ch. 5, sct. 8 (1605).
(Source: Bartleby.com)

Science begins with doubts — questions about the workings of the world around us — and moves bit by bit toward greater certainty, without ever reaching complete certainty. Philosophy and religion begin with certainties — a priori explanations about the workings of the world — and end in doubts because the world cannot be explained by pure faith or pure reason. But philosophy and religion can tell us how to live life morally, whereas science can only help us live life more comfortably, if that is what we wish to do.

Scientists — when they are being scientists — begin with questions (doubts), which lead them to make observations, and from those observations they derive theories about the workings of the universe or some aspect of it. Those theories can then be tested to see if they have predictive power, and revised when they are found wanting, that is, when new observations (facts) cast doubt on their validity. Scientific facts may sometimes be beyond doubt (e.g., the temperature at which water freezes under specified conditions), but scientific theories — which are generalizations from facts — are never beyond doubt. Or they never should be.

Consider Albert Einstein, arguably the greatest scientist who has yet lived. According to physicist Lee Smolin,

[a]lthough Einstein was . . . the discoverer of quantum phenomena, he became in time the main opponent of the theory of quantum mechanics. By his own account, he spent far more time thinking about quantum theory than he did about relativity. But he never found a theory of quantum physics that satisfied him. . . .

Quantum theory was not the only theory that bothered Einstein. Few people have appreciated how dissatisfied he was with his own theories of relativity. Special relativity grew out of Einstein’s insight that the laws of electromagnetism cannot depend on relative motion and that the speed of light therefore must be always the same, no matter how the source or the observer moves. . . . Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.

Why? The main reason was that he wanted to extend relativity to include all observers, whereas his special theory postulates only an equivalence among a limited class of observers—those who aren’t accelerating. A second reason was to incorporate gravity, making use of a new principle he called the equivalence principle. This postulates that observers can never distinguish the effects of gravity from those of acceleration so long as they observe phenomena only in their immediate neighborhood. By this principle [general relativity] he linked the problem of gravity with the problem of extending relativity to all observers. . . .

[I]n spite of the great triumph general relativity represented, Einstein did not linger long over it. For Einstein, quantum physics was the essential mystery, and nothing could be really fundamental that was not part of the solution to that problem. As general relativity didn’t explain quantum theory, it had to be provisional as well. It could only be a step towards Einstein’s goal, which was to find a theory of quantum phenomena that would agree with all the experiments, but satisfy his demand for clarity and completeness.

Einstein imagined for a time that such a theory could come from an extension of general relativity. Thus he entered into the final period of his scientific life, his search for a unified field theory. He sought an extension of general relativity that would incorporate electromagnetism, thereby wedding the large-scale world where gravity dominates with the small-scale world of quantum physics. . . .

[B]y the end of his life Einstein had to some extent abandoned his search for a unified field theory. He had failed to find a version of the theory that did what was most important to him, which is to explain quantum phenomena in a way that involved neither measurements nor statistics. In his last years he was moving on to something even more radical. He proposed to give up the idea that space and time are continuous. . . .

I think a sober assessment is that up until now, almost all of us who work in theoretical physics have failed to live up to Einstein’s legacy. His demand for a coherent theory of principle was uncompromising. It has not been reached—not by quantum theory, not by special or general relativity, not by anything invented since. Einstein’s moral clarity, his insistence that we should accept nothing less than a theory that gives a completely coherent account of individual phenomena, cannot be followed unless we reject almost all contemporary theoretical physics as insufficient. . . .

In my whole career as a theoretical physicist, I have known only a handful of colleagues of whom it can truly be said have followed Einstein’s path. They are driven, as Einstein was, by a moral need for clear understanding. In everything they do, these few strive continually to invent a new theory of principle that could satisfy the strictest demands of coherence and consistency without regard to fashion or the professional consequences. Most have paid for their independence in a harder career path than equally talented scientists who follow the research agendas of the big professors.

I have quoted Smolin at length because he reveals two key facets of Einstein, the scientist: a willingness to abandon a theory, and a stubbornness about challenging the conventional wisdom, even though its proponents were equally eminent scientists.

Einstein stands as a paragon among scientists: unwilling to run with the herd, unwilling to “follow any fad or popular direction,” as Smolin puts it elsewhere in the essay quoted above. Now we seem to have herds of so-called scientists who cling to certain theories because those theories are popular and dominant. They may be great scientists — or hacks — who have come to a certain worldview and are loath to abandon it, or they may be followers of renowned scientists who lack the imagination to see alternative explanations of phenomena. Whatever the case, a “scientist” who insists on the truth of his worldview has abandoned science for something that might as well be called religion or philosophy.

In the case of global warming, we’ve seen the herd instinct at work for many years. It has become an article of faith among academic and government scientists not only that global warming is due mainly to human activity but also that it is “bad.” Dr. Roy Spencer, an atmospheric scientist, stands back from the fray in “Let’s Be Honest about the Real Consensus” (link added):

“Consensus” among scientists is not definitive, and some have even argued that in science it is meaningless or counterproductive. After all, even scientific “laws” have been disproved in the past (e.g. the Law of Parity in nuclear physics). Global warming is a process that can not be measured in controlled lab experiments, and so in many respects it can not be tested or falsified in the traditional scientific sense. Nevertheless, I’m willing to admit that in the policymakers’ realm, scientific consensus might have some limited value. But let’s be honest about what that consensus refers to: that “humans influence the climate”. Not that “global warming is a serious threat to mankind”.

Moreover, it’s certainly not clear that the scientific consensus about global warming is correct. (See, for example, this earlier post.)

Now we come to evolution. I have written elsewhere about the tendency of evolutionary biologists (and their hangers-on at places like The Panda’s Thumb) to act like priests of a secular religion. But just how firm is the ground on which their temple is built? Not all that firm, according to a recent report in ScienceDaily:

Contrary to inheritance laws the scientific world has accepted for more than 100 years, some plants revert to normal traits carried by their grandparents, bypassing genetic abnormalities carried by both parents. These mutant parent plants apparently have hidden templates containing genetic information from the preceding generation that can be transferred to their offspring, even though the traits aren’t evident in the parents, according to Purdue University researchers. This discovery flies in the face of the scientific laws of inheritance first described by Gregor Mendel in the mid-1800s and still taught in classrooms around the world today.

“This means that inheritance can happen more flexibly than we thought in the past,” said Robert Pruitt, a Purdue Department of Botany and Plant Pathology molecular geneticist. “While Mendel’s laws that we learned in high school still are fundamentally correct, they’re not absolute.

“If the inheritance mechanism we found in the research plant Arabidopsis exists in animals, too, it’s possible that it will be an avenue for gene therapy to treat or cure diseases in both plants and animals.”

The study is published in the March 24 issue of the journal Nature. . . .

Editor’s Note: The original news release can be found here.

Such findings don’t discredit evolutionary theory, but they do underscore two points:

  • Evolutionary theory is still very much in flux.
  • Prevailing scientific theories are never as secure as they seem to be — or as many of their adherents would like them to be.

Nevertheless, the scientific consensus seems to be that any scientist who even entertains intelligent design (ID) as a supplementary explanation of the development of life forms has somehow become a non-scientist. Consider the recent controversy surrounding Dr. Richard Sternberg, as described in The Washington Post of August 19:

Evolutionary biologist Richard Sternberg made a fateful decision a year ago.

As editor of the hitherto obscure Proceedings of the Biological Society of Washington, Sternberg decided to publish a paper making the case for “intelligent design,” a controversial theory that holds that the machinery of life is so complex as to require the hand — subtle or not — of an intelligent creator.

Within hours of publication, senior scientists at the Smithsonian Institution — which has helped fund and run the journal — lashed out at Sternberg as a shoddy scientist and a closet Bible thumper.

“They were saying I accepted money under the table, that I was a crypto-priest, that I was a sleeper cell operative for the creationists,” said Sternberg, 42, who is a Smithsonian research associate. “I was basically run out of there.”

An independent agency has come to the same conclusion, accusing top scientists at the Smithsonian’s National Museum of Natural History of retaliating against Sternberg by investigating his religion and smearing him as a “creationist.”

The U.S. Office of Special Counsel, which was established to protect federal employees from reprisals, examined e-mail traffic from these scientists and noted that “retaliation came in many forms . . . misinformation was disseminated through the Smithsonian Institution and to outside sources. The allegations against you were later determined to be false.”

“The rumor mill became so infected,” James McVay, the principal legal adviser in the Office of Special Counsel, wrote to Sternberg, “that one of your colleagues had to circulate [your résumé] simply to dispel the rumor that you were not a scientist.” . . .

A small band of scientists argue for intelligent design, saying evolutionary theory’s path is littered with too many gaps and mysteries, and cannot account for the origin of life.

Most evolutionary biologists, not to mention much of the broader scientific community, dismiss intelligent design as a sophisticated version of creationism. . . .

Sternberg’s case has sent ripples far beyond the Beltway. The special counsel accused the National Center for Science Education, an Oakland, Calif.-based think tank that defends the teaching of evolution, of orchestrating attacks on Sternberg.

“The NCSE worked closely with” the Smithsonian “in outlining a strategy to have you investigated and discredited,” McVay wrote to Sternberg. . . .

Sternberg is an unlikely revolutionary. He holds two PhDs in evolutionary biology, his graduate work draws praise from his former professors, and in 2000 he gained a coveted research associate appointment at the Smithsonian Institution.

Not long after that, Smithsonian scientists asked Sternberg to become the unpaid editor of Proceedings of the Biological Society of Washington, a sleepy scientific journal affiliated with the Smithsonian. Three years later, Sternberg agreed to consider a paper by Stephen C. Meyer, a Cambridge University-educated philosopher of science who argues that evolutionary theory cannot account for the vast profusion of multicellular species and forms in what is known as the Cambrian “explosion,” which occurred about 530 million years ago.

Scientists still puzzle at this great proliferation of life. But Meyer’s paper went several long steps further, arguing that an intelligent agent — God, according to many who espouse intelligent design — was the best explanation for the rapid appearance of higher life-forms.

Sternberg harbored his own doubts about Darwinian theory. He also acknowledged that this journal had not published such papers in the past and that he wanted to stir the scientific pot.

“I am not convinced by intelligent design but they have brought a lot of difficult questions to the fore,” Sternberg said. “Science only moves forward on controversy.” . . .

When the article appeared, the reaction was near instantaneous and furious. Within days, detailed scientific critiques of Meyer’s article appeared on pro-evolution Web sites. “The origin of genetic information is thoroughly understood,” said Nick Matzke of the NCSE. “If the arguments were coherent this paper would have been revolutionary — but they were bogus.”

A senior Smithsonian scientist wrote in an e-mail: “We are evolutionary biologists and I am sorry to see us made into the laughing stock of the world, even if this kind of rubbish sells well in backwoods USA.”

An e-mail stated, falsely, that Sternberg had “training as an orthodox priest.” Another labeled him a “Young Earth Creationist,” meaning a person who believes God created the world in the past 10,000 years.

This latter accusation is a reference to Sternberg’s service on the board of the Baraminology Study Group, a “young Earth” group. Sternberg insists he does not believe in creationism. “I was rather strong in my criticism of them,” he said. “But I agreed to work as a friendly but critical outsider.” . . .

“I loathe careerism and the herd mentality,” [Sternberg says]. “I really think that objective truth can be discovered and that popular opinion and consensus thinking does more to obscure than to reveal.”

At the core of ID is the hypothesis of irreducible complexity, which is the subject of a Wikipedia article that also provides many links to various aspects of the controversy about irreducible complexity and ID. To quote from that article: “Irreducible complexity is not an argument that evolution does not occur, but rather an argument that it is incomplete.”

Is irreducible complexity an unscientific proposition (an unfalsifiable hypothesis), as many of its critics charge? And if it is a falsifiable hypothesis, where does it stand? The answers to those questions shift so rapidly that the best I can do here is quote from the Wikipedia article:

Some critics [of irreducible complexity], such as Jerry Coyne (professor of evolutionary biology at the University of Chicago) and Eugenie Scott (a physical anthropologist and executive director of the National Center for Science Education) have argued that the concept of irreducible complexity, and more generally, the theory of Intelligent Design is not falsifiable, and therefore, not scientific.

[Michael] Behe [a leading proponent of ID] argues that the theory that irreducibly complex systems could not have been evolved can be falsified by an experiment where such systems are evolved. For example, he posits taking bacteria with no flagella and imposing a selective pressure for mobility. If, after a few thousand generations, the bacteria evolved the bacterial flagellum, then Behe believes that this would refute his theory.

Other critics take a different approach, pointing to experimental evidence that they believe falsifies the argument for Intelligent Design from irreducible complexity. For example, Kenneth Miller cites the lab work of Barry Hall on E. coli, which he asserts is evidence that “Behe is wrong.”

The problem is that as every pro-ID hypothesis is falsified (assuming that it is, eventually), another pro-ID hypothesis can be produced. For there must be a very large number of biological manifestations that have not yet been explained by documented facts. Until such documented facts are produced, a proper scientist would keep irreducible complexity on the table as a possible explanation of an unexplained manifestation. But I have noticed a tendency among die-hard evolutionists — those for whom evolution is a religion — to resort to the practice of extrapolating from documented facts to argue that evolution could explain such-and-such, if only the necessary facts weren’t inconveniently missing. In a word, they are cheaters. (For more, see this post.)

I think it really boils down to this: Anti-ID scientists cannot prove that ID is unscientific; pro-ID scientists cannot prove that ID is anything more than a convenient explanation for currently unexplained phenomena. It’s the scientific (or non-scientific) version of a Mexican standoff.

Where does that leave us? It leaves us here:

When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth. (“Sherlock Holmes” in The Adventure of the Blanched Soldier)

What are the possibilities with which we must begin? In addition to the evolution of evolutionary biology, there are these alternatives, taken from the Wikipedia article on irreducible complexity:

  • Intelligent Design, the argument that irreducible complexity occurs through the input of some “intelligent designer”. One example of an Intelligent Design theory is Creationism (although it can be argued that this begs the question, as it does not say how or what created the Creator, and, if no creator was necessary to create the Creator, why creators should be needed for all other entities).
  • Francis Crick‘s suggestion that life on Earth may have been seeded by aliens (although it can be argued that this begs the question, as it does not say how the alien life arose).

You may have noticed that the list conflates two entirely different issues. There is the question of how life arose — which, I submit, can only be a matter of faith or conjecture — and there is the question of how life has developed, regardless of how it arose — which can be a matter for scientific investigation. Therein lies the crux of the problem. It is impossible to eliminate any explanation of the origin of life or the development of life forms, as long as that explanation doesn’t conflict with facts. Similarly, it is impossible to eliminate any explanation of the origin of the universe, as long as that explanation doesn’t conflict with facts. Staunch evolutionists — those who resist Creationism, intelligent design, or any other unfalsifiable or unfalsified explanation for the origin of the universe, the origin of life, or the development of life forms — are merely invoking their preferred worldview — not facts.

The best that science can do, under any foreseeable circumstances, is to investigate how life developed from the point in the known history of the universe at which there is evidence of life. But many (perhaps most) evolutionists and their hangers-on aren’t content to pursue that scientific agenda. As Frederick Turner puts it:

In many cases it is clear that the beautiful and hard-won theory of evolution, now proved beyond reasonable doubt, is being cynically used by some — who do not much care about it as such — to support an ulterior purpose: a program of atheist indoctrination, and an assault on the moral and spiritual goals of religion. A truth used for unworthy purposes is quite as bad as a lie used for ends believed to be worthy. If religion can be undermined in the hearts and minds of the people, then the only authority left will be the state, and, not coincidentally, the state’s well-paid academic, legal, therapeutic and caring professions. If creationists cannot be trusted to give a fair hearing to evidence and logic because of their prior commitment to religious doctrine, some evolutionary partisans cannot be trusted because they would use a general social acceptance of the truth of evolution as a way to set in place a system of helpless moral license in the population and an intellectual elite to take care of them.

“Mainstream” evolutionists might be willing to consider alien origins, complexity theory, and quantum evolution, given the provenance of those theories. But those same evolutionists are unlikely to back down from their resistance to intelligent design. Why? Because ID threatens their underlying agenda, which — as Turner suggests — is the ascendancy of scientism, scientific elites, and the strident atheists who support them. Another case in point is the strong vein of resistance to the Big Bang theory, because it’s consistent with a Creation. (Sample the results of this Google search, for example.) The irony of it all is that atheism is an unscientific belief in an unfalsifiable proposition, namely, that there is no God. Moreover, if there is a God, He doesn’t need to rely on Big Bangs or other such pyrotechnics to work His will.

Am I going too far when I join Frederick Turner in his distrust of “evolutionary partisans”? I think not. Peruse The Panda’s Thumb, where, for example, one contributor posted approvingly about an article arguing that the teaching of intelligent design should be ruled unconstitutional because it is unscientific. As I wrote at the time,

[t]hink of the fine mess we’d be in if the courts were to rule against the teaching of intelligent design not because it amounts to an establishment of religion but because it’s unscientific. That would open the door to all sorts of judicial mischief. The precedent could — and would — be pulled out of context and used in limitless ways to justify government interference in matters where government has no right to interfere.

It’s bad enough that government is in the business of funding science — though I can accept such funding where it actually aids our defense effort. But, aside from that, government has no business deciding for the rest of us what’s scientific or unscientific. When it gets into that business, you had better be ready for a rerun of the genetic policies of the Third Reich.

Scientific elites and their hangers-on, like paternalists of all kinds, would like to tell us how to live our lives — for our own good, of course — because they think they have the answers, or can find them. (They would be benign technocrats, of course, unlike their counterparts in the old USSR.) And when they are thwarted, they get in a snit and issue manifestos.

But, as I said at the outset, science isn’t about how to live morally; it’s about how to live more comfortably, if that is what we wish to do. To know how to live morally, we must turn to a philosophy that promotes liberty, and we must not reject the moral code of the Judeo-Christian tradition, in which one finds much support for liberty.

I’m very much for science, properly understood, which is the increase of knowledge. I’m very much against the misuse of science by scientists (and others) who invoke it to advance an extra-scientific agenda. Science, properly done, begins with doubts and ends in certainties, but those certainties extend only to the realm of observable, documented facts. Science has no claim to superiority over philosophy or religion in the extra-factual realm of morality.

I close by paraphrasing my son’s comment about my post on “Religion and Liberty”:

The basis of liberty is extra-scientific; thus the need for non-scientific moral institutions.

Further reading:
Evolution (Wikipedia article)
Intelligent Design (Wikipedia article)
Intelligent Design: A Special Report from History Magazine
The Little Engine That Could…Undo Darwinism (The American Spectator article)
Faith-Based Evolution (Tech Central Station article)
Darwin and Design: The Evolution of a Flawed Debate (Tech Central Station article)
Intelligent Decline, Revisited (Tech Central Station article)
The Real Intelligent Designers (Tech Central Station article)
Divine Evolution (Tech Central Station article)
The Case Against Intelligent Design (The New Republic article)
Discovery Institute (the leading proponents of ID)
The Talk.Origins Archive (a collection of articles and essays that explore the creationism/evolution controversy from a mainstream scientific perspective)
Show Me the Science (anti-ID by noted philosopher Daniel C. Dennett)
Intelligent Design Has No Place in the Science Curriculum (The Chronicle of Higher Education article)

Related posts:
Hemibel Thinking (07/16/04)
Climatology (07/16/04)
Global Warming: Realities and Benefits (07/18/04)
Words of Caution for the Cautious (07/21/04)
Scientists in a Snit (08/04/04)
Another Blow to Climatology? (08/21/04)
Bad News for Politically Correct Science (10/18/04)
Another Blow to Chicken-Little Science (10/27/04)
Bad News for Enviro-Nuts (11/27/04)
Going Too Far with the First Amendment (01/01/05)
Atheism, Religion, and Science (01/03/05)
The Limits of Science (01/05/05)
Three Perspectives on Life: A Parable (01/15/05)
Beware of Irrational Atheism (01/22/05)
The Hockey Stick Is Broken (01/31/05)
The Creation Model (02/23/05)
The Thing about Science (03/24/05)
Religion and Personal Responsibility (04/08/05)
Science in Politics, Politics in Science (05/11/05)
Global Warming and Life (07/18/05)
Evolution and Religion (07/25/05)
Speaking of Religion (07/26/05)
Words of Caution for Scientific Dogmatists (08/19/05)
Religion and Liberty (08/25/05)