Mathematical Economics

This is the fourth entry in a series of loosely connected posts on economics. Previous entries are here, here, and here.

Economics is a study of human behavior, not an exercise in mathematical modeling or statistical analysis, though both endeavors may augment an understanding of human behavior. Economics is about four things:

  • wants, as they are perceived by the persons who have those wants
  • how people try to satisfy their wants through mutually beneficial, cooperative action, which includes but is far from limited to market-based exchanges
  • how exogenous forces, including government interventions, enable or thwart the satisfaction of wants
  • the relationships between private action, government interventions, and changes in the composition, rate, and direction of economic activity

In sum, economics is about the behavior of human beings, which is why it’s called a social science. Well, economics used to be called a social science, but it’s been a long time (perhaps fifty years) since I’ve heard or read an economist refer to it as a social science. The term is too reminiscent of “soft and fuzzy” disciplines such as history, social psychology, sociology, political science, and civics or social studies (names for the amalgam of sociology and government that was taught in high schools way back when). No “soft and fuzzy” stuff for physics-envying economists.

However, the behavior of human beings — their thoughts and emotions, how those things affect their actions, and how they interact — is fuzzy, to say the least. Which explains why mathematical economics is largely an exercise in mental masturbation.

In my disdain for mathematical economics, I am in league with Arnold Kling, who is the most insightful economist I have yet encountered in more than fifty years of studying and reading about economics. I especially recommend Kling’s Specialization and Trade: A Reintroduction to Economics. It’s a short book, but chock-full of wisdom and straight thinking about what makes the economy tick. Here’s the blurb:

Since the end of the second World War, economics professors and classroom textbooks have been telling us that the economy is one big machine that can be effectively regulated by economic experts and tuned by government agencies like the Federal Reserve Board. It turns out they were wrong. Their equations do not hold up. Their policies have not produced the promised results. Their interpretations of economic events — as reported by the media — are often off-the-mark, and unconvincing.

A key alternative to the one big machine mindset is to recognize how the economy is instead an evolutionary system, with constantly-changing patterns of specialization and trade. This book introduces you to this powerful approach for understanding economic performance. By putting specialization at the center of economic analysis, Arnold Kling provides you with new ways to think about issues like sustainability, financial instability, job creation, and inflation. In short, he removes stiff, narrow perspectives and instead provides a full, multi-dimensional perspective on a continually evolving system.

And he does, without using a single graph. He uses only a few simple equations to illustrate the bankruptcy of macroeconomic theory.

Those economists who rely heavily on mathematics like to say (and perhaps even believe) that mathematical expression is more precise than mere words. But, as Kling points out in “An Important Emerging Economic Paradigm,” mathematical economics is a language of “faux precision,” which is useful only when applied to well-defined, narrow problems. It can’t address the big issues — such as economic growth — which depend on variables, such as the rule of law and social norms, that defy mathematical expression and quantification.

I would go a step further and argue that mathematical economics borders on obscurantism. It’s a cult whose followers speak an arcane language not only to communicate among themselves but to obscure the essentially bankrupt nature of their craft from others. Mathematical expression actually hides the assumptions that underlie it. It’s far easier to identify and challenge the assumptions of “literary” economics than it is to identify and challenge the assumptions of mathematical economics.

I daresay that this is true even for persons who are conversant in mathematics. They may be able to manipulate the equations of mathematical economics with ease, but they can do so without grasping the deeper meanings — the assumptions and complexities — hidden by those equations. In fact, the ease of manipulating the equations gives them a false sense of mastery of the underlying, real concepts.

Much of the economics profession is nevertheless dedicated to the protection and preservation of the essential incompetence of mathematical economists. This is from “An Important Emerging Economic Paradigm”:

One of the best incumbent-protection rackets going today is for mathematical theorists in economics departments. The top departments will not certify someone as being qualified to have an advanced degree without first subjecting the student to the most rigorous mathematical economic theory. The rationale for this is reminiscent of fraternity hazing. “We went through it, so should they.”

Mathematical hazing persists even though there are signs that the prestige of math is on the decline within the profession. The important Clark Medal, awarded to the most accomplished American economist under the age of 40, has not gone to a mathematical theorist since 1989.

These hazing rituals can have real consequences. In medicine, the controversial tradition of long work hours for medical residents has come under scrutiny over the last few years. In economics, mathematical hazing is not causing immediate harm to medical patients. But it probably is working to the long-term detriment of the profession.

The hazing ritual in economics has at least two real and damaging consequences. First, it discourages entry into the economics profession by persons who aren’t high-IQ freaks, and who, like Kling, can discuss economic behavior without resorting to the sterile language of mathematics. Second, it leads to economics that’s irrelevant to the real world — and dead wrong.

Reaching back into my archives, I found a good example of irrelevance and wrongness in Thomas Schelling’s game-theoretic analysis of segregation. Eleven years ago, Tyler Cowen (Marginal Revolution), who was mentored by Schelling at Harvard, hailed Schelling’s Nobel prize by noting, among other things, Schelling’s analysis of the economics of segregation:

Tom showed how communities can end up segregated even when no single individual cares to live in a segregated neighborhood. Under the right conditions, it only need be the case that the person does not want to live as a minority in the neighborhood, and will move to a neighborhood where the family can be in the majority. Try playing this game with white and black chess pieces, I bet you will get to segregation pretty quickly.
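
For readers who want to try Cowen’s chess-piece experiment without a chessboard, here is a minimal sketch of a Schelling-style checkerboard simulation in Python. The grid size, vacancy rate, and same-color threshold are my own illustrative assumptions, not Schelling’s parameters:

```python
import random

SIZE = 20        # 20 x 20 grid (illustrative assumption)
THRESHOLD = 0.5  # agent is unhappy if fewer than half its neighbors match (assumption)
VACANCY = 0.3    # fraction of empty cells (assumption)

random.seed(42)
cells = ['W', 'B'] * int(SIZE * SIZE * (1 - VACANCY) / 2)
cells += [None] * (SIZE * SIZE - len(cells))
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(r, c):
    """True if the agent at (r, c) is a local minority by the threshold."""
    me = grid[r][c]
    if me is None:
        return False
    same = other = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < SIZE and 0 <= nc < SIZE and grid[nr][nc] is not None:
                same += grid[nr][nc] == me
                other += grid[nr][nc] != me
    return (same + other) > 0 and same / (same + other) < THRESHOLD

for _ in range(50):  # relocation rounds
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(r, c)]
    if not movers:
        break
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(empties)
    for (r, c), (er, ec) in zip(movers, empties):
        grid[er][ec], grid[r][c] = grid[r][c], None  # move the unhappy agent

print('\n'.join(''.join(cell or '.' for cell in row) for row in grid))
```

Run it and the initially well-mixed grid typically sorts itself into solid same-color blocks within a few rounds — Cowen’s point exactly.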

Like many game-theoretic tricks, Schelling’s segregation gambit omits much important detail. It’s artificial to treat segregation as a game in which all whites are willing to live with black neighbors as long as they (the whites) aren’t in the minority. Most whites (including most liberals) do not want to live anywhere near any “black rednecks” if they can help it. Living in relatively safe, quiet, and attractive surroundings comes far ahead of whatever value there might be in “diversity.”

“Diversity” for its own sake is nevertheless a “good thing” in the liberal lexicon. The Houston Chronicle noted Schelling’s Nobel by saying that Schelling’s work

helps explain why housing segregation continues to be a problem, even in areas where residents say they have no extreme prejudice to another group.

Segregation isn’t a “problem,” it’s the solution to a potential problem. Segregation today is mainly a social phenomenon, not a legal one. It reflects a rational aversion on the part of whites to having neighbors whose culture breeds crime and other types of undesirable behavior.

As for what people say about their racial attitudes: Believe what they do, not what they say. Most well-to-do liberals — including black ones like the Obamas — choose to segregate themselves and their children from black rednecks. That kind of voluntary segregation, aside from demonstrating liberal hypocrisy about black redneck culture, also demonstrates the rationality of choosing to live in safer and more decorous surroundings.

Dave Patterson of the defunct Order from Chaos put it this way:

[G]ame theory has one major flaw inherent in it: The arbitrary assignment of expected outcomes and the assumption that the values of both parties are equally reflected in these external outcomes. By this I mean a matrix is filled out by [a conductor, and] it is up to that conductor’s discretion to assign outcome values to that grid. This means that there is an inherent bias towards the expected outcomes of [the] conductor.

Or: Garbage in, garbage out.

Game theory points to the essential flaw in mathematical economics, which is reductionism: “An attempt or tendency to explain a complex set of facts, entities, phenomena, or structures by another, simpler set.”

Reductionism is invaluable in many settings. To take an example from everyday life, children are warned — in appropriately stern language — not to touch a hot stove or poke a metal object into an electrical outlet. The reasons given are simple ones: “You’ll burn yourself” and “You’ll get a shock and it will hurt you.” It would be futile (in almost all cases) to try to explain to a small child the physical and physiological bases for the warnings. The child wouldn’t understand the explanations, and the barrage of words might cause him to forget the warnings.

The details matter in economics. It’s easy enough to say, for example, that a market equilibrium exists where the relevant supply and demand curves cross (in a graphical representation) or where the supply and demand functions yield equal values of price and quantity (in a mathematical representation). But those are gross abstractions from reality, as any economist knows — or should know. Expressing economic relationships in mathematical terms lends them an unwarranted air of precision.
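
To make the abstraction concrete, here is a toy version of that equilibrium computation, with made-up linear supply and demand functions (the coefficients are mine, chosen purely for illustration):

```python
# Toy market: demand Qd = a - b*P, supply Qs = c + d*P (made-up coefficients).
a, b = 100, 2    # demand intercept and slope (assumptions)
c, d = -20, 4    # supply intercept and slope (assumptions)

# Setting Qd = Qs and solving for P gives the textbook equilibrium.
p_star = (a - c) / (b + d)   # (100 + 20) / 6 = 20.0
q_star = a - b * p_star      # 100 - 40 = 60.0
print(f"equilibrium price = {p_star}, quantity = {q_star}")
```

The two-line solution is trivially precise; everything interesting — why the curves sit where they do, and how they shift — is hidden in the coefficients.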

Further, all mathematical expressions, no matter how complex, can be expressed in plain language, though it may be hard to do so when the words become too many and their relationships too convoluted. But until one tries to do so, one is at the mercy of the mathematical economist whose equation has no counterpart in the real world of economic activity. In other words, an equation represents nothing more than the manipulation of mathematical relationships until it’s brought to earth by plain language and empirical testing. Short of that, it’s as meaningful as Urdu is to a Cockney.

Finally, mathematical economics lends aid and comfort to proponents of economic control. Whether or not they understand the mathematics or the economics, the expression of congenial ideas in mathematical form lends unearned — and dangerous — credibility to the controller’s agenda. The relatively simple multiplier is a case in point. As I explain in “The Keynesian Multiplier: Phony Math,”

the Keynesian investment/government-spending multiplier simply tells us that if ∆Y = $5 trillion, and if b = 0.8, then it is a matter of mathematical necessity that ∆C = $4 trillion and ∆I + ∆G = $1 trillion. In other words, a rise in I + G of $1 trillion doesn’t cause a rise in Y of $5 trillion; rather, Y must rise by $5 trillion for C to rise by $4 trillion and I + G to rise by $1 trillion. If there’s a causal relationship between ∆G and ∆Y, the multiplier doesn’t portray it.
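
The arithmetic behind that passage is nothing more than the accounting identity Y = C + I + G combined with the assumed consumption function C = bY, so that Y = (I + G)/(1 − b). A few lines make the circularity plain (the numbers are the post’s):

```python
# Keynesian multiplier as pure accounting: Y = C + I + G with C = b*Y,
# so Y = (I + G) / (1 - b) and the "multiplier" is 1 / (1 - b).
b = 0.8                          # marginal propensity to consume (the post's value)
multiplier = 1 / (1 - b)         # 5.0
delta_ig = 1.0                   # change in I + G, in $trillions
delta_y = multiplier * delta_ig  # 5.0
delta_c = b * delta_y            # 4.0
print(multiplier, delta_y, delta_c)  # 5.0 5.0 4.0
```

Nothing in those lines says anything about causation; they simply restate the identity, which is the point.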

I followed that post with “The True Multiplier”:

Math trickery aside, there is evidence that the Keynesian multiplier is less than 1. Robert J. Barro of Harvard University opens an article in The Wall Street Journal with the statement that “economists have not come up with explanations … for multipliers above one.”

Barro continues:

A much more plausible starting point is a multiplier of zero. In this case, the GDP is given, and a rise in government purchases requires an equal fall in the total of other parts of GDP — consumption, investment and net exports. . . .

What do the data show about multipliers? Because it is not easy to separate movements in government purchases from overall business fluctuations, the best evidence comes from large changes in military purchases that are driven by shifts in war and peace. A particularly good experiment is the massive expansion of U.S. defense expenditures during World War II. The usual Keynesian view is that the World War II fiscal expansion provided the stimulus that finally got us out of the Great Depression. Thus, I think that most macroeconomists would regard this case as a fair one for seeing whether a large multiplier ever exists.

I have estimated that World War II raised U.S. defense expenditures by $540 billion (1996 dollars) per year at the peak in 1943-44, amounting to 44% of real GDP. I also estimated that the war raised real GDP by $430 billion per year in 1943-44. Thus, the multiplier was 0.8 (430/540). The other way to put this is that the war lowered components of GDP aside from military purchases. The main declines were in private investment, nonmilitary parts of government purchases, and net exports — personal consumer expenditure changed little. Wartime production siphoned off resources from other economic uses — there was a dampener, rather than a multiplier. . . .

There are reasons to believe that the war-based multiplier of 0.8 substantially overstates the multiplier that applies to peacetime government purchases. For one thing, people would expect the added wartime outlays to be partly temporary (so that consumer demand would not fall a lot). Second, the use of the military draft in wartime has a direct, coercive effect on total employment. Finally, the U.S. economy was already growing rapidly after 1933 (aside from the 1938 recession), and it is probably unfair to ascribe all of the rapid GDP growth from 1941 to 1945 to the added military outlays. [“Government Spending Is No Free Lunch,” The Wall Street Journal, January 22, 2009]

This is from Valerie A. Ramey of the University of California-San Diego and the National Bureau of Economic Research:

. . . [I]t appears that a rise in government spending does not stimulate private spending; most estimates suggest that it significantly lowers private spending. These results imply that the government spending multiplier is below unity. Adjusting the implied multiplier for increases in tax rates has only a small effect. The results imply a multiplier on total GDP of around 0.5. [“Government Spending and Private Activity,” January 2012]

In fact,

for the period 1947-2012 I estimated the year-over-year percentage change in GDP (denoted as Y%) as a function of G/GDP (denoted as G/Y):

Y% = 0.09 – 0.17(G/Y)

Solving for Y% = 0 yields G/Y = 0.53; that is, Y% will drop to zero if G/Y rises to 0.53 (or thereabouts). At the present level of G/Y (about 0.4), Y% will hover just above 2 percent, as it has done in recent years.
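
A few lines reproduce that arithmetic (the regression coefficients are the post’s own; the code merely evaluates them):

```python
# The post's estimated relationship: Y% = 0.09 - 0.17 * (G/Y).
def growth(g_over_y):
    return 0.09 - 0.17 * g_over_y

print(round(growth(0.40), 3))   # 0.022 -> growth just above 2 percent
print(round(0.09 / 0.17, 3))    # 0.529 -> the G/Y ratio at which growth hits zero
print(round(growth(0.234), 3))  # 0.05  -> about 5 percent at the 1947 ratio, used below
```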

If G/Y had remained at 0.234, its value in 1947:

  • Real growth would have been about 5 percent a year, instead of 3.2 percent (the actual value for 1947-2012).
  • The total value of Y for 1947-2012 would have been higher by $500 trillion (98 percent).
  • The total value of G would have been lower by $61 trillion (34 percent).

The last two points, taken together, imply a cumulative government-spending multiplier (K) for 1947-2012 of about -8. That is, aggregate output in 1947-2012 declined by 8 dollars for every dollar of government spending above the amount represented by G/Y = 0.234.

But -8 is only an average value for 1947-2012. It gets worse. The reduction in Y is cumulative; that is, every extra dollar of G reduces the amount of Y that is available for growth-producing investment, which leads to a further reduction in Y, which leads to a further reduction in growth-producing investment, and on and on. (Think of the phenomenon as negative compounding; take a dollar from your savings account today, and the value of the savings account years from now will be lower than it would have been by a multiple of that dollar: [1 + interest rate] raised to nth power, where n = number of years.) Because of this cumulative effect, the effective value of K in 2012 was about -14.
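
The savings-account analogy in that parenthesis is easy to verify with a stylized sketch; the 5 percent rate and 20-year horizon are my own illustrative assumptions:

```python
# Negative compounding: a dollar diverted today costs (1 + r)**n dollars in year n.
r = 0.05  # illustrative annual return (assumption)
n = 20    # illustrative horizon in years (assumption)

forgone = (1 + r) ** n
print(round(forgone, 2))  # 2.65: each dollar withdrawn today leaves the account
                          # about $2.65 poorer twenty years on
```

Apply the same logic to output diverted from growth-producing investment year after year, and the cumulative multiplier grows more negative over time, as the post says.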

The multiplier is a seductive and easy-to-grasp mathematical construct. But in the hands of politicians and their economist-enablers, it has been an instrument of economic destruction.

Perhaps “higher” mathematical economics is less destructive because it’s an inside game, played by economists for the benefit of economists. I devoutly hope that’s true.

Economists As Scientists

This is the third entry in a series of loosely connected posts on economics. The first entry is here and the second entry is here. (Related posts by me are noted parenthetically throughout this one.)

Science is something that some people “do” some of the time. There are full-time human beings and part-time scientists. And the part-timers are truly scientists only when they think and act in accordance with the scientific method.*

Acting in accordance with the scientific method is a matter of attitude and application. The proper attitude is one of indifference about the correctness of a hypothesis or theory. The proper application rejects a hypothesis if it can’t be tested, and rejects a theory if it’s refuted (falsified) by relevant and reliable observations.

Regarding attitude, I turn to the most famous person who was sometimes a scientist: Albert Einstein. This is from the Wikipedia article about the Bohr-Einstein debate:

The quantum revolution of the mid-1920s occurred under the direction of both Einstein and [Niels] Bohr, and their post-revolutionary debates were about making sense of the change. The shocks for Einstein began in 1925 when Werner Heisenberg introduced matrix equations that removed the Newtonian elements of space and time from any underlying reality. The next shock came in 1926 when Max Born proposed that mechanics were to be understood as a probability without any causal explanation.

Einstein rejected this interpretation. In a 1926 letter to Max Born, Einstein wrote: “I, at any rate, am convinced that He [God] does not throw dice.” [Apparently, Einstein also used the line in Bohr’s presence, and Bohr replied, “Einstein, stop telling God what to do.” — TEA]

At the Fifth Solvay Conference held in October 1927 Heisenberg and Born concluded that the revolution was over and nothing further was needed. It was at that last stage that Einstein’s skepticism turned to dismay. He believed that much had been accomplished, but the reasons for the mechanics still needed to be understood.

Einstein’s refusal to accept the revolution as complete reflected his desire to see developed a model for the underlying causes from which these apparent random statistical methods resulted. He did not reject the idea that positions in space-time could never be completely known but did not want to allow the uncertainty principle to necessitate a seemingly random, non-deterministic mechanism by which the laws of physics operated.

It’s true that quantum mechanics was inchoate in the mid-1920s, and that it took a couple of decades to mature into quantum field theory. But there’s more than a trace of “attitude” in Einstein’s refusal to accept quantum mechanics or to stay abreast of developments in the theory, and in his quixotic search for his own theory of everything, which he hoped would obviate the need for a non-deterministic explanation of quantum phenomena.

Improper application of the scientific method is rife. See, for example, the Wikipedia article about the replication crisis and John Ioannidis’s article, “Why Most Published Research Findings Are False.” (See also “Ty Cobb and the State of Science” and “Is Science Self-Correcting?”) For a thorough analysis of the roots of the crisis, read Michael Hart’s book, Hubris: The Troubling Science, Economics, and Politics of Climate Change.

A bad attitude and improper application are both found among the so-called scientists who declare that the “science” of global warming is “settled,” and that human-generated CO2 emissions are the primary cause of the apparent rise in global temperatures during the last quarter of the 20th century. The bad attitude is the declaration of “settled science.” In “The Science Is Never Settled” I give many prominent examples of the folly of declaring it to be “settled.”

The improper application of the scientific method with respect to global warming began with the hypothesis that the “culprit” is CO2 emissions generated by the activities of human beings — thus anthropogenic global warming (AGW). There’s no end of evidence to the contrary, some of which is summarized in these posts and many of the links found therein. There’s enough evidence, in my view, to have rejected the CO2 hypothesis many times over. But there’s a great deal of money and peer-approval at stake, so the rush to judgment became a stampede. And attitude rears its ugly head when pro-AGW “scientists” shun the real scientists who are properly skeptical about the CO2 hypothesis, or at least about the degree to which CO2 supposedly influences temperatures. (For a depressingly thorough account of the AGW scam, read Michael Hart’s Hubris: The Troubling Science, Economics, and Politics of Climate Change.)

I turn now to economists, as I have come to know them in more than fifty years of being taught by them, working with them, and reading their works. Scratch an economist and you’re likely to find a moralist or reformer just beneath a thin veneer of rationality. Economists like to believe that they’re objective. But they aren’t; no one is. Everyone brings to the table a large serving of biases that are incubated in temperament, upbringing, education, and culture.

Economists bring to the table a heaping helping of tunnel vision. “Hard scientists” do, too, but their tunnel vision is generally a good thing, because it’s actually aimed at a deeper understanding of the inanimate and subhuman world rather than the advancement of a social or economic agenda. (I make a large exception for “hard scientists” who contribute to global-warming hysteria, as discussed above.)

Some economists, especially behavioralists, view the world through the lens of wealth-and-utility-maximization. Their great crusade is to force everyone to make rational decisions (by their lights), through “nudging.” It almost goes without saying that government should be the nudger-in-chief. (See “The Perpetual Nudger” and the many posts linked to therein.)

Other economists — though far fewer than in the past — have a thing about monopoly and oligopoly (the domination of a market by one or a few sellers). They’re heirs to the trust-busting of the late 1800s and early 1900s, a movement led by non-economists who sought to blame the woes of working-class Americans on the “plutocrats” (Rockefeller, Carnegie, Ford, etc.) who had merely made life better and more affordable for Americans, while also creating jobs for millions of them and reaping rewards for the great financial risks that they took. (See “Monopoly and the General Welfare” and “Monopoly: Private Is Better than Public.”) As it turns out, the biggest and most destructive monopoly of all is the federal government, so beloved and trusted by trust-busters — and too many others. (See “The Rahn Curve Revisited.”)

Nowadays, a lot of economists are preoccupied by income inequality, as if it were something evil and not mainly an artifact of differences in intelligence, ambition, education, and the like. And inequality — the prospect of earning rather grand sums of money — is what drives a lot of economic endeavor, to the good of workers and consumers. (See “Mass (Economic) Hysteria: Income Inequality and Related Themes” and the many posts linked to therein.) Remove inequality and what do you get? The Soviet Union and Communist China, in which everyone is equal except party operatives and their families, friends, and favorites.

When the inequality-preoccupied economists are confronted by the facts of life, they usually turn their attention from inequality as a general problem to the (inescapable) fact that an income distribution has a top one percent and a top one-tenth of one percent — as if there were something especially loathsome about people in those categories. (Paul Krugman shifted his focus to the top one-tenth of one percent when he realized that he’s in the top one percent, so perhaps he knows that he’s loathsome and wishes to deny it, to himself.)

Crony capitalism is trotted out as a major cause of very high incomes. But that’s hardly a universal cause, given that a lot of very high incomes are earned by athletes and film stars next to whom most investment bankers and CEOs make peanuts. Moreover, as I’ve said on several occasions, crony capitalists are bright and driven enough to be in the stratosphere of any income distribution. Further, crony capitalism grows in the fertile soil of the regulatory power of government, which makes it possible.

Many economists became such, it would seem, in order to promote big government and its supposed good works — income redistribution being one of them. Joseph Stiglitz and Paul Krugman are two leading exemplars of what I call the New Deal school of economic thought, which amounts to throwing government and taxpayers’ money at every perceived problem, that is, every economic outcome that is deemed unacceptable by accountants of the soul. (See “Accountants of the Soul.”)

Stiglitz and Krugman — both Nobel laureates in economics — are typical “public intellectuals” whose intelligence breeds in them a kind of arrogance. (See “Intellectuals and Society: A Review.”) It’s the kind of arrogance that I mentioned in the preceding post in this series: a penchant for deciding what’s best for others.

New Deal economists like Stiglitz and Krugman carry it a few steps further. They ascribe to government an impeccable character, an intelligence to match their own, and a monolithic will. They then assume that this infallible and wise automaton can and will do precisely what they would do: Create the best of all possible worlds. (See the many posts in which I discuss the nirvana fallacy.)

New Deal economists, in other words, live their intellectual lives in a dream-world populated by the likes of Jiminy Cricket (“When You Wish Upon a Star”), Dorothy (“Somewhere Over the Rainbow”), and Mary Jane of a long-forgotten comic book (“First I shut my eyes real tight, then I wish with all my might! Magic words of poof, poof, piffles, make me just as small as [my mouse] Sniffles!”).

I could go on, but you should by now have grasped the point: What too many economists want to do is change human nature, channel it in directions deemed “good” (by the economist), or simply impose their view of “good” on everyone. To do such things, they must rely on government.

It’s true that government can order people about, but it can’t change human nature, which has an uncanny knack for thwarting Utopian schemes. (Obamacare, whose chief architect was economist Jonathan Gruber, is exhibit A this year.) And government (inconveniently for Utopians) really consists of fallible, often unwise, contentious human beings. So government is likely to march off in a direction unsought by Utopian economists.

Nevertheless, it’s hard to thwart the tax collector. The regulator can and does make things so hard for business that a firm that does get off the ground can’t create as much prosperity and as many jobs as it would in the absence of regulation. And the redistributor only makes things worse by penalizing success. Tax, regulate, and redistribute should have been the mantra of the New Deal and most presidential “deals” since.

I hold economists of the New Deal stripe partly responsible for the swamp of stagnation into which the nation’s economy has descended. (See “Economic Growth Since World War II.”) Largely responsible, of course, are opportunistic if not economically illiterate politicians who pander to rent-seeking, economically illiterate constituencies. (Yes, I’m thinking of old folks and the various “disadvantaged” groups with which they have struck up an alliance of convenience.)

The distinction between normative economics and positive economics is of no particular use in sorting economists between advocates and scientists. A lot of normative economics masquerades as positive economics. The work of Thomas Piketty and his comrades-in-arms comes to mind, for example. (See “McCloskey on Piketty.”) Almost everything done to quantify and defend the Keynesian multiplier counts as normative economics, inasmuch as the work is intended (wittingly or not) to defend an intellectual scam of 80 years’ standing. (See “The Keynesian Multiplier: Phony Math,” “The True Multiplier,” and “Further Thoughts about the Keynesian Multiplier.”)

Enough said. If you want to see scientific economics in action, read Regulation. Not every article in it exemplifies scientific inquiry, but a good many of them do. It’s replete with articles about microeconomics, in which the authors use real-world statistics to validate and quantify the many axioms of economics.

A final thought is sparked by Arnold Kling’s post, “Ed Glaeser on Science and Economics.” Kling writes:

I think that the public has a sort of binary classification. If it’s “science,” then an expert knows more than the average Joe. If it’s not a science, then anyone’s opinion is as good as anyone else’s. I strongly favor an in-between category, called a discipline. Think of economics as a discipline, where it is possible for avid students to know more than ordinary individuals, but without the full use of the scientific method.

On this rare occasion I disagree with Kling. The accumulation of knowledge about economic variables, or pseudo-knowledge such as estimates of GDP (see “Macroeconomics and Microeconomics“), either leads to well-tested, verified, and reproducible theories of economic behavior or it leads to conjectures, of which there are so many opposing ones that it’s “take your pick.” If that’s what makes a discipline, give me the binary choice between science and story-telling. Most of economics seems to be story-telling. “Discipline” is just a fancy word for it.

Collecting baseball cards and memorizing the statistics printed on them is a discipline. Most of economics is less useful than collecting baseball cards — and a lot more destructive.

Here’s my hypothesis about economists: There are proportionally as many of them who act like scientists as there are baseball players who have career batting averages of at least .300.
* Richard Feynman, a physicist and real scientist, had a different view of the scientific method than Karl Popper’s standard taxonomy. I see Feynman’s view as complementary to Popper’s, not at odds with it. What is “constructive skepticism” (Feynman’s term) but a gentler way of saying that a hypothesis or theory might be falsified and that the act of falsification may point to a better hypothesis or theory?

Economics and Science

This is the second entry in what I expect to be a series of loosely connected posts on economics. The first entry is here.

Science is unnecessarily daunting to the uninitiated, which is to say, the vast majority of the populace. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

Here I will dissect science, then turn to economics and begin a discussion of its scientific and non-scientific aspects. It has both, though at least one non-scientific aspect (the Keynesian multiplier) draws an inordinate amount of attention, and has many true believers within the profession.

Science is knowledge, but not all knowledge is science. A scientific body of knowledge is systematic; that is, the granular facts or phenomena which comprise the body of knowledge must be connected in patterned ways. The purported facts or phenomena of a science must represent reality, things that can be observed and measured in some way. Scientists may hypothesize the existence of an unobserved thing (e.g., the ether, dark matter), in an effort to explain observed phenomena. But the unobserved thing stands outside scientific knowledge until its existence is confirmed by observation, or because it remains standing as the only plausible explanation of observable phenomena. Hypothesized things may remain outside the realm of scientific knowledge for a very long time, if not forever. The Higgs boson, for example, was hypothesized in 1964 and has been tentatively (but not conclusively) confirmed since its “discovery” in 2012.

Science has other key characteristics. Facts and patterns must be capable of validation and replication by persons other than those who claim to have found them initially. Patterns should have predictive power; thus, for example, if the sun fails to rise in the east, the model of Earth’s movements which says that it will rise in the east is presumably invalid and must be rejected or modified so that it correctly predicts future sunrises or the lack thereof. Creating a model or tweaking an existing model just to account for a past event (e.g., the failure of the Sun to rise, the apparent increase in global temperatures from the 1970s to the 1990s) proves nothing other than an ability to “predict” the past with accuracy.

Models are usually clothed in the language of mathematics and statistics. But those aren’t scientific disciplines in themselves; they are tools of science. Expressing a theory in mathematical terms may lend the theory a scientific aura, but a theory couched in mathematical terms is not a scientific one unless (a) it can be tested against facts yet to be ascertained and events yet to occur, and (b) it is found to accord with those facts and events consistently, by rigorous statistical tests.

A science may be descriptive rather than mathematical. In a descriptive science (e.g., plant taxonomy), particular phenomena sometimes are described numerically (e.g., the number of leaves on the stem of a species), but the relations among various phenomena are not reducible to mathematics. Nevertheless, a predominantly descriptive discipline will be scientific if the phenomena within its compass are connected in patterned ways, can be validated, and are applicable to newly discovered entities.

Non-scientific disciplines can be useful, whereas some purportedly scientific disciplines verge on charlatanism. Thus, for example:

  • History, by my reckoning, is not a science because its account of events and their relationships is inescapably subjective and incomplete. But a knowledge of history is valuable, nevertheless, for the insights it offers into the influence of human nature on the outcomes of economic and political processes.
  • Physics is a science in most of its sub-disciplines, but there are some (e.g., cosmology) where it descends into the realm of speculation. It is informed, fascinating speculation to be sure, but speculation all the same. The idea of multiverses, for example, can’t be tested, inasmuch as human beings and their tools are bound to the known universe.
  • Economics is a science only to the extent that it yields empirically valid insights about specific economic phenomena (e.g., the effects of laws and regulations on the prices and outputs of specific goods and services). Then there are concepts like the Keynesian multiplier, about which I’ll say more in this series. It’s a hypothesis that rests on a simplistic, hydraulic view of the economic system. (Other examples of pseudo-scientific economic theories are the labor theory of value and historical determinism.)

In sum, there is no such thing as “science,” writ large; that is, no one may appeal, legitimately, to “science” in the abstract. A particular discipline may be a science, but it is a science only to the extent that it comprises a factual and replicable body of patterned knowledge. Patterned knowledge includes theories with predictive power.

A scientific theory is a hypothesis that has thus far been confirmed by observation. Every scientific theory rests eventually on axioms: self-evident principles that are accepted as true without proof. The principle of uniformity (which can be traced to Galileo) is an example of such an axiom:

Uniformitarianism is the assumption that the same natural laws and processes that operate in the universe now have always operated in the universe in the past and apply everywhere in the universe. It refers to invariance in the metaphysical principles underpinning science, such as the constancy of causal structure throughout space-time, but has also been used to describe spatiotemporal invariance of physical laws. Though an unprovable postulate that cannot be verified using the scientific method, uniformitarianism has been a key first principle of virtually all fields of science

Thus, for example, if observer B is moving away from observer A at a certain speed, observer A will perceive that he is moving away from observer B at that speed. It follows that an observer cannot determine either his absolute velocity or direction of travel in space. The principle of uniformity is a fundamental axiom of modern physics, most notably of Einstein’s special and general theories of relativity.

There’s a fine line between an axiom and a theory. Was the idea of a geocentric universe an axiom or a theory? If it was taken as axiomatic — as it surely was by many scientists for about 2,000 years — then it’s fair to say that an axiom can give way under the pressure of observational evidence. (Such an event is what Thomas Kuhn calls a paradigm shift.) But no matter how far scientists push the boundaries of knowledge, they must at some point rely on untestable axioms, such as the principle of uniformity. There are simply deep and (probably) unsolvable mysteries that science is unlikely to fathom.

This brings me to economics, which — in my view — rests on these self-evident axioms:

1. Each person strives to maximize his or her sense of satisfaction, which may also be called well-being, happiness, or utility (an ugly word favored by economists). Striving isn’t the same as achieving, of course, because of lack of information, emotional decision-making, buyer’s remorse, etc.

2. Happiness can and often does include an empathic concern for the well-being of others; that is, one’s happiness may be served by what is usually labelled altruism or self-sacrifice.

3. Happiness can be and often is served by the attainment of non-material ends. Not all persons (perhaps not even most of them) are interested in the maximization of wealth, that is, claims on the output of goods and services. In sum, not everyone is a wealth maximizer. (But see axiom number 12.)

4. The feeling of satisfaction that an individual derives from a particular product or service is situational — unique to the individual and to the time and place in which the individual undertakes to acquire or enjoy the product or service. Generally, however, there is a (situationally unique) point at which the acquisition or enjoyment of additional units of a particular product or service during a given period of time tends to offer less satisfaction than would the acquisition or enjoyment of units of other products or services that could be obtained at the same cost.

5. The value that a person places on a product or service is subjective. Products and services don’t have intrinsic values that apply to all persons at a given time or period of time.

6. The ability of a person to acquire products and services, and to accumulate wealth, depends (in the absence of third-party interventions) on the valuation of the products and services that are produced in part or whole by the person’s labor (mental or physical), or by the assets that he owns (e.g., a factory building, a software patent). That valuation is partly subjective (e.g., consumers’ valuation of the products and services, an employer’s qualitative evaluation of the person’s contributions to output) and partly objective (e.g., an employer’s knowledge of the price commanded by a product or service, an employer’s measurement of an employee’s contribution to the quantity of output).

7. The persons and firms from which products and services flow are motivated by the acquisition of income, with which they can acquire other products and services, and accumulate wealth for personal purposes (e.g., to pass to heirs) or business purposes (e.g., to expand the business and earn more income). So-called profit maximization (seeking to maximize the difference between the cost of production and revenue from sales) is a key determinant of business decisions but far from the only one. Others include, but aren’t limited to, being a “good neighbor,” providing employment opportunities for local residents, and underwriting philanthropic efforts.

8. The cost of production necessarily influences the price at which a good or service will be offered for sale, but doesn’t solely determine the price at which it will be sold. Selling price depends on the subjective valuation of the product or service, prospective buyers’ incomes, and the prices of other products and services, including those that are direct or close substitutes and those to which users may switch, depending on relative prices.

9. The feeling of satisfaction that a person derives from the acquisition and enjoyment of the “basket” of products and services that he is able to buy, given his income, etc., doesn’t necessarily diminish, as long as the person has access to a great variety of products and services. (This axiom and axiom 12 put paid to the myth of diminishing marginal utility of income.)

10. Work may be a source of satisfaction in itself or it may simply be a means of acquiring and enjoying products and services, or acquiring claims to them by accumulating wealth. Even when work is satisfying in itself, it is subject to the “law” of diminishing marginal satisfaction.

11. Work, for many (but not all) persons, is no longer worth the effort if they become able to subsist comfortably enough by virtue of the wealth that they have accumulated, the availability of redistributive schemes (e.g., Social Security and Medicare), or both. In such cases the accumulation of wealth often ceases and reverses course, as it is “cashed in” to defray the cost of subsistence (which may be far more than minimal).

12. However, there are not a few persons whose “work” is such a great source of satisfaction that they continue doing it until they are no longer capable of doing so. And there are some persons whose “work” is the accumulation of wealth, without limit. Such persons may want to accumulate wealth in order to “do good” or to leave their heirs well off or simply for the satisfaction of running up the score. The justification matters not. There is no theoretical limit to the satisfaction that a particular person may derive from the accumulation of wealth. Moreover, many of the persons (discussed in axiom 11) who aren’t able to accumulate wealth endlessly would do so if they had the ability and the means to take the required risks.

13. Individual degrees of satisfaction (happiness, etc.) are ephemeral, nonquantifiable, and incommensurable. There is no such thing as a social welfare function that a third party (e.g., government) can maximize by taking from A to give to B. If there were such a thing, its value would increase if, for example, A were to punch B in the nose and derive a degree of pleasure that somehow more than offsets the degree of pain incurred by B. (The absurdity of a social-welfare function that allows As to punch Bs in their noses ought to be enough to shame inveterate social engineers into quietude — but it won’t. They derive great satisfaction from meddling.) Moreover, one of the primary excuses for meddling is that income (and thus wealth) has a diminishing marginal utility, so it makes sense to redistribute from those with higher incomes (or more wealth) to those who have less of either. Marginal utility is, however, unknowable (see axioms 4 and 5), and may not be diminishing in any case (see axioms 9 and 12).

14. Whenever a third party (government, do-gooders, etc.) intervenes in the affairs of others, it is merely imposing its preferences on those others. The third party sometimes claims to know what’s best for “society as a whole,” etc., but no third party can know such a thing. (See axiom 13.)

15. It follows from axiom 13 that the welfare of “society as a whole” can’t be aggregated or measured. An estimate of the monetary value of the economic output of a nation’s economy (Gross Domestic Product) is by no means an estimate of the welfare of “society as a whole.” (Again, see axiom 13.)

That may seem like a lot of axioms, which might give you pause about my claim that some aspects of economics are scientific. But economics is inescapably grounded in axioms such as the ones that I propound. This aligns me (mainly) with the Austrian economists, whose leading light was Ludwig von Mises. Gene Callahan writes about him at the website of the Ludwig von Mises Institute:

As I understand [Mises], by categorizing the fundamental principles of economics as a priori truths and not contingent facts open to empirical discovery or refutation, Mises was not claiming that economic law is revealed to us by divine action, like the ten commandments were to Moses. Nor was he proposing that economic principles are hard-wired into our brains by evolution, nor even that we could articulate or comprehend them prior to gaining familiarity with economic behavior through participating in and observing it in our own lives. In fact, it is quite possible for someone to have had a good deal of real experience with economic activity and yet never to have wondered about what basic principles, if any, it exhibits.

Nevertheless, Mises was justified in describing those principles as a priori, because they are logically prior to any empirical study of economic phenomena. Without them it is impossible even to recognize that there is a distinct class of events amenable to economic explanation. It is only by pre-supposing that concepts like intention, purpose, means, ends, satisfaction, and dissatisfaction are characteristic of a certain kind of happening in the world that we can conceive of a subject matter for economics to investigate. Those concepts are the logical prerequisites for distinguishing a domain of economic events from all of the non-economic aspects of our experience, such as the weather, the course of a planet across the night sky, the growth of plants, the breaking of waves on the shore, animal digestion, volcanoes, earthquakes, and so on.

Unless we first postulate that people deliberately undertake previously planned activities with the goal of making their situations, as they subjectively see them, better than they otherwise would be, there would be no grounds for differentiating the exchange that takes place in human society from the exchange of molecules that occurs between two liquids separated by a permeable membrane. And the features which characterize the members of the class of phenomena singled out as the subject matter of a special science must have an axiomatic status for practitioners of that science, for if they reject them then they also reject the rationale for that science’s existence.

Economics is not unique in requiring the adoption of certain assumptions as a pre-condition for using the mode of understanding it offers. Every science is founded on propositions that form the basis rather than the outcome of its investigations. For example, physics takes for granted the reality of the physical world it examines. Any piece of physical evidence it might offer has weight only if it is already assumed that the physical world is real. Nor can physicists demonstrate their assumption that the members of a sequence of similar physical measurements will bear some meaningful and consistent relationship to each other. Any test of a particular type of measurement must pre-suppose the validity of some other way of measuring against which the form under examination is to be judged.

Why do we accept that when we place a yardstick alongside one object, finding that the object stretches across half the length of the yardstick, and then place it alongside another object, which only stretches to a quarter its length, that this means the first object is longer than the second? Certainly not by empirical testing, for any such tests would be meaningless unless we already grant the principle in question. In mathematics we don’t come to know that 2 + 2 always equals 4 by repeatedly grouping two items with two others and counting the resulting collection. That would only show that our answer was correct in the instances we examined — given the assumption that counting works! — but we believe it is universally true. [And it is universally true by the conventions of mathematics. If what we call “5” were instead called “4,” 2 + 2 would always equal 5. — TEA] Biology pre-supposes that there is a significant difference between living things and inert matter, and if it denied that difference it would also be denying its own validity as a special science. . . .

The great fecundity from such analysis in economics is due to the fact that, as acting humans ourselves, we have a direct understanding of human action, something we lack in pondering the behavior of electrons or stars. The contemplative mode of theorizing is made even more important in economics because the creative nature of human choice inherently fails to exhibit the quantitative, empirical regularities, the discovery of which characterizes the modern, physical sciences. (Biology presents us with an interesting intermediate case, as many of its findings are qualitative.) . . .

[A] person can be presented with scores of experiments supporting [the conclusion that] a particular scientific theory is sound, but no possible experiment ever can demonstrate to him that experimentation is a reasonable means by which to evaluate a scientific theory. Only his intuitive grasp of its plausibility can bring him to accept that proposition. (Unless, of course, he simply adopts it on the authority of others.) He can be led through hundreds of rigorous proofs for various mathematical theorems and be taught the criteria by which they are judged to be sound, but there can be no such proof for the validity of the method itself. (Kurt Gödel famously demonstrated that a formal system of mathematical deduction that is complex enough to model even so basic a topic as arithmetic might avoid either incompleteness or inconsistency, but always must suffer at least one of those flaws.) . . .

This ultimate, inescapable reliance on judgment is illustrated by Lewis Carroll in Alice Through the Looking Glass. He has Alice tell Humpty Dumpty that 365 minus one is 364. Humpty is skeptical, and asks to see the problem done on paper. Alice dutifully writes down:

365 – 1 = 364

Humpty Dumpty studies her work for a moment before declaring that it seems to be right. The serious moral of Carroll’s comic vignette is that formal tools of thinking are useless in convincing someone of their conclusions if he hasn’t already intuitively grasped the basic principles on which they are built.

All of our knowledge ultimately is grounded on our intuitive recognition of the truth when we see it. There is nothing magical or mysterious about the a priori foundations of economics, or at least nothing any more magical or mysterious than there is about our ability to comprehend any other aspect of reality.

(Callahan has more to say here. For a technical discussion of the science of human action, or praxeology, read this. Some glosses on Gödel’s incompleteness theorem are here.)

I omitted an important passage from the preceding quotation, in order to single it out. Callahan says also that

Mises’s protégé F.A. Hayek, while agreeing with his mentor on the a priori nature of the “logic of action” and its foundational status in economics, still came to regard investigating the empirical issues that the logic of action leaves open as a more important undertaking than further examination of that logic itself.

I agree with Hayek. It’s one thing to know axiomatically that the speed of light is constant; it is quite another (and useful) thing to know experimentally that the speed of light (in empty space) is about 671 million miles an hour. Similarly, it is one thing to deduce from the axioms of economics that demand curves generally slope downward; it is quite another (and useful) thing to estimate specific demand functions.
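
Here is a minimal sketch of what “estimating a specific demand function” can look like in practice: ordinary least squares on made-up price-quantity data. The numbers are fabricated for illustration, and a real study would worry about simultaneity, controls, and functional form:

```python
# OLS fit of a linear demand curve Q = a + b*P on fabricated observations.
prices = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
quantities = [95, 88, 81, 76, 68, 62, 55]

n = len(prices)
mean_p = sum(prices) / n
mean_q = sum(quantities) / n
b = sum((p - mean_p) * (q - mean_q) for p, q in zip(prices, quantities)) \
    / sum((p - mean_p) ** 2 for p in prices)
a = mean_q - b * mean_p
print(f"Q = {a:.1f} + ({b:.1f}) * P")  # the slope comes out negative, as the axioms imply
```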

But one must always be mindful of the limitations of quantitative methods in economics. As James Sheehan writes at the website of the Mises Institute,

economists are prone to error when they ascribe excessive precision to advanced statistical techniques. They assume, falsely, that a voluminous amount of historical observations (sample data) can help them to make inferences about the future. They presume that probability distributions follow a bell-shaped pattern. They make no provision for the possibility that past correlations between economic variables and data were coincidences.

Nor do they account for the possibility, as economist Robert Lucas demonstrated, that people will incorporate predictable patterns into their expectations, thus canceling out the predictive value of such patterns. . . .

As [Nassim Nicholas] Taleb points out [in Fooled by Randomness], the popular Monte Carlo simulation “is more a way of thinking than a computational method.” Employing this way of thinking can enhance one’s understanding only if its weaknesses are properly understood and accounted for. . . .

Taleb’s critique of econometrics is quite compatible with Austrian economics, which holds that dynamic human actions are too subjective and variegated to be accurately modeled and predicted.

In some parts of Fooled by Randomness, Taleb almost sounds Austrian in his criticisms of economists who worship “the efficient market religion.” Such economists are misguided, he argues, because they begin with the flawed hypothesis that human beings act rationally and do what is mathematically “optimal.” . . .

As opposed to a Utopian Vision, in which human beings are rational and perfectible (by state action), Taleb adopts what he calls a Tragic Vision: “We are faulty and there is no need to bother trying to correct our flaws.” It is refreshing to see a highly successful practitioner of statistics and finance adopt a contrarian viewpoint towards economics.

Yet, as Arnold Kling explains, many (perhaps most) economists have lost sight of the axioms of economics in their misplaced zeal to emulate the methods of the physical sciences:

The most distinctive trend in economic research over the past hundred years has been the increased use of mathematics. In the wake of Paul Samuelson’s (Nobel 1970) Ph.D dissertation, published in 1948, calculus became a requirement for anyone wishing to obtain an economics degree. By 1980, every serious graduate student was expected to be able to understand the work of Kenneth Arrow (Nobel 1972) and Gerard Debreu (Nobel 1983), which required mathematics several semesters beyond first-year calculus.

Today, the “theory sequence” at most top-tier graduate schools in economics is controlled by math bigots. As a result, it is impossible to survive as an economics graduate student with a math background that is less than that of an undergraduate math major. In fact, I have heard that at this year’s American Economic Association meetings, at a seminar on graduate education one professor quite proudly said that he ignored prospective students’ grades in economics courses, because their math proficiency was the key predictor of their ability to pass the coursework required to obtain an advanced degree.

The raising of the mathematical bar in graduate schools over the past several decades has driven many intelligent men and women (perhaps women especially) to pursue other fields. The graduate training process filters out students who might contribute from a perspective of anthropology, biology, psychology, history, or even intense curiosity about economic issues. Instead, the top graduate schools behave as if their goal were to produce a sort of idiot-savant, capable of appreciating and adding to the mathematical contributions of other idiot-savants, but not necessarily possessed of any interest in or ability to comprehend the world to which an economist ought to pay attention.

. . . The basic question of What Causes Prosperity? is not a question of how trading opportunities play out among a given array of goods. Instead, it is a question of how innovation takes place or does not take place in the context of institutional factors that are still poorly understood.

Mathematics, as I have said, is a tool of science; it is not science in itself. Dressing hypothetical relationships in the garb of mathematics doesn’t validate them.

Where, then, is the science in economics? And where is the nonsense? I’ve given you some hints (and more than hints). There’s more to come.

The Essence of Economics

This is the first entry in what I expect to be a series of loosely connected posts on economics.

Market-based voluntary exchange is an important if not dominant method of satisfying wants. To grasp that point, think of your day: You sleep and awaken in a house or apartment that you didn’t build yourself, but which is “yours” by dint of payments that you make from income you earn by doing things of value to other persons.* During your days at home, in a workplace, or in a vacation spot you spend many hours using products and services that you buy from others — everything from toilet paper, soap, and shampoo to clothing, food, transportation, entertainment, internet access, etc.

It is not that the things that you do for yourself and in direct cooperation with others are unimportant or valueless. Economists acknowledge the psychic value of self-sufficiency and the economic value of non-market cooperation, but they can’t measure the value of those things. Economists typically focus on market-based exchange because it involves transactions with measurable monetary values.

Another thing that economists can’t deal with, because it’s beyond the ken of economics, is the essence of life itself: one’s total sense of well-being, especially as it is influenced by the things done for oneself, solitary joys (reading, listening to music), and the happiness (or sadness) shared with friends and loved ones.

In sum, despite the pervasiveness of voluntary exchange, economics really treats only the marginalia of life — the rate at which a person would exchange a unit of X for a unit of Y, not how X or Y stacks up in the grand scheme of living.

That is the essence of economics, as a discipline. There is much more to it than that, of course; for example, how supply meets demand, how exogenous factors affect economic behavior, how activity at the level of the person or firm sends ripples across the economy, and why those ripples can’t be aggregated meaningfully.

More to come.
* Obviously, a lot of people derive their income from transfer payments (Social Security, food stamps, etc.), which I’ll address in future posts.

The Wages of Simplistic Economics

If this Wikipedia article accurately reflects what passes for microeconomics these days, the field hasn’t advanced since I took my first micro course almost 60 years ago. And my first micro course was based on Alfred Marshall’s Principles of Economics, first published in 1890.

What’s wrong with micro as it’s taught today, and as it has been taught for the better part of 126 years? It’s not the principles themselves, which are eminently sensible and empirically valid: Supply curves slope upward, demand curves slope downward, competition yields lower prices, etc. What’s wrong is the heavy reliance on two-dimensional graphical representations of the key variables and their interactions; for example, how utility functions (which are gross abstractions) generate demand curves, and how cost functions generate supply curves.

The cautionary words of Marshall and his many successors about the transitory nature of such variables are no match for the vivid, and static, images imprinted in the memories of the millions of students who took introductory microeconomics as undergraduates. Most of them took no additional courses in micro, and probably just an introductory course in macroeconomics — equally misleading.

Micro, as it is taught now, seems to purvey the same fallacy as it did when Marshall’s text was au courant. The fallacy, which is embedded in the easy-to-understand-and-remember graphs of supply and demand under various competitive conditions, is the apparent rigidity of those conditions. Professional economists (or some of them, at least) understand that economic conditions are fluid, especially in the absence of government regulation. But the typical student will remember the graph that depicts the dire results of a monopolistic market and take it as a writ for government intervention; for example:

Power that controls the economy should be in the hands of elected representatives of the people, not in the hands of an industrial oligarchy.

William O. Douglas
(dissent in U.S. v. Columbia Steel Co.)

Quite the opposite is true, as I argue at length in this post. Douglas, unfortunately, served on the Supreme Court from 1939 to 1975. He majored in English and economics, and presumably had more than one course in economics. But he was an undergraduate in the waning days of the anti-business, pro-regulation Progressive Era. So he probably never got past the simplistic idea of “monopoly bad, trust-busting good.”

If only the Supreme Court (and government generally) had been blessed with men like Maxwell Anderson, who wrote this:

When a government takes over a people’s economic life, it becomes absolute, and when it has become absolute, it destroys the arts, the minds, the liberties, and the meaning of the people it governs. It is not an accident that Germany, the first paternalistic state of modern Europe, was seized by an uncontrollable dictator who brought on the second world war; not an accident that Russia, adopting a centrally administered economy for humanitarian reasons, has arrived at a tyranny bloodier and more absolute than that of the Czars. And if England does not turn back soon, she will go this same way. Men who are fed by their government will soon be driven down to the status of slaves or cattle.

“The Guaranteed Life” (preface to
Knickerbocker Holiday, 1938, revised 1950)

And it’s happening here, too.

From Each According to His Ability…

…to each according to his need. So goes Marx’s vision of pure communism — when capitalism is no more. Unfettered labor will then produce economic goods in such great abundance that there is no question of some taking from others. All will feed at an ever-filling and overflowing public trough.

There are many holes in the Marxian argument. Here’s the bottom line: It’s an impossible dream that flouts human nature.

Capital accrues and markets arise spontaneously (where not distorted and suppressed by lawlessness, government, and lawless government) because they foster mutually beneficial exchanges of economic goods (e.g., labor for manufactured items).

Communism has failed to catch on, as a sustained and widespread phenomenon, because it rejects capitalism and assumes the inexorability of economic progress in the absence of incentives (e.g., the possibility of great rewards for taking great risks and investing time and resources). It is telling that “to each according to his need” (or an approximation of it) has been achieved on a broad scale only by force, and only by penalizing success and slowing economic progress.

If the state were to wither to nightwatchman status, the result would be the greatest outpouring of economic goods in human history. Everyone would be better off — rich and (relatively) poor alike. Only the envious and economic ignoramuses would be miserable, and then only in their own minds.

If Marx and his intellectual predecessors and successors were capable of thinking straight, they would have come up with the winning formula:

From each according to his ability and effort,
to each according to the market value of his output,
plus whatever voluntary contributions may come his way.

Where We Are, Economically

UPDATED (10/26/12)

The advance estimate of GDP for the third quarter of 2012 has been released. Real growth continues to slog along at about 2 percent. I have updated the graph, but the text needs no revision.

*     *     *

It occurred to me that the trend line in the second graph of “The Economy Slogs Along” is misleading. It is linear, when it should be curvilinear. Here is a better version:

Derived from the October 26, 2012 release of GDP estimates by the Bureau of Economic Analysis. (Contrary to the position of the National Bureau of Economic Research, there was no recession in 2000-2001. For my definition of a recession, see “Economic Growth Since World War II.”)

The more descriptive regression line underscores the moral of “Obama’s Economic Record in Perspective,” which is this:

The claims by Obama and his retinue about O’s supposed “rescue” of the economy from the abyss of depression are ludicrous. (See, for example, “A Keynesian Fantasy Land,” “The Keynesian Fallacy and Regime Uncertainty,” “Why the “Stimulus” Failed to Stimulate,” “Regime Uncertainty and the Great Recession,” “The Real Multiplier,” “The Real Multiplier (II),” “The Economy Slogs Along,” and “The Obama Effect: Disguised Unemployment.”) Nevertheless, our flannel-mouthed president and his sycophants insist that he has done great things for the country, though the only great thing that he could do is to leave it alone.

Obama is not to blame for the Great Recession, but the sluggish recovery is due to his anti-business rhetoric and policies (including Obamacare, among others). All that Obama can rightly take “credit” for is an acceleration of the downward trend of economic growth.

Related posts:
Are We Mortgaging Our Children’s Future?
In the Long Run We Are All Poorer
Mr. Greenspan Doth Protest Too Much
The Price of Government
Fascism and the Future of America
The Indivisibility of Economic and Social Liberty
Rationing and Health Care
The Fed and Business Cycles
The Commandeered Economy
The Perils of Nannyism: The Case of Obamacare
The Price of Government Redux
As Goes Greece
The State of the Union: 2010
The Shape of Things to Come
Ricardian Equivalence Reconsidered
The Real Burden of Government
Toward a Risk-Free Economy
The Rahn Curve at Work
The Illusion of Prosperity and Stability
More about the Perils of Obamacare
Health Care “Reform”: The Short of It
The Mega-Depression
I Want My Country Back
The “Forthcoming Financial Collapse”
Estimating the Rahn Curve: Or, How Government Inhibits Economic Growth
The Deficit Commission’s Deficit of Understanding
The Bowles-Simpson Report
The Bowles-Simpson Band-Aid
The Stagnation Thesis
America’s Financial Crisis Is Now
Understanding Hayek
Money, Credit, and Economic Fluctuations
A Keynesian Fantasy Land
The Keynesian Fallacy and Regime Uncertainty
Why the “Stimulus” Failed to Stimulate
The “Jobs Speech” That Obama Should Have Given
Say’s Law, Government, and Unemployment
Regime Uncertainty and the Great Recession
Regulation as Wishful Thinking
Vulgar Keynesianism and Capitalism
Why Are Interest Rates So Low?
Don’t Just Stand There, “Do Something”
Stocks for the Long Run?
We Owe It to Ourselves
Stocks for the Long Run? (Part II)
Bonds for the Long Run?
The Real Multiplier (II)
The Burden of Government
Economic Growth Since World War II
More Evidence for the Rahn Curve
The Economy Slogs Along
The Obama Effect: Disguised Unemployment
Obama’s Economic Record in Perspective

Economic Growth Since World War II


As we await (probably in vain) the resumption of robust economic growth, let us see what we can learn from the record since World War II (from 1947, to be precise). The Bureau of Economic Analysis (BEA) provides quarterly and annual estimates of current- and constant-dollar (year 2005) GDP from 1947 to the present, in spreadsheet form (here). BEA’s numbers yield several insights about the course of economic growth in the U.S.

I begin with this graph:

[Figure: Real GDP, 1947Q1-2016Q1]

The exponential trend line indicates a constant-dollar (real) growth rate for the entire period of 0.81 percent quarterly, or 3.3 percent annually. The actual beginning-to-end annual growth rate is 3.1 percent.
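
To check the conversion from quarterly to annual growth, here is a minimal sketch in Python; the 0.81 percent figure is taken from the trend line above:

    # Compound a quarterly growth rate into an annual rate.
    quarterly = 0.0081                          # 0.81 percent per quarter
    annual = (1 + quarterly) ** 4 - 1
    print(f"annual growth rate: {annual:.1%}")  # prints about 3.3%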

The red bands parallel to the trend line delineate the 99.7% (3-sigma) confidence interval around the trend. GDP has been running at the lower edge of the confidence interval since the first quarter of 2009, that is, since the ascendancy of Barack Obama.

The vertical gray bars represent recessions, which do not correspond precisely to the periods defined as such by the National Bureau of Economic Research (NBER). I define a recession as:

  • two or more consecutive quarters in which real GDP (annualized) is below real GDP (annualized) for an earlier quarter, during which
  • the annual (year-over-year) change in real GDP is negative in at least one quarter.

For example, annualized real GDP in the second quarter of 1953 was $2,593.5 billion (i.e., about $2.8 trillion in year 2009 dollars). Annualized GDP for the next five quarters was $2,578.9, $2,539.8, $2,528.0, $2,530.7, and $2,559.4 billion, respectively. The U.S. was still in recession (by my definition) even as GDP began to rise from $2,528.0 billion because GDP remained below $2,593.5 billion. The recession (i.e., drop in output) did not end until the fourth quarter of 1954, when annualized GDP reached $2,609.3 billion, thus surpassing the value for the second quarter of 1953. Moreover, the year-over-year change in GDP was negative in the first three quarters of the recession.
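
For readers who want to apply the definition mechanically to the full BEA series, here is a minimal sketch in Python; the function and its structure are mine, and the input is simply a list of annualized quarterly real-GDP values:

    def recessions(gdp):
        """Find recessions per the two-part definition above.

        gdp: annualized real GDP, one value per quarter.
        Returns (start, end) index pairs, inclusive. A recession is a
        run of two or more quarters below the prior peak, with a
        negative year-over-year change in at least one quarter of the run.
        """
        runs, start, peak = [], None, gdp[0]
        for t in range(1, len(gdp)):
            if gdp[t] < peak:                  # still below an earlier quarter
                if start is None:
                    start = t
            else:                              # output has surpassed the old peak
                if start is not None:
                    runs.append((start, t - 1))
                    start = None
                peak = gdp[t]
        if start is not None:
            runs.append((start, len(gdp) - 1))
        return [(s, e) for s, e in runs
                if e - s + 1 >= 2
                and any(t >= 4 and gdp[t] < gdp[t - 4] for t in range(s, e + 1))]

Fed the seven values above (1953Q2 through 1954Q4), it returns a single run covering 1953Q3 through 1954Q3, which matches the account just given.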

Unlike the NBER, I do not locate a recession in 2001. Real GDP, measured quarterly, dropped in the first and third quarters of 2001, but each decline lasted only a quarter. And whereas the NBER places the Great Recession from December 2007 to June 2009, I date it from the first quarter of 2008 through the second quarter of 2011.

My method of identifying recessions is more objective and consistent than the NBER’s method, which one economist describes as “The NBER will know it when it sees it.” Moreover, unlike the NBER, I would not presume to pinpoint the first and last months of a recession, given the volatility of GDP estimates:

[Figure: Year-over-year changes in real GDP]

The second-order polynomial regression line gives the best approximation of the post-war trend in the rate of real growth. Not a pretty picture.

Here’s another ugly picture:

[Figure: Real GDP by post-WW2 business cycle]

Rates of growth (depicted by the exponential regression lines) clearly are lower in later cycles than in earlier ones, and lowest of all in the current cycle.

In this connection, I note that the “Clinton boom” — 3.4 percent real growth from 1993 to 2001 — was nothing to write home about, being mainly the product of Clinton’s self-promotion and the average citizen’s ahistorical perspective. The boomlet of the 1990s, whatever its causes, was less impressive than several earlier post-war expansions. In fact, the overall rate of growth from the first quarter of 1947 to the first quarter of 1993 — recessions and all — was 3.4 percent. The “Clinton boom” is the boom that wasn’t.

Even more depressing (pardon the pun) — but unsurprising — is the feeble rate of growth since the end of the Great Recession: 2.0 percent. And it will get worse before it gets better. As long as the fiscal and regulatory burden of government grows, the economy will slide deeper into stagnation.

*     *     *

Related posts:
Are We Mortgaging Our Children’s Future?
In the Long Run We Are All Poorer
Mr. Greenspan Doth Protest Too Much
The Price of Government
Fascism and the Future of America
The Indivisibility of Economic and Social Liberty
Rationing and Health Care
The Fed and Business Cycles
The Commandeered Economy
The Perils of Nannyism: The Case of Obamacare
The Price of Government Redux
As Goes Greece
The State of the Union: 2010
The Shape of Things to Come
Ricardian Equivalence Reconsidered
The Real Burden of Government
Toward a Risk-Free Economy
The Rahn Curve at Work
The Illusion of Prosperity and Stability
More about the Perils of Obamacare
Health Care “Reform”: The Short of It
The Mega-Depression
I Want My Country Back
The “Forthcoming Financial Collapse”
Estimating the Rahn Curve: Or, How Government Inhibits Economic Growth
The Deficit Commission’s Deficit of Understanding
The Bowles-Simpson Report
The Bowles-Simpson Band-Aid
The Stagnation Thesis
America’s Financial Crisis Is Now
Understanding Hayek
Money, Credit, and Economic Fluctuations
A Keynesian Fantasy Land
The Keynesian Fallacy and Regime Uncertainty
Why the “Stimulus” Failed to Stimulate
The “Jobs Speech” That Obama Should Have Given
Say’s Law, Government, and Unemployment
Regime Uncertainty and the Great Recession
Regulation as Wishful Thinking
Vulgar Keynesianism and Capitalism
Why Are Interest Rates So Low?
Don’t Just Stand There, “Do Something”
Stocks for the Long Run?
We Owe It to Ourselves
The Burden of Government
Government in Macroeconomic Perspective
Keynesianism: Upside-Down Economics in the Collectivist Cause
Economic Horror Stories: The Great “Demancipation” and Economic Stagnation
Economics: A Survey
The Keynesian Multiplier: Phony Math
The True Multiplier
Income Inequality and Economic Growth
The Rahn Curve Revisited
The Slow-Motion Collapse of the Economy
The Real Burden of Government (II)
Further Thoughts about the Keynesian Multiplier

Undermining the Free Society

Apropos my earlier post about “Asymmetrical (Ideological) Warfare,” I note this review by Gerald J. Russello of Kenneth Minogue’s The Servile Mind: How Democracy Erodes the Moral Life. As he summarizes Minogue, Russello writes:

The push for equality and ever more rights—two of [democracy’s] basic principles—requires a ruling class to govern competing claims; thus the rise of the undemocratic judiciary as the arbiter of many aspects of public life, and of bureaucracies that issue rules far removed from the democratic process. Should this trend continue, Minogue foresees widespread servility replacing the tradition of free government.

This new servility will be based not on oppression, but on the conviction that experts have eliminated any need for citizens to develop habits of self-control, self-government, or what used to be called the virtues.

How has democracy led to “servility,” which is really a kind of oppression? Here is my diagnosis.

It is well understood that voters, by and large, vote irrationally: emotionally, inconsistently, and on the basis of “buzz” rather than facts. (See this, this, and this, for example.) Voters are prone to vote against their own long-run interests because they do not understand the consequences of the sound-bite policies advocated by politicians. American democracy, by indiscriminately granting the franchise — as opposed to limiting it to, say, married property owners over the age of 30 who have children — empowers the run-of-the-mill politician who seeks office (for the sake of prestige, power, and perks) by pandering to the standard, irrational voter.

Rationality is the application of sound reasoning and pertinent facts to the pursuit of a realistic objective (one that does not contradict the laws of nature or human nature). I daresay that most voters are guilty of voting irrationally because they believe in such claptrap as peace through diplomacy, “social justice” through high marginal tax rates, or better health care through government regulation.

To be perfectly clear, the irrationality lies not in favoring peace, “social justice” (whatever that is), health care, and the like. The irrationality lies in uninformed beliefs in such contradictions as peace through unpreparedness for war, “social justice” through soak-the-rich schemes, better health care through greater government control of medicine, etc., etc., etc. Voters whose objectives incorporate such beliefs simply haven’t taken the relatively little time it requires to process what they may already know or have experienced about history, human nature, and social and economic realities.

Why is voters’ irrationality important? Does voting really matter? Well, it’s easy to say that an individual’s vote makes very little difference. But individual votes add up. Every vote cast for a winning political candidate enhances his supposed mandate, which usually is (in his mind) some scheme (or a lot of them) to regulate our lives more than they are already regulated.

That is to say, voters (not to mention those who profess to understand voters) overlook the slippery slope effects of voting for those who promise to “deliver” certain benefits. It is true that the benefits, if delivered, would temporarily increase the well-being of certain voters. But if one group of voters reaps benefits, then another group of voters wants to reap benefits as well. Why? Because votes are not won, nor offices held, by placating a particular class of voter; many other classes of them must also be placated.

The “benefits” sought by voters (and delivered by politicians) are regulatory as well as monetary. Many voters (especially wealthy, paternalistic ones) are more interested in controlling others than they are in reaping government handouts (though they don’t object to that either). And if one group of voters reaps certain regulatory benefits, it follows (as night follows day) that other groups also will seek (and reap) regulatory benefits. (Must one be a trained economist to understand this? Obviously not, because most trained economists don’t seem to understand it.)

And then there is the “peace-at-any-price,” “one-world” crowd, which is hard to distinguish from the crowd that demands (and delivers) monetary and regulatory “benefits.”

So, here we are:

  • Many particular benefits are bestowed and many regulations are imposed, to the detriment of investors, entrepreneurs, innovators, inventors, and people who simply are willing to work hard to advance themselves. And it is they who are responsible for the economic growth that bestows (or would bestow) more jobs and higher incomes on everyone, from the poorest to the richest.
  • A generation from now, the average American will “enjoy” about one-fourth the real output that would be his absent the advent of the regulatory-welfare state about a century ago.

Americans have, since 1932, voted heavily against their own economic and security interests, and the economic and security interests of their progeny. But what else can you expect when — for those same 78 years — voters have been manipulated into voting against their own interests by politicians, media, “educators,” and “intelligentsia”? What else can you expect when the courts have all too often ratified the malfeasance of those same politicians?

If this is democracy, give me monarchy.

The Illusion of Prosperity and Stability

For reasons I outlined in “The Price of Government,” the post-Civil War boom of 1866-1907 finally gave way to the onslaught of Progressivism. Real GDP grew at the rate of 4.3 percent annually during the post-Civil War boom; it has since grown at an annual rate of 3.3 percent. The difference between the two rates of growth, compounded over a century, is the difference between $13 trillion (2009’s GDP in 2005 dollars) and $41 trillion (2009’s potential GDP in 2005 dollars).
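
The power of that one-percentage-point difference is pure compounding. A minimal sketch, with the time span chosen for illustration:

    # Small, persistent differences in growth rates compound into large gaps.
    years = 100
    ratio = (1.043 / 1.033) ** years
    print(f"after {years} years the faster path yields {ratio:.1f}x the output")

After a century the faster path yields about 2.6 times the output of the slower one; lengthen the span and the ratio climbs toward the threefold disparity cited above.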

As I said in “The Price of Government,” this disparity

may seem incredible, but scan the lists here and you will find even greater cross-national disparities in per capita GDP. Go here and you will find that real, per capita GDP in 1790 was only 4.6 percent of the value it had attained 218 years later. Our present level of output seems incredible to citizens of impoverished nations, and it would seem no less incredible to an American of 1790. In sum, vast disparities can and do exist, across nations and time.

The main reason for the disparity is the intervention of the federal government in the economic affairs of Americans and their businesses. I put it this way in “The Price of Government”:

What we are seeing [in the present recession and government’s response to it] is the continuation of a death-spiral that began in the early 1900s. Do-gooders, worry-warts, control freaks, and economic ignoramuses see something “bad” and — in their misguided efforts to control natural economic forces (which include business cycles) — make things worse. The most striking event in the death-spiral is the much-cited Great Depression, which was caused by government action, specifically the loose-tight policies of the Federal Reserve, Herbert Hoover’s efforts to engineer the economy, and — of course — FDR’s benighted New Deal. (For details, see this, and this.)

But, of course, the worse things get, the greater the urge to rely on government. Now, we have “stimulus,” which is nothing more than an excuse to greatly expand government’s intervention in the economy. Where will it lead us? To a larger, more intrusive government that absorbs an ever larger share of resources that could be put to productive use, and counteracts the causes of economic growth.

One of the ostensible reasons for governmental intervention is to foster economic stability. That was an important rationale for the creation of the Federal Reserve System; it was an implicit rationale for Social Security, which moves income to those who are more likely to spend it; and it remains a key rationale for so-called counter-cyclical spending (i.e., “fiscal policy”) and the onerous regulation of financial institutions.

Has the quest for stability succeeded? If you disregard the Great Depression, and several deep recessions (including the present one), it has. But the price has been high. The green line in the following graph traces real GDP as it would have been had economic growth after 1907 followed the same path as it did in 1866-1907, with all of the ups and down in that era of relatively unregulated “instability.” The red line, which diverges from the green one after 1907, traces real GDP as it has been since government took over the task of ensuring stable prosperity.

Only by overlooking the elephant in the room — the Great Depression — can one assert that government has made the economy more stable. Only because we cannot see the exorbitant price of government can we believe that it has had something to do with our “prosperity.”

What about those fairly sharp downturns along the green line? If it really is important for government to shield us from economic shocks, there are much better ways of getting the job done than the ways now employed. There was no federal income tax during the post-Civil War boom (one of the reasons for the boom). Suppose that in the early 1900s the federal government had been allowed to impose a small, constitutionally limited income tax of, say, 0.5 percent on gross personal incomes over a certain level, measured in constant dollars (with an explicit ban on exemptions, deductions, and other adjustments, to keep it simple and keep interest groups from enriching themselves at the expense of others). Suppose, further, that the proceeds from the tax had a constitutionally limited use: the payment of unemployment benefits for a constitutionally limited time whenever real GDP declined from quarter to quarter.

Perhaps that’s too much clutter for devotees of constitutional simplicity. But wouldn’t the results have been worth the clutter? The primary result would have been growth at a rate close to that of 1866-1907, but with some of the wrinkles ironed out. The secondary result — and an equally important one — would have been the diminution (if not the elimination) of the “need” for governmental intervention in our affairs.

Related posts:
Basic Economics
The Economic and Social Consequences of Government

More about Paternalism

To complement my earlier post, “Beware of Libertarian Paternalists,” I offer the following links:

Pitfalls of Paternalism (Ilya Somin, The Volokh Conspiracy)

Hayek on the Use of Superior Expert Knowledge as a Justification for Paternalism (Ilya Somin, The Volokh Conspiracy)

The Knowledge Problem of New Paternalism (Mario Rizzo, ThinkMarkets)

Little Brother Is Watching You: The New Paternalism on the Slippery Slopes (Mario Rizzo, ThinkMarkets)

New Paternalism on the Slippery Slopes, Part I (Glen Whitman, Agoraphilia)

Be sure to read the posts and articles linked therein.

Does the Minimum Wage Increase Unemployment?


I have not a shred of doubt that the minimum wage increases unemployment, especially among the most vulnerable group of workers: males aged 16 to 19.

Anyone who claims that the minimum wage does not affect unemployment among that vulnerable group is guilty of (a) ingesting a controlled substance, (b) wishing upon a star, or — most likely — (c) indulging in a mindless display of vicarious “compassion.”

Economists have waged a spirited mini-war over the minimum-wage issue, to no conclusive end. But anyone who tells you that a wage increase that is forced on businesses by government will not lead to a rise in unemployment is one of three things: an economist with an agenda, a politician with an agenda, or a person who has never run a business. There is considerable overlap among the three categories.

I have run a business, and I have worked for the minimum wage (and less). On behalf of business owners and young male workers, I am here to protest further increases in the minimum wage. My protest is entirely evidence-based — no marching, shouting, or singing for me. Facts are my friends, even if they are inimical to Left-wing economists, politicians, and other members of the reality-challenged camp.

I begin with time series on unemployment among males — ages 16 to 19 and 20 and older — for the period January 1948 through June 2009. (These time series are available via this page on the BLS website.) If it is true that the minimum wage targets younger males, the unemployment rate for 16 to 19 year-old males (16-19 YO) will rise faster or decrease less quickly than the unemployment rate for 20+ year-old males (20+ YO) whenever the minimum wage is increased. The precise change will depend on such factors as the propensity of young males to attend college — which has risen over time — and the value of the minimum wage in relation to prevailing wage rates for the industries which typically employ low-skilled workers. But those factors should have little influence on observed month-to-month changes in unemployment rates.

I use two methods to estimate the effects of minimum wage on the unemployment rate of 16-19 YO: graphical analysis and linear regression.

I begin by finding the long-term relationship between the unemployment rates for 16-19 YO and 20+ YO. As it turns out, there is a statistical artifact in the unemployment data, an artifact that is unexplained by this BLS document, which outlines changes in methods of data collection and analysis over the years. The relationship between the two time series is stable through March 1959, when it shifts abruptly. The markedness of the shift can be seen in the contrast between figure 1, which covers the entire period, and figures 2 and 3, which subdivide the entire period into two sub-periods.

[Figure 1: Relationship between 16-19 YO and 20+ YO unemployment rates, January 1948 - June 2009]

[Figure 2: Relationship between 16-19 YO and 20+ YO unemployment rates, January 1948 - March 1959]

[Figure 3: Relationship between 16-19 YO and 20+ YO unemployment rates, April 1959 - June 2009]

For the graphical analysis, I use the equations shown in figures 2 and 3 to determine a baseline relationship between the unemployment rate for 20+ YO (“x”) and the unemployment rate for 16-19 YO (“y”). The equation in figure 2 yields a baseline unemployment rate for 16-19 YO for each month from January 1948 through March 1959; the equation in figure 3, a baseline unemployment rate for 16-19 YO for each month from April 1959 through June 2009. Combining the results, I obtain a baseline estimate for the entire period, January 1948 through June 2009.

I then find, for each month, a residual value for unemployment among 16-19 YO. The residual (actual value minus baseline estimate) is positive when unemployment among 16-19 YO is higher than expected, and negative when 16-19 YO unemployment is lower than expected. Again, this is unemployment of 16-19 YO relative to 20+ YO. Given the stable baseline relationships between the two unemployment rates (when the time series are subdivided as described above), the values of the residuals (month-to-month deviations from the baseline) can reasonably be attributed to changes in the minimum wage.
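
In code, the baseline-and-residual step amounts to a linear fit within each sub-period. A minimal sketch in Python with numpy, where u_young and u_old stand for the two monthly unemployment-rate series (the names are mine, and the coefficients are re-estimated rather than read off figures 2 and 3):

    import numpy as np

    def residuals(u_young, u_old):
        """Residual 16-19 YO unemployment relative to a linear baseline.

        Fits u_young = a + b * u_old over one sub-period and returns
        actual minus baseline; positive values mean youth unemployment
        is higher than the 20+ YO rate alone would predict.
        """
        u_young = np.asarray(u_young, dtype=float)
        u_old = np.asarray(u_old, dtype=float)
        b, a = np.polyfit(u_old, u_young, 1)   # slope, intercept
        return u_young - (a + b * u_old)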

For purposes of my analysis, I adopt the following conventions:

  • A change in the minimum wage begins to affect unemployment among 16-19 YO in the month it becomes law, when the legally effective date falls near the start of the month. A change becomes effective in the month following its legally effective date when that date falls near the end of the month. (All of the effective dates have thus far been on the 1st, 3rd, 24th, and 25th of a month.)
  • In either event, the change in the minimum wage affects unemployment among 16-19 YO for 6 months, including the month in which it becomes effective, as reckoned above.

In other words, I assume that employers (by and large) do not anticipate an increase in the minimum wage by firing employees before its effective date. I assume, rather, that employers (by and large) respond to the minimum wage by failing to hire 16-19 YO who are new to the labor force. Finally, I assume that the non-hiring effect lasts about 6 months — in which time prevailing wage rates for 16-19 YO move toward (and perhaps exceed) the minimum wage, thus eventually blunting the effect of the minimum wage on unemployment.

I relax the 6-month rule during eras when the minimum wage rises annually, or nearly so. I assume that during such eras employers anticipate scheduled increases in the minimum wage by continuously suppressing their demand for 16-19 YO labor. (There are four such eras: the first runs from September 1963 through July 1971; the second, from May 1974 through June 1981; the third, from May 1996 through February 1998; the fourth, from July 2007 to the present, and presumably beyond.)

With that prelude, I present the following graph of the relationship between residual unemployment among 16-19 YO and the effective periods of minimum wage increases.

[Figure 4: Residual 16-19 YO unemployment and effective periods of minimum-wage increases]

The jagged green-and-red line represents the residual unemployment rate for 16-19 YO. The green portions of the line denote periods in which the minimum wage is ineffective; the red portions of the line denote periods in which the minimum wage is effective. The horizontal gray bands at +1 and -1 denote the normal range of the residuals, one standard deviation above and below the mean, which is zero.

It is obvious that higher residuals (greater unemployment) are generally associated with periods in which the minimum wage is effective; that is, most portions of the line that lie above the normal range are red. Conversely, lower residuals (less unemployment) are generally associated with periods in which the minimum wage is ineffective; that is, most portions of the line that lie below the normal range are green. (Similar results obtain for variations in which employers anticipate the minimum wage increase, for example, by firing or reduced hiring in the preceding 3 months, while the increase affects employment for only 3 months after it becomes law.)

Having shown that there is an obvious relationship between 16-19 YO unemployment and the minimum wage, I now quantify it. Because of the distinctly different relationships between 16-19 YO unemployment and 20+ YO unemployment in the two sub-periods (January 1948 – March 1959, April 1959 – June 2009), I estimate a separate regression equation for each sub-period.

For the first sub-period, I find the following relationship:

Unemployment rate for 16-19 YO (in percentage points) = 3.913 + 1.828 x unemployment rate for 20+ YO + 0.501 x dummy variable for minimum wage (1 if in effect, 0 if not)

Adjusted R-squared: 0.858; standard error of the estimate: 9 percent of the mean value of 16-19 YO unemployment rate; t-statistics on the intercept and coefficients: 14.663, 28.222, 1.635.

Here is the result for the second sub-period:

Unemployment rate for 16-19 YO (in percentage points) = 8.940 + 1.528 x unemployment rate for 20+ YO + 0.610 x dummy variable for minimum wage (1 if in effect, 0 if not)

Adjusted R-squared: 0.855; standard error of the estimate: 6 percent of the mean value of 16-19 YO unemployment rate; t-statistics on the intercept and coefficients: 62.592, 59.289, 7.495.
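
Both regressions have the same form, so a single least-squares routine reproduces them. A minimal sketch, where mw is the 0/1 dummy variable described above:

    import numpy as np

    def fit(u_young, u_old, mw):
        """OLS fit of u_young = b0 + b1 * u_old + b2 * mw.

        mw is 1 in months when a minimum-wage increase is effective
        (per the conventions above), 0 otherwise. The estimate b2 is
        the percentage-point effect of the minimum wage on 16-19 YO
        unemployment.
        """
        X = np.column_stack([np.ones(len(u_old)), u_old, mw])
        (b0, b1, b2), *_ = np.linalg.lstsq(X, np.asarray(u_young, dtype=float), rcond=None)
        return b0, b1, b2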

On the basis of the robust results for the second sub-period, which is much longer and more current, I draw the following conclusions:

  • The baseline unemployment rate for 16-19 YO is about 9 percent.
  • Unemployment around the baseline changes by about 1.5 percentage points for every percentage-point change in the unemployment rate for 20+ YO.
  • The minimum wage, when effective, raises the unemployment rate for 16-19 YO by 0.6 percentage points.

Therefore, given the current number of 16 to 19 year old males in the labor force (about 3.3 million), some 20,000 will lose or fail to find jobs because of yesterday’s boost in the minimum wage. Yes, 20,000 is a small fraction of 3.3 million (0.6 percent), but it is a real, heartbreaking number — 20,000 young men for whom almost any hourly wage would be a blessing.

But the “bleeding hearts” who insist on setting a minimum wage, and raising it periodically, don’t care about those 20,000 young men — they only care about their cheaply won reputation for “compassion.”

UPDATE (09/08/09):

A relevant post by Don Boudreaux:

Here’s a second letter that I sent today to the New York Times:

Gary Chaison misses the real, if unintended, lesson of the Russell Sage Foundation study that finds that low-skilled workers routinely keep working for employers who violate statutory employment regulations such as the minimum-wage (Letters, September 8).  This real lesson is that economists’ conventional wisdom about the negative consequences of the minimum-wage likely is true after all.

Fifteen years ago, David Card and Alan Krueger made headlines by purporting to show that a higher minimum-wage, contrary to economists’ conventional wisdom, doesn’t reduce employment of low-skilled workers.  The RSF study casts significant doubt on Card-Krueger.  First, because the minimum-wage itself is circumvented in practice, its negative effect on employment is muted, perhaps to the point of becoming statistically imperceptible.  Second, employers’ and employees’ success at evading other employment regulations – such as mandatory overtime pay – counteracts the minimum-wage’s effect of pricing many low-skilled workers out of the job market.

Donald J. Boudreaux

Fooled by Non-Randomness

Nassim Nicholas Taleb, in his best-selling Fooled by Randomness, charges human beings with the commission of many perceptual and logical errors. One reviewer captures the point of the book, which is to

explore luck “disguised and perceived as non-luck (that is, skills).” So many of the successful among us, he argues, are successful due to luck rather than reason. This is true in areas beyond business (e.g. Science, Politics), though it is more obvious in business.

Our inability to recognize the randomness and luck that had to do with making successful people successful is a direct result of our search for pattern. Taleb points to the importance of symbolism in our lives as an example of our unwillingness to accept randomness. We cling to biographies of great people in order to learn how to achieve greatness, and we relentlessly interpret the past in hopes of shaping our future.

Only recently has science produced probability theory, which helps embrace randomness. Though the use of probability theory in practice is almost nonexistent.

Taleb says the confusion between luck and skill is our inability to think critically. We enjoy presenting conjectures as truth and are not equipped to handle probabilities, so we attribute our success to skill rather than luck.

Taleb writes in a style found all too often on best-seller lists: pseudo-academic theorizing “supported” by selective (often anecdotal) evidence. I sometimes enjoy such writing, but only for its entertainment value. Fooled by Randomness leaves me unfooled, for several reasons.


The first reason that I am unfooled by Fooled… might be called a meta-reason. Standing back from the book, I am able to perceive its essential defect: According to Taleb, human affairs — especially economic affairs, and particularly the operations of financial markets — are dominated by randomness. But if that is so, only a delusional person can truly claim to understand the conduct of human affairs. Taleb claims to understand the conduct of human affairs. Taleb is therefore either delusional or omniscient.

Given Taleb’s humanity, it is more likely that he is delusional — or simply fooled, but not by randomness. He is fooled because he proceeds from the assumption of randomness instead of exploring the ways and means by which humans are actually capable of shaping events. Taleb gives no more than scant attention to those traits which, in combination, set humans apart from other animals: self-awareness, empathy, forward thinking, imagination, abstraction, intentionality, adaptability, complex communication skills, and sheer brain power. Given those traits (in combination) the world of human affairs cannot be random. Yes, human plans can fail of realization for many reasons, including those attributable to human flaws (conflict, imperfect knowledge, the triumph of hope over experience, etc.). But the failure of human plans is due to those flaws — not to the randomness of human behavior.

What Taleb sees as randomness is something else entirely. The trajectory of human affairs often is unpredictable, but it is not random. For it is possible to find patterns in the conduct of human affairs, as Taleb admits (implicitly) when he discusses such phenomena as survivorship bias, skewness, anchoring, and regression to the mean.


What Is It?

Taleb, having bloviated for dozens of pages about the failure of humans to recognize randomness, finally gets around to (sort of) defining randomness on pages 168 and 169 (of the 2005 paperback edition):

…Professor Karl Pearson … devised the first test of nonrandomness (it was in reality a test of deviation from normality, which for all intents and purposes, was the same thing). He examined millions of runs of [a roulette wheel] during the month of July 1902. He discovered that, with high degree of statistical significance … the runs were not purely random…. Philosophers of statistics call this the reference case problem to explain that there is no true attainable randomness in practice, only in theory….

…Even the fathers of statistical science forgot that a random series of runs need not exhibit a pattern to look random…. A single random run is bound to exhibit some pattern — if one looks hard enough…. [R]eal randomness does not look random.

The quoted passage illustrates nicely the superficiality of Fooled by Randomness, and (I must assume) the muddledness of Taleb’s thinking:

  • He accepts a definition of randomness which describes the observation of outcomes of mechanical processes (e.g., the turning of a roulette wheel, the throwing of dice) that are designed to yield random outcomes. That is, randomness of the kind cited by Taleb is in fact the result of human intentions.
  • If “there is no true attainable randomness,” why has Taleb written a 200-plus page book about randomness?
  • What can he mean when he says “a random series of runs need not exhibit a pattern to look random”? The only sensible interpretation of that bit of nonsense would be this: It is possible for a random series of runs to contain what looks like a pattern. But remember that the random series of runs to which Taleb refers is random only because humans intended its randomness.
  • It is true enough that “A single random run is bound to exhibit some pattern — if one looks hard enough.” Sure it will. But it remains a single random run of a process that is intended to produce randomness, which is utterly unlike such events as transactions in financial markets.

One of the “fathers of statistical science” mentioned by Taleb (deep in the book’s appendix) is Richard von Mises, who in Probability, Statistics and Truth defines randomness as follows:

First, the relative frequencies of the attributes [e.g. heads and tails] must possess limiting values [i.e., converge on 0.5, in the case of coin tosses]. Second, these limiting values must remain the same in all partial sequences which may be selected from the original one in an arbitrary way. Of course, only such partial sequences can be taken into consideration as can be extended indefinitely, in the same way as the original sequence itself. Examples of this kind are, for instance, the partial sequences formed by all odd members of the original sequence, or by all members for which the place number in the sequence is the square of an integer, or a prime number, or a number selected according to some other rule, whatever it may be. (pp. 24-25 of the 1981 Dover edition, which is based on the author’s 1951 edition)
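
Mises’s place-selection requirement is easy to illustrate. A minimal sketch in Python, with a pseudo-random generator standing in for coin tosses:

    import random

    rng = random.Random(0)
    flips = [rng.randint(0, 1) for _ in range(100_000)]   # 1 = heads

    def freq(seq):
        return sum(seq) / len(seq)

    # The limiting frequency should survive arbitrary place selection.
    print(f"all flips:          {freq(flips):.3f}")
    print(f"odd-numbered flips: {freq(flips[::2]):.3f}")
    print(f"square positions:   {freq([flips[i * i] for i in range(1, 316)]):.3f}")

Each selection yields a relative frequency near 0.5, as the definition requires.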

Gregory J. Chaitin, writing in Scientific American (“Randomness and Mathematical Proof,” vol. 232, no. 5 (May 1975), pp. 47-52), offers this:

We are now able to describe more precisely the differences between the[se] two series of digits … :

01010101010101010101
01101100110111100010

The first could be specified to a computer by a very simple algorithm, such as “Print 01 ten times.” If the series were extended according to the same rule, the algorithm would have to be only slightly larger; it might be made to read, for example, “Print 01 a million times.” The number of bits in such an algorithm is a small fraction of the number of bits in the series it specifies, and as the series grows larger the size of the program increases at a much slower rate.

For the second series of digits there is no corresponding shortcut. The most economical way to express the series is to write it out in full, and the shortest algorithm for introducing the series into a computer would be “Print 01101100110111100010.” If the series were much larger (but still apparently patternless), the algorithm would have to be expanded to the corresponding size. This “incompressibility” is a property of all random numbers; indeed, we can proceed directly to define randomness in terms of incompressibility: A series of numbers is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the series itself [emphasis added].

This is another way of saying that if you toss a balanced coin 1,000 times the only way to describe the outcome of the tosses is to list the 1,000 outcomes of those tosses. But, again, the thing that is random is the outcome of a process designed for randomness.
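
Chaitin’s incompressibility criterion can be approximated with an off-the-shelf compressor. A minimal sketch in Python, with zlib standing in (imperfectly) for the shortest-program measure:

    import os
    import zlib

    patterned = b"01" * 500          # "print 01 five hundred times"
    patternless = os.urandom(1000)   # random bytes as a patternless series

    for name, series in [("patterned", patterned), ("patternless", patternless)]:
        print(f"{name}: {len(series)} bytes -> {len(zlib.compress(series))} compressed")
    # The patterned series shrinks to a few dozen bytes; the patternless
    # one does not shrink at all, because its shortest description is itself.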

Taking Mises’s and Chaitin’s definitions together, we can define random events as events which are repeatable, convergent on a limiting value, and truly patternless over a large number of repetitions. Evolving economic events (e.g., stock-market trades, economic growth) are not alike (in the way that dice are, for example); they do not converge on limiting values; and they are not patternless, as I will show.

In short, Taleb fails to demonstrate that human affairs in general or financial markets in particular exhibit randomness, properly understood.

Randomness and the Physical World

Nor are we trapped in a random universe. Returning to Mises, I quote from the final chapter of Probability, Statistics and Truth:

We can only sketch here the consequences of these new concepts [e.g., quantum mechanics and Heisenberg’s principle of uncertainty] for our general scientific outlook. First of all, we have no cause to doubt the usefulness of the deterministic theories in large domains of physics. These theories, built on a solid body of experience, lead to results that are well confirmed by observation. By allowing us to predict future physical events, these physical theories have fundamentally changed the conditions of human life. The main part of modern technology, using this word in its broadest sense, is still based on the predictions of classical mechanics and physics. (p. 217)

Even now, almost 60 years on, the field of nanotechnology is beginning to harness quantum mechanical effects in the service of a long list of useful purposes.

The physical world, in other words, is not dominated by randomness, even though its underlying structures must be described probabilistically rather than deterministically.

Summation and Preview

A bit of unpredictability (or “luck”) here and there does not make for a random universe, random lives, or random markets. If a bit of unpredictability here and there dominated our actions, we wouldn’t be here to talk about randomness — and Taleb wouldn’t have been able to marshal his thoughts into a published, marketed, and well-sold book.

Human beings are not “designed” for randomness. Human endeavors can yield unpredictable results, but those results do not arise from random processes; they derive from skill or the lack thereof, knowledge or the lack thereof (including the kinds of self-delusions about which Taleb writes), and conflicting objectives.

An Illustration from Life

To illustrate my position on randomness, I offer the following digression about the game of baseball.

At the professional level, the game’s poorest players seldom rise above the low minor leagues. But even those poorest players are paragons of excellence when compared with the vast majority of American males of about the same age. Did those poorest players get where they were because of luck? Perhaps some of them were in the right place at the right time, and so were signed to minor league contracts. But their luck runs out when they are called upon to perform in more than a few games. What about those players who weren’t in the right place at the right time, and so were overlooked in spite of skills that would have advanced them beyond the rookie leagues? I have no doubt that there have been many such players. But, in the main, professional baseball is filled with skilled players who are there because they intend to be there, and because baseball clubs intend for them to be there.

Now, most minor leaguers fail to advance to the major leagues, even for the proverbial “cup of coffee” (appearing in a few games at the end of the major-league season, when teams are allowed to expand their rosters following the end of the minor-league season). Does “luck” prevent some minor leaguers from advancement to “the show” (the major leagues)? Of course. Does “luck” result in the advancement of some minor leaguers to “the show”? Of course. But “luck,” in this context, means injury, illness, a slump, a “hot” streak, and the other kinds of unpredictable events that ballplayers are subject to. Are the events random? Yes, in the sense that they are unpredictable, but I daresay that most baseball players do not succumb to bad luck or advance very far or for very long because of good luck. In fact, ballplayers who advance to the major leagues, and then stay there for more than a few seasons, do so because they possess (and apply) greater skill than their minor-league counterparts. And make no mistake, each player’s actions are so closely watched and so extensively quantified that it isn’t hard to tell when a player is ready to be replaced.

It is true that a player may experience “luck” for a while during a season, and sometimes for a whole season. But a player will not be consistently “lucky” for several seasons. The length of his career (barring illness, injury, or voluntary retirement), and his accomplishments during that career, will depend mainly on his inherent skills and his assiduousness in applying those skills.

No one believes that Ty Cobb, Babe Ruth, Ted Williams, Christy Mathewson, Warren Spahn, and the dozens of other baseball players who rank among the truly great were lucky. No one believes that the vast majority of the tens of thousands of minor leaguers who never enjoyed more than the proverbial cup of coffee were unlucky. No one believes that the vast majority of the millions of American males who never made it to the minor leagues were unlucky. Most of them never sought a career in baseball; those who did simply lacked the requisite skills.

In baseball, as in life, “luck” is mainly an excuse and rarely an explanation. We prefer to apply “luck” to outcomes when we don’t like the true explanations for them. In the realm of economic activity and financial markets, one such explanation (to which I will come) is the exogenous imposition of governmental power.


They Cannot Be, Given Competition

Returning to Taleb’s main theme — the randomness of economic and financial events — I quote this key passage (my comments are in brackets and boldface):

…Most of [Bill] Gates'[s] rivals have an obsessive jealousy of his success. They are maddened by the fact that he managed to win so big while many of them are struggling to make their companies survive. [These are unsupported claims that I include only because they set the stage for what follows.]

Such ideas go against classical economic models, in which results either come from a precise reason (there is no account for uncertainty) or the good guy wins (the good guy is the one who is most skilled and has some technical superiority). [The “good guy” theory would come as a great surprise to “classical” economists, who quite well understood imperfect competition based on product differentiation and monopoly based on (among other things) early entry into a market.] Economists discovered path-dependent effects late in their game [There is no “late” in a “game” that had no distinct beginning and has no pre-ordained end.], then tried to publish wholesale on a topic that would otherwise be bland and obvious. For instance, Brian Arthur, an economist concerned with nonlinearities at the Santa Fe Institute [What kinds of nonlinearities are found at the Santa Fe Institute?], wrote that chance events coupled with positive feedback other than technological superiority will determine economic superiority — not some abstrusely defined edge in a given area of expertise. [It would come as no surprise to economists — even “classical” ones — that many factors aside from technical superiority determine market outcomes.] While early economic models excluded randomness, Arthur explained how “unexpected orders, chance meetings with lawyers, managerial whims … would help determine which ones achieved early sales and, over time, which firms dominated.”

Regarding the final sentence of the quoted passage, I refer back to the example of baseball. A person or a firm may gain an opportunity to succeed because of the kinds of “luck” cited by Brian Arthur, but “good luck” cannot sustain an incompetent performer for very long. And when “bad luck” happens to competent individuals and firms, they are often (perhaps usually) able to overcome it.

While overplaying the role of luck in human affairs, Taleb underplays the role of competition when he denigrates “classical economic models,” in which competition plays a central role. “Luck” cannot forever outrun competition, unless the game is rigged by governmental intervention, namely, the writing of regulations that tend to favor certain competitors (usually market incumbents) over others (usually would-be entrants). The propensity to regulate at the behest of incumbents (who plead “public interest,” of course) is a proof of the power of competition to shape economic outcomes. It is loathed and feared, and yet it leads us in the direction to which classical economic theory points: greater output and lower prices.

Competition is what ensures that (for the most part) the best ballplayers advance to the major leagues. It’s what keeps “monopolists” like Microsoft hopping (unless they have a government-guaranteed monopoly), because even a monopolist (or oligopolist) can face competition, and eventually lose to it — witness the former “Big Three” auto makers, many formerly thriving chain stores (from Kresge’s to Montgomery Ward’s), and numerous other brand names of days gone by. If Microsoft survives and thrives, it will be because it actually offers consumers more value for their money than its competitors do, whether in products similar to those it now markets or in entirely new products that supplant them.

Monopolists and oligopolists cannot survive without constant innovation and attention to their customers’ needs. Why? Because they must compete with the offerors of all the other goods and services upon which consumers might spend their money. There is nothing — not even water — which cannot be produced or delivered in competitive ways. (For more, see this.)

The names of the particular firms that survive the competitive struggle may be unpredictable, but what is predictable is the tendency of competitive forces toward economic efficiency. In other words, the specific outcomes of economic competition may be unpredictable (which is not a bad thing), but the general result — efficiency — is neither unpredictable nor a manifestation of randomness or “luck.”

Taleb, had he broached the subject of competition, would (with his hero George Soros) denigrate it, on the ground that there is no such thing as perfect competition. But the failure of competitive forces to mimic the model of perfect competition does not negate the power of competition, as I have summarized it here. Indeed, that failure is not a failure at all, for perfect competition is unattainable in practice, and to hold it up as a measure of the effectiveness of market forces is to indulge in the Nirvana fallacy.

In any event, Taleb’s myopia with respect to competition is so complete that he fails to mention it, let alone address its beneficial effects (even when it is less than perfect). And yet Taleb dares to dismiss Milton Friedman as a utopist (p. 272) — the same Milton Friedman who was among the twentieth century’s foremost advocates of the benefits of competition.

Are Financial Markets Random?

Given what I have said thus far, I find it almost incredible that anyone believes in the randomness of financial markets. It is unclear where Taleb stands on the random-walk hypothesis, but it is clear that he believes financial markets to be driven by randomness. Yet, contradictorily, he seems to attack the efficient-markets hypothesis (see pp. 61-62), which is the foundation of the random-walk hypothesis.

What is the random-walk hypothesis? In brief, it is this: Financial markets are so efficient that they instantaneously reflect all information bearing on the prices of financial instruments that is then available to persons buying and selling those instruments. (The qualifier “then available to persons buying and selling those instruments” leaves the door open for [a] insider trading and [b] arbitrage, due to imperfect knowledge on the part of some buyers and/or sellers.) Because information can change rapidly and in unpredictable ways, the prices of financial instruments move randomly. But the random movement is of a very special kind:

If a stock goes up one day, no stock market participant can accurately predict that it will rise again the next. Just as a basketball player with the “hot hand” can miss the next shot, the stock that seems to be on the rise can fall at any time, making it completely random.

And, therefore, changes in stock prices cannot be predicted.
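The random-walk claim is easy to illustrate by simulation. Here is a minimal sketch in Python (the parameters are made up for illustration, not drawn from market data): generate independent daily changes and confirm that today’s change tells you essentially nothing about tomorrow’s.

    import numpy as np

    rng = np.random.default_rng(0)
    changes = rng.normal(loc=0.0, scale=0.01, size=10_000)  # daily log returns, pure noise
    prices = 100 * np.exp(np.cumsum(changes))               # the resulting price path

    # Lag-1 autocorrelation of the changes: near zero for a random walk,
    # meaning an "up" day does not predict another "up" day.
    autocorr = np.corrcoef(changes[:-1], changes[1:])[0, 1]
    print(f"lag-1 autocorrelation of daily changes: {autocorr:+.4f}")  # approximately 0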

Note, however, the focus on changes. It is that focus which creates the illusion of randomness and unpredictability. It is like hoping to understand the movements of the planets around the sun by looking at the random movements of a particle in a cloud chamber.

When we step back from day-to-day price changes, we are able to see the underlying reality: prices (instead of changes) and price trends (which are the opposite of randomness). This (correct) perspective enables us to see that stock prices (on the whole) are not random, and to identify the factors that influence the broad movements of the stock market.
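To see how the focus on changes creates the illusion, add a steady upward drift to the same kind of noise, as in this sketch (again, invented parameters): the day-to-day changes still look random, but the level series shows an unmistakable trend.

    import numpy as np

    rng = np.random.default_rng(1)
    drift = 0.0003                                   # small daily upward drift
    changes = drift + rng.normal(0.0, 0.01, 25_000)  # roughly a century of trading days
    prices = 100 * np.exp(np.cumsum(changes))

    # The changes pass the same "randomness" check as before...
    autocorr = np.corrcoef(changes[:-1], changes[1:])[0, 1]
    print(f"lag-1 autocorrelation of changes: {autocorr:+.4f}")  # approximately 0
    # ...but the level series tells a different story.
    print(f"first price: {prices[0]:,.0f}  last price: {prices[-1]:,.0f}")  # strong upward trend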
For one thing, if you look at stock prices correctly, you can see that they vary cyclically. Here is a telling graphic (from “Efficient-market hypothesis” at Wikipedia):

[Figure: Returns on stocks vs. P/E ratio] Price-earnings ratios as a predictor of twenty-year returns, based upon the plot by Robert Shiller (Figure 10.1). The horizontal axis shows the real price-earnings ratio of the S&P Composite Stock Price Index as computed in Irrational Exuberance (inflation-adjusted price divided by the prior ten-year mean of inflation-adjusted earnings). The vertical axis shows the geometric average real annual return on investing in the S&P Composite Stock Price Index, reinvesting dividends, and selling twenty years later. Data from different twenty-year periods is color-coded as shown in the key. Shiller states that this plot “confirms that long-term investors—investors who commit their money to an investment for ten full years—did do well when prices were low relative to earnings at the beginning of the ten years. Long-term investors would be well advised, individually, to lower their exposure to the stock market when it is high, as it has been recently, and get into the market when it is low.” This correlation between price-to-earnings ratios and long-term returns is not explained by the efficient-market hypothesis.
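The “geometric average real annual return” in the caption is nothing more than a compound-growth rate. A minimal sketch (the dollar figures are hypothetical):

    def geometric_annual_return(wealth_start: float, wealth_end: float, years: int) -> float:
        """Annualized return implied by growing wealth_start into wealth_end over `years`."""
        return (wealth_end / wealth_start) ** (1 / years) - 1

    # Example: $100 of inflation-adjusted wealth (dividends reinvested) growing
    # to $420 over twenty years implies roughly 7.4 percent per year.
    print(f"{geometric_annual_return(100, 420, 20):.2%}")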

Why should stock prices tend to vary cyclically? Because stock prices generally are driven by economic growth (i.e., changes in GDP), and economic growth is strongly cyclical. (See this post.)

More fundamentally, the economic outcomes reflected in stock prices aren’t random, for they depend mainly on intentional behavior along well-rehearsed lines (i.e., the production and consumption of goods and services in ways that evolve over time). Variations in economic behavior, even when they are unpredictable, have explanations; for example:

  • Innovation and capital investment spur the growth of economic output.
  • Natural disasters slow the growth of economic output (at least temporarily) because they absorb resources that could have gone to investment (as well as consumption).
  • Governmental interventions (taxation and regulation), if not reversed, dampen growth permanently.

There is nothing in those three statements that hasn’t been understood since the days of Adam Smith. Regarding the third statement, the general slowing of America’s economic growth since the advent of the Progressive Era around 1900 is certainly not due to randomness; it is due to the ever-increasing burden of taxation and regulation imposed on the economy — an entirely predictable result, not a random one.

In fact, the long-term trend of the stock market (as measured by the S&P 500) is strongly correlated with GDP. And broad swings around that trend can be traced to governmental intervention in the economy. The following graph shows how the S&P 500, reconstructed to 1870, parallels constant-dollar GDP:

[Figure: Real S&P 500 vs. Real GDP]

The next graph shows the relationship more clearly.

[Figure: Real S&P 500 vs. Real GDP (second view)]
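For readers who want to reproduce the comparison, here is a rough sketch of the computation behind both graphs. The file and column names are placeholders for whatever data source one uses, not a reference to the actual series plotted above:

    import numpy as np
    import pandas as pd

    # Hypothetical annual series of real (constant-dollar) GDP and real S&P 500 levels.
    gdp = pd.read_csv("real_gdp.csv", index_col="year")["gdp"]
    sp500 = pd.read_csv("real_sp500.csv", index_col="year")["sp500"]

    # Correlation of log levels captures the parallel long-term trends.
    log_gdp, log_sp = np.log(gdp), np.log(sp500)
    print("correlation of log levels:", log_gdp.corr(log_sp))

    # Deviations from a fitted log-linear trend show the swings around
    # the trend line discussed below.
    years = log_sp.index.to_numpy(dtype=float)
    slope, intercept = np.polyfit(years, log_sp.to_numpy(), 1)
    deviations = log_sp - (slope * years + intercept)
    print(deviations.loc[[1932, 1974, 2008]])  # sample years of deep downswings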

The wild swings around the trend line began in the uncertain aftermath of World War I, which saw the imposition of production and price controls. The swings continued with the onset of the Great Depression (which can be traced to governmental action), the advent of the anti-business New Deal, and the imposition of production and price controls on a grand scale during World War II. The next downswing was occasioned by the culmination of the Great Society, the “oil shocks” of the early 1970s, and the raging inflation that was touched off by — you guessed it — government policy. The latest downswing is owed mainly to the financial crisis born of yet more government policy: loose money and easy loans to low-income borrowers.

And so it goes, wildly but predictably enough if you have the faintest sense of history. The moral of the story: Keep your eye on government and a hand on your wallet.


There is randomness in economic affairs, but those affairs are not dominated by randomness. They are dominated by intentions, including especially the intentions of the politicians and bureaucrats who run governments. Yet Taleb has no space in his book for the influence of their deeds on economic activity and financial markets.

Taleb is right to disparage those traders (professional and amateur) who are lucky enough to catch upswings, but are unprepared for downswings. And he is right to scoff at their readiness to believe that the current upswing (uniquely) will not be followed by a downswing (“this time it’s different”).

But Taleb is wrong to suggest that traders are fooled by randomness. They are fooled to some extent by false hope, but more profoundly by their inability to perceive the economic damage wrought by government. They are not alone, of course; most of the rest of humanity shares their perceptual failings.

Taleb, in that respect, is only somewhat different from most of the rest of humanity. He is not fooled by false hope, but he is fooled by non-randomness — the non-randomness of government’s decisive influence on economic activity and financial markets. In overlooking that influence, he overlooks the single most powerful explanation for the behavior of markets in the past 90 years.

Beware of Libertarian Paternalists

I have written extensively about paternalism of the so-called libertarian variety. (See this post and the posts linked therein.) Glen Whitman, in two recent posts at Agoraphilia, renews his attack on “libertarian paternalism,” the main proponents of which are Cass Sunstein and Richard Thaler (S&T). In the first of the two posts, Whitman writes:

[Thaler] continues to disregard the distinction between public and private action.

Some critics contend that behavioral economists have neglected the obvious fact that bureaucrats make errors, too. But this misses the point. After all, wouldn’t you prefer to have a qualified, albeit human, technician inspect your aircraft’s engines rather than do it yourself?

The owners of ski resorts hire experts who have previously skied the runs, under various conditions, to decide which trails should be designated for advanced skiers. These experts know more than a newcomer to the mountain. Bureaucrats are human, too, but they can also hire experts and conduct research.

Here we see two of Thaler’s favorite stratagems deployed at once. First, he relies on a deceptively innocuous, private, and non-coercive example to illustrate his brand of paternalism. Before it was cafeteria dessert placement; now it’s ski-slope markings. Second, he subtly equates private and public decision makers without even mentioning their different incentives. In this case, he uses “bureaucrats” to refer to all managers, regardless of whether they manage private or public enterprises.

The distinction matters. The case of ski-slope markings is the market principle at work. Skiers want to know the difficulty of slopes, and so the owners of ski resorts provide it. They have a profit incentive to do so. This is not at all coercive, and it is no more “paternalist” than a restaurant identifying the vegetarian dishes.

Public bureaucrats don’t have the same incentives at all. They don’t get punished by consumers for failing to provide information, or for providing the wrong information. They don’t suffer if they listen to the wrong experts. They face no competition from alternative providers of their service. They get to set their own standards for “success,” and if they fail, they can use that to justify a larger budget.

And Thaler knows this, because these are precisely the arguments made by the “critics” to whom he is responding. His response is just a dodge, enabled by his facile use of language and his continuing indifference – dare I say hostility? – to the distinction between public and private.

In the second of the two posts, Whitman says:

The advocates of libertarian paternalism have taken great pains to present their position as one that does not foreclose choice, and indeed even adds choice. But this is entirely a matter of presentation. They always begin with non-coercive and privately adopted measures, such as the ski-slope markings in Thaler’s NY Times article. And when challenged, they resolutely stick to these innocuous examples (see this debate between Thaler and Mario Rizzo, for example). But if you read Sunstein & Thaler’s actual publications carefully, you will find that they go far beyond non-coercive and private measures. They consciously construct a spectrum of “libertarian paternalist” policies, and at one end of this spectrum lies an absolute ban on certain activities, such as motorcycling without a helmet. I’m not making this up!…

[A]s Sunstein & Thaler’s published work clearly indicates, this kind of policy [requiring banks to offer “plain vanilla” mortgages] is the thin end of the wedge. The next step, as outlined in their articles, is to raise the cost of choosing other options. In this case, the government could impose more and more onerous requirements for opting out of the “plain vanilla” mortgage: you must fill out extra paperwork, you must get an outside accountant, you must have a lawyer present, you must endure a waiting period, etc., etc. Again, this is not my paranoid imagination at work. S&T have said explicitly that restrictions like these would count as “libertarian paternalism” by their definition….

The problem is that S&T’s “libertarian paternalism” is used almost exclusively to advocate greater intervention, not less. I have never, for instance, seen S&T push for privatization of Social Security or vouchers in education. I have never seen them advocate repealing a blanket smoking ban and replacing it with a special licensing system for restaurants that want to allow their customers to smoke. If they have, I would love to see it.

In their articles, S&T pay lip service to the idea that libertarian paternalism lies between hard paternalism and laissez faire, and thus that it could in principle be used to expand choice. But look at the actual list of policies they’ve advocated on libertarian paternalist grounds, and see where their real priorities lie.

S&T are typical “intellectuals,” in that they presume to know how others should lead their lives — a distinctly non-libertarian attitude. It is, in fact, a hallmark of “liberalism.” In an earlier post I had this to say about the founders of “liberalism” — John Stuart Mill, Thomas Hill Green, and Leonard Trelawney Hobhouse:

[W]e are met with (presumably) intelligent persons who believe that their intelligence enables them to peer into the souls of others, and to raise them up through the blunt instrument that is the state.

And that is precisely the mistake that lies at the heart of what we now call “liberalism” or “progressivism.” It is the three-fold habit of setting oneself up as an omniscient arbiter of economic and social outcomes, then castigating the motives and accomplishments of the financially successful and socially “well placed,” and finally penalizing financial and social success through taxation and other regulatory mechanisms (e.g., affirmative action, admission quotas, speech codes, “hate crime” legislation). It is a habit that has harmed the intended beneficiaries of government intervention, not just economically but in other ways, as well….

The other ways, of course, include the diminution of social liberty, which is indivisible from economic liberty.

Just how dangerous to liberty are S&T? Thaler is an influential back-room operator, with close ties to the Obama camp. Sunstein is a long-time crony and adviser who now heads the White House’s Office of Information and Regulatory Affairs, where he has an opportunity to enforce “libertarian paternalism”:

…Sunstein would like to control the content of the internet — for our own good, of course. I refer specifically to Sunstein’s “The Future of Free Speech,” in which he advances several policy proposals, including these:

4. . . . [T]he government might impose “must carry” rules on the most popular Websites, designed to ensure more exposure to substantive questions. Under such a program, viewers of especially popular sites would see an icon for sites that deal with substantive issues in a serious way. They would not be required to click on them. But it is reasonable to expect that many viewers would do so, if only to satisfy their curiosity. The result would be to create a kind of Internet sidewalk, promoting some of the purposes of the public forum doctrine. Ideally, those who create Websites might move in this direction on their own. If they do not, government should explore possibilities of imposing requirements of this kind, making sure that no program draws invidious lines in selecting the sites whose icons will be favoured. Perhaps a lottery system of some kind could be used to reduce this risk.

5. The government might impose “must carry” rules on highly partisan Websites, designed to ensure that viewers learn about sites containing opposing views. This policy would be designed to make it less likely for people to simply hear echoes of their own voices. Of course, many people would not click on the icons of sites whose views seem objectionable; but some people would, and in that sense the system would not operate so differently from general interest intermediaries and public forums. Here too the ideal situation would be voluntary action. But if this proves impossible, it is worth considering regulatory alternatives. [Emphasis added.]

A Left-libertarian defends Sunstein’s foray into thought control, concluding that

Sunstein once thought some profoundly dumb policies might be worth considering, but realized years ago he was wrong about that… The idea was a tentative, speculative suggestion he now condemns in pretty strong terms.

Alternatively, in the face of severe criticism of his immodest proposal, Sunstein merely went underground, to await an opportunity to revive it. I somehow doubt that Sunstein, as a confirmed paternalist, truly abandoned the idea. The proposal certainly was not off-the-cuff, running as it does to 11 longish web pages. Now, judging by the proposals quoted above, the time is right for a revival. And there he is, heading the Office of Information and Regulatory Affairs. The powers of that office supposedly are constrained by the executive order that established it. But it is evident that the Obama administration isn’t bothered by legal niceties when it comes to the exercise of power. Only a few pen strokes stand between Obama and a new, sweeping executive order, the unconstitutionality of which would be of no import to our latter-day FDR.

It’s just another step beyond McCain-Feingold, isn’t it?

Thus is the tyranny of “libertarian paternalism.” And thus does the death-spiral of liberty proceed.

Why Is Entrepreneurship Declining?

Jonathan Adler of The Volokh Conspiracy addresses evidence that entrepreneurial activity is declining in the United States, noting that

The number of employer firms created annually has declined significantly since 1990, and the numbers of businesses created and those claiming to be self-employed have declined as well.

Adler continues:

What accounts for this trend? [The author of the cited analysis] thinks one reason is “the Wal-Mart effect.”

Large, efficient companies are able to out-compete small start-ups, replacing the independent businesses in many markets. Multiply across the entire economy the effect of a Wal-Mart replacing the independent restaurant, grocery store, clothing store, florist, etc., in a town, and you can see how we end up with a downward trend in entrepreneurship over time.

That may be true. It seems to me that another likely contributor is the increased regulatory burden. It is well documented that regulation can increase industry concentration. Smaller firms typically bear significantly greater regulatory costs per employee than larger firms (see, e.g., this study), and regulatory costs can also increase start-up costs and serve as a barrier to entry. While the rate at which new regulations were adopted slowed somewhat in recent years at the federal level (see here), so long as the cumulative regulatory burden increases, I would expect it to depress small business creation and growth.

Going further than Adler, I attribute the whole sorry mess to the growth of government over the past century. And I fully expect the increased regulatory and tax burdens of Obamanomics to depress innovation, business expansion, business creation, job creation, and the rate of economic growth. As I say here,

Had the economy of the U.S. not been deflected from its post-Civil War course [by the advent of the regulatory-welfare state around 1900], GDP would now be more than three times its present level…. If that seems unbelievable to you, it shouldn’t: $100 compounded for 100 years at 4.4 percent amounts to $7,400; $100 compounded for 100 years at 3.1 percent amounts to $2,100. Nothing other than government intervention (or a catastrophe greater than any we have known) could have kept the economy from growing at more than 4 percent.

What’s next? Unless Obama’s megalomaniac plans are aborted by a reversal of the Republican Party’s fortunes, the U.S. will enter a new phase of economic growth — something close to stagnation. We will look back on the period from 1970 to 2008 [when GDP rose at an annual rate of 3.1 percent] with longing, as we plod along at a growth rate similar to that of 1908-1940, that is, about 2.2 percent. Thus:

  • If GDP grows at 2.2 percent through 2108, it will be 58 percent lower than if we plod on at 3.1 percent.
  • If GDP grows at 2.2 percent through 2108, it will be only 4 percent of what it would have been had it continued to grow at 4.4 percent after 1907.

The latter disparity may seem incredible, but scan the lists here and you will find even greater cross-national disparities in per capita GDP. Go here and you will find that real, per capita GDP in 1790 was only 3.3 percent of the value it had attained 201 years later. Our present level of output seems incredible to citizens of impoverished nations, and it would seem no less incredible to an American of 201 years ago. But vast disparities can and do exist, across nations and time. We have every reason to believe in a sustained growth rate of 4.4 percent, as against one of 2.2 percent, because we have experienced both.
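The compounding arithmetic behind these comparisons is easy to check; a minimal sketch:

    def compound(principal: float, rate: float, years: int) -> float:
        """Value of `principal` after compounding at `rate` for `years`."""
        return principal * (1 + rate) ** years

    print(round(compound(100, 0.044, 100)))  # ~7,400: $100 at 4.4 percent for 100 years
    print(round(compound(100, 0.031, 100)))  # ~2,100: $100 at 3.1 percent for 100 years

    # The 58-percent claim: a century at 2.2 percent versus a century at 3.1 percent.
    shortfall = 1 - (1.022 / 1.031) ** 100
    print(f"{shortfall:.0%} lower")  # ~58% lower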

Selection Bias and the Road to Serfdom

Office-seeking is about one thing: power. (Money is sometimes a motivator, but power is the common denominator of politics.) Selection bias, as I argue here, deters office-seeking and voting by those (relatively rare) individuals who oppose the accrual of governmental power. The inevitable result — as we have seen for decades and are seeing today — is the accrual of governmental power on a fascistic scale.

Selection bias

most often refers to the distortion of a statistical analysis, due to the method of collecting samples. If the selection bias is not taken into account then any conclusions drawn may be wrong.

Selection bias can occur in studies that are based on the behavior of participants. For example, one form of selection bias is

self-selection bias, which is possible whenever the group of people being studied has any form of control over whether to participate. Participants’ decision to participate may be correlated with traits that affect the study, making the participants a non-representative sample. For example, people who have strong opinions or substantial knowledge may be more willing to spend time answering a survey than those who do not.
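The mechanism is simple enough to simulate. In this sketch (every parameter invented for illustration), people with strong opinions are more likely to answer a survey, so the respondents overstate the population’s average intensity of opinion:

    import numpy as np

    rng = np.random.default_rng(42)
    opinion = rng.normal(0.0, 1.0, 100_000)  # each person's opinion; 0 = indifferent

    # Probability of answering the survey rises with the strength of opinion.
    p_respond = 1 / (1 + np.exp(2 - 2 * np.abs(opinion)))
    responded = rng.random(opinion.size) < p_respond

    print(f"population mean strength of opinion: {np.abs(opinion).mean():.2f}")
    print(f"respondent mean strength of opinion: {np.abs(opinion[responded]).mean():.2f}")  # noticeably higher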

I submit that the path of politics in America (and elsewhere) reflects a kind of self-selection bias: On the one hand, most politicians run for office in order to exert power. On the other hand, most voters — believing that government can “solve problems” of one kind or another — prefer politicians who promise to use their power to “solve problems.” In other words, power-seekers and their enablers select themselves into the control of government and the receipt of its (illusory) benefits.

Who is self-selected “out”? First, there are libertarian* office-seekers — a rare breed — who must first attain power in order to curb it. Self-selection, in this case, means that individuals who eschew power are unlikely to seek it in the first place, understanding the likely futility of their attempts to curb the power of the offices to which they might be elected. Thus the relative rarity of libertarian candidates.

Second, there are libertarian voters, who — when faced with an overwhelming array of power-seeking Democrats and Republicans — tend not to vote. Their non-voting enables non-libertarian voters to elect non-libertarian candidates, who then accrue more power, thus further discouraging libertarian candidacies and driving more libertarian voters away from the polls.

As the futility of libertarianism becomes increasingly evident, more voters — fearing that they won’t get their “share” of (illusory) benefits — choose to join the scramble for said benefits, further empowering anti-libertarian candidates for office. And thus we spiral into serfdom.


* I use “libertarian” in this post to denote office-seekers and voters who prefer a government (at all levels) whose powers are (in the main) limited to those necessary for the protection of the people from predators, foreign and domestic.

The Indivisibility of Economic and Social Liberty

John Stuart Mill, whose harm principle I have found wanting, had this right:

If the roads, the railways, the banks, the insurance offices, the great joint-stock companies, the universities, and the public charities, were all of them branches of government; if in addition, the municipal corporations and local boards, with all that now devolves on them, became departments of the central administration; if the employees of all these different enterprises were appointed and paid by the government, and looked to the government for every rise in life; not all the freedom of the press and popular constitution of the legislature would make this or any other country free otherwise than in name.

From On Liberty, Chapter 5

Friedrich A. Hayek put it this way:

There is, however, yet another reason why freedom of action, especially in the economic field that is so often represented as being of minor importance, is in fact as important as the freedom of the mind. If it is the mind which chooses the ends of human action, their realization depends on the availability of the required means, and any economic control which gives power over the means also gives power over the ends. There can be no freedom of the press if the instruments of printing are under the control of government, no freedom of assembly if the needed rooms are so controlled, no freedom of movement if the means of transport are a government monopoly, etc. This is the reason why governmental direction of all economic activity, often undertaken in the vain hope of providing more ample means for all purposes, has invariably brought severe restrictions of the ends which the individuals can pursue. It is probably the most significant lesson of the political developments of the twentieth century that control of the material part of life has given government, in what we have learnt to call totalitarian systems, far-reaching powers over the intellectual life. It is the multiplicity of different and independent agencies prepared to supply the means which enables us to choose the ends which we will pursue.

From part 16 of Liberalism
(go here and scroll down)