The Balderdash Chronicles

Balderdash is nonsense, to put it succinctly. Less succinctly, balderdash is stupid or illogical talk; senseless rubbish. Rather thoroughly, it is

balls, bull, rubbish, shit, rot, crap, garbage, trash, bunk, bullshit, hot air, tosh, waffle, pap, cobblers, bilge, drivel, twaddle, tripe, gibberish, guff, moonshine, claptrap, hogwash, hokum, piffle, poppycock, bosh, eyewash, tommyrot, horsefeathers, or buncombe.

I have encountered innumerable examples of balderdash in my 35 years of full-time work, 14 subsequent years of blogging, and many overlapping years as an observer of the political scene. This essay documents some of the worst balderdash that I have come across.

THE LIMITS OF SCIENCE

Science (or what too often passes for it) generates an inordinate amount of balderdash. Consider an article in The Christian Science Monitor: “Why the Universe Isn’t Supposed to Exist”, which reads in part:

The universe shouldn’t exist — at least according to a new theory.

Modeling of conditions soon after the Big Bang suggests the universe should have collapsed just microseconds after its explosive birth, the new study suggests.

“During the early universe, we expected cosmic inflation — this is a rapid expansion of the universe right after the Big Bang,” said study co-author Robert Hogan, a doctoral candidate in physics at King’s College in London. “This expansion causes lots of stuff to shake around, and if we shake it too much, we could go into this new energy space, which could cause the universe to collapse.”

Physicists draw that conclusion from a model that accounts for the properties of the newly discovered Higgs boson particle, which is thought to explain how other particles get their mass; faint traces of gravitational waves formed at the universe’s origin also inform the conclusion.

Of course, there must be something missing from these calculations.

“We are here talking about it,” Hogan told Live Science. “That means we have to extend our theories to explain why this didn’t happen.”

No kidding!

Though there’s much more to come, this example should tell you all that you need to know about the fallibility of scientists. If you need more examples, consider these.

MODELS LIE WHEN LIARS MODEL

Not that there’s anything wrong with being wrong, but there’s a great deal wrong with seizing on a transitory coincidence between two variables (CO2 emissions and “global” temperatures in the late 1900s) and spurring a massively wrong-headed “scientific” mania — the mania of anthropogenic global warming.

What it comes down to is modeling, which is simply a way of baking one’s assumptions into a pseudo-scientific mathematical concoction. Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.
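
To make the “selected period” gambit concrete, here is a minimal sketch in Python. The series are invented random walks standing in for (say) emissions and temperatures; nothing here is real data. Even when two series are unrelated over the whole sample, a diligent miner can usually find a sub-period in which they seem to move together:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unrelated random walks, standing in for (say) emissions and temperature.
x = rng.normal(size=100).cumsum()
y = rng.normal(size=100).cumsum()

full_r = np.corrcoef(x, y)[0, 1]  # full-sample correlation

# "Mine" the data: search every 30-year window for the strongest correlation.
best_r, best_start = 0.0, 0
for start in range(100 - 30):
    r = np.corrcoef(x[start:start + 30], y[start:start + 30])[0, 1]
    if abs(r) > abs(best_r):
        best_r, best_start = r, start

print(f"full-sample r = {full_r:.2f}")
print(f"best 30-year window r = {best_r:.2f}, starting at year {best_start}")
# Report only the best window, and noise becomes a "finding."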

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes), are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed? (A sketch of such a sensitivity check follows this list.)
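
Here is the sensitivity check mentioned in the last item, sketched in Python with invented data: refit the model with each year left out in turn and see how far the estimate moves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual data: 30 observations of a predictor and a response,
# with a weak true signal buried in noise.
x = rng.normal(size=30)
y = 0.1 * x + rng.normal(size=30)

full_slope = np.polyfit(x, y, 1)[0]

# Refit with each year left out in turn.
slopes = []
for i in range(len(x)):
    mask = np.ones(len(x), dtype=bool)
    mask[i] = False
    slopes.append(np.polyfit(x[mask], y[mask], 1)[0])

print(f"full-sample slope: {full_slope:.3f}")
print(f"leave-one-out slopes: {min(slopes):.3f} to {max(slopes):.3f}")
# If that range is wide relative to the slope itself, the "result" hangs on
# a handful of observations.
```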

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a general explanation of the sham, see this.

SCIENCE VS. SCIENTISM: STEVEN PINKER’S BALDERDASH

The examples that I’ve adduced thus far (and most of those that follow) demonstrate a mode of thought known as scientism: the application of the tools and language of science to create a pretense of knowledge.

No less a personage than Steven Pinker defends scientism in “Science Is Not Your Enemy”. Actually, Pinker doesn’t overtly defend scientism, which is indefensible; he just redefines it to mean science:

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.

After that slippery performance, it’s all smooth sailing — or so Pinker thinks — because all he has to do is point out all the good things about science. And if scientism=science, then scientism is good, right?

Wrong. Scientism remains indefensible, and there’s a lot of scientism in what passes for science. Pinker says this, for example:

The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation.

There is nothing new in this, as Pinker admits by adverting to Madison. Nor was the understanding of human nature “submerged” except in the writings of scientistic social “scientists”. We ordinary mortals were never fooled. Moreover, Pinker’s idea of scientific political science seems to be data-dredging:

With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively.

As explained here, data-dredging is about as scientistic as it gets:

When enough hypotheses are tested, it is virtually certain that some falsely appear statistically significant, since every data set with any degree of randomness contains some spurious correlations. Researchers using data mining techniques, if they are not careful, can be easily misled by these apparently significant results, even though they are mere artifacts of random variation.
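
The point is easily demonstrated. In the sketch below (Python, with pure-noise data invented for the purpose), 1,000 meaningless “explanatory” variables are tested against a meaningless outcome; roughly 50 of them will clear the conventional 5-percent significance bar, because that is precisely what the 5-percent bar allows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# One pure-noise outcome, tested against 1,000 pure-noise candidates.
n_obs, n_vars = 50, 1000
outcome = rng.normal(size=n_obs)

false_positives = 0
for _ in range(n_vars):
    candidate = rng.normal(size=n_obs)
    _, p_value = stats.pearsonr(candidate, outcome)
    if p_value < 0.05:
        false_positives += 1

# Expect roughly 50: that is what a 5-percent significance level guarantees.
print(f"{false_positives} of {n_vars} meaningless variables look 'significant'")
```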

Turning to the humanities, Pinker writes:

[T]here can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate [sic] and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of a progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

What on earth is Pinker talking about? This is over-the-top bafflegab worthy of Professor Irwin Corey. But because it comes from the keyboard of a noted (self-promoting) academic, we are meant to take it seriously.

Yes, art, culture, and society are products of human brains. So what? Poker is, too, and it’s a lot more amenable to explication by the mathematical tools of science. But the successful application of those tools depends on traits that are more art than science (e.g., bluffing, spotting “tells”, and avoiding “tells”).

More “explanatory depth” in the humanities means a deeper pile of B.S. Great art, literature, and music aren’t concocted formulaically. If they could be, modernism and postmodernism wouldn’t have yielded mountains of trash.

Oh, I know: It will be different next time. As if the tools of science are immune to misuse by obscurantists, relativists, and practitioners of political correctness. Tell it to those climatologists who dare to challenge the conventional wisdom about anthropogenic global warming. Tell it to the “sub-human” victims of the Third Reich’s medical experiments and gas chambers.

Pinker anticipates this kind of objection:

At a 2011 conference, [a] colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

But the Tuskegee study was only a one-time failure in the sense that it was the only Tuskegee study. As a type of failure — the misuse of science (witting and unwitting) — it goes hand-in-hand with the advance of scientific knowledge. Should science be abandoned because of that? Of course not. But the hard fact is that science, qua science, is powerless against human nature.

Pinker plods on by describing ways in which science can contribute to the visual arts, music, and literary scholarship:

The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

I wonder how Rembrandt and the Impressionists (among other pre-moderns) managed to create visual art of such evident excellence without relying on the kinds of scientific mechanisms invoked by Pinker. I wonder what music scholars would learn about excellence in composition that isn’t already evident in the general loathing of audiences for most “serious” modern and contemporary music.

As for literature, great writers know instinctively and through self-criticism how to tell stories that realistically depict character, social psychology, culture, conflict, and all the rest. Scholars (and critics), at best, can acknowledge what rings true and has dramatic or comedic merit. Scientistic pretensions in scholarship (and criticism) may result in promotions and raises for the pretentious, but they do not add to the sum of human enjoyment — which is the real test of literature.

Pinker inveighs against critics of scientism (science, in Pinker’s vocabulary) who cry “reductionism” and “simplification”. With respect to the former, Pinker writes:

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

It is reductionist to explain a complex happening in terms of a deeper principle when that principle fails to account for the complex happening. Pinker obscures that essential point by offering a silly and irrelevant example about World War I. This bit of misdirection is unsurprising, given Pinker’s foray into reductionism, The Better Angels of Our Nature: Why Violence Has Declined, discussed later.

As for simplification, Pinker says:

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic.

Pinker again dodges the issue. Simplification is simplistic when the “general principles” fail to account adequately for the phenomenon in question.

Much of the problem arises because of a simple fact that is too often overlooked: Scientists, for the most part, are human beings with a particular aptitude for pattern-seeking and the manipulation of abstract ideas. They can easily get lost in such pursuits and fail to notice that their abstractions have taken them a long way from reality (e.g., Einstein’s special theory of relativity).

In sum, scientists are human and fallible. It is in the best tradition of science to distrust their scientific claims and to dismiss their non-scientific utterances.

ECONOMICS: PHYSICS ENVY AT WORK

Economics is rife with balderdash cloaked in mathematics. Economists who rely heavily on mathematics like to say (and perhaps even believe) that mathematical expression is more precise than mere words. But, as Arnold Kling points out in “An Important Emerging Economic Paradigm”, mathematical economics is a language of faux precision, which is useful only when applied to well-defined, narrow problems. It can’t address the big issues — such as economic growth — which depend on variables, such as the rule of law and social norms, that defy mathematical expression and quantification.

I would go a step further and argue that mathematical economics borders on obscurantism. It’s a cult whose followers speak an arcane language not only to communicate among themselves but to obscure the essentially bankrupt nature of their craft from others. Mathematical expression actually hides the assumptions that underlie it. It’s far easier to identify and challenge the assumptions of “literary” economics than it is to identify and challenge the assumptions of mathematical economics.

I daresay that this is true even for persons who are conversant in mathematics. They may be able to manipulate easily the equations of mathematical economics, but they are able to do so without grasping the deeper meanings — the assumptions and complexities — hidden by those equations. In fact, the ease of manipulating the equations gives them a false sense of mastery of the underlying, real concepts.

Much of the economics profession is nevertheless dedicated to the protection and preservation of the essential incompetence of mathematical economists. This is from “An Important Emerging Economic Paradigm”:

One of the best incumbent-protection rackets going today is for mathematical theorists in economics departments. The top departments will not certify someone as being qualified to have an advanced degree without first subjecting the student to the most rigorous mathematical economic theory. The rationale for this is reminiscent of fraternity hazing. “We went through it, so should they.”

Mathematical hazing persists even though there are signs that the prestige of math is on the decline within the profession. The important Clark Medal, awarded to the most accomplished American economist under the age of 40, has not gone to a mathematical theorist since 1989.

These hazing rituals can have real consequences. In medicine, the controversial tradition of long work hours for medical residents has come under scrutiny over the last few years. In economics, mathematical hazing is not causing immediate harm to medical patients. But it probably is working to the long-term detriment of the profession.

The hazing ritual in economics has at least two real and damaging consequences. First, it discourages entry into the economics profession by persons who, like Kling, can discuss economic behavior without resorting to the sterile language of mathematics. Second, it leads to economics that’s irrelevant to the real world — and dead wrong.

How wrong? Economists are notoriously bad at constructing models that adequately predict near-term changes in GDP. That task should be easier than sorting out the microeconomic complexities of the labor market.

Take Professor Ray Fair, for example. Professor Fair teaches macroeconomic theory, econometrics, and macroeconometric models at Yale University. He has been plying his trade since 1968, first at Princeton, then at M.I.T., and (since 1974) at Yale. Those are big-name schools, so I assume that Prof. Fair is a big name in his field.

Well, since 1983, Prof. Fair has been forecasting changes in real GDP over the next four quarters. He has made 80 such forecasts based on a model that he has undoubtedly tweaked over the years. The current model is here. His forecasting track record is here. How has he done? Here’s how:

1. The median absolute error of his forecasts is 30 percent. (A sketch of how these error statistics can be computed follows this list.)

2. The mean absolute error of his forecasts is 70 percent.

3. His forecasts are rather systematically biased: too high when real, four-quarter GDP growth is less than 4 percent; too low when real, four-quarter GDP growth is greater than 4 percent.

4. His forecasts have grown generally worse — not better — with time.
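
As promised above, error statistics of this kind require no special machinery. Here is a sketch in Python; the forecast and actual values are invented placeholders, not Prof. Fair’s numbers.

```python
import numpy as np

def forecast_error_stats(predicted, actual):
    """Absolute percentage errors of growth forecasts, plus a crude bias check."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    # Percentage errors blow up when actual growth is near zero, which is
    # how a forecast error can exceed 200 percent.
    abs_pct_err = np.abs(predicted - actual) / np.abs(actual) * 100
    return {
        "median_abs_error_pct": float(np.median(abs_pct_err)),
        "mean_abs_error_pct": float(abs_pct_err.mean()),
        "bias_when_growth_below_4": float((predicted - actual)[actual < 4].mean()),
        "bias_when_growth_above_4": float((predicted - actual)[actual >= 4].mean()),
    }

# Invented placeholder values, not Prof. Fair's data.
print(forecast_error_stats(predicted=[3.1, 2.5, 4.8, 1.2, 5.9],
                           actual=[2.0, 3.0, 5.5, 0.8, 4.5]))
```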

Prof. Fair is still at it. And his forecasts continue to grow worse with time:

[Figure: Fair-model forecasting errors vs. time.]
This and later graphs pertaining to Prof. Fair’s forecasts were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing; 8 percent of the errors exceed 200 percent.

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

[Figure: Fair-model estimated vs. actual growth rate.]

It fails the most important test; that is, it doesn’t reflect the downward trend in economic growth:

[Figure: Fair-model year-over-year growth, estimated and actual.]

THE INVISIBLE ELEPHANT IN THE ROOM

Professor Fair and his prognosticating ilk are pikers compared with John Maynard Keynes and his disciples. The Keynesian multiplier is the fraud of all frauds, not just in economics but in politics, where it is too often invoked as an excuse for taking money from productive uses and pouring it down the rathole of government spending.

The Keynesian (fiscal) multiplier is defined as

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?
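
For reference, the textbook algebra runs as follows. This is the standard sketch, not a quotation from the linked post:

```latex
\begin{aligned}
Y &= C + I + G && \text{(national-income accounting identity, closed economy)}\\
C &= a + bY && \text{(assumed consumption function, } 0 < b < 1\text{)}\\
\Rightarrow\quad Y &= \frac{a + I + G}{1 - b}
\quad\Longrightarrow\quad \frac{\Delta Y}{\Delta G} = \frac{1}{1 - b}
\end{aligned}
```

With a marginal propensity to consume of b = 0.8, the “multiplier” is 1/(1 − 0.8) = 5, the common mathematical estimate mentioned below. The algebra is internally consistent; what follows explains why it is nonetheless phony.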

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier. Please go to “Killing the Keynesian Multiplier” for a detailed explanation of the phony math and a derivation of the true multiplier, which is decidedly negative. Here’s the short version:

  • The phony math involves the use of an accounting identity that can be manipulated in many ways, to “prove” many things. But the accounting identity doesn’t express an operational (or empirical) relationship between a change in government spending and a change in GDP.
  • The true value of the multiplier isn’t 5 (a common mathematical estimate), 1.5 (a common but mistaken empirical estimate used for government purposes), or any positive number. The true value represents the negative relationship between the change in government spending (including transfer payments) as a fraction of GDP and the change in the rate of real GDP growth. Specifically, where F represents government spending as a fraction of GDP,

a rise in F from 0.24 to 0.33 (the actual change from 1947 to 2007) would reduce the real rate of economic growth by 3.1 percentage points. The real rate of growth from 1947 to 1957 was 4 percent. Other things being the same, the rate of growth would have dropped to 0.9 percent in the period 2008-2017. It actually dropped to 1.4 percent, which is within the standard error of the estimate.

  • That kind of drop makes a huge difference in the incomes of Americans. In 10 years, GDP rises by almost 50 percent when the rate of growth is 4 percent, but only by 15 percent when the rate of growth is 1.4 percent (see the arithmetic sketched below). Think of the tens of millions of people who would be living in comfort rather than squalor were it not for Keynesian balderdash, which turns reality on its head in order to promote big government.
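
The compounding arithmetic behind that bullet: at 4 percent a year, GDP grows about 48 percent in a decade; at 1.4 percent, only about 15 percent.

```latex
1.04^{10} \approx 1.48 \qquad\qquad 1.014^{10} \approx 1.15
```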

MANAGEMENT “SCIENCE”

A hot new item in management “science” a few years ago was the Candle Problem. Graham Morehead describes the problem and discusses its broader, “scientifically” supported conclusions:

The Candle Problem was first presented by Karl Duncker. Published posthumously in 1945, “On problem solving” describes how Duncker provided subjects with a candle, some matches, and a box of tacks. He told each subject to affix the candle to a cork board wall in such a way that when lit, the candle won’t drip wax on the table below (see figure at right). Can you think of the answer?

The only answer that really works is this: 1. Dump the tacks out of the box, 2. Tack the box to the wall, 3. Light the candle and affix it atop the box as if it were a candle-holder. Incidentally, the problem was much easier to solve if the tacks weren’t in the box at the beginning. When the tacks were in the box the participant saw it only as a tack-box, not something they could use to solve the problem. This phenomenon is called “Functional fixedness.”

Sam Glucksberg added a fascinating twist to this finding in his 1962 paper, “Influence of strength of drive on functional fixedness and perceptual recognition.” (Journal of Experimental Psychology 1962. Vol. 63, No. 1, 36-41). He studied the effect of financial incentives on solving the candle problem. To one group he offered no money. To the other group he offered an amount of money for solving the problem fast.

Remember, there are two candle problems. Let the “Simple Candle Problem” be the one where the tacks are outside the box — no functional fixedness. The solution is straightforward. Here are the results for those who solved it:

Simple Candle Problem Mean Times :

  • WITHOUT a financial incentive : 4.99 min
  • WITH a financial incentive : 3.67 min

Nothing unexpected here. This is a classical incentivization effect anybody would intuitively expect.

Now, let “In-Box Candle Problem” refer to the original description where the tacks start off in the box.

In-Box Candle Problem Mean Times :

  • WITHOUT a financial incentive : 7.41 min
  • WITH a financial incentive : 11.08 min

How could this be? The financial incentive made people slower? It gets worse — the slowness increases with the incentive. The higher the monetary reward, the worse the performance! This result has been repeated many times since the original experiment.

Glucksberg and others have shown this result to be highly robust. Daniel Pink calls it a legally provable “fact.” How should we interpret the above results?

When your employees have to do something straightforward, like pressing a button or manning one stage in an assembly line, financial incentives work. It’s a small effect, but they do work. Simple jobs are like the simple candle problem.

However, if your people must do something that requires any creative or critical thinking, financial incentives hurt. The In-Box Candle Problem is the stereotypical problem that requires you to think “Out of the Box” (you knew that was coming, didn’t you?). Whenever people must think out of the box, offering them a monetary carrot will keep them in that box.

A monetary reward will help your employees focus. That’s the point. When you’re focused you are less able to think laterally. You become dumber. This is not the kind of thing we want if we expect to solve the problems that face us in the 21st century.

All of this is found in a video (to which Morehead links), wherein Daniel Pink (an author and journalist whose actual knowledge of science and business appears to be close to zero) expounds the lessons of the Candle Problem. Pink displays his (no-doubt-profitable) conviction that the Candle Problem and related “science” reveals (a) the utter bankruptcy of capitalism and (b) the need to replace managers with touchy-feely gurus (like himself, I suppose). That Pink has worked for two of the country’s leading anti-capitalist airheads — Al Gore and Robert Reich — should tell you all that you need to know about Pink’s real agenda.

Here are my reasons for sneering at Pink and his ilk:

1. I have been there and done that. That is to say, as a manager, I lived through (and briefly bought into) the touchy-feely fads of the ’80s and ’90s. Think In Search of Excellence, The One Minute Manager, The Seven Habits of Highly Effective People, and so on. What did anyone really learn from those books and the lectures and workshops based on them? A perceptive person would have learned that it is easy to make up plausible stories about the elements of success, and having done so, it is possible to make a lot of money peddling those stories. But the stories are flawed because (a) they are based on exceptional cases; (b) they attribute success to qualitative assessments of behaviors that seem to be present in those exceptional cases; and (c) they do not properly account for the surrounding (and critical) circumstances that really led to success, among which are luck and rare combinations of personal qualities (e.g., high intelligence, perseverance, people-reading skills). In short, Pink and his predecessors are guilty of reductionism and the post hoc ergo propter hoc fallacy.

2. Also at work is an undue generalization about the implications of the Candle Problem. It may be true that workers will perform better — at certain kinds of tasks (very loosely specified) — if they are not distracted by incentives that are related to the performance of those specific tasks. But what does that have to do with incentives in general? Not much, because the Candle Problem is unlike any work situation that I can think of. Tasks requiring creativity are not performed under deadlines of a few minutes; tasks requiring creativity are (usually) assigned to persons who have demonstrated a creative flair, not to randomly picked subjects; most work, even in this day, involves the routine application of protocols and tools that were designed to produce a uniform result of acceptable quality; it is the design of protocols and tools that requires creativity, and that kind of work is not done under the kind of artificial constraints found in the Candle Problem.

3. The Candle Problem, with its anti-incentive “lesson”, is therefore inapplicable to the real world, where incentives play a crucial and positive role:

  • The profit incentive leads firms to invest resources in the development and/or production of things that consumers are willing to buy because those things satisfy wants at the right price.
  • Firms acquire resources to develop and produce things by bidding for those resources, that is, by offering monetary incentives to attract the resources required to make the things that consumers are willing to buy.
  • The incentives (compensation) offered to workers of various kinds (from scientists with doctorates to burger-flippers) are generally commensurate with the contributions made by those workers to the production of things of value to consumers, and to the value placed on those things by consumers.
  • Workers agree to the terms and conditions of employment (including compensation) before taking a job. The incentive for most workers is to keep a job by performing adequately over a sustained period — not by demonstrating creativity in a few minutes. Some workers (but not a large fraction of them) are striving for performance-based commissions, bonuses, and profit-sharing distributions. But those distributions are based on performance over a sustained period, during which the striving workers have plenty of time to think about how they can perform better.
  • Truly creative work is done, for the most part, by persons who are hired for such work on the basis of their credentials (education, prior employment, test results). Their compensation is based on their credentials, initially, and then on their performance over a sustained period. If they are creative, they have plenty of psychological space in which to exercise and demonstrate their creativity.
  • On-the-job creativity — the improvement of protocols and tools by workers using them — does not occur under conditions of the kind assumed in the Candle Problem. Rather, on-the-job creativity flows from actual work and insights about how to do the work better. It happens when it happens, and has nothing to do with artificial time constraints and monetary incentives to be “creative” within those constraints.
  • Pink’s essential pitch is that incentives can be replaced by offering jobs that yield autonomy (self-direction), mastery (the satisfaction of doing difficult things well), and purpose (the satisfaction of contributing to the accomplishment of something important). Well, good luck with that, but I (and millions of other consumers) want what we want, and if workers want to make a living they will just have to provide what we want, not what turns them on. Yes, there is a lot to be said for autonomy, mastery, and purpose, but there is also a lot to be said for getting a paycheck. And, contrary to Pink’s implication, getting a paycheck does not rule out autonomy, mastery, and purpose — where those happen to go with the job.

Pink and company’s “insights” about incentives and creativity are 180 degrees off-target. McDonald’s could use the Candle Problem to select creative burger-flippers who will perform well under tight deadlines because their compensation is unrelated to the creativity of their burger-flipping. McDonald’s customers should be glad that McDonald’s has taken creativity out of the picture by reducing burger-flipping to the routine application of protocols and tools.

In summary:

  • The Candle Problem is an interesting experiment, and probably valid with respect to the performance of specific tasks against tight deadlines. I think the results apply whether the stakes are money or any other kind of prize. The experiment illustrates the “choke” factor, and nothing more profound than that.
  • I question whether the experiment applies to the usual kind of incentive (e.g., a commission or bonus), where the “incentee” has ample time (months, years) for reflection and research that will enable him to improve his performance and attain a bigger commission or bonus (which usually isn’t an all-or-nothing arrangement).
  • There’s also the dissimilarity of the Candle Problem — which involves more-or-less randomly chosen subjects, working against an artificial deadline — and actual creative thinking — usually involving persons who are experts (even if the expertise is as mundane as ditch-digging), working against looser deadlines or none at all.

PARTISAN POLITICS IN THE GUISE OF PSEUDO-SCIENCE

There’s plenty of it to go around, but this one is a whopper. Peter Singer outdoes his usual tendentious self in this review of Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. In the course of the review, Singer writes:

Pinker argues that enhanced powers of reasoning give us the ability to detach ourselves from our immediate experience and from our personal or parochial perspective, and frame our ideas in more abstract, universal terms. This in turn leads to better moral commitments, including avoiding violence. It is just this kind of reasoning ability that has improved during the 20th century. He therefore suggests that the 20th century has seen a “moral Flynn effect, in which an accelerating escalator of reason carried us away from impulses that lead to violence” and that this lies behind the long peace, the new peace, and the rights revolution. Among the wide range of evidence he produces in support of that argument is the tidbit that since 1946, there has been a negative correlation between an American president’s I.Q. and the number of battle deaths in wars involving the United States.

Singer does not give the source of the IQ estimates on which Pinker relies, but the supposed correlation points to a discredited piece of historiometry by Dean Keith Simonton, who jumps through various hoops to assess the IQs of every president from Washington to Bush II — to one decimal place. That is a feat on a par with reconstructing the final thoughts of Abel, ere Cain slew him.

Before I explain the discrediting of Simonton’s obviously discreditable “research”, there is some fun to be had with the Pinker-Singer story of presidential IQ (Simonton-style) for battle deaths. First, of course, there is the convenient cutoff point of 1946. Why 1946? Well, it enables Pinker-Singer to avoid the inconvenient fact that the Civil War, World War I, and World War II happened while the presidency was held by three men who (in Simonton’s estimation) had high IQs: Lincoln, Wilson, and FDR.

The next several graphs depict best-fit relationships between Simonton’s estimates of presidential IQ and the U.S. battle deaths that occurred during each president’s term of office.* The presidents, in order of their appearance in the titles of the graphs, are Harry S Truman (HST), George W. Bush (GWB), Franklin Delano Roosevelt (FDR), (Thomas) Woodrow Wilson (WW), Abraham Lincoln (AL), and George Washington (GW). The number of battle deaths is rounded to the nearest thousand, so that the prevailing value is 0, even in the case of the Spanish-American War (385 U.S. combat deaths) and George H.W. Bush’s Gulf War (147 U.S. combat deaths).

This is probably the relationship referred to by Singer, though Pinker may show a linear fit, rather than the tighter polynomial fit used here:

It looks bad for the low “IQ” presidents — if you believe Simonton’s estimates of IQ, which you shouldn’t, and if you believe that battle deaths are a bad thing per se, which they aren’t. I will come back to those points. For now, just suspend your well-justified disbelief.

If the relationship for the HST-GWB era were statistically meaningful, it would not change much with the introduction of additional statistics about “IQ” and battle deaths, but it does:

[Graphs: the same best-fit relationship, re-estimated as earlier presidents (FDR, WW, AL, GW) are added.]

If you buy the brand of snake oil being peddled by Pinker-Singer, you must believe that the “dumbest” and “smartest” presidents are unlikely to get the U.S. into wars that result in a lot of battle deaths, whereas some (but, mysteriously, not all) of the “medium-smart” presidents (Lincoln, Wilson, FDR) are likely to do so.

In any event, if you believe in Pinker-Singer’s snake oil, you must accept the consistent “humpback” relationship that is depicted in the preceding four graphs, rather than the highly selective, one-shot negative relationship of the HST-GWB graph.

More seriously, the relationship in the HST-GWB graph is an evident ploy to discredit certain presidents (especially GWB, I suspect), which is why it covers only the period since WWII. Why not just say that you think GWB is a chimp-like, war-mongering moron and be done with it? Pseudo-statistics of the kind offered up by Pinker-Singer are nothing more than a talking point for those already convinced that Bush=Hitler.

But as long as this silly game is in progress, let us continue it, with a new rule. Let us advance from one to two explanatory variables. The second explanatory variable that strongly suggests itself is political party. And because it is not good practice to omit relevant statistics (a favorite gambit of liars), I estimated an equation based on “IQ” and battle deaths for the 27 men who served as president from the first Republican presidency (Lincoln’s) through the presidency of GWB. The equation looks like this:

U.S. battle deaths (000) “owned” by a president =

-80.6 + 0.841 x “IQ” – 31.3 x party (where 0 = Dem, 1 = GOP)

In other words, battle deaths rise at the rate of 841 per IQ point (so much for Pinker-Singer). But there will be fewer deaths with a Republican in the White House (so much for Pinker-Singer’s implied swipe at GWB).
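
For readers who want to see how little machinery such a two-variable exercise requires, here is a minimal ordinary-least-squares sketch in Python. Every number in it is an invented placeholder, not one of Simonton’s estimates and not a battle-death tally from the footnote below:

```python
import numpy as np

# Entirely invented placeholder numbers, standing in for Simonton's "IQ"
# estimates and the battle-death tallies in the footnote.
iq     = np.array([132.0, 145.0, 128.0, 150.0, 140.0, 126.0])
party  = np.array([1, 0, 1, 0, 1, 0])                   # 0 = Dem, 1 = GOP
deaths = np.array([0.0, 60.0, 2.0, 250.0, 5.0, 12.0])   # thousands

# Ordinary least squares: deaths = b0 + b1*"IQ" + b2*party
X = np.column_stack([np.ones_like(iq), iq, party])
coef, *_ = np.linalg.lstsq(X, deaths, rcond=None)
print(f"intercept = {coef[0]:.1f}, per-'IQ'-point = {coef[1]:.3f}, "
      f"party effect = {coef[2]:.1f} (all in thousands of deaths)")
```

Fitting three coefficients to a handful of points will “explain” almost anything, which is rather the point.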

All of this is nonsense, of course, for two reasons: Simonton’s estimates of IQ are hogwash, and the number of U.S. battle deaths is a meaningless number, taken by itself.

With regard to the hogwash, Simonton’s estimates of presidents’ IQs put every one of them — including the “dumbest,” U.S. Grant — in the top 2.3 percent of the population. And the mean of Simonton’s estimates puts the average president in the top 0.1 percent (one-tenth of one percent) of the population. That is literally incredible. Good evidence of the unreliability of Simonton’s estimates is found in an entry by Thomas C. Reeves at George Mason University’s History News Network. Reeves is the author of A Question of Character: A Life of John F. Kennedy, the negative reviews of which are evidently the work of JFK idolators who refuse to be disillusioned by facts. Anyway, here is Reeves:

I’m a biographer of two of the top nine presidents on Simonton’s list and am highly familiar with the histories of the other seven. In my judgment, this study has little if any value. Let’s take JFK and Chester A. Arthur as examples.

Kennedy was actually given an IQ test before entering Choate. His score was 119…. There is no evidence to support the claim that his score should have been more than 40 points higher [i.e., the IQ of 160 attributed to Kennedy by Simonton]. As I described in detail in A Question Of Character [link added], Kennedy’s academic achievements were modest and respectable, his published writing and speeches were largely done by others (no study of Kennedy is worthwhile that downplays the role of Ted Sorensen)….

Chester Alan Arthur was largely unknown before my Gentleman Boss was published in 1975. The discovery of many valuable primary sources gave us a clear look at the president for the first time. Among the most interesting facts that emerged involved his service during the Civil War, his direct involvement in the spoils system, and the bizarre way in which he was elevated to the GOP presidential ticket in 1880. His concealed and fatal illness while in the White House also came to light.

While Arthur was a college graduate, and was widely considered to be a gentleman, there is no evidence whatsoever to suggest that his IQ was extraordinary. That a psychologist can rank his intelligence 2.3 points ahead of Lincoln’s suggests access to a treasure of primary sources from and about Arthur that does not exist.

This historian thinks it impossible to assign IQ numbers to historical figures. If there is sufficient evidence (as there usually is in the case of American presidents), we can call people from the past extremely intelligent. Adams, Wilson, TR, Jefferson, and Lincoln were clearly well above average intellectually. But let us not pretend that we can rank them by tenths of a percentage point or declare that a man in one era stands well above another from a different time and place.

My educated guess is that this recent study was designed in part to denigrate the intelligence of the current occupant of the White House….

That is an excellent guess.

The meaninglessness of battle deaths as a measure of anything — but battle deaths — should be evident. But in case it is not evident, here goes:

  • Wars are sometimes necessary, sometimes not. (I give my views about the wisdom of America’s various wars at this post.) Necessary or not, presidents usually act in accordance with popular and elite opinion about the desirability of a particular war. Imagine, for example, the reaction if FDR had not gone to Congress on December 8, 1941, to ask for a declaration of war against Japan, or if GWB had not sought the approval of Congress for action in Afghanistan.
  • Presidents may have a lot to do with the decision to enter a war, but they have little to do with the external forces that help to shape that decision. GHWB, for example, had nothing to do with Saddam’s decision to invade Kuwait and thereby threaten vital U.S. interests in the Middle East. GWB, to take another example, was not a party to the choices of earlier presidents (GHWB and Clinton) that enabled Saddam to stay in power and encouraged Osama bin Laden to believe that America could be brought to its knees by a catastrophic attack.
  • The number of battle deaths in a war depends on many things outside the control of a particular president; for example, the size and capabilities of enemy forces, the size and capabilities of U.S. forces (which have a lot to do with the decisions of earlier administrations and Congresses), and the scope and scale of a war (again, largely dependent on the enemy).
  • Battle deaths represent personal tragedies, but — in and of themselves — are not a measure of a president’s wisdom or acumen. Whether the deaths were in vain is a separate issue that depends on the aforementioned considerations. To use battle deaths as a single, negative measure of a president’s ability is rank cynicism — the rankness of which is revealed in Pinker’s decision to ignore Lincoln and FDR and their “good” but deadly wars.

To put the last point another way, if the number of battle deaths is a bad thing, Lincoln and FDR should be rotting in hell for the wars that brought an end to slavery and Hitler.
__________
* The numbers of U.S. battle deaths, by war, are available at infoplease.com, “America’s Wars: U.S. Casualties and Veterans”. The deaths are “assigned” to presidents as follows (numbers in parentheses indicate thousands of deaths):

All of the deaths (2) in the War of 1812 occurred on Madison’s watch.

All of the deaths (2) in the Mexican-American War occurred on Polk’s watch.

I count only Union battle deaths (140) during the Civil War; all are “Lincoln’s.” Let the Confederate dead be on the head of Jefferson Davis. This is a gift, of sorts, to Pinker-Singer because if Confederate dead were counted as Lincoln’s, with his high “IQ,” it would make Pinker-Singer’s hypothesis even more ludicrous than it is.

WW is the sole “owner” of WWI battle deaths (53).

Some of the U.S. battle deaths in WWII (292) occurred while HST was president, but Truman was merely presiding over the final months of a war that was almost won when FDR died. Truman’s main role was to hasten the end of the war in the Pacific by electing to drop the A-bombs on Hiroshima and Nagasaki. So FDR gets “credit” for all WWII battle deaths.

The Korean War did not end until after Eisenhower succeeded Truman, but it was “Truman’s war,” so he gets “credit” for all Korean War battle deaths (34). This is another “gift” to Pinker-Singer because Ike’s “IQ” is higher than Truman’s.

Vietnam was “LBJ’s war,” but I’m sure that Singer would not want Nixon to go without “credit” for the battle deaths that occurred during his administration. Moreover, LBJ had effectively lost the Vietnam war through his gradualism, but Nixon chose nevertheless to prolong the agony. So I have shared the “credit” for Vietnam War battle deaths between LBJ (deaths in 1965-68: 29) and RMN (deaths in 1969-73: 17). To do that, I apportioned total Vietnam War battle deaths, as given by infoplease.com, according to the total number of U.S. deaths in each year of the war, 1965-1973.

The wars in Afghanistan and Iraq are “GWB’s wars,” even though Obama has continued them. So I have “credited” GWB with all the battle deaths in those wars, as of May 27, 2011 (5).

The relative paucity of U.S. combat deaths in other post-WWII actions (e.g., Lebanon, Somalia, Persian Gulf) is attested to by “Post-Vietnam Combat Casualties”, at infoplease.com.

A THIRD APPEARANCE BY PINKER

Steven Pinker, whose ignominious outpourings I have addressed twice here, deserves a third strike (which he shall duly be awarded). Pinker’s The Better Angels of Our Nature is cited gleefully by leftists and cockeyed optimists as evidence that human beings, on the whole, are becoming kinder and gentler because of:

  • The Leviathan – the rise of the modern nation-state and judiciary “with a monopoly on the legitimate use of force,” which “can defuse the [individual] temptation of exploitative attack, inhibit the impulse for revenge, and circumvent…self-serving biases”;
  • Commerce – the rise of “technological progress [allowing] the exchange of goods and services over longer distances and larger groups of trading partners,” so that “other people become more valuable alive than dead” and “are less likely to become targets of demonization and dehumanization”;
  • Feminization – increasing respect for “the interests and values of women”;
  • Cosmopolitanism – the rise of forces such as literacy, mobility, and mass media, which “can prompt people to take the perspectives of people unlike themselves and to expand their circle of sympathy to embrace them”;
  • The Escalator of Reason – an “intensifying application of knowledge and rationality to human affairs,” which “can force people to recognize the futility of cycles of violence, to ramp down the privileging of their own interests over others’, and to reframe violence as a problem to be solved rather than a contest to be won.”

I can tell you that Pinker’s book is hogwash because two very bright leftists — Peter Singer and Will Wilkinson — have strongly and wrongly endorsed some of its key findings. I dispatched Singer earlier. As for Wilkinson, he praises statistics adduced by Pinker that show a decline in the use of capital punishment:

In the face of such a decisive trend in moral culture, we can say a couple different things. We can say that this is just change and says nothing in particular about what is really right or wrong, good or bad. Or we can say this is evidence of moral progress, that we have actually become better. I prefer the latter interpretation for basically the same reasons most of us see the abolition of slavery and the trend toward greater equality between races and sexes as progress and not mere morally indifferent change. We can talk about the nature of moral progress later. It’s tricky. For now, I want you to entertain the possibility that convergence toward the idea that execution is wrong counts as evidence that it is wrong.

I would count convergence toward the idea that execution is wrong as evidence that it is wrong, if that idea were (a) increasingly held by individuals who (b) had arrived at their “enlightenment” uninfluenced by operatives of the state (legislatures and judges), who take it upon themselves to flout popular support of the death penalty. What we have, in the case of the death penalty, is moral regress, not moral progress.

Moral regress because the abandonment of the death penalty puts innocent lives at risk. Capital punishment sends a message, and the message is effective when it is delivered: it deters homicide. And even if it didn’t, it would at least remove killers from our midst, permanently. By what standard of morality can one claim that it is better to spare killers than to protect innocents? For that matter, by what standard of morality is it better to kill innocents in the womb than to spare killers? Proponents of abortion (like Singer and Wilkinson) — who by and large oppose capital punishment — are completely lacking in moral authority.

Returning to Pinker’s thesis that violence has declined, I quote a review at Foseti:

Pinker’s basic problem is that he essentially defines “violence” in such a way that his thesis that violence is declining becomes self-fulfilling. “Violence” to Pinker is fundamentally synonymous with behaviors of older civilizations. On the other hand, modern practices are defined to be less violent than older practices.

A while back, I linked to a story about a guy in my neighborhood who’s been arrested over 60 times for breaking into cars. A couple hundred years ago, this guy would have been killed for this sort of vandalism after he got caught the first time. Now, we feed him and shelter him for a while and then we let him back out to do this again. Pinker defines the new practice as a decline in violence – we don’t kill the guy anymore! Someone from a couple hundred years ago would be appalled that we let the guy continue destroying other peoples’ property without consequence. In the mind of those long dead, “violence” has in fact increased. Instead of a decline in violence, this practice seems to me like a decline in justice – nothing more or less.

Here’s another example: Pinker uses creative definitions to show that the conflicts of the 20th Century pale in comparison to previous conflicts. For example, all the Mongol Conquests are considered one event, even though they cover 125 years. If you lump all these various conquests together and you split up WWI, WWII, Mao’s takeover in China, the Bolshevik takeover of Russia, the Russian Civil War, and the Chinese Civil War (yes, he actually considers this a separate event from Mao), you unsurprisingly discover that the events of the 20th Century weren’t all that violent compared to events in the past! Pinker’s third most violent event is the “Mideast Slave Trade” which he says took place between the 7th and 19th Centuries. Seriously. By this standard, all the conflicts of the 20th Century are related. Is the Russian Revolution or the rise of Mao possible without WWII? Is WWII possible without WWI? By this consistent standard, the 20th Century wars of Communism would have seen the worst conflict by far. Of course, if you fiddle with the numbers, you can make any point you like.

There’s much more to the review, including some telling criticisms of Pinker’s five reasons for the (purported) decline in violence. That the reviewer somehow still wants to believe in the rightness of Pinker’s thesis says more about the reviewer’s optimism than it does about the validity of Pinker’s thesis.

That thesis is fundamentally flawed, as Robert Epstein points out in a review at Scientific American:

[T]he wealth of data [Pinker] presents cannot be ignored—unless, that is, you take the same liberties as he sometimes does in his book. In two lengthy chapters, Pinker describes psychological processes that make us either violent or peaceful, respectively. Our dark side is driven by an evolution-based propensity toward predation and dominance. On the angelic side, we have, or at least can learn, some degree of self-control, which allows us to inhibit dark tendencies.

There is, however, another psychological process—confirmation bias—that Pinker sometimes succumbs to in his book. People pay more attention to facts that match their beliefs than those that undermine them. Pinker wants peace, and he also believes in his hypothesis; it is no surprise that he focuses more on facts that support his views than on those that do not. The SIPRI arms data are problematic, and a reader can also cherry-pick facts from Pinker’s own book that are inconsistent with his position. He notes, for example, that during the 20th century homicide rates failed to decline in both the U.S. and England. He also describes in graphic and disturbing detail the savage way in which chimpanzees—our closest genetic relatives in the animal world—torture and kill their own kind.

Of greater concern is the assumption on which Pinker’s entire case rests: that we look at relative numbers instead of absolute numbers in assessing human violence. But why should we be content with only a relative decrease? By this logic, when we reach a world population of nine billion in 2050, Pinker will conceivably be satisfied if a mere two million people are killed in war that year.

The biggest problem with the book, though, is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Pinker’s belief that violence is on the decline reminds me of “it’s different this time”, a phrase that was on the lips of hopeful stock-pushers, stock-buyers, and pundits during the stock-market bubble of the late 1990s. That bubble ended, of course, in the spectacular crash of 2000.

Predictions about the future of humankind are better left in the hands of writers who see human nature whole, and who are not out to prove that it can be shaped or contained by the kinds of “liberal” institutions that Pinker so obviously favors.

Consider this, from an article by Robert J. Samuelson at The Washington Post:

[T]he Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar. Amid the controversy over leaks from the National Security Agency, this looms as an even bigger downside.

By cyberwarfare, I mean the capacity of groups — whether nations or not — to attack, disrupt and possibly destroy the institutions and networks that underpin everyday life. These would be power grids, pipelines, communication and financial systems, business record-keeping and supply-chain operations, railroads and airlines, databases of all types (from hospitals to government agencies). The list runs on. So much depends on the Internet that its vulnerability to sabotage invites doomsday visions of the breakdown of order and trust.

In a report, the Defense Science Board, an advisory group to the Pentagon, acknowledged “staggering losses” of information involving weapons design and combat methods to hackers (not identified, but probably Chinese). In the future, hackers might disarm military units. “U.S. guns, missiles and bombs may not fire, or may be directed against our own troops,” the report said. It also painted a specter of social chaos from a full-scale cyberassault. There would be “no electricity, money, communications, TV, radio or fuel (electrically pumped). In a short time, food and medicine distribution systems would be ineffective.”

But Pinker wouldn’t count the resulting chaos as violence, as long as human beings were merely starving and dying of various diseases. That violence would ensue, of course, is another story, which is told by John Gray in The Silence of Animals: On Progress and Other Modern Myths. Gray’s book — published  18 months after Better Angels — could be read as a refutation of Pinker’s book, though Gray doesn’t mention Pinker or his book.

The gist of Gray’s argument is faithfully recounted in a review of Gray’s book by Robert W. Merry at The National Interest:

The noted British historian J. B. Bury (1861–1927) … wrote, “This doctrine of the possibility of indefinitely moulding the characters of men by laws and institutions . . . laid a foundation on which the theory of the perfectibility of humanity could be raised. It marked, therefore, an important stage in the development of the doctrine of Progress.”

We must pause here over this doctrine of progress. It may be the most powerful idea ever conceived in Western thought—emphasizing Western thought because the idea has had little resonance in other cultures or civilizations. It is the thesis that mankind has advanced slowly but inexorably over the centuries from a state of cultural backwardness, blindness and folly to ever more elevated stages of enlightenment and civilization—and that this human progression will continue indefinitely into the future…. The U.S. historian Charles A. Beard once wrote that the emergence of the progress idea constituted “a discovery as important as the human mind has ever made, with implications for mankind that almost transcend imagination.” And Bury, who wrote a book on the subject, called it “the great transforming conception, which enables history to define her scope.”

Gray rejects it utterly. In doing so, he rejects all of modern liberal humanism. “The evidence of science and history,” he writes, “is that humans are only ever partly and intermittently rational, but for modern humanists the solution is simple: human beings must in future be more reasonable. These enthusiasts for reason have not noticed that the idea that humans may one day be more rational requires a greater leap of faith than anything in religion.” In an earlier work, Straw Dogs: Thoughts on Humans and Other Animals, he was more blunt: “Outside of science, progress is simply a myth.”

… Gray has produced more than twenty books demonstrating an expansive intellectual range, a penchant for controversy, acuity of analysis and a certain political clairvoyance.

He rejected, for example, Francis Fukuyama’s heralded “End of History” thesis—that Western liberal democracy represents the final form of human governance—when it appeared in this magazine in 1989. History, it turned out, lingered long enough to prove Gray right and Fukuyama wrong….

Though for decades his reputation was confined largely to intellectual circles, Gray’s public profile rose significantly with the 2002 publication of Straw Dogs, which sold impressively and brought him much wider acclaim than he had known before. The book was a concerted and extensive assault on the idea of progress and its philosophical offspring, secular humanism. The Silence of Animals is in many ways a sequel, plowing much the same philosophical ground but expanding the cultivation into contiguous territory mostly related to how mankind—and individual humans—might successfully grapple with the loss of both metaphysical religion of yesteryear and today’s secular humanism. The fundamentals of Gray’s critique of progress are firmly established in both books and can be enumerated in summary.

First, the idea of progress is merely a secular religion, and not a particularly meaningful one at that. “Today,” writes Gray in Straw Dogs, “liberal humanism has the pervasive power that was once possessed by revealed religion. Humanists like to think they have a rational view of the world; but their core belief in progress is a superstition, further from the truth about the human animal than any of the world’s religions.”

Second, the underlying problem with this humanist impulse is that it is based upon an entirely false view of human nature—which, contrary to the humanist insistence that it is malleable, is immutable and impervious to environmental forces. Indeed, it is the only constant in politics and history. Of course, progress in scientific inquiry and in resulting human comfort is a fact of life, worth recognition and applause. But it does not change the nature of man, any more than it changes the nature of dogs or birds. “Technical progress,” writes Gray, again in Straw Dogs, “leaves only one problem unsolved: the frailty of human nature. Unfortunately that problem is insoluble.”

That’s because, third, the underlying nature of humans is bred into the species, just as the traits of all other animals are. The most basic trait is the instinct for survival, which is placed on hold when humans are able to live under a veneer of civilization. But it is never far from the surface. In The Silence of Animals, Gray discusses the writings of Curzio Malaparte, a man of letters and action who found himself in Naples in 1944, shortly after the liberation. There he witnessed a struggle for life that was gruesome and searing. “It is a humiliating, horrible thing, a shameful necessity, a fight for life,” wrote Malaparte. “Only for life. Only to save one’s skin.” Gray elaborates:

Observing the struggle for life in the city, Malaparte watched as civilization gave way. The people the inhabitants had imagined themselves to be—shaped, however imperfectly, by ideas of right and wrong—disappeared. What were left were hungry animals, ready to do anything to go on living; but not animals of the kind that innocently kill and die in forests and jungles. Lacking a self-image of the sort humans cherish, other animals are content to be what they are. For human beings the struggle for survival is a struggle against themselves.

When civilization is stripped away, the raw animal emerges. “Darwin showed that humans are like other animals,” writes Gray in Straw Dogs, expressing in this instance only a partial truth. Humans are different in a crucial respect, captured by Gray himself when he notes that Homo sapiens inevitably struggle with themselves when forced to fight for survival. No other species does that, just as no other species has such a range of spirit, from nobility to degradation, or such a need to ponder the moral implications as it fluctuates from one to the other. But, whatever human nature is—with all of its capacity for folly, capriciousness and evil as well as virtue, magnanimity and high-mindedness—it is embedded in the species through evolution and not subject to manipulation by man-made institutions.

Fourth, the power of the progress idea stems in part from the fact that it derives from a fundamental Christian doctrine—the idea of providence, of redemption….

“By creating the expectation of a radical alteration in human affairs,” writes Gray, “Christianity . . . founded the modern world.” But the modern world retained a powerful philosophical outlook from the classical world—the Socratic faith in reason, the idea that truth will make us free; or, as Gray puts it, the “myth that human beings can use their minds to lift themselves out of the natural world.” Thus did a fundamental change emerge in what was hoped of the future. And, as the power of Christian faith ebbed, along with its idea of providence, the idea of progress, tied to the Socratic myth, emerged to fill the gap. “Many transmutations were needed before the Christian story could renew itself as the myth of progress,” Gray explains. “But from being a succession of cycles like the seasons, history came to be seen as a story of redemption and salvation, and in modern times salvation became identified with the increase of knowledge and power.”

Thus, it isn’t surprising that today’s Western man should cling so tenaciously to his faith in progress as a secular version of redemption. As Gray writes, “Among contemporary atheists, disbelief in progress is a type of blasphemy. Pointing to the flaws of the human animal has become an act of sacrilege.” In one of his more brutal passages, he adds:

Humanists believe that humanity improves along with the growth of knowledge, but the belief that the increase of knowledge goes with advances in civilization is an act of faith. They see the realization of human potential as the goal of history, when rational inquiry shows history to have no goal. They exalt nature, while insisting that humankind—an accident of nature—can overcome the natural limits that shape the lives of other animals. Plainly absurd, this nonsense gives meaning to the lives of people who believe they have left all myths behind.

In The Silence of Animals, Gray explores all this through the works of various writers and thinkers. In the process, he employs history and literature to puncture the conceits of those who cling to the progress idea and the humanist view of human nature. Those conceits, it turns out, are easily punctured when subjected to Gray’s withering scrutiny….

And yet the myth of progress is so powerful in part because it gives meaning to modern Westerners struggling, in an irreligious era, to place themselves in a philosophical framework larger than just themselves….

Much of the human folly catalogued by Gray in The Silence of Animals makes a mockery of the earnest idealism of those who later shaped and molded and proselytized humanist thinking into today’s predominant Western civic philosophy.

RACE AS A SOCIAL CONSTRUCT

David Reich’s hot new book, Who We Are and How We Got Here, is causing a stir in genetic-research circles. Reich, who takes great pains to assure everyone that he isn’t a racist, and who deplores racism, is nevertheless candid about race:

I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”

Groundbreaking advances in DNA sequencing technology have been made over the last two decades. These advances enable us to measure with exquisite accuracy what fraction of an individual’s genetic ancestry traces back to, say, West Africa 500 years ago — before the mixing in the Americas of the West African and European gene pools that were almost completely isolated for the last 70,000 years. With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real….

Self-identified African-Americans turn out to derive, on average, about 80 percent of their genetic ancestry from enslaved Africans brought to America between the 16th and 19th centuries. My colleagues and I searched, in 1,597 African-American men with prostate cancer, for locations in the genome where the fraction of genes contributed by West African ancestors was larger than it was elsewhere in the genome. In 2006, we found exactly what we were looking for: a location in the genome with about 2.8 percent more African ancestry than the average.

When we looked in more detail, we found that this region contained at least seven independent risk factors for prostate cancer, all more common in West Africans. Our findings could fully account for the higher rate of prostate cancer in African-Americans than in European-Americans. We could conclude this because African-Americans who happen to have entirely European ancestry in this small section of their genomes had about the same risk for prostate cancer as random Europeans.

Did this research rely on terms like “African-American” and “European-American” that are socially constructed, and did it label segments of the genome as being probably “West African” or “European” in origin? Yes. Did this research identify real risk factors for disease that differ in frequency across those populations, leading to discoveries with the potential to improve health and save lives? Yes.

While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another.

But whether we like it or not, that line has already been crossed. A recent study led by the economist Daniel Benjamin compiled information on the number of years of education from more than 400,000 people, almost all of whom were of European ancestry. After controlling for differences in socioeconomic background, he and his colleagues identified 74 genetic variations that are over-represented in genes known to be important in neurological development, each of which is incontrovertibly more common in Europeans with more years of education than in Europeans with fewer years of education.

It is not yet clear how these genetic variations operate. A follow-up study of Icelanders led by the geneticist Augustine Kong showed that these genetic variations also nudge people who carry them to delay having children. So these variations may be explaining longer times at school by affecting a behavior that has nothing to do with intelligence.

This study has been joined by others finding genetic predictors of behavior. One of these, led by the geneticist Danielle Posthuma, studied more than 70,000 people and found genetic variations in more than 20 genes that were predictive of performance on intelligence tests.

Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly. And since all traits influenced by genetics are expected to differ across populations (because the frequencies of genetic variations are rarely exactly the same across populations), the genetic influences on behavior and cognition will differ across populations, too.

You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work. Indeed, the study led by Dr. Kong showed that in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century….

So how should we prepare for the likelihood that in the coming years, genetic studies will show that many traits are influenced by genetic variations, and that these traits will differ on average across human populations? It will be impossible — indeed, anti-scientific, foolish and absurd — to deny those differences. [“How Genetics Is Changing Our Understanding of ‘Race’”, The New York Times, March 23, 2018]

Reich engages in a lot of non-scientific wishful thinking about racial differences and how they should be treated by “society” — none of which is in his purview as a scientist. Reich’s forays into psychobabble have been addressed at length by Steve Sailer (here and here) and Gregory Cochran (here, here, here, here, and here). Suffice it to say that Reich is trying in vain to minimize the scientific fact of racial differences that show up crucially in intelligence and rates of violent crime.

The lesson here is that it’s all right to show that race isn’t a social construct as long as you proclaim that it is a social construct. This is known as talking out of both sides of one’s mouth — another manifestation of balderdash.

DIVERSITY IS GOOD, EXCEPT WHEN IT ISN’T

I now invoke Robert Putnam, a political scientist known mainly for his book Bowling Alone: The Collapse and Revival of American Community (2000), in which he

makes a distinction between two kinds of social capital: bonding capital and bridging capital. Bonding occurs when you are socializing with people who are like you: same age, same race, same religion, and so on. But in order to create peaceful societies in a diverse multi-ethnic country, one needs to have a second kind of social capital: bridging. Bridging is what you do when you make friends with people who are not like you, like supporters of another football team. Putnam argues that those two kinds of social capital, bonding and bridging, do strengthen each other. Consequently, with the decline of the bonding capital mentioned above inevitably comes the decline of the bridging capital leading to greater ethnic tensions.

In later work on diversity and trust within communities, Putnam concludes that

other things being equal, more diversity in a community is associated with less trust both between and within ethnic groups….

Even when controlling for income inequality and crime rates, two factors which conflict theory states should be the prime causal factors in declining inter-ethnic group trust, more diversity is still associated with less communal trust.

Lowered trust in areas with high diversity is also associated with:

  • Lower confidence in local government, local leaders and the local news media.
  • Lower political efficacy – that is, confidence in one’s own influence.
  • Lower frequency of registering to vote, but more interest and knowledge about politics and more participation in protest marches and social reform groups.
  • Higher political advocacy, but lower expectations that it will bring about a desirable result.
  • Less expectation that others will cooperate to solve dilemmas of collective action (e.g., voluntary conservation to ease a water or energy shortage).
  • Less likelihood of working on a community project.
  • Less likelihood of giving to charity or volunteering.
  • Fewer close friends and confidants.
  • Less happiness and lower perceived quality of life.
  • More time spent watching television and more agreement that “television is my most important form of entertainment”.

It’s not as if Putnam is a social conservative who is eager to impart such news. To the contrary, as Michael Jonas writes in “The Downside of Diversity”, Putnam’s

findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis … , he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

“Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk,” wrote conservative commentator Ilana Mercer….

After releasing the initial results in 2001, Putnam says he spent time “kicking the tires really hard” to be sure the study had it right. Putnam realized, for instance, that more diverse communities tended to be larger, have greater income ranges, higher crime rates, and more mobility among their residents — all factors that could depress social capital independent of any impact ethnic diversity might have.

“People would say, ‘I bet you forgot about X,’” Putnam says of the string of suggestions from colleagues. “There were 20 or 30 X’s.”

But even after statistically taking them all into account, the connection remained strong: Higher diversity meant lower social capital. In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.”

“People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes….

In a recent study, [Harvard economist Edward] Glaeser and colleague Alberto Alesina demonstrated that roughly half the difference in social welfare spending between the US and Europe — Europe spends far more — can be attributed to the greater ethnic diversity of the US population. Glaeser says lower national social welfare spending in the US is a “macro” version of the decreased civic engagement Putnam found in more diverse communities within the country.

Economists Matthew Kahn of UCLA and Dora Costa of MIT reviewed 15 recent studies in a 2003 paper, all of which linked diversity with lower levels of social capital. Greater ethnic diversity was linked, for example, to lower school funding, census response rates, and trust in others. Kahn and Costa’s own research documented higher desertion rates in the Civil War among Union Army soldiers serving in companies whose soldiers varied more by age, occupation, and birthplace.

Birds of different feathers may sometimes flock together, but they are also less likely to look out for one another. “Everyone is a little self-conscious that this is not politically correct stuff,” says Kahn….

In his paper, Putnam cites the work done by Page and others, and uses it to help frame his conclusion that increasing diversity in America is not only inevitable, but ultimately valuable and enriching. As for smoothing over the divisions that hinder civic engagement, Putnam argues that Americans can help that process along through targeted efforts. He suggests expanding support for English-language instruction and investing in community centers and other places that allow for “meaningful interaction across ethnic lines.”

Some critics have found his prescriptions underwhelming. And in offering ideas for mitigating his findings, Putnam has drawn scorn for stepping out of the role of dispassionate researcher. “You’re just supposed to tell your peers what you found,” says John Leo, senior fellow at the Manhattan Institute, a conservative think tank. [Michael Jonas, “The downside of diversity,” The Boston Globe (boston.com), August 5, 2007]

Why is it that academics like Reich and Putnam can’t bear to face the very facts that they have uncovered? The magic word is “academics”. They are denizens of a milieu in which the facts of life about race, guns, sex, and many other things are habitually suppressed in favor of “hope and change”, and the facts be damned.

ONE MORE BIT OF RACE-RELATED BALDERDASH

I was unaware of the Implicit Association Test (IAT) until a few years ago, when I took a test at YourMorals.Org that purported to measure my implicit racial preferences. The IAT has since been exposed as junk. As John J. Ray puts it:

Psychologists are well aware that people often do not say what they really think.  It is therefore something of a holy grail among them to find ways that WILL detect what people really think. A very popular example of that is the Implicit Associations test (IAT).  It supposedly measures racist thoughts whether you are aware of them or not.  It sometimes shows people who think they are anti-racist to be in fact secretly racist.

I dismissed it as a heap of junk long ago (here and here) but it has remained very popular and is widely accepted as revealing truth.  I am therefore pleased that a very long and thorough article has just appeared which comes to the same conclusion that I did.

The article in question (which has the same title as Ray’s post) is by Jesse Singal. It appeared at Science of Us on January 11, 2017. Here are some excerpts:

Perhaps no new concept from the world of academic psychology has taken hold of the public imagination more quickly and profoundly in the 21st century than implicit bias — that is, forms of bias which operate beyond the conscious awareness of individuals. That’s in large part due to the blockbuster success of the so-called implicit association test, which purports to offer a quick, easy way to measure how implicitly biased individual people are….

Since the IAT was first introduced almost 20 years ago, its architects, as well as the countless researchers and commentators who have enthusiastically embraced it, have offered it as a way to reveal to test-takers what amounts to a deep, dark secret about who they are: They may not feel racist, but in fact, the test shows that in a variety of intergroup settings, they will act racist….

[The] co-creators are Mahzarin Banaji, currently the chair of Harvard University’s psychology department, and Anthony Greenwald, a highly regarded social psychology researcher at the University of Washington. The duo introduced the test to the world at a 1998 press conference in Seattle — the accompanying press release noted that they had collected data suggesting that 90–95 percent of Americans harbored the “roots of unconscious prejudice.” The public immediately took notice: Since then, the IAT has been mostly treated as a revolutionary, revelatory piece of technology, garnering overwhelmingly positive media coverage….

Maybe the biggest driver of the IAT’s popularity and visibility, though, is the fact that anyone can take the test on the Project Implicit website, which launched shortly after the test was unveiled and which is hosted by Harvard University. The test’s architects reported that, by October 2015, more than 17 million individual test sessions had been completed on the website. As will become clear, learning one’s IAT results is, for many people, a very big deal that changes how they view themselves and their place in the world.

Given all this excitement, it might feel safe to assume that the IAT really does measure people’s propensity to commit real-world acts of implicit bias against marginalized groups, and that it does so in a dependable, clearly understood way….

Unfortunately, none of that is true. A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior — even the test’s creators have now admitted as such.

How does the IAT work? Singal summarizes:

You sit down at a computer where you are shown a series of images and/or words. First, you’re instructed to hit ‘i’ when you see a “good” term like pleasant, or to hit ‘e’ when you see a “bad” one like tragedy. Then, hit ‘i’ when you see a black face, and hit ‘e’ when you see a white one. Easy enough, but soon things get slightly more complex: Hit ‘i’ when you see a good word or an image of a black person, and ‘e’ when you see a bad word or an image of a white person. Then the categories flip to black/bad and white/good. As you peck away at the keyboard, the computer measures your reaction times, which it plugs into an algorithm. That algorithm, in turn, generates your score.

If you were quicker to associate good words with white faces than good words with black faces, and/or slower to associate bad words with white faces than bad words with black ones, then the test will report that you have a slight, moderate, or strong “preference for white faces over black faces,” or some similar language. You might also find you have an anti-white bias, though that is significantly less common. By the normal scoring conventions of the test, positive scores indicate bias against the out-group, while negative ones indicate bias against the in-group.

The rough idea is that, as humans, we have an easier time connecting concepts that are already tightly linked in our brains, and a tougher time connecting concepts that aren’t. The longer it takes to connect “black” and “good” relative to “white” and “good,” the thinking goes, the more your unconscious biases favor white people over black people.
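
To make those mechanics concrete, here is a toy scorer in Python. It sketches the principle Singal describes (a gap in mean reaction times, scaled by their variability); it is not Project Implicit’s actual algorithm, which uses a more elaborate statistic, and the latencies are hypothetical:

    import statistics

    def toy_iat_score(congruent_ms, incongruent_ms):
        # Gap between mean reaction times on the two pairings, scaled by the
        # pooled standard deviation of all trials. Positive = slower on the
        # "black/good" pairing, which the test reads as an implicit
        # preference for white faces.
        gap = statistics.mean(incongruent_ms) - statistics.mean(congruent_ms)
        pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
        return gap / pooled_sd

    # Hypothetical latencies in milliseconds.
    white_good_block = [620, 650, 600, 640]   # white/good + black/bad pairing
    black_good_block = [700, 760, 720, 740]   # black/good + white/bad pairing
    print(round(toy_iat_score(white_good_block, black_good_block), 2))  # 1.73

Note what the arithmetic rewards: a respondent whose reactions are uniformly fast and consistent compresses both the gap and the score, a point that will matter shortly.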

Singal continues (at great length) to pile up the mountain of evidence against IAT, and to caution against reading anything into the results it yields.

Having become aware of the debunking of the IAT, I went to the website of Project Implicit. When I reached this page, I was surprised to learn that I could not only find out whether I’m a closet racist but also whether I prefer dark or light skin tones, Asians or non-Asians, Trump or a previous president, and several other things or their opposites. I chose to discover my true feelings about Trump vs. a previous president, and was faced with a choice between Trump and Clinton.

What was the result of my several minutes of tapping “e” and “i” on the keyboard of my PC? This:

Your data suggest a moderate automatic preference for Bill Clinton over Donald Trump.

Balderdash! Trump is obviously not of better character than Clinton, but neither is he of worse character. And as far as policy goes, the difference between Trump and Clinton is somewhat like the difference between a non-silent Calvin Coolidge and an FDR without the patriotism. (With apologies to the memory of Coolidge, my favorite president.)

What did I learn from the IAT? I must have very good reflexes. A person who processes information rapidly and then almost instantly translates it into a physical response should be able to “beat” the IAT. And that’s probably what I did in the Trump vs. Clinton test.

Perhaps the IAT for racism could be used to screen candidates for fighter-pilot training. Only “non-racists” would be admitted. Anyone who isn’t quick enough to avoid the “racist” label isn’t quick enough to win a dogfight.

OTHER “LIBERAL” DELUSIONS

There are plenty of them under the heading of balderdash. It’s also known as magical thinking, in which “ought” becomes “is” and the forces of nature and human nature can be held in abeyance by edict. The following examples revisit some ground already covered here:

  • Men are unnecessary.
  • Women can do everything that men can do, but it doesn’t work the other way … just because.
  • Mothers can work outside the home without damage to their children.
  • Race is a “social construct”; there is no such thing as intelligence; women and men are mentally and physically equal in all respects; and the under-representation of women and blacks in certain fields is therefore due to rank discrimination (but it’s all right if blacks dominate certain sports and women now far outnumber men on college campuses).
  • A minimum wage can be imposed without an increase in unemployment, a “fact” which can be “proven” only by concocting special cases of limited applicability.
  • Taxes can be raised without discouraging investment and therefore reducing the rate of economic growth.
  • Regulation doesn’t reduce the rate of economic growth or foster “crony capitalism”. There can be “free lunches” all around.
  • Health insurance premiums will go down while the number of mandates is increased.
  • The economy can be stimulated through the action of the Keynesian multiplier, which is nothing but phony math.
  • “Green” programs create jobs (but only because they are inefficient).
  • Every “right” under the sun can be granted without cost (e.g., affirmative action racial-hiring quotas, which penalize blameless whites; the Social Security Ponzi scheme, which burdens today’s workers and cuts into growth-inducing saving).

There’s much more in a different vein here.

BALDERDASH AS EUPHEMISTIC THINKING

Balderdash, as I have sampled it here, isn’t just nonsense — it’s nonsense in the service of an agenda. The agenda is too often the expansion of government power. Those who favor the expansion of government power don’t like to think that it hurts people. (“We’re from the government and we’re here to help.”) This is a refusal to face facts, which is amply if not exhaustively illustrated in the preceding entries.

But there’s a lot more where that comes from; for example:

  • Crippled became handicapped, which became disabled and then differently abled or something-challenged.
  • Stupid became learning disabled, which became special needs (a euphemistic category that houses more than the stupid).
  • Poor became underprivileged, which became economically disadvantaged, which became (though isn’t overtly called) entitled (as in entitled to other people’s money).
  • Colored persons became Negroes, who became blacks, then African-Americans, and now (often) persons of color.

Why do lefties — lovers of big government — persist in varnishing the truth? They are — they insist — strong supporters of science, which is (ideally) the pursuit of truth. Well, that’s because they aren’t really supporters of science (witness their devotion to the “unsettled” science of AGW, among many fabrications). Nor do they really want the truth. They simply want to portray the world as they would like it to be, or to lie about it so that they can strive to reshape it to their liking.

BALDERDASH IN THE SERVICE OF SLAVERY, MODERN STYLE

I will end with this one, which is less conclusive than what has gone before, but which further illustrates the left’s penchant for evading reality in the service of growing government.

Thomas Nagel writes:

Some would describe taxation as a form of theft and conscription as a form of slavery — in fact some would prefer to describe taxation as slavery too, or at least as forced labor. Much might be said against these descriptions, but that is beside the point. For within proper limits, such practices when engaged in by governments are acceptable, whatever they are called. If someone with an income of $2,000 a year trains a gun on someone with an income of $100,000 a year and makes him hand over his wallet, that is robbery. If the federal government withholds a portion of the second person’s salary (enforcing the laws against tax evasion with threats of imprisonment under armed guard) and gives some of it to the first person in the form of welfare payments, food stamps, or free health care, that is taxation. In the first case it is (in my opinion) an impermissible use of coercive means to achieve a worthwhile end. In the second case the means are legitimate, because they are impersonally imposed by an institution designed to promote certain results. Such general methods of distribution are preferable to theft as a form of private initiative and also to individual charity. This is true not only for reasons of fairness and efficiency, but also because both theft and charity are disturbances of the relations (or lack of them) between individuals and involve their individual wills in a way that an automatic, officially imposed system of taxation does not. [Mortal Questions, “Ruthlessness in Public Life,” pp. 87-88]

How many logical and epistemic errors can a supposedly brilliant philosopher make in one (long) paragraph? Too many:

  • “For within proper limits” means that Nagel is about to beg the question by shaping an answer that fits his idea of proper limits.
  • Nagel then asserts that the use by government of coercive means to achieve the same end as robbery is “legitimate, because [those means] are impersonally imposed by an institution designed to promote certain results.” Balderdash! Nagel’s vision of government as some kind of omniscient, benevolent arbiter is completely at odds with reality.  The “certain results” (redistribution of income) are achieved by functionaries, armed or backed with the force of arms, who themselves share in the spoils of coercive redistribution. Those functionaries act under the authority of bare majorities of elected representatives, who are chosen by bare majorities of voters. And those bare majorities are themselves coalitions of interested parties — hopeful beneficiaries of redistributionist policies, government employees, government contractors, and arrogant statists — who believe, without justification, that forced redistribution is a proper function of government.
  • On the last point, Nagel ignores the sordid history of the unconstitutional expansion of the powers of government. Without justification, he aligns himself with proponents of the “living Constitution.”
  • Nagel’s moral obtuseness is fully revealed when he equates coercive redistribution with “fairness and efficiency,” as if property rights and liberty were of no account.
  • The idea that coercive redistribution fosters efficiency is laughable. It does quite the opposite because it removes resources from productive uses — including job-creating investments. The poor are harmed by coercive redistribution because it drastically curtails economic growth, from which they would benefit as job-holders and (where necessary) recipients of private charity (the resources for which would be vastly greater in the absence of coercive redistribution).
  • Finally (though not exhaustively), Nagel’s characterization of private charity as a “disturbance of the relations … among individuals” is so wrong-headed that it leaves me dumbstruck. Private charity arises from real relations among individuals — from a sense of community and feelings of empathy. It is the “automatic, officially imposed system of taxation” that distorts and thwarts (“disturbs”) the social fabric.

In any event, taxation for the purpose of redistribution is slavery: the subjection of one person to others, namely, agents of the government and the recipients of the taxes extracted from the person who pays them under threat of punishment. It’s slavery without whips and chains, but slavery nevertheless.

REVISITING THE LAFFER CURVE

Among the claims made in favor of the Tax Cuts and Jobs Act of 2017 was that the resulting tax cuts would pay for themselves. Thus the Laffer curve returned briefly to prominence, after having been deployed to support the Reagan and Bush tax cuts of 1981 and 2001.

The idea behind the Laffer curve is straightforward. Taxes inhibit economic activity, that is, the generation of output and income. Tax-rate reductions therefore encourage work, which yields higher incomes. Higher incomes mean that there is more saving from which to finance growth-producing capital investment. Lower tax rates also make investment more attractive by increasing the expected return on capital investments. Lower tax rates therefore stimulate economic output by encouraging work and investment (supply-side economics). Under the right conditions, lower tax rates may generate enough additional income to yield an increase in tax revenue.

I believe that there are conditions under which the Laffer curve works as advertised. But so what? The Laffer curve focuses attention on the wrong economic variable: tax revenue. The economic variables that really matter — or that should matter — are the real rate of growth and the income available to Americans after taxes. More (real) economic growth means higher (real) income, across the board. More government spending means lower (real) income; the Keynesian multiplier is a cruel myth.

A new Laffer curve is in order, one that focuses on the effects of taxation on economic growth, and thus on the aggregate output of products and services available to consumers.

Let us begin at the beginning, with this depiction of the Laffer curve (via Forbes):

This is an unusually sophisticated depiction of the curve, in that it shows a growth-maximizing tax rate which is lower than the revenue-maximizing rate. It also shows that the growth-maximizing rate is greater than zero, for a good reason.

With real taxes (i.e., government spending) at zero or close to it, the rule of law would break down and the economy would be a shambles. But government spending above that required to maintain the rule of law (i.e., adequate policing, administration of justice, and national defense) interferes with the efficient operation of markets, both directly (by pulling resources out of productive use) and indirectly (by burdensome regulation financed by taxes).

Thus a tax rate higher than that required to sustain the rule of law¹ leads to a reduction in the rate of (real) economic growth because of disincentives to work and invest. A reduction in the rate of growth pushes GDP below its potential level. Further, the effect is cumulative. A reduction in GDP means a reduction in investment, which means a reduction in future GDP, and on and on.

I will quantify the Laffer curve in two steps. First, I will estimate the tax rate at which revenue is maximized, taking the simplistic view that changes in the tax rate do not change the rate of economic growth. I will draw on Christina D. Romer and David H. Romer’s “The Macroeconomic Effects of Tax Changes: Estimates Based on a New Measure of Fiscal Shocks” (American Economic Review, June 2010, pp. 763-801).

The Romers estimate the effects of exogenous changes in taxes on GDP. (“Exogenous” meaning tax changes made for reasons other than the current state of the economy, such as cuts aimed at long-run growth, as opposed to tax increases triggered by economic growth.) Here is their key finding:

Figure 4 summarizes the estimates by showing the implied effect of a tax increase of one percent of GDP on the path of real GDP (in logarithms), together with the one-standard-error bands. The effect is steadily down, first slowly and then more rapidly, finally leveling off after ten quarters. The estimated maximum impact is a fall in output of 3.0 percent. This estimate is overwhelmingly significant (t = –3.5). The two-standard-error confidence interval is (–4.7%,–1.3%). In short, tax increases appear to have a very large, sustained, and highly significant negative impact on output. Since most of our exogenous tax changes are in fact reductions, the more intuitive way to express this result is that tax cuts have very large and persistent positive output effects. [pp. 781-2]

The Romers assess the effects of tax cuts over a period of only 12 quarters (3 years). Some of the resulting growth in GDP during that period takes the form of greater spending on capital investments, the payoff from which usually takes more than 3 years to realize. So a tax cut of 1 percent of GDP yields more than a 3-percent rise in GDP over the longer run. But let’s keep it simple and use the relationship obtained by the Romers: a 1-percent tax cut (as a percentage of GDP) results in a 3-percent rise in GDP.

With that number in hand, and knowing the effective tax rate (33 percent of GDP in 2017²), it is then easy to compute the short-run effects of changes in the effective tax rate on GDP, after-tax GDP, and tax revenue:


Effective tax revenue represents the dollar amount extracted from the economy through government spending at the stated percentage of GDP. (Spending includes transfer payments, which take from those who produce and give to those who do not.) Effective tax rate represents the dollar amount extracted from the economy, divided by GDP at the given tax rate. (GDP is based on the Romers’ estimate of the marginal effect of a change in the tax rate.)

It is a coincidence that tax revenue is maximized at the current (2017) effective tax rate of 33 percent. The coincidence occurs because, according to the Romers, every $1 change in tax revenue (or government spending that draws resources from the real economy) yields a $3 change in GDP, at the margin. If the marginal rate of return were lower than 3:1, the revenue-maximizing rate would be greater than 33 percent. If the marginal rate of return were higher than 3:1, the revenue-maximizing rate would be less than 33 percent.
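
The computation is easy to reproduce. Here is a minimal sketch in Python, assuming the linear short-run relationship just described (GDP, indexed to 1.0 at the 2017 rate, changes by 3 points for every 1-point change in the effective tax rate); it is an illustration, not the spreadsheet behind Figure 1:

    BASE_RATE = 0.33   # 2017 effective tax rate (government spending/GDP)
    MULTIPLIER = 3.0   # Romers' estimate: $3 of GDP per $1 of effective taxes

    def gdp(rate):
        # Short-run GDP, indexed to 2017 = 1.0: each percentage-point cut
        # from the 2017 rate adds three percentage points of GDP.
        return 1.0 - MULTIPLIER * (rate - BASE_RATE)

    def revenue(rate):
        # Dollars extracted from the economy, as a share of 2017 GDP.
        return rate * gdp(rate)

    def after_tax_gdp(rate):
        return (1.0 - rate) * gdp(rate)

    for r in (0.13, 0.23, 0.33, 0.43):
        print(f"rate {r:.0%}:  GDP {gdp(r):.2f}  revenue {revenue(r):.3f}  "
              f"after-tax GDP {after_tax_gdp(r):.2f}")

Revenue in this sketch is r(1.99 − 3r), which peaks at r = 1.99/6, or just about 33 percent, while GDP and after-tax GDP rise steadily as the rate falls (at a 13-percent rate, to 1.6 times and about 2.1 times their 2017 values, respectively).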

In any event, the focus on tax revenue is entirely misplaced. What really matters, given that the prosperity of Americans is (or should be) of paramount interest, is GDP and especially after-tax GDP. Both would rise markedly in response to marginal cuts in real taxes (i.e., government spending). Democrats don’t want to hear that, of course, because they want government to decide how Americans spend the money that they earn. The idea that a far richer America would need far less government — subsidies, nanny-state regulations, etc. — frightens them.

It gets better (or worse, if you’re a big-government fan) when looking at the long-run effects of lower government spending on the rate of growth. I am speaking of the Rahn curve, which I estimate here. Holding other things the same, every percentage-point reduction in the real tax rate (government spending as a fraction of GDP) means an increase of 0.35 percentage point in the rate of real GDP growth. This is a long-run relationship because it takes time to convert some of the tax reduction to investment, and then to reap the additional output generated by the additional investment. It also takes time for workers to respond to the incentive of lower taxes by adding to their skills, working harder, and working in more productive jobs.

This graph depicts the long-run effects of changes in the effective tax rate, taking into account changes in the real growth rate from a base of 2.8 percent (the year-over-year rate for the most recent fiscal quarter):

Note that the same real tax revenue would be realized at an effective tax rate of 13 percent of GDP. At that rate, GDP would rise to 2.5 times its 2017 value (instead of 1.6 times as shown in Figure 1), and after-tax GDP would rise to 3.3 times its 2017 value (instead of 2.1 times as shown in Figure 1).
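
The long-run arithmetic can be sketched the same way. The function below encodes the linear Rahn relationship stated above; the 2.5-times GDP figure is taken from the text as given (reproducing it would require the growth horizon behind the graph), so the revenue check simply plugs it in:

    def growth_rate(rate_pct):
        # Real GDP growth in percent per year: 2.8 percent at the 2017
        # effective tax rate of 33 percent, plus 0.35 percentage point for
        # each point of rate reduction (the Rahn-curve estimate cited above).
        return 2.8 + 0.35 * (33.0 - rate_pct)

    print(growth_rate(33.0))   # 2.8 -- the 2017 baseline
    print(growth_rate(13.0))   # 9.8 -- at an effective tax rate of 13 percent

    # Revenue equivalence, using the text's long-run figure of GDP = 2.5
    # times its 2017 value at a 13-percent rate:
    print(0.13 * 2.5)          # 0.325 of 2017 GDP, vs. 0.33 * 1.0 today

The after-tax figure checks out as well: 0.87 × 2.5 is roughly 2.2, which is about 3.3 times the 2017 after-tax value of 0.67.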

The real Laffer curve — the one that people ought to pay attention to — is the Rahn curve. Holding everything else constant, here is the relationship between the real growth rate and the effective tax rate:

[Figure: the real growth rate as a function of the effective tax rate]

At the current effective tax rate — 33 percent of GDP — the economy is limping along at about one-third of its potential growth. That is actually good news, inasmuch as the real growth rate dipped perilously close to 1 percent several times during the Obama administration, even after the official end of the Great Recession.

But it will take many years of spending cuts (relative to GDP, at least) and deregulation to push growth back to where it was in the decades immediately after World War II. Five percent isn’t out of the question.
__________

1. Total government spending, when transfer payments were negligible, amounted to between 5 and 10 percent of GDP between the Civil War and the Great Depression (Series F216-225, “Percent Distribution of National Income or Aggregate Payments, by Industry, in Current Prices: 1869-1968,” in Chapter F, National Income and Wealth, Historical Statistics of the United States, Colonial Times to 1970: Part 1). The cost of an adequate defense is a lot higher than it was in those relatively innocent times. Defense spending now accounts for about 3.5 percent of GDP. An increase to 5 percent wouldn’t render the U.S. invulnerable, but it would do a lot to deter potential adversaries. So at 10 percent of GDP, government spending on policing, the administration of justice, and defense — and nothing else — should be more than adequate to sustain the rule of law.

2. The effective tax rate on GDP in 2017 was 33.4 percent. That number represents total government expenditures (line 37 of BEA Table 3.1) divided by GDP (line 1 of BEA Table 1.1.5). The nominal tax rate on GDP was 30 percent; that is, government receipts (line 34 of BEA Table 3.1) accounted for 30 percent of GDP. (The BEA tables are accessible here.) I use the effective tax rate in this analysis because it truly represents the direct costs levied on the economy by government. (The indirect cost of regulatory activity adds about $2 trillion, bringing the total effective tax to 44 percent.)

MORE EVIDENCE AGAINST COLLEGE FOR EVERYONE

Here’s a datum:

My eldest grandchild is 23 years old. He’s a bright, articulate lad, but far more interested in doing than in reading. He has been working since he graduated from (home) high school, but not without a purpose in mind. Last fall, he enrolled in a course to learn a trade that he has always wanted to pursue. He passed the course with flying colors, quickly got a good job as a result, and from that job moved into the kind of job that he has long sought. He is happy, and I am happy for him.

But that’s not all. His job, though technically demanding, is “blue collar”. When I was his age, freshly equipped with a B.A. and some graduate school, I moved into the world of “white collar” work as an entry-level analyst at a government-sponsored think-tank in the D.C. area. Hot stuff, right?

Well, converting my starting salary to an hourly rate and adjusting it for inflation, I was making just about what my grandson is making now. But since graduating from high school he has been earning and saving money instead of cluttering a college campus. And he owns a pickup truck. When I started at the think-tank, I might have had a few hundred dollars in a checking account. And I couldn’t afford a car until I had worked for several months.

Will my grandson eventually make as much money as I was able to make by feeding at the public trough? Given his ambition and foresight, there’s no reason he can’t make a lot more than I did — and by doing things that people are actually willing to pay for instead of siphoning money from the U.S. Treasury.

College not only isn’t for everyone, it’s for almost no one. As I said seven years ago,

[w]hen I entered college [in 1958], I was among the 28 percent of high-school graduates then attending college. It was evident to me that about half of my college classmates didn’t belong in an institution of higher learning. Despite that, the college-enrollment rate among high-school graduates has since doubled.

Which means that only about one-fourth (or less) of today’s high-school graduates are really college material. That’s not a rap against them. It’s a rap against the insane idea of college for almost everyone. That would be a huge burden on taxpayers, a shameful misdirection of talent, and a massive drain on the economic potential of the nation.


Related posts:
The Higher-Education Bubble
Is College for Everyone?
The Dumbing-Down of Public Schools
College Is for Almost No One
About Those “Underpaid” Teachers

RETHINKING FREE TRADE II

I ended “Rethinking Free Trade” with this:

To put it bluntly but correctly, the national government exists not for the benefit of the people of the whole world or any part of it outside the United States, but for the benefit of the citizens of the United States.

Yes, some Americans benefit from free trade… But not all Americans do. And it is the job of the national government to serve all of the people. A balance needs to be struck. And those who pay the price of free trade … must be compensated in some way.

How and how much? Those are questions that I will grapple with in future posts.

I must first acknowledge some rather good points that I made in “Gains from Trade“, a nine-year-old post in which I address objections to free trade made by Keith Burgess-Jackson (KBJ):

How is “free trade” a “disaster for this country” [as KBJ puts it] when, thanks to the lowering of barriers to trade, but not their abandonment (thus “free trade”), millions of Americans now own better automobiles, electronic gadgets, and other goodies than they had access to before “free trade”? Not only that, but they have been able to purchase those goodies to which they had access before “free trade” at lower real prices than in the days before “free trade.” On top of that, millions of Americans make a better living than they did before “free trade” because of their employment in industries that became stronger or rose up because of “free trade.”…

… KBJ seems to acknowledge as much in a [later] post … , where he gives a bit more ground:

Free trade is efficient, in the sense that it increases (or even maximizes) aggregate material welfare. The key words are “aggregate” and “material.” As for the first of these words, free trade produces losers as well as gainers. The gainers could compensate the losers, but they are not made to do so. I’m concerned about the losers. In other words, I care about justice (how the pie is distributed) as well as efficiency (how big the pie is). As for the second word, there is more to life than material welfare. Free trade has bad effects on valuable nonmaterial things, such as community, culture, tradition, and family. As a conservative, I care very much about these things.

… KBJ focuses on American losers, but there are many, many American gainers from free trade, as discussed above. Are their communities, cultures, traditions, and families of no import to KBJ? It would seem so. On what basis does he prefer some Americans to others?…

… KBJ seems to ignore the fundamental fact of life that human beings try to better their lot in ways that often, and inescapably, result in change….

Perhaps (in KBJ’s view) it was a mistake for early man to have discovered fire-making, which undoubtedly led to new communal alignments, cultural totems, traditions, and even familial relationships. Methinks, in short, that KBJ has been swept away by a kind of self-indulgent romanticism for a past that was not as good as we remember it. (I’ve been there and done that, too.)…

“Free trade” works because there are gains to all participants. If that weren’t the case, Americans wouldn’t buy foreign goods and foreigners wouldn’t buy American goods. Moreover, “free trade” has been a boon to American consumers and workers (though not always the workers KBJ seems to be worried about). To the extent that “wealthy American entrepreneurs” have gained from “free trade,” it’s because they’ve risked their capital to create jobs (in the U.S. and overseas) that have helped people (in the U.S. and overseas) attain higher standards of living. The “worldwide pool of cheap labor” is, in fact, a worldwide pool of willing labor, which earns what it does in accordance with the willingness of Americans (and others) to buy its products….

If “free trade” is such a bad thing, I wonder if KBJ buys anything that’s not made in Texas, where he lives. Trade between the States, after all, is about as “free” as it gets (except when government bans something, of course). Suppose Texas were to be annexed suddenly by Mexico. Would KBJ immediately boycott everything that’s made in the remaining 49 States? Would it have suddenly become unclean?…

Putting an end to “free trade” would make Americans poorer, not richer. And I doubt that it would do anything to halt the natural evolution of “community, culture, tradition, and family” away from the forms sentimentalized by KBJ and toward entirely new but not necessarily inferior forms.

The biggest threat to “community, culture, tradition, and family” lies in the non-evolutionary imposition of new social norms by the Left. That’s where the ire of KBJ and company should be directed.

There are a few chinks in my argument.

First, there will be in the short run (and sometimes even in the long run) a downward shift in the demand for labor in some sectors of the economy due to actions taken by foreign governments. Those actions consist of direct subsidies to industries that export goods to the U.S., and indirect subsidies in the form of tariffs and quotas on goods imported from the U.S.

I have seen “libertarian” economists justify direct subsidies because they benefit American consumers. (The same economists are glaringly silent about the disbenefits to American workers whose jobs are lost because of the subsidies.) It is jarring to read justifications of that kind from “libertarians”, who are usually quick to put Americans and foreigners on the same plane; for example, by promoting and praising “open borders” despite considerable disbenefits to some Americans. (I am thinking of those whose neighborhoods are threatened by gangs of illegals. I am also thinking of those who pay higher taxes to subsidize the education, shelter, and sustenance of illegals — but who, unlike more affluent Americans, don’t engage the services of low-priced nannies and yard workers.)

And I must point out that those foreign-government subsidies aren’t free. They’re paid for, one way or another, by the citizens of foreign countries. Why would a “libertarian” transnationalist overlook such a thing? To justify “free trade” I guess.

It’s only fair to note that the U.S. government subsidizes American industries in ways that harm foreigners, that is, through direct subsidies, tariffs on imports, and import quotas. But any gains to workers in the industries thus subsidized do not offset the harm that foreign-government subsidies do to workers in other American industries.

All in all, international trade is a real mess. (So is domestic trade, given the myriad distortions wrought by taxes and regulations.) But it’s fair to say that some American workers are harmed by what can only be called unfair practices in international trade. The harm to them isn’t offset by the gains to other Americans. Only an economist or socialist would think otherwise.

In sum, I have come around to Mr. Trump’s view of this issue. Free trade should be conducted on a level playing field. Given that that won’t happen soon — if ever — what should be done for American workers who are harmed by unfair trade? Stay tuned.

Rethinking Free Trade

I have long supported free trade as beneficial. But I have also long derided utilitarianism, which is the doctrinal basis for claiming that free trade is beneficial. And I have long opposed the idea of open borders, in part because of the utilitarian claims of its supporters. It is time for me to resolve these contradictions.

Which way should I go? Should I sustain my anti-utilitarian position and oppose free trade as well as open borders? Or should I become a consistent utilitarian and support both free trade and open borders?

A digression about utilitarianism is in order. Utilitarianism, in this context, implies a belief in an aggregate social-welfare function (SWF) — a mystical summing of the states of happiness (or unhappiness) of myriad persons over an infinite series of points in time. It is the aim of utilitarians (who are mainly leftists and economists, though the categories overlap) to push SWF upward, toward (imaginary) collective nirvana. In so doing, the utilitarian makes himself the judge of whether an increase in A’s happiness at the expense of B (e.g., income redistribution) will result in an increase or decrease in SWF. An argument for this presumption (which is familiar mainly to economists) is based on the hypothesis of diminishing marginal utility (DMU) — a hypothesis that I have refuted at length. Suffice it to say that if A gains pleasure by poking B in the eye, no one — not even a Ph.D. economist — can prove that A’s pleasure outweighs B’s pain. In fact, common sense — which is embedded in eons of tradition — tells us that the act that brings pleasure to A should be punished precisely because of the way in which that pleasure is gained.

How does all of that pertain to free trade and open borders? Like this: Economists defend free trade and open borders because, in the aggregate, such things — in the long run — lead to greater economic efficiency and thus to greater total output (measured in constant dollars). And they are right about that. I have no doubt of it. But, to paraphrase John Maynard Keynes, in the long run we are all dead, and in the meantime some of us pay for the betterment of others.

Moreover, there are economists and others who like to conjoin the economic truth about the long-run consequences of free trade and open borders with statements about liberty: People ought to be free to exchange goods and services voluntarily. People ought to be free to live where they like.

Only a jejune anarchist will take such pronouncements as absolutes. Murder for hire is almost universally disapproved, as are many other crimes, even in this “enlightened” age. And I am unaware of a movement among affluent leftists to open their living rooms to the homeless, or to repeal laws against trespass.

The question is, as always, where to strike a balance between the interests of those who benefit from free trade and open borders, and the interests of those for whom such things mean loss of income or higher taxes. How do the gains that accrue to some (e.g., less-expensive Lexi and abundant, low-priced nanny services) offset the burdens borne by working-class taxpayers whose jobs move overseas and whose school taxes rise to cover the costs of educating migrant children?

I ask these questions in connection with a broader issue: the purpose of our national government. It exists precisely for the reasons stated in the Preamble to the Constitution:

We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.

To put it bluntly but correctly, the national government exists not for the benefit of the people of the whole world or any part of it outside the United States, but for the benefit of the citizens of the United States.

Yes, some Americans benefit from free trade, and some Americans benefit from massive immigration. But not all Americans do. And it is the job of the national government to serve all of the people. A balance needs to be struck. And those who pay the price of free trade and massive immigration must be compensated in some way.

How and how much? Those are questions that I will grapple with in future posts.


Related posts:
Liberalism and Sovereignty
Utilitarianism, “Liberalism,” and Omniscience
Gains from Trade
Utilitarianism vs. Liberty
Diminishing Marginal Utility and the Redistributive Urge
Utilitarianism vs. Liberty (II)
Not-So-Random Thoughts (XVIII) – third item
Prosperity Isn’t Everything

College for Almost No One

Bryan Caplan, with whom I often disagree, is quite right about this:

From kindergarten on, students spend thousands of hours studying subjects irrelevant to the modern labor market. Why do English classes focus on literature and poetry instead of business and technical writing? Why do advanced-math classes bother with proofs almost no student can follow? When will the typical student use history? Trigonometry? Art? Music? Physics? Latin? The class clown who snarks “What does this have to do with real life?” is onto something.

The disconnect between college curricula and the job market has a banal explanation: Educators teach what they know—and most have as little firsthand knowledge of the modern workplace as I do. Yet this merely complicates the puzzle. If schools aim to boost students’ future income by teaching job skills, why do they entrust students’ education to people so detached from the real world? Because, despite the chasm between what students learn and what workers do, academic success is a strong signal of worker productivity….

The labor market doesn’t pay you for the useless subjects you master; it pays you for the preexisting traits you signal by mastering them….

Lest I be misinterpreted, I emphatically affirm that education confers some marketable skills, namely literacy and numeracy. Nonetheless, I believe that signaling accounts for at least half of college’s financial reward, and probably more.

Most of the salary payoff for college comes from crossing the graduation finish line….

Indeed, in the average study, senior year of college brings more than twice the pay increase of freshman, sophomore, and junior years combined. Unless colleges delay job training until the very end, signaling is practically the only explanation. This in turn implies a mountain of wasted resources—time and money that would be better spent preparing students for the jobs they’re likely to do….

In 2003, the United States Department of Education gave about 18,000 Americans the National Assessment of Adult Literacy. The ignorance it revealed is mind-numbing. Fewer than a third of college graduates received a composite score of “proficient”—and about a fifth were at the “basic” or “below basic” level….

Of course, college students aren’t supposed to just download facts; they’re supposed to learn how to think in real life. How do they fare on this count? The most focused study of education’s effect on applied reasoning, conducted by Harvard’s David Perkins in the mid-1980s, assessed students’ oral responses to questions designed to measure informal reasoning, such as “Would a proposed law in Massachusetts requiring a five-cent deposit on bottles and cans significantly reduce litter?” The benefit of college seemed to be zero: Fourth-year students did no better than first-year students….

… When we look at countries around the world, a year of education appears to raise an individual’s income by 8 to 11 percent. By contrast, increasing education across a country’s population by an average of one year per person raises the national income by only 1 to 3 percent. In other words, education enriches individuals much more than it enriches nations.

How is this possible? Credential inflation: As the average level of education rises, you need more education to convince employers you’re worthy of any specific job….

As credentials proliferate, so do failed efforts to acquire them. Students can and do pay tuition, kill a year, and flunk their finals…. Simply put, the push for broader college education has steered too many students who aren’t cut out for academic success onto the college track.

The college-for-all mentality has fostered neglect of a realistic substitute: vocational education. [“The World Might Be Better Off without College for Everyone”, The Atlantic, January 2018]

Caplan has been preaching this gospel for years. But he’s not the only one.

Katherine Mangu-Ward, writing in The Atlantic almost eight years ago, observed that

the phrase “higher education bubble” is popping up everywhere in recent months. This is thanks (in small part) to President Obama, who announced in his first State of the Union address that “every American will need to get more than a high school diploma.” But Americans have been fetishizing college diplomas for a long time now — Obama just reinforced that message and brought even more cash to the table. College has become a minimum career requirement, a basic human right, and a minimum income guarantee in the eyes of the American public. [“President Obama Is Not Impressed with Your High-School Diploma. Neither Is Wal-Mart.”]

Mangu-Ward is exactly right when she says this:

If we’re going to push every 18-year-old in the country into some kind of higher education, most people will likely be better off in a program that involves logistics and linoleum, rather than ivy and the Iliad.

Vocational training, in other words. Which has languished, even as public schools have been dumbed-down.

Don Lee, writing at about the same time as Mangu-Ward, underscores the over-education — more correctly, the mis-education — of America’s young adults:

[G]overnment surveys indicate that the vast majority of job gains this year have gone to workers with only a high school education or less, casting some doubt on one of the nation’s most deeply held convictions: that a college education is the ticket to the American Dream.

The Bureau of Labor Statistics projects that seven of the 10 employment sectors that will see the largest gains during the next decade won’t require much more than some on-the-job training. These include home health care aides, customer service representatives, and food preparers and servers. Meanwhile, well-paying white-collar jobs, such as computer programming, have become vulnerable to outsourcing to foreign countries.

“People with bachelor’s degrees will increasingly get not very highly satisfactory jobs,” said W. Norton Grubb, a professor at the University of California at Berkeley’s School of Education. “In that sense, people are getting more schooling than jobs are available.”

He noted that in 1970, 77 percent of workers with bachelor’s degrees were employed in professional and managerial occupations. By 2000, that had fallen to 60 percent.

Of the nearly 1 million new jobs created since hiring turned up in January, about half have been temporary census jobs. Most of the rest are concentrated in industries such as retail, hospitality and temporary staffing, according to the Bureau of Labor Statistics. [“Education Loses Its Luster”, reprinted in Akron Beacon Journal, June 21, 2010]

But that’s not news, either. This is from an anonymous piece that ran in The Atlantic almost ten years ago:

America, ever-idealistic, seems wary of the vocational-education track. We are not comfortable limiting anyone’s options. Telling someone that college is not for him seems harsh and classist and British, as though we were sentencing him to a life in the coal mines. I sympathize with this stance; I subscribe to the American ideal. Unfortunately, it is with me and my red pen that that ideal crashes and burns.

Sending everyone under the sun to college is a noble initiative. Academia is all for it, naturally. Industry is all for it; some companies even help with tuition costs. Government is all for it; the truly needy have lots of opportunities for financial aid. The media applauds it—try to imagine someone speaking out against the idea. To oppose such a scheme of inclusion would be positively churlish. But one piece of the puzzle hasn’t been figured into the equation, to use the sort of phrase I encounter in the papers submitted by my English 101 students. The zeitgeist of academic possibility is a great inverted pyramid, and its rather sharp point is poking, uncomfortably, a spot just about midway between my shoulder blades.

For I, who teach these low-level, must-pass, no-multiple-choice-test classes, am the one who ultimately delivers the news to those unfit for college: that they lack the most-basic skills and have no sense of the volume of work required; that they are in some cases barely literate; that they are so bereft of schemata, so dispossessed of contexts in which to place newly acquired knowledge, that every bit of information simply raises more questions. They are not ready for high school, some of them, much less for college. [“In the Basement of the Ivory Tower”, June 2008]

In fact, when I entered college 60 years ago, I was among the 28 percent of high-school graduates then attending college. It was evident to me that about half of my college classmates didn’t belong in an institution of higher learning. Despite that, the college-enrollment rate among high-school graduates has since doubled.

It’s long past time to burst the higher-education bubble. For one thing, it would mean fewer subsidies for the academic enemies of liberty.


Related posts:
School Vouchers and Teachers’ Unions
Whining about Teachers’ Pay: Another Lesson about the Evils of Public Education
I Used to Be Too Smart to Understand This
The Higher-Education Bubble
The Public-School Swindle
Is College for Everyone?
Subsidizing the Enemies of Liberty
A Sideways Glance at Public “Education”
The Dumbing-Down of Public Schools

The Conscience of a Conservative

My heart bleeds for the people of s***hole countries, cities, and neighborhoods. God knows there are enough of the latter two in the U.S. Why is that? Certainly, there are cultural and genetic factors at work. But those have been encouraged and reinforced by governmental acts.

Government — the central government especially — has long been a silent killer of economic opportunity. Jobs are killed by regulations that hinder business formation and expansion, and by every government program that diverts resources from the private sector.

How bad is it? This bad:

Because of increases in the rate of government spending and the issuance of regulations, the real rate of GDP growth has been halved since the end of World War II.

If GDP had continued to grow at an annual rate of 4 percent from its 1946 level of $1.9 trillion (in chained 2009 dollars), it would have reached $30 trillion in 2016 instead of $17 trillion.

Given the relationship between employment and real GDP, the cost of government policies is huge. There could now be as many as 207 million employed Americans instead of the current number of 156 million*, were it not for the “helpful” big-government policies foisted on hapless Americans by “compassionate” leftist do-gooders (and not a few dupes in the center and on the right).

My heart bleeds.


* The relationship between employment and real GDP is as follows:

E = 1204.8 × Y^0.4991

where
E = employment in thousands
Y = real GDP in billions of chained 2009 dollars.

This estimate is based on employment and GDP values for 1948 through 2016, which are available here and here.

An increase in employment from 156 million to 207 million would raise the employment-population ratio from 60 percent to 80 percent, which is well above the post-World War II peak of 65 percent. The real limit is undoubtedly higher than 65 percent, but probably less than 80 percent. In any event, the impoverishing effect of big government is real and huge.
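
As a quick check on the arithmetic, here is a minimal sketch in Python. It uses only the figures quoted above and the fitted relationship given in this footnote; it illustrates the calculation, not the original estimation.

# Check the GDP and employment arithmetic cited above.
# Assumes the footnote's fitted relationship E = 1204.8 * Y**0.4991,
# with E in thousands of workers and Y in billions of chained 2009 dollars.

def employment(real_gdp_billions):
    # Employment (thousands) implied by the fitted power law.
    return 1204.8 * real_gdp_billions ** 0.4991

# $1.9 trillion (1946) compounded at 4 percent for 70 years:
hypothetical_2016_gdp = 1_900 * 1.04 ** 70
print(round(hypothetical_2016_gdp))    # ~29,600 billions, i.e., roughly $30 trillion

print(round(employment(17_000)))       # ~155,700 thousand, i.e., roughly 156 million
print(round(employment(30_000)))       # ~206,800 thousand, i.e., roughly 207 million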

“Capitalism” Is a Dirty Word

Dyspepsia Generation points to a piece at reason.com, which explains that capitalism is a Marxist coinage. In fact, capitalism

is what the Dutch call a geuzennaam—a word assigned by one’s sneering enemies, such as Quaker or Tory or Whig, but later adopted proudly by the victims themselves.

I have long viewed it that way. Capitalism conjures the greedy, coupon-clipping fat cat of Monopoly.

Thus did a board game that vaulted to popularity during the Great Depression signify the identification of capitalism with another “bad thing”: monopoly. And, more recently, capitalism has been conjoined with yet another “bad thing”: income inequality.

In fact, capitalism

is a misnomer for the system of free markets that could deliver abundant prosperity and happiness, were markets left free. Free does not mean unfettered; competition for the favor of consumers exerts strong discipline on markets. And laws against theft, deception, and fraud would serve amply to keep markets honest, the worrying classes to the contrary notwithstanding.

What the defenders of capitalism are defending — or should be — is voluntary, market-based exchange. It doesn’t roll off the tongue, but that’s no excuse for continuing to use a Marxist smear-word for the best of all possible economic systems.


Related posts:
More Commandments of Economics (#13 and #19)
Monopoly and the General Welfare
Monopoly: Private Is Better than Public
Some Inconvenient Facts about Income Inequality
Mass (Economic) Hysteria: Income Inequality and Related Themes
Income Inequality and Economic Growth
A Case for Redistribution, Not Made
McCloskey on Piketty
Nature, Nurture, and Inequality
Diminishing Marginal Utility and the Redistributive Urge
Capitalism, Competition, Prosperity, and Happiness
Economic Mobility Is Alive and Well in America
The Essence of Economics
“Rent” Is Indispensable

“Rent” Is Indispensable

Economic rent, which economists simply call “rent”, has nothing to do with the monthly fee that you might pay a landlord in exchange for the use of a dwelling owned by him. Economic rent

means the payment to a factor of production in excess of what is required to keep that factor in its present use. So, for example, if I am paid $150,000 in my current job but I would stay in that job for any salary over $130,000, I am making $20,000 in rent.

The quotation comes from David Henderson’s article on rent-seeking. Henderson continues:

What is wrong with rent seeking? Absolutely nothing. I would be rent seeking if I asked for a raise. My employer would then be free to decide if my services are worth it. Even though I am seeking rents by asking for a raise, this is not what economists mean by “rent seeking.” They use the term to describe people’s lobbying of government to give them special privileges. A much better term is “privilege seeking.”

With that crucial distinction in mind, consider the firm that makes millions of dollars in “rent” because it was the first (and still only or dominant) producer of a gee-whiz widget. The prospect of making “rent” is one of the things that causes inventors, innovators, and entrepreneurs to risk their time and money in devising and bringing to market new and improved products and processes.

The role of “rent” in economic progress has long been understood. The Framers of the Constitution clearly understood it. This is one of the enumerated powers of Congress, from Article I, Section 8, of the Constitution:

To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries

The extension of the life of patents and copyrights over the years, and the misuse of patents to block competition, are examples of “privilege-seeking”. It is probably the case that patent and copyright protections have been extended well beyond what is needed to incentivize invention, innovation, and entrepreneurship.

But let us not throw out the baby with the bath water. The prospect of “rent” is vital to economic progress. “Rent” is good; “privilege” is bad. The trick is to reduce or eliminate the latter without sacrificing the former.

Why I Am Anti-Union

Schadenfreude. That was my reaction to a recent piece by Rick Moran:

A week ago, employees at the Gothamist and DNAinfo were celebrating after a successful vote to join the Writers Guild. The Gothamist is a noted New York City website that is devoted to covering local news. They operate affiliated sites in Washington, D.C., Chicago, Los Angeles and San Francisco.

But the reporters’ celebration was short-lived. Yesterday, the publisher of the Gothamist and its parent website, DNAinfo, informed all employees that he was shutting them down.

Joe Ricketts, who founded Ameritrade, made the announcement in an email….

The question isn’t whether Ricketts was justified in pulling the plug. The question is why employees thought the outcome would be any different.

Many unions in 21st century America offer a fantasy. Simple arithmetic would show even the dumbest worker that the dream doesn’t add up and that reality wins in the end. The best example of that is the drive for a $15 an hour minimum wage. When you disconnect the cost of labor from the value of labor, the numbers don’t add up and companies end up losing money or, at best, dramatically reducing profits. In the real world, radically increasing the cost of labor means increasing the price of the products sold. In the case of fast food restaurants, that means a reduction in traffic leading to fewer customers and less profit.

But organized labor has sold the idea that there are no consequences to raising the minimum wage to $15 an hour. We are already seeing franchises moving rapidly to automate their operations as much as possible and reduce the number of employees — leading to job losses and, just as importantly, fewer jobs created.

The Gothamist employees are shocked, shocked I say, to find out that a news website doesn’t make money and even a billionaire can tolerate losing only so much money before throwing in the towel. Unionizing also brings other headaches that are intolerable to management unless they are gluttons for punishment.

The writing was on the wall for these employees, but they were blinded by their own delusions and naivete. [“Publisher Shutters Websites after Journalists Unionize”, American Thinker, November 3, 2017]

Schadenfreude because I have been anti-union for 60 years.

It began while I was in high school. I had a part-time job bagging groceries at a supermarket. The supermarket was unionized, as was the norm in my home State of Michigan. My wage rate was set by a contract between the supermarket chain and the union, which I had to join as a condition of employment.

After I had been on the job for several months, the manager of the supermarket added shelf-stocking to my duties. According to the union contract, I should have received a raise for doing something other than bagging groceries, which was the lowest-paid job in the store. I complained to the manager about my wage rate. He fired me. The head of the local union couldn’t be bothered to defend me because he knew that I was going off to college in the fall. And so I received no benefit for paying union dues out of my measly earnings.

Did I owe those measly earnings to the unionization of the store? I doubt it. I was a fast and effective bagger, unlike the baggers who work where my wife does most of her grocery shopping. I filled a grocery bag so that it wasn’t too heavy or too light; I put the heavy items on the bottom and the crushable items on top; I separated produce and frozen foods from soap and other scented items; etc. Given my superior skill as a bagger, the effect of the union contract was to penalize me and transfer some of my earnings to the less efficient baggers who worked with me.

That’s a good enough reason to be anti-union. But there are other reasons, having to do with freedom of association and freedom of contract.

I have nothing against the formation of a union, in principle. The formation of a union as a voluntary organization is an exercise of the unenumerated right of freedom of association, which is contemplated in the Ninth Amendment to the Constitution of the United States. By the same token, when labor laws force a person to join a union, that person’s constitutional right to freedom of association is violated.

Moreover, such laws violate the freedom of contract guaranteed in Article I, Section 10, of the Constitution of the United States. The violation impinges on the right of the individual worker to negotiate with an employer. By the same token, the violation impinges on the right of the employer to negotiate with each of his employees, taking into account their particular skills and performance.

Further, forced unionization impinges on the employer’s unenumerated constitutional right to the lawful use of his business property.

There is also the effect of unionization on employment. If the contracted wage rate is set below the wage rate that would obtain in the absence of unionization, workers (or many of them) are underpaid. In the more typical case, where a union strives to set a wage rate higher than the market-clearing rate, employers hire fewer workers than they would absent unionization. (There’s an obvious parallel with the minimum wage.)
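
The price-floor logic is easy to see with a toy example. Here is a minimal sketch, assuming a hypothetical linear demand curve for labor; the numbers are purely illustrative, not estimates of any actual market.

# Toy illustration of a contract wage set above the market-clearing rate.
# The linear demand curve and all numbers are hypothetical.

def workers_demanded(hourly_wage, intercept=1000.0, slope=40.0):
    # Quantity of labor demanded: Q = intercept - slope * wage, floored at zero.
    return max(intercept - slope * hourly_wage, 0.0)

market_wage = 10.0   # assumed market-clearing wage
union_wage = 15.0    # assumed contract wage above it

print(workers_demanded(market_wage))   # 600.0 workers hired
print(workers_demanded(union_wage))    # 400.0 workers hired at the higher wage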

I am always gladdened when I read that labor-union membership in the United States has declined, not just as a percentage of the labor force, but in absolute numbers. Personal responsibility isn’t dead in the United States, despite the efforts of most of the nation’s politicians and bureaucrats.


Related posts:
Freedom of Contract and the Rise of Judicial Tyranny
The Upside-Down World of Liberalism
The Interest-Group Paradox
Law and Liberty
Negative Rights
Government Failure: An Example
The Left and Its Delusions
Corporations, Unions, and the State
Judicial Supremacy: Judicial Tyranny
Substantive Due Process, Liberty of Contract, and States’ “Police Power”
Why Liberty of Contract Matters
Whiners
Society, Polarization, and Dissent

Thaler on Discounting

This is a companion to “Richard Thaler, Nobel Laureate” and “Thaler’s Non-Revolution in Economics“. See also the long list of related posts at the end of “Richard Thaler, Nobel Laureate”.

Richard Thaler, the newly minted Nobel laureate in economics, has published many papers, including one about discounting as a tool of government decision-making. The paper, “Discounting and Fiscal Constraints: Why Discounting is Always Right”, appeared in August 1979 under the imprimatur of the think-tank where Thaler was a consultant. It was also published in the October 1979 issue of the now-defunct Defense Management Journal (DMJ). Given the lead time for producing a journal, it’s almost certain that there is no substantive difference between the in-house version and the DMJ version. But only the in-house version seems to be available online, so the preceding link leads to it, and the quotations below are taken from it.

The aim of Thaler’s piece is to refute an article in the March 1978 issue of DMJ by Commander Rolf Clark, “Should Defense Managers Discount Future Costs?”. Specifically, Thaler argues against Clark’s conclusion that discounting is irrelevant in a regime of fiscal constraints.*

Clark took the position that a defense manager faced with fiscal constraints should simply choose among alternatives by picking the one with the lowest undiscounted costs. Why? Because the defense manager, unlike a business manager, can’t earn interest by deferring an expenditure and investing the money where it earns interest. To put it another way, deferring an expenditure doesn’t result in a later increase in a defense manager’s budget. Or in the budget of any government manager, for that matter.

Viewed in perspective, the dispute between Thaler and Clark is a tempest in a teaspoon — a debate about how to arrange the deck chairs on the Titanic. Discounting is of little consequence against this backdrop:

  • uncertainty about future threats to U.S. interests (e.g., their sources, the weapons and tactics of potential enemies, and the timing of attacks)
  • uncertainty about the actual effectiveness of U.S. systems and tactics (e.g., see this)
  • uncertainty about the costs of systems, especially those that are still in the early stages of development
  • a panoply of vested interests and institutional constraints that must be satisfied (e.g., a strong Marine Corps “lobby” on Capitol Hill, the long dominance of aviation in the Navy, the need to keep the peace within the services by avoiding drastic changes in any component’s share of the budget)
  • uncertainty about the amounts of money that Congress will actually appropriate, and the specific mandates that Congress will impose on spending (e.g., buy this system, not that one, recruit to a goal of X active-duty personnel in the Air Force, not Y).

But the issue is worth revisiting because it reveals a blind spot in Thaler’s view of decision-making.

Thaler begins his substantive presentation by explaining the purpose of discounting:

A discount rate is simply a shorthand way of defining a firm’s, organization’s, or person’s time value of money. This rate is always determined by opportunity costs. Opportunity costs, in turn, depend on circumstances. Consider the following example: An organization must choose between two projects which yield equal effectiveness (or profits in the case of a firm). Project A will cost $200 this year and nothing thereafter. Project B will cost $205 next year and nothing before or after. Notice that if project B is selected the organization will have an extra $200 to use for a year. Whether project B is preferred simply depends on whether it is worth $5 to the organization to have those $200 to use for a year. That, in turn, depends on what the organization would do with the money. If the money would just sit around for the year, its time value is zero and project A should be chosen. However, if the money were put in a 5 percent savings account, it would earn $10 in the year and thus the organization would gain $5 by selecting project B. [pp. 1-2]

In Thaler’s simplified version of reality, a government decision-maker (manager) faces a choice between two projects that (ostensibly) would be equally effective against a postulated threat, even though their costs would be incurred at different times. Specifically, the manager must choose between project A, at a cost of $200 in year 1, and project B, at a cost of $205 in year 2. Thaler claims that the manager can choose between the two projects by discounting their costs:

A [government] manager . . . cannot earn bank interest on funds withheld for a year. . . .  However, there will generally exist other ways for the manager to “invest” funds which are available. Examples include cost-saving expenditures, conservation measures, and preventive maintenance. These kinds of expenditures, if they have positive rates of return, permit a manager to invest money just as if he were putting the money in a savings account.

. . . Suppose a thorough analysis of cost-saving alternatives reveals that [in year 2] a maintenance project will be required at a cost of $215. Call this project D. Alternatively the project can be done [in year 1] (at the same level of effectiveness) for only $200. Call this project C. All of the options are displayed in table 1.

Table 1. Project costs by year (reconstructed from the text)

Project                        Year 1    Year 2
A (primary alternative)        $200      $0
B (primary alternative)        $0        $205
C (maintenance, done early)    $200      $0
D (maintenance, deferred)      $0        $215

[pp. 3-4]
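
To make the arithmetic concrete, here is a minimal sketch of the comparison Thaler has in mind. The 7.5 percent rate is simply the return implied by doing the maintenance early (project C at $200) rather than late (project D at $215); it is an illustration of his logic, not a prescription.

# Thaler's example in miniature. Choosing B frees $200 in year 1, which can be
# "invested" in project C (early maintenance), avoiding project D's $215 cost
# in year 2. The implied return on that investment:
implied_return = 215 / 200 - 1                   # 0.075, i.e., 7.5 percent

# Year-1 present value of each complete spending plan:
pv_a_plus_d = 200 + 215 / (1 + implied_return)   # do A now, defer the maintenance
pv_b_plus_c = 200 + 205 / (1 + implied_return)   # do the maintenance now, defer B
print(pv_a_plus_d)                               # 400.0
print(round(pv_b_plus_c, 2))                     # 390.7, so the B-plus-C plan is cheaper

# Undiscounted, the totals are $415 vs. $405: choosing B and C saves $10 in year 2.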

Thaler believes that his example clinches the argument for discounting because the choice of project B (an expenditure of $205 in year 2) enables the manager to undertake project C in year 1, and thereby to “save” $10 in year 2. But Thaler’s “proof” is deeply flawed:

  • If a maintenance project is undertaken in year 1, it will pay off sooner than if it is undertaken in year 2 but, by the same token, its benefits will diminish sooner than if it is undertaken in year 2.
  • More generally, different projects cannot, by definition, be equally effective. Projects A and B may be about equally effective by a particular measure, but because they are different they will differ in other respects, and those differences could be crucial in choosing between A and B.
  • Specifically, projects A and B might be equally effective when compared quantitatively in the context of an abstract scenario, but A might be more effective in a non-quantifiable but crucial respect. For example, the earlier expenditure on A might be viewed by a potential enemy as a more compelling deterrent than the later expenditure on B because it would demonstrate more clearly the U.S. government’s willingness and ability to mount a strong defense against the potential enemy. Alternatively, the earlier expenditure on A might cause the enemy to accelerate his own production of weapons or mobilization of troops. These are the kinds of crucial issues that discounting is powerless to illuminate, and may even obscure.
  • For a decision to rest on the use of a particular discount rate, there must be great certainty about the future costs and effectiveness of the alternatives. But there seldom is. The practice of discounting therefore promotes an illusion of certainty — a potentially dangerous illusion, in the case of national defense.
  • Finally, the “correct” discount rate depends on the options available to a particular manager of a particular government activity. Yet Thaler insists on the application of a uniform discount rate by all government managers (p. 6). By Thaler’s own example, such a practice could lead a manager to choose the wrong option.

So even if there is certainty about everything else, there is no “correct” discount rate, and it is presumptuous of Thaler to prescribe one on the assumption that it will fit every defense manager’s particular circumstances.**

Thaler does the same thing when he counsels intervention in personal decisions because too many people — in his view — make irrational decisions.

In the context of personal decision-making — which is the focal point of Thaler’s “libertarian” paternalism — the act of discounting is rational because it serves wealth-maximization. But life isn’t just about maximizing wealth. That’s why some people choose to have a lot of children, when doing so obviously reduces the amount that they can save. That’s why some choose to retire early rather than stay in stressful jobs. Rationality and wealth maximization are two very different things, but a lot of laypersons and too many economists are guilty of equating them.

If wealth-maximization is your goal, just stop drinking, smoking, enjoying good food, paying for entertainment, subscribing to newspapers and magazines, buying books, watering your lawn, mowing the grass, driving your car (except to work if you have no feasible alternative), and on into the night. You will accumulate a lot of money — if you invest wisely (there’s the rub of uncertainty) — but you will live a miserable life, unless you are the rare person who is a true miser.
__________
* If you are unfamiliar with the background of the Clark-Thaler exchange, and the reference to fiscal constraints, here’s the story: Since 1969 the Secretary of Defense has required the military departments to propose multi-year spending programs that are constrained by an explicit ceiling on each year’s spending. Fiscal guidance, as it is called, was lacking before that. But, in reality, defense budgets have always been constrained, ultimately by Congress. Fiscal guidance represents only a rough guess as to the total amount of defense spending that Congress will approve, and a rougher guess about the distribution of that spending among the military departments.

** Thaler’s example of a cost-saving investment is also a stretch, given how government budgets are decided. I gave it a pass in order to make the point that it wouldn’t save Thaler’s argument even if it were realistic. Here’s the missing reality:

Even if the Secretary of Defense (the grand panjandrum of defense managers) makes the kinds of moves counseled by Thaler, and even if his multi-year program sails through the Office of Management and Budget without a scratch, Congress has the final say. And Congress, though it pays attention to the multi-year plans coming from the Executive Branch, still makes annual appropriations. When it does so, it essentially ignores the internal logic of the multi-year plans (assuming that the Defense plan has an internal logic after it has been subjected to Pentagon politics). Instead, Congress divides the defense budget into various spending programs (see the list for national defense, here), and adjusts each program to suit the tastes, preferences, and moods of staffers, committee members, and committee chairmen. Thus it is unlikely that the services’ maintenance and procurement budgets will emerge from Congress as they entered, with cross-temporal tradeoffs intact. A more rational budgeting strategy, from the perspective of the Secretary of Defense, is to submit plans that accord with the known preferences of Congress. Such plans may not incorporate the kind of trivial fine-tuning favored by Thaler, but they will more likely serve the national interest by yielding a robust defense.

Another (Big) Problem with “Nudging”

I’ve written recently about Richard Thaler’s Nobel prize and my objections to his (and Cass Sunstein’s) cheerleading for “nudging”. That’s a polite term for the use of business and government power to get people to make the “right” decisions. (“Right” according to Thaler, at least.) It’s the government part that really bothers me. Ilya Somin of The Volokh Conspiracy is of the same mind:

Thaler and many other behavioral economics scholars argue that government should intervene to protect people against their cognitive biases, by various forms of paternalistic policies. In the best-case scenario, government regulators can “nudge” us into correcting our cognitive errors, thereby enhancing our welfare without significantly curtailing freedom.

But can we trust government to be less prone to cognitive error than the private-sector consumers whose mistakes we want to correct? If not, paternalistic policies might just replace one form of cognitive bias with another, perhaps even worse one. Unfortunately, a recent study suggests that politicians are prone to severe cognitive biases too – especially when they consider ideologically charged issues….

Even when presented additional evidence to help them correct their mistakes, Dahlmann and Petersen found that the politicians tended to double down on their errors rather than admit they might have been wrong….

Politicians aren’t just biased in their evaluation of political issues. Many of them are ignorant, as well. For example, famed political journalist Robert Kaiser found that most members of Congress know little about policy and “both know and care more about politics than about substance.”….

But perhaps voters can incentivize politicians to evaluate evidence more carefully. They can screen out candidates who are biased and ill-informed, and elect knowledgeable and objective decision-makers. Sadly, that is unlikely to happen, because the voters themselves also suffer from massive political ignorance, often being unaware of even very basic facts about public policy.

Of course, the Framers of the Constitution understood all of this in 1787. And they wisely acted on it by placing definite limits on the power of the central government. The removal of those limits, especially during and since the New Deal, is a constitutional tragedy.

Thaler’s Non-Revolution in Economics

James R. Rogers writes about Richard Thaler and behavioral economics:

[M]edia treatments of Thaler’s work, and of behavioral economics more generally, suggest that it provides a much-deserved comeuppance to conventional microeconomics. Well . . . Not quite….

… Economists, and rational choice theorists more generally, have a blind spot, [Thaler] argues, for just how often their assumptions about human behavior are inconsistent with real human behavior. That’s an important point.

Yet here’s where spin matters: Does Thaler provide a correction to previous economics, underscoring something everyone always knew but just ignored as a practical matter, or is Thaler’s work revolutionary, inviting a broad and necessary reconceptualization of standard microeconomics?…

… No. He has built a career by correcting a blind spot in modern academic economics. But his insight provides us with a “well, duh” moment rather than a “we need totally to rewrite modern economics” moment that some of his journalistic (and academic) supporters suggest it provides….

Thaler’s work underscores that the economist’s rationality postulates cannot account for all human behavior. That’s an important point. But I don’t know that many, or even any, economists very much believed the opposite in any serious way. [“Did Richard Thaler Really Shift the Paradigm in Economics?”, Library of Law and Liberty, October 11, 2017]

I have made the same point:

Even in those benighted days when I learned the principles of “micro” — just a few years ahead of Thaler — it was understood that the assumption of rationality was an approximation of the tendency of individuals to try to make themselves better off by making choices that would do so, given their tastes and preferences and the information that they possess at the time or could obtain at a cost commensurate with the value of the decision at hand.

Highly recommended reading: my previous post about Thaler and the many related posts listed at the end of it.

Richard Thaler, Nobel Laureate

I am slightly irked by today’s news of the selection of Richard Thaler as the 2017 Nobel laureate in economics. (It’s actually the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, not one of the original prizes designated in Alfred Nobel’s will.) Granted, Thaler did some praiseworthy and groundbreaking work in behavioral economics, which is nicely summarized in this post by Timothy Taylor.

But Thaler, whom I knew slightly when he was a consultant to the outfit where I worked, gets a lot of pushback when he translates his work into normative prescriptions. He was already semi-famous (or infamous) for his collaboration with Cass Sunstein. Together and separately they propounded “libertarian paternalism”, an obnoxious oxymoron that they abandoned in favor of “nudging”. Thus their book-length epistle to true believers in governmental omniscience, Nudge: Improving Decisions About Health, Wealth, and Happiness.

It would be a vast understatement to say that I disagree with Thaler and Sunstein’s policy prescriptions. I have recorded my disagreements in many posts, which are listed below.

Sunstein at the Volokh Conspiracy
More from Sunstein
Cass Sunstein’s Truly Dangerous Mind
An (Imaginary) Interview with Cass Sunstein
Libertarian Paternalism
A Libertarian Paternalist’s Dream World
Slippery Sunstein
The Short Answer to Libertarian Paternalism
Second-Guessing, Paternalism, Parentalism, and Choice
Another Thought about Libertarian Paternalism
Back-Door Paternalism
Sunstein and Executive Power
Another Voice Against the New Paternalism
The Feds and “Libertarian Paternalism”
A Further Note about “Libertarian” Paternalism
Apropos Paternalism
Beware of Libertarian Paternalists
Discounting and Libertarian Paternalism
The Mind of a Paternalist
The Mind of a Paternalist, Revisited
Another Entry in the Sunstein Saga
The Sunstein Effect Is Alive and Well in the White House
Sunstein the Fatuous
Not-So-Random Thoughts (XVI) – first item
The Perpetual Nudger

Not-So-Random Thoughts (XXI)

An occasional survey of web material that’s related to subjects about which I’ve posted. Links to the other posts in this series may be found at “Favorite Posts,” just below the list of topics.

Fred Reed, in a perceptive post worth reading in its entirety, says this:

Democracy works better the smaller the group practicing it. In a town, people can actually understand the questions of the day. They know what matters to them. Do we build a new school, or expand the existing one? Do we want our children to recite the pledge of allegiance, or don’t we? Reenact the Battle of Antietam? Sing Christmas carols in the town square? We can decide these things. Leave us alone….

Then came the vast empire, the phenomenal increase in the power and reach of the federal government, which really means the Northeast Corridor. The Supreme Court expanded and expanded and expanded the authority of Washington, New York’s store-front operation. The federals now decided what could be taught in the schools, what religious practices could be permitted, what standards employers could use in hiring, who they had to hire. The media coalesced into a small number of corporations, controlled from New York but with national reach….

Tyranny comes easily when those seeking it need only corrupt a single Congress, appoint a single Supreme Court, or control the departments of one executive branch. In a confederation of largely self-governing states, those hungry to domineer would have to suborn fifty congresses. It could not be done. State governments are accessible to the governed. They can be ejected. They are much more likely to be sympathetic to the desires of their constituents since they are of the same culture.

Tyranny is often justified by invoking “the will of the people”, but as I say here:

It is a logical and factual error to apply the collective “we” to Americans, except when referring generally to the citizens of the United States. Other instances of “we” (e.g., “we” won World War II, “we” elected Barack Obama) are fatuous and presumptuous. In the first instance, only a small fraction of Americans still living had a hand in the winning of World War II. In the second instance, Barack Obama was elected by amassing the votes of fewer than 25 percent of the number of Americans living in 2008 and 2012. “We the People” — that stirring phrase from the Constitution’s preamble — was never more hollow than it is today.

Further, the logical and factual error supports the unwarranted view that the growth of government somehow reflects a “national will” or consensus of Americans. Thus, appearances to the contrary (e.g., the adoption and expansion of national “social insurance” schemes, the proliferation of cabinet departments, the growth of the administrative state), a sizable fraction of Americans (perhaps a majority) did not want government to grow to its present size and degree of intrusiveness. And a sizable fraction (perhaps a majority) would still prefer that it shrink in both dimensions. In fact, the growth of government is an artifact of formal and informal arrangements that, in effect, flout the wishes of many (most?) Americans. The growth of government was not and is not the will of “we Americans,” “Americans on the whole,” “Americans in the aggregate,” or any other mythical consensus.


I am pleased to note that my prognosis for Trump’s presidency (as of December 2016) was prescient:

Based on his appointments to date — with the possible exception of Steve Bannon [now gone from the White House] — he seems to be taking a solidly conservative line. He isn’t building a government of bomb-throwers, but rather a government of staunch conservatives who, taken together, have a good chance at rebuilding America’s status in the world while dismantling much of Obama’s egregious “legacy”….

Will Donald Trump be a perfect president, if perfection is measured by adherence to the Constitution? Probably not, but who has been? It now seems likely, however, that Trump will be a far less fascistic president than Barack Obama has been and Hillary Clinton would have been. He will certainly be far less fascistic than the academic thought-police, whose demise cannot come too soon for the sake of liberty.

In sum, Trump’s emerging agenda seems to resemble my own decidedly conservative one.

But anti-Trump hysteria continues unabated, even among so-called conservatives. David Gelernter writes:

Some conservatives have the impression that, by showing off their anti-Trump hostility, they will get the networks and the New York Times to like them. It doesn’t work like that. Although the right reads the left, the left rarely reads the right. Why should it, when the left owns American culture? Nearly every university, newspaper, TV network, Hollywood studio, publisher, education school and museum in the nation. The left wrapped up the culture war two generations ago. Throughout my own adult lifetime, the right has never made one significant move against the liberal culture machine.

David Brooks of The New York Times is one of the (so-called) conservatives who shows off his anti-Trump hostility. Here he is writing about Trump and tribalism:

The Trump story is that good honest Americans are being screwed by aliens. Regular Americans are being oppressed by a snobbish elite that rigs the game in its favor. White Americans are being invaded by immigrants who take their wealth and divide their culture. Normal Americans are threatened by an Islamic radicalism that murders their children.

This is a tribal story. The tribe needs a strong warrior in a hostile world. We need to build walls to keep out illegals, erect barriers to hold off foreign threats, wage endless war on the globalist elites.

Somebody is going to have to arise to point out that this is a deeply wrong and un-American story. The whole point of America is that we are not a tribe. We are a universal nation, founded on universal principles, attracting talented people from across the globe, active across the world on behalf of all people who seek democracy and dignity.

I am unaware that Mr. Trump has anything against talented people. But he rightly has a lot against adding to the welfare rolls and allowing jihadists into the country. As for tribalism — that bugbear of “enlightened” people — here’s where I stand:

There’s a world of difference between these three things:

  1. hating persons who are different because they’re different
  2. fearing persons of a certain type because that type is highly correlated with danger
  3. preferring the company and comfort of persons with whom one has things in common, such as religion, customs, language, moral beliefs, and political preferences.

Number 1 is a symptom of bigotry, of which racism is a subset. Number 2 is a sign of prudence. Number 3 is a symptom of tribalism.

Liberals, who like to accuse others of racism and bigotry, tend to be strong tribalists — as are most people, the world around. Being tribal doesn’t make a person a racist or a bigot, that is, hateful toward persons of a different type. It’s natural (for most people) to trust and help those who live nearest them or are most like them, in customs, religion, language, etc. Persons of different colors and ethnicities usually have different customs, religions, and languages (e.g., black English isn’t General American English), so it’s unsurprising that there’s a tribal gap between most blacks and whites, most Latinos and whites, most Latinos and blacks, and so on.

Tribalism has deep evolutionary-psychological roots in mutual aid and mutual defense. The idea that tribalism can be erased by sitting in a circle, holding hands, and singing Kumbaya — or the equivalent in social-diplomatic posturing — is as fatuous as the idea that all human beings enter this world with blank minds and equal potential. Saying that tribalism is wrong is like saying that breathing and thinking are wrong. It’s a fact of life that can’t be undone without undoing the bonds of mutual trust and respect that are the backbone of a civilized society.

If tribalism is wrong, then most blacks, Latinos, members of other racial and ethnic groups, and liberals are guilty of wrong-doing.

None of this seems to have occurred to Our Miss Brooks (a cultural reference that may be lost on younger readers). But “liberals” — and Brooks is one of them — just don’t get sovereignty.


While we’re on the subject of immigration, consider a study of the effect of immigration on the wages of unskilled workers, which is touted by Timothy Taylor. According to Taylor, the study adduces evidence that

in areas with high levels of low-skill immigration, local firms shift their production processes in a way that uses more low-skilled labor–thus increasing the demand for such labor. In addition, immigrant low-skilled labor has tended to focus on manual tasks, which has enabled native-born low-skilled labor to shift to nonmanual low-skilled tasks, which often pay better.

It’s magical. An influx of non-native low-skilled laborers allows native-born low-skilled laborers to shift to better-paying jobs. If they could have had those better-paying jobs, why didn’t they take them in the first place?

More reasonably, Rick Moran writes about a

Federation for American Immigration Reform report [which] reveals that illegal aliens are costing the U.S. taxpayer $135 billion. That cost includes medical care, education, and law enforcement expenses.

That’s a good argument against untrammeled immigration (legal or illegal). There are plenty more. See, for example, the entry headed “The High Cost of Untrammeled Immigration” at this post.


There’s a fatuous argument that a massive influx of illegal immigrants wouldn’t cause the rate of crime to rise. I’ve disposed of that argument with one of my own, which is supported by numbers. I’ve also dealt with crime in many other posts, including this one, where I say this (and a lot more):

Behavior is shaped by social norms. Those norms once were rooted in the Ten Commandments and time-tested codes of behavior. They weren’t nullified willy-nilly in accordance with the wishes of “activists,” as amplified through the megaphone of the mass media, and made law by the Supreme Court….

But by pecking away at social norms that underlie mutual trust and respect, “liberals” have sundered the fabric of civilization. There is among Americans the greatest degree of mutual enmity (dressed up as political polarization) since the Civil War.

The mutual enmity isn’t just political. It’s also racial, and it shows up as crime. Heather Mac Donald says “Yes, the Ferguson Effect Is Real,” and Paul Mirengoff shows that “Violent Crime Jumped in 2015.” I got to the root of the problem in “Crime Revisited,” to which I’ve added “Amen to That” and “Double Amen.” What is the root of the problem? A certain, violence-prone racial minority, of course, and also under-incarceration (see “Crime Revisited”).

The Ferguson Effect is a good example of where the slippery slope of free-speech absolutism leads. More examples are found in the violent protests in the wake of Donald Trump’s electoral victory. The right “peaceably to assemble, and to petition the Government for a redress of grievances” has become the right to assemble a mob, disrupt the lives of others, destroy the property of others, injure and kill others, and (usually) suffer no consequences for doing so — if you are a leftist or a member of one of the groups patronized by the left, that is.

How real is the Ferguson Effect? Jazz Shaw writes about the rising rate of violent crime:

We’ve already looked at a couple of items from the latest FBI crime report and some of the dark news revealed within. But when you match up some of their numbers with recent historical facts, even more trends become evident. As the Daily Caller reports this week, one disturbing trend can be found by matching up locations recording rising murder rates with the homes of widespread riots and anti-police protests.

As we discussed when looking at the rising murder and violent crime rates, the increases are not homogeneous across the country. Much of the spike in those figures is being driven by the shockingly higher murder numbers in a dozen or so cities. What some analysts are now doing is matching up those hot spots with the locations of the aforementioned anti-police protests. The result? The Ferguson Effect is almost undoubtedly real….

Looking at the areas with steep increases in murder rates … , the dots pretty much connect themselves. It starts with the crime spikes in St. Louis, Baltimore and Chicago. Who is associated with those cities? Michael Brown, Freddie Gray and Laquan McDonald. The first two cities experienced actual riots. While Chicago didn’t get quite that far out of hand, there were weeks of protests and regular disruptions. The next thing they have in common is the local and federal response. Each area, rather than thanking their police for fighting an increasingly dangerous gang-violence situation with limited resources, saw municipal leaders chastising the police for being “too aggressive” or using similar language. Then the federal government, under Barack Obama and his two Attorneys General, piled on, demanding long-term reviews of the police forces in those cities with mandates to clean up the police departments.

Small wonder that under such circumstances, the cops tended to back off considerably from proactive policing, as Heather Mac Donald describes it. Tired of being blamed for problems and not wanting to risk a lawsuit or criminal charges for doing their jobs, cops became more cautious about getting out of the patrol vehicle. And the criminals clearly noticed, becoming more brazen.

The result of such a trend is what we’re seeing in the FBI report. Crime, which had been on the retreat since the crackdown which started in the nineties, is back on the rise.


It is well known that there is a strong, negative relationship between intelligence and crime; that is, crime is more prevalent among persons of low intelligence. This link has an obvious racial dimension. There’s the link between race and crime, and there’s the link between race and intelligence. It’s easy to connect the dots. Unless you’re a “liberal”, of course.

I was reminded of the latter link by two recent posts. One is a reissue by Jared Taylor, which is well worth a re-read, or a first read if it’s new to you. The other, by James Thompson, examines an issue that I took up here, namely the connection between geography and intelligence. Thompson’s essay is more comprehensive than mine. He writes:

[R]esearchers have usually looked at latitude as an indicator of geographic influences. Distance from the Equator is a good predictor of outcomes. Can one do better than this, and include other relevant measures to get a best-fit between human types and their regions of origin?… [T]he work to be considered below…. seeks to create a typology of biomes which may be related to intelligence.

(A biome is “a community of plants and animals that have common characteristics for the environment they exist in. They can be found over a range of continents. Biomes are distinct biological communities that have formed in response to a shared physical climate.”)

Thompson discusses and quotes from the work (slides here), and ends with this:

In summary, the argument that geography affects the development of humans and their civilizations need not be a bone of contention between hereditarian and environmentalist perspectives, so long as environmentalists are willing to agree that long-term habitation in a particular biome could lead to evolutionary changes over generations.

Environment affects heredity, which then (eventually) embodies environmental effects.


Returning to economics, about which I’ve written little of late, I note a post by Scott Winship, in which he addresses the declining labor-force participation rate:

Obama’s Council of Economic Advisers (CEA) makes the argument that the decline in prime-age male labor is a demand-side issue that ought to be addressed through stimulative infrastructure spending, subsidized jobs, wage insurance, and generous safety-net programs. If the CEA is mistaken, however, then these expensive policies may be ineffective or even counterproductive.

The CEA is mistaken—the evidence suggests there has been no significant drop in demand, but rather a change in the labor supply driven by declining interest in work relative to other options.

  • There are several problems with the assumptions and measurements that the CEA uses to build its case for a demand-side explanation for the rise in inactive prime-age men.
  • In spite of conventional wisdom, the prospect for high-wage work for prime-age men has not declined much over time, and may even have improved.
  • Measures of discouraged workers, nonworkers marginally attached to the workforce, part-time workers who wish to work full-time, and prime-age men who have lost their job involuntarily have not risen over time.
  • The health status of prime-age men has not declined over time.
  • More Social Security Disability Insurance claims are being filed for difficult-to-assess conditions than previously.
  • Most inactive men live in households where someone receives government benefits that help to lessen the cost of inactivity.

Or, as I put it here, there is

the lure of incentives to refrain from work, namely, extended unemployment benefits, the relaxation of welfare rules, the aggressive distribution of food stamps, and “free” healthcare for an expanded Medicaid enrollment base and 20-somethings who live in their parents’ basements.


An additional incentive — if adopted in the U.S. — would be a universal basic income (UBI) or basic income guarantee (BIG), which even some libertarians tout, in the naive belief that it would replace other forms of welfare. A recent post by Alberto Mingardi reminded me of UBI/BIG, and invoked Friedrich Hayek — as “libertarian” proponents of UBI/BIG are wont to do. I’ve had my say (here and here, for example). Here’s what I said when I last wrote about it:

The Basic Income Guarantee (BIG), also known as Universal Basic Income (UBI), is the latest fool’s gold of “libertarian” thought. John Cochrane devotes too much time and blog space to the criticism and tweaking of the idea. David Henderson cuts to the chase by pointing out that even a “modest” BIG — $10,000 per adult American per year — would result in “a huge increase in federal spending, a huge increase in tax rates, and a huge increase in the deadweight loss from taxes.”

Aside from the fact that BIG would be a taxpayer-funded welfare program — to which I generally object — it would necessarily add to the already heavy burden on taxpayers, even though it is touted as a substitute for many (all?) extant welfare programs. The problem is that the various programs are aimed at specific recipients (e.g., women with dependent children, families with earned incomes below a certain level). As soon as a specific but “modest” proposal is seriously floated in Congress, various welfare constituencies will find that proposal wanting because their “entitlements” would shrink. A BIG bill would pass muster only if it allowed certain welfare programs to continue, in addition to BIG, or if the value of BIG were raised to a level such that no welfare constituency would be a “loser.”

In sum, regardless of the aims of its proponents — who, ironically, tend to call themselves libertarians — BIG would lead to higher welfare spending and more enrollees in the welfare state.
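Henderson’s arithmetic is easy to check. What follows is a minimal back-of-the-envelope sketch in Python, assuming roughly 250 million adult Americans and, for scale, roughly $4 trillion in annual federal outlays; both figures are my assumptions for illustration, not numbers taken from the posts quoted above.

  # Rough gross cost of a "modest" BIG of $10,000 per adult per year.
  # Assumed inputs (not from the quoted posts): ~250 million U.S. adults;
  # ~$4 trillion in annual federal outlays, used only for scale.
  adults = 250_000_000
  annual_payment = 10_000
  federal_outlays = 4_000_000_000_000

  gross_cost = adults * annual_payment
  print(f"Gross annual cost: ${gross_cost / 1e12:.1f} trillion")          # $2.5 trillion
  print(f"Share of federal outlays: {gross_cost / federal_outlays:.0%}")  # ~62%

On those assumed numbers, and before asking which welfare programs BIG might displace, the gross outlay comes to roughly three-fifths of total federal spending: Henderson’s “huge increase in federal spending” in concrete terms.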


-30-

Politics Trumps Economics

Years ago I was conversing with a hard-core economist, one of the benighted kind who assume that everyone behaves like a wealth-maximizing robot. I observed that even if he were right in his presumption that economic decisions are made rationally and in a way that comports with economic efficiency, government stands in the way of efficiency. In my pithy phrasing: Politics trumps economics.

So even if the impetus for efficiency isn’t blunted by governmental acts (laws, regulations, judicial decrees), those acts nevertheless stand in the way of efficiency, despite clever workarounds. A simple case in point is the minimum wage, which doesn’t merely drive up the wages of some workers; it also ensures that many workers are unemployed in the near term, and that many more will be unemployed in the long term. Yes, the minimum wage causes some employers to substitute capital (e.g., robots) for labor, but they do so only to reduce the bottom-line damage of the minimum wage (at least in the near term). Neither the employer nor the jobless worker is made better off by the employer’s machinations. Thus politics (the urge to regulate) trumps economics (the efficiency-maximizing state of affairs that would otherwise obtain).
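The near-term effect can be put in rough numbers with the textbook first-order approximation: the percentage change in employment is the elasticity of labor demand times the percentage change in the wage. The sketch below, in the same spirit as the one above, is purely illustrative; the elasticity of -0.5 is an assumed value, not an estimate drawn from this essay.

  # Textbook first-order approximation:
  #   %change in employment ~= demand elasticity * %change in wage
  # The elasticity (-0.5) is assumed for illustration only.
  demand_elasticity = -0.5
  wage_increase = 0.10    # a 10% minimum-wage hike

  employment_change = demand_elasticity * wage_increase
  print(f"Approximate change in low-wage employment: {employment_change:.0%}")  # -5%

On those assumed numbers, a 10 percent hike in the minimum wage cuts low-wage employment by about 5 percent in the near term; the long-term toll, as argued above, would be greater.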

I was reminded of my exchange with the economist by a passage in Jean-François Revel’s Last Exit to Utopia: The Survival of Socialism in a Post-Soviet Era:

Karl Jaspers, in his essay on Max Weber, records the following conversation between Weber and Joseph Schumpeter:

The two men met at a Vienna café… Schumpeter indicated how gratified he was by the socialist revolution in Russia. Henceforth socialism would not be just a program on paper — it would have to prove its viability.

To which Weber … replied that Communism at this stage of development in Russia virtually amounted to a crime, and that to take this path would lead to human misery without equal and to a terrible catastrophe.

“That’s exactly what will happen,” agreed Schumpeter, “but what a perfect laboratory experiment.”

“A laboratory in which mountains of corpses will be heaped!” retorted Weber….

This exchange must have occurred at the beginning of the Bolshevik regime, since Max Weber died in 1920. Thus one of the twentieth century’s greatest sociologists and one of its greatest economists were in substantial agreement about Communism: they had no illusions about it and were fully aware of its criminogenic tendencies. On one issue, though, they differed. Schumpeter was still in thrall to a belief that Weber did not share, namely the illusion that the failures and crimes of Communism would serve as a lesson to humanity. [pp. 141-142]

Weber was right, of course. Politics trumps economics because people — especially people in power — cling to counterproductive beliefs, even in the face of evidence that those beliefs are counterproductive. Facts and logic don’t stand a chance against power-lust, magical thinking, virtue-signalling, and the bandwagon effect.


Related posts:
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
A Keynesian Fantasy Land
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Income Inequality and Economic Growth
A Case for Redistribution, Not Made
Ruminations on the Left in America
Academic Ignorance
Superiority
Whiners
A Dose of Reality
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
The Rahn Curve Revisited
Retrospective Virtue-Signalling
Four Kinds of “Liberals”
Leftist Condescension
The Vast Left-Wing Conspiracy
Leftism As Crypto-Fascism: The Google Paradigm
What’s Going On? A Stealth Revolution