The Balderdash Chronicles

Balderdash is nonsense, to put it succinctly. Less succinctly, balderdash is stupid or illogical talk; senseless rubbish. Rather thoroughly, it is

balls, bull, rubbish, shit, rot, crap, garbage, trash, bunk, bullshit, hot air, tosh, waffle, pap, cobblers, bilge, drivel, twaddle, tripe, gibberish, guff, moonshine, claptrap, hogwash, hokum, piffle, poppycock, bosh, eyewash, tommyrot, horsefeathers, or buncombe.

I have encountered innumerable examples of balderdash in my 35 years of full-time work, 14 subsequent years of blogging, and many overlapping years as an observer of the political scene. This essay documents some of the worst balderdash that I have come across.

THE LIMITS OF SCIENCE

Science (or what too often passes for it) generates an inordinate amount of balderdash. Consider an article in The Christian Science Monitor: “Why the Universe Isn’t Supposed to Exist”, which reads in part:

The universe shouldn’t exist — at least according to a new theory.

Modeling of conditions soon after the Big Bang suggests the universe should have collapsed just microseconds after its explosive birth, the new study suggests.

“During the early universe, we expected cosmic inflation — this is a rapid expansion of the universe right after the Big Bang,” said study co-author Robert Hogan, a doctoral candidate in physics at King’s College in London. “This expansion causes lots of stuff to shake around, and if we shake it too much, we could go into this new energy space, which could cause the universe to collapse.”

Physicists draw that conclusion from a model that accounts for the properties of the newly discovered Higgs boson particle, which is thought to explain how other particles get their mass; faint traces of gravitational waves formed at the universe’s origin also inform the conclusion.

Of course, there must be something missing from these calculations.

“We are here talking about it,” Hogan told Live Science. “That means we have to extend our theories to explain why this didn’t happen.”

No kidding!

Though there’s much more to come, this example should tell you all that you need to know about the fallibility of scientists. If you need more examples, consider these.

MODELS LIE WHEN LIARS MODEL

Not that there’s anything wrong with being wrong, but there’s a great deal wrong with seizing on a transitory coincidence between two variables (CO2 emissions and “global” temperatures in the late 1900s) and spurring a massively wrong-headed “scientific” mania — the mania of anthropogenic global warming.

What it comes down to is modeling, which is simply a way of baking one’s assumptions into a pseudo-scientific mathematical concoction. Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.
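One of these tortures is easy to demonstrate. The following Python sketch (purely illustrative random data, not any real climate or economic series) generates two independent random walks and then "mines" them for the sub-period with the strongest correlation; a cherry-picked window routinely looks far more impressive than the full record.

```python
# Illustrative sketch only: two independent random walks, unrelated by
# construction, can show an impressive correlation over a cherry-picked
# sub-period. (Hypothetical data; no real series is being analyzed.)
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def random_walk(n):
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        out.append(x)
    return out

random.seed(42)
n = 120  # say, 120 "months" of observations
a, b = random_walk(n), random_walk(n)  # no causal link between them

full_r = pearson(a, b)
# "Mine" the data: scan every window of 30+ points and keep the strongest r.
best_r = max(
    (pearson(a[i:j], b[i:j]) for i in range(n) for j in range(i + 30, n + 1)),
    key=abs,
)
print(f"full-period r = {full_r:+.2f}; best cherry-picked window r = {best_r:+.2f}")
```

Because the full period is itself one of the candidate windows, the mined correlation can never be weaker than the honest one; in practice it is usually much stronger.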

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed?
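The sensitivity question in the last bullet can be illustrated with a toy example. This Python sketch (invented data and a deliberately simple trend model, not any actual climate or economic model) refits a least-squares slope with the final year omitted, to see how much the estimate moves; a robust model should not swing wildly on one data point.

```python
# A hedged sketch of one robustness check from the list above: refit a simple
# trend model with one year of data omitted and see how much the estimated
# slope moves. (Invented numbers; a real check would use the model's inputs.)
def ols_slope(xs, ys):
    """Ordinary-least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(2000, 2011))  # hypothetical annual observations
values = [0.10, 0.18, 0.05, 0.30, 0.22, 0.15, 0.40, 0.12, 0.28, 0.55, 0.20]

full = ols_slope(years, values)
trimmed = ols_slope(years[:-1], values[:-1])  # omit the final year
print(f"slope with all data: {full:.4f}; without the last year: {trimmed:.4f}")
```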

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a general explanation of the sham, see this.

SCIENCE VS. SCIENTISM: STEVEN PINKER’S BALDERDASH

The examples that I’ve adduced thus far (and most of those that follow) demonstrate a mode of thought known as scientism: the application of the tools and language of science to create a pretense of knowledge.

No less a personage than Steven Pinker defends scientism in “Science Is Not Your Enemy”. Actually, Pinker doesn’t overtly defend scientism, which is indefensible; he just redefines it to mean science:

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.

After that slippery performance, it’s all smooth sailing — or so Pinker thinks — because all he has to do is point out all the good things about science. And if scientism=science, then scientism is good, right?

Wrong. Scientism remains indefensible, and there’s a lot of scientism in what passes for science. Pinker says this, for example:

The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation.

There is nothing new in this, as Pinker admits by adverting to Madison. Nor was the understanding of human nature “submerged” except in the writings of scientistic social “scientists”. We ordinary mortals were never fooled. Moreover, Pinker’s idea of scientific political science seems to be data-dredging:

With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively.

As explained here, data-dredging is about as scientistic as it gets:

When enough hypotheses are tested, it is virtually certain that some falsely appear statistically significant, since every data set with any degree of randomness contains some spurious correlations. Researchers using data mining techniques if they are not careful can be easily misled by these apparently significant results, even though they are mere artifacts of random variation.
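The point is easy to verify. This Python sketch tests 100 purely random "predictors" against a purely random outcome; at the conventional 5-percent significance level (|r| above roughly 0.361 for 30 observations), a handful of them will look "significant" even though every correlation is spurious by construction.

```python
# A minimal illustration of the multiple-comparisons trap: purely random
# "predictors" tested against a purely random outcome still yield some
# "statistically significant" correlations at the conventional 5% level.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
n_obs, n_tests = 30, 100
outcome = [random.gauss(0, 1) for _ in range(n_obs)]
# |r| > ~0.361 corresponds to p < 0.05 (two-tailed) at n = 30.
CRITICAL_R = 0.361
hits = sum(
    1
    for _ in range(n_tests)
    if abs(pearson([random.gauss(0, 1) for _ in range(n_obs)], outcome)) > CRITICAL_R
)
print(f"{hits} of {n_tests} random predictors look 'significant'")
```

Around five "hits" are expected by chance alone; a data-dredger who reports only the hits can make noise look like discovery.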

Turning to the humanities, Pinker writes:

[T]here can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate [sic] and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of a progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

What on earth is Pinker talking about? This is over-the-top bafflegab worthy of Professor Irwin Corey. But because it comes from the keyboard of a noted (self-promoting) academic, we are meant to take it seriously.

Yes, art, culture, and society are products of human brains. So what? Poker is, too, and it’s a lot more amenable to explication by the mathematical tools of science. But the successful application of those tools depends on traits that are more art than science (e.g., bluffing, spotting “tells”, and avoiding “tells”).

More “explanatory depth” in the humanities means a deeper pile of B.S. Great art, literature, and music aren’t concocted formulaically. If they could be, modernism and postmodernism wouldn’t have yielded mountains of trash.

Oh, I know: It will be different next time. As if the tools of science are immune to misuse by obscurantists, relativists, and practitioners of political correctness. Tell it to those climatologists who dare to challenge the conventional wisdom about anthropogenic global warming. Tell it to the “sub-human” victims of the Third Reich’s medical experiments and gas chambers.

Pinker anticipates this kind of objection:

At a 2011 conference, [a] colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

But the Tuskegee study was only a one-time failure in the sense that it was the only Tuskegee study. As a type of failure — the misuse of science (witting and unwitting) — it goes hand-in-hand with the advance of scientific knowledge. Should science be abandoned because of that? Of course not. But the hard fact is that science, qua science, is powerless against human nature.

Pinker plods on by describing ways in which science can contribute to the visual arts, music, and literary scholarship:

The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

I wonder how Rembrandt and the Impressionists (among other pre-moderns) managed to create visual art of such evident excellence without relying on the kinds of scientific mechanisms invoked by Pinker. I wonder what music scholars would learn about excellence in composition that isn’t already evident in the general loathing of audiences for most “serious” modern and contemporary music.

As for literature, great writers know instinctively and through self-criticism how to tell stories that realistically depict character, social psychology, culture, conflict, and all the rest. Scholars (and critics), at best, can acknowledge what rings true and has dramatic or comedic merit. Scientistic pretensions in scholarship (and criticism) may result in promotions and raises for the pretentious, but they do not add to the sum of human enjoyment — which is the real test of literature.

Pinker inveighs against critics of scientism (science, in Pinker’s vocabulary) who cry “reductionism” and “simplification”. With respect to the former, Pinker writes:

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

It is reductionist to explain a complex happening in terms of a deeper principle when that principle fails to account for the complex happening. Pinker obscures that essential point by offering a silly and irrelevant example about World War I. This bit of misdirection is unsurprising, given Pinker’s foray into reductionism, The Better Angels of Our Nature: Why Violence Has Declined, discussed later.

As for simplification, Pinker says:

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic.

Pinker again dodges the issue. Simplification is simplistic when the “general principles” fail to account adequately for the phenomenon in question.

Much of the problem arises because of a simple fact that is too often overlooked: Scientists, for the most part, are human beings with a particular aptitude for pattern-seeking and the manipulation of abstract ideas. They can easily get lost in such pursuits and fail to notice that their abstractions have taken them a long way from reality (e.g., Einstein’s special theory of relativity).

In sum, scientists are human and fallible. It is in the best tradition of science to distrust their scientific claims and to dismiss their non-scientific utterances.

ECONOMICS: PHYSICS ENVY AT WORK

Economics is rife with balderdash cloaked in mathematics. Economists who rely heavily on mathematics like to say (and perhaps even believe) that mathematical expression is more precise than mere words. But, as Arnold Kling points out in “An Important Emerging Economic Paradigm”, mathematical economics is a language of faux precision, useful only when applied to well-defined, narrow problems. It can’t address the big issues — such as economic growth — which depend on variables, such as the rule of law and social norms, that defy mathematical expression and quantification.

I would go a step further and argue that mathematical economics borders on obscurantism. It’s a cult whose followers speak an arcane language not only to communicate among themselves but to obscure the essentially bankrupt nature of their craft from others. Mathematical expression actually hides the assumptions that underlie it. It’s far easier to identify and challenge the assumptions of “literary” economics than it is to identify and challenge the assumptions of mathematical economics.

I daresay that this is true even for persons who are conversant in mathematics. They may be able to manipulate easily the equations of mathematical economics, but they are able to do so without grasping the deeper meanings — the assumptions and complexities — hidden by those equations. In fact, the ease of manipulating the equations gives them a false sense of mastery of the underlying, real concepts.

Much of the economics profession is nevertheless dedicated to the protection and preservation of the essential incompetence of mathematical economists. This is from “An Important Emerging Economic Paradigm”:

One of the best incumbent-protection rackets going today is for mathematical theorists in economics departments. The top departments will not certify someone as being qualified to have an advanced degree without first subjecting the student to the most rigorous mathematical economic theory. The rationale for this is reminiscent of fraternity hazing. “We went through it, so should they.”

Mathematical hazing persists even though there are signs that the prestige of math is on the decline within the profession. The important Clark Medal, awarded to the most accomplished American economist under the age of 40, has not gone to a mathematical theorist since 1989.

These hazing rituals can have real consequences. In medicine, the controversial tradition of long work hours for medical residents has come under scrutiny over the last few years. In economics, mathematical hazing is not causing immediate harm to medical patients. But it probably is working to the long-term detriment of the profession.

The hazing ritual in economics has at least two real and damaging consequences. First, it discourages entry into the economics profession by persons who, like Kling, can discuss economic behavior without resorting to the sterile language of mathematics. Second, it leads to economics that’s irrelevant to the real world — and dead wrong.

How wrong? Economists are notoriously bad at constructing models that adequately predict near-term changes in GDP. That task should be easier than sorting out the microeconomic complexities of the labor market.

Take Professor Ray Fair, for example. Professor Fair teaches macroeconomic theory, econometrics, and macroeconometric models at Yale University. He has been plying his trade since 1968, first at Princeton, then at M.I.T., and (since 1974) at Yale. Those are big-name schools, so I assume that Prof. Fair is a big name in his field.

Well, since 1983, Prof. Fair has been forecasting changes in real GDP over the next four quarters. He has made 80 such forecasts based on a model that he has undoubtedly tweaked over the years. The current model is here. His forecasting track record is here. How has he done? Here’s how:

1. The median absolute error of his forecasts is 30 percent.

2. The mean absolute error of his forecasts is 70 percent.

3. His forecasts are rather systematically biased: too high when real four-quarter GDP growth is less than 4 percent; too low when real four-quarter GDP growth is greater than 4 percent.

4. His forecasts have grown generally worse — not better — with time.
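For readers who want to see how such figures are computed, here is a minimal Python sketch using hypothetical forecast-versus-actual pairs (not Prof. Fair's data): the absolute percentage error of each forecast, then the median and mean across forecasts.

```python
# Sketch of how median and mean absolute percentage errors are computed
# (hypothetical numbers, not Prof. Fair's actual forecast record).
from statistics import mean, median

forecasts = [3.1, 2.5, 4.0, 1.2, 3.8]  # hypothetical four-quarter growth forecasts (%)
actuals   = [2.4, 2.6, 2.8, 2.0, 1.9]  # hypothetical actual growth (%)

# Absolute percentage error of each forecast relative to the actual value.
abs_pct_errors = [abs(f - a) / abs(a) * 100 for f, a in zip(forecasts, actuals)]
print(f"median abs. error: {median(abs_pct_errors):.0f}%")
print(f"mean abs. error:   {mean(abs_pct_errors):.0f}%")
```

Note that the mean is dragged upward by occasional huge misses, which is why a mean error (70 percent in Fair's case) can far exceed the median (30 percent).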

Prof. Fair is still at it. And his forecasts continue to grow worse with time:

[Figure: Fair model forecasting errors vs. time]
This and later graphs pertaining to Prof. Fair’s forecasts were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing; 8 percent of the errors exceed 200 percent.

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

[Figure: Fair model estimated vs. actual growth rate]

It fails the most important test; that is, it doesn’t reflect the downward trend in economic growth:

[Figure: Fair model year-over-year growth, estimated and actual]

THE INVISIBLE ELEPHANT IN THE ROOM

Professor Fair and his prognosticating ilk are pikers compared with John Maynard Keynes and his disciples. The Keynesian multiplier is the fraud of all frauds, not just in economics but in politics, where it is too often invoked as an excuse for taking money from productive uses and pouring it down the rathole of government spending.

The Keynesian (fiscal) multiplier is defined as

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier. Please go to “Killing the Keynesian Multiplier” for a detailed explanation of the phony math and a derivation of the true multiplier, which is decidedly negative. Here’s the short version:

  • The phony math involves the use of an accounting identity that can be manipulated in many ways, to “prove” many things. But the accounting identity doesn’t express an operational (or empirical) relationship between a change in government spending and a change in GDP.
  • The true value of the multiplier isn’t 5 (a common mathematical estimate), 1.5 (a common but mistaken empirical estimate used for government purposes), or any positive number. The true value represents the negative relationship between the change in government spending (including transfer payments) as a fraction of GDP and the change in the rate of real GDP growth. Specifically, where F represents government spending as a fraction of GDP,

a rise in F from 0.24 to 0.33 (the actual change from 1947 to 2007) would reduce the real rate of economic growth by 3.1 percentage points. The real rate of growth from 1947 to 1957 was 4 percent. Other things being the same, the rate of growth would have dropped to 0.9 percent in the period 2008-2017. It actually dropped to 1.4 percent, which is within the standard error of the estimate.

  • That kind of drop makes a huge difference in the incomes of Americans. In 10 years, GDP rises by almost 50 percent when the rate of growth is 4 percent, but only by 15 percent when the rate of growth is 1.4 percent. Think of the tens of millions of people who would be living in comfort rather than squalor were it not for Keynesian balderdash, which turns reality on its head in order to promote big government.
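The compounding arithmetic behind that comparison is easy to check. This Python sketch computes cumulative GDP growth over 10 years at 4 percent annual growth versus 1.4 percent, the actual post-2007 rate cited above.

```python
# Cumulative growth over a number of years at a constant annual rate,
# expressed in percent: ((1 + r)^years - 1) * 100.
def cumulative_growth_pct(annual_rate_pct: float, years: int) -> float:
    return ((1 + annual_rate_pct / 100) ** years - 1) * 100

print(f"10 years at 4.0%: +{cumulative_growth_pct(4.0, 10):.0f}%")  # ~48%
print(f"10 years at 1.4%: +{cumulative_growth_pct(1.4, 10):.0f}%")  # ~15%
```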

MANAGEMENT “SCIENCE”

A hot new item in management “science” a few years ago was the Candle Problem. Graham Morehead describes the problem and discusses its broader, “scientifically” supported conclusions:

The Candle Problem was first presented by Karl Duncker. Published posthumously in 1945, “On problem solving” describes how Duncker provided subjects with a candle, some matches, and a box of tacks. He told each subject to affix the candle to a cork board wall in such a way that when lit, the candle won’t drip wax on the table below (see figure at right). Can you think of the answer?

The only answer that really works is this: 1. Dump the tacks out of the box, 2. Tack the box to the wall, 3. Light the candle and affix it atop the box as if it were a candle-holder. Incidentally, the problem was much easier to solve if the tacks weren’t in the box at the beginning. When the tacks were in the box the participant saw it only as a tack-box, not something they could use to solve the problem. This phenomenon is called “Functional fixedness.”

Sam Glucksberg added a fascinating twist to this finding in his 1962 paper, “Influence of strength of drive on functional fixedness and perceptual recognition” (Journal of Experimental Psychology, 1962, Vol. 63, No. 1, 36-41). He studied the effect of financial incentives on solving the candle problem. To one group he offered no money. To the other group he offered an amount of money for solving the problem fast.

Remember, there are two candle problems. Let the “Simple Candle Problem” be the one where the tacks are outside the box — no functional fixedness. The solution is straightforward. Here are the results for those who solved it:

Simple Candle Problem Mean Times:

  • WITHOUT a financial incentive: 4.99 min
  • WITH a financial incentive: 3.67 min

Nothing unexpected here. This is a classical incentivization effect anybody would intuitively expect.

Now, let “In-Box Candle Problem” refer to the original description where the tacks start off in the box.

In-Box Candle Problem Mean Times:

  • WITHOUT a financial incentive: 7.41 min
  • WITH a financial incentive: 11.08 min

How could this be? The financial incentive made people slower? It gets worse — the slowness increases with the incentive. The higher the monetary reward, the worse the performance! This result has been repeated many times since the original experiment.

Glucksberg and others have shown this result to be highly robust. Daniel Pink calls it a legally provable “fact.” How should we interpret the above results?

When your employees have to do something straightforward, like pressing a button or manning one stage in an assembly line, financial incentives work. It’s a small effect, but they do work. Simple jobs are like the simple candle problem.

However, if your people must do something that requires any creative or critical thinking, financial incentives hurt. The In-Box Candle Problem is the stereotypical problem that requires you to think “Out of the Box” (you knew that was coming, didn’t you?). Whenever people must think out of the box, offering them a monetary carrot will keep them in that box.

A monetary reward will help your employees focus. That’s the point. When you’re focused you are less able to think laterally. You become dumber. This is not the kind of thing we want if we expect to solve the problems that face us in the 21st century.

All of this is found in a video (to which Morehead links), wherein Daniel Pink (an author and journalist whose actual knowledge of science and business appears to be close to zero) expounds the lessons of the Candle Problem. Pink displays his (no-doubt-profitable) conviction that the Candle Problem and related “science” reveal (a) the utter bankruptcy of capitalism and (b) the need to replace managers with touchy-feely gurus (like himself, I suppose). That Pink has worked for two of the country’s leading anti-capitalist airheads — Al Gore and Robert Reich — should tell you all that you need to know about Pink’s real agenda.

Here are my reasons for sneering at Pink and his ilk:

1. I have been there and done that. That is to say, as a manager, I lived through (and briefly bought into) the touchy-feely fads of the ’80s and ’90s. Think In Search of Excellence, The One Minute Manager, The Seven Habits of Highly Effective People, and so on. What did anyone really learn from those books and the lectures and workshops based on them? A perceptive person would have learned that it is easy to make up plausible stories about the elements of success, and having done so, it is possible to make a lot of money peddling those stories. But the stories are flawed because (a) they are based on exceptional cases; (b) they attribute success to qualitative assessments of behaviors that seem to be present in those exceptional cases; and (c) they do not properly account for the surrounding (and critical) circumstances that really led to success, among which are luck and rare combinations of personal qualities (e.g., high intelligence, perseverance, people-reading skills). In short, Pink and his predecessors are guilty of reductionism and the post hoc ergo propter hoc fallacy.

2. Also at work is an undue generalization about the implications of the Candle Problem. It may be true that workers will perform better — at certain kinds of tasks (very loosely specified) — if they are not distracted by incentives that are related to the performance of those specific tasks. But what does that have to do with incentives in general? Not much, because the Candle Problem is unlike any work situation that I can think of. Tasks requiring creativity are not performed under deadlines of a few minutes; tasks requiring creativity are (usually) assigned to persons who have demonstrated a creative flair, not to randomly picked subjects; most work, even in this day, involves the routine application of protocols and tools that were designed to produce a uniform result of acceptable quality; it is the design of protocols and tools that requires creativity, and that kind of work is not done under the kind of artificial constraints found in the Candle Problem.

3. The Candle Problem, with its anti-incentive “lesson”, is therefore inapplicable to the real world, where incentives play a crucial and positive role:

  • The profit incentive leads firms to invest resources in the development and/or production of things that consumers are willing to buy because those things satisfy wants at the right price.
  • Firms acquire resources to develop and produce things by bidding for those resources, that is, by offering monetary incentives to attract the resources required to make the things that consumers are willing to buy.
  • The incentives (compensation) offered to workers of various kinds (from scientists with doctorates to burger-flippers) are generally commensurate with the contributions made by those workers to the production of things of value to consumers, and to the value placed on those things by consumers.
  • Workers agree to the terms and conditions of employment (including compensation) before taking a job. The incentive for most workers is to keep a job by performing adequately over a sustained period — not by demonstrating creativity in a few minutes. Some workers (but not a large fraction of them) are striving for performance-based commissions, bonuses, and profit-sharing distributions. But those distributions are based on performance over a sustained period, during which the striving workers have plenty of time to think about how they can perform better.
  • Truly creative work is done, for the most part, by persons who are hired for such work on the basis of their credentials (education, prior employment, test results). Their compensation is based on their credentials, initially, and then on their performance over a sustained period. If they are creative, they have plenty of psychological space in which to exercise and demonstrate their creativity.
  • On-the-job creativity — the improvement of protocols and tools by workers using them — does not occur under conditions of the kind assumed in the Candle Problem. Rather, on-the-job creativity flows from actual work and insights about how to do the work better. It happens when it happens, and has nothing to do with artificial time constraints and monetary incentives to be “creative” within those constraints.
  • Pink’s essential pitch is that incentives can be replaced by offering jobs that yield autonomy (self-direction), mastery (the satisfaction of doing difficult things well), and purpose (the satisfaction of contributing to the accomplishment of something important). Well, good luck with that, but I (and millions of other consumers) want what we want, and if workers want to make a living they will just have to provide what we want, not what turns them on. Yes, there is a lot to be said for autonomy, mastery, and purpose, but there is also a lot to be said for getting a paycheck. And, contrary to Pink’s implication, getting a paycheck does not rule out autonomy, mastery, and purpose — where those happen to go with the job.

Pink and company’s “insights” about incentives and creativity are 180 degrees off-target. McDonald’s could use the Candle Problem to select creative burger-flippers who will perform well under tight deadlines because their compensation is unrelated to the creativity of their burger-flipping. McDonald’s customers should be glad that McDonald’s has taken creativity out of the picture by reducing burger-flipping to the routine application of protocols and tools.

In summary:

  • The Candle Problem is an interesting experiment, and probably valid with respect to the performance of specific tasks against tight deadlines. I think the results apply whether the stakes are money or any other kind of prize. The experiment illustrates the “choke” factor, and nothing more profound than that.
  • I question whether the experiment applies to the usual kind of incentive (e.g., a commission or bonus), where the “incentee” has ample time (months, years) for reflection and research that will enable him to improve his performance and attain a bigger commission or bonus (which usually isn’t an all-or-nothing arrangement).
  • There’s also the dissimilarity of the Candle Problem — which involves more-or-less randomly chosen subjects, working against an artificial deadline — and actual creative thinking — usually involving persons who are experts (even if the expertise is as mundane as ditch-digging), working against looser deadlines or none at all.

PARTISAN POLITICS IN THE GUISE OF PSEUDO-SCIENCE

There’s plenty of it to go around, but this one is a whopper. Peter Singer outdoes his usual tendentious self in this review of Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. In the course of the review, Singer writes:

Pinker argues that enhanced powers of reasoning give us the ability to detach ourselves from our immediate experience and from our personal or parochial perspective, and frame our ideas in more abstract, universal terms. This in turn leads to better moral commitments, including avoiding violence. It is just this kind of reasoning ability that has improved during the 20th century. He therefore suggests that the 20th century has seen a “moral Flynn effect, in which an accelerating escalator of reason carried us away from impulses that lead to violence” and that this lies behind the long peace, the new peace, and the rights revolution. Among the wide range of evidence he produces in support of that argument is the tidbit that since 1946, there has been a negative correlation between an American president’s I.Q. and the number of battle deaths in wars involving the United States.

Singer does not give the source of the IQ estimates on which Pinker relies, but the supposed correlation points to a discredited piece of historiometry by Dean Keith Simonton. Simonton jumps through various hoops to assess the IQs of every president from Washington to Bush II — to one decimal place. That is a feat on a par with reconstructing the final thoughts of Abel, ere Cain slew him.

Before I explain the discrediting of Simonton’s obviously discreditable “research”, there is some fun to be had with the Pinker-Singer story of presidential IQ (Simonton-style) for battle deaths. First, of course, there is the convenient cutoff point of 1946. Why 1946? Well, it enables Pinker-Singer to avoid the inconvenient fact that the Civil War, World War I, and World War II happened while the presidency was held by three men who (in Simonton’s estimation) had high IQs: Lincoln, Wilson, and FDR.

The next several graphs depict best-fit relationships between Simonton’s estimates of presidential IQ and the U.S. battle deaths that occurred during each president’s term of office.* The presidents, in order of their appearance in the titles of the graphs, are Harry S Truman (HST), George W. Bush (GWB), Franklin Delano Roosevelt (FDR), (Thomas) Woodrow Wilson (WW), Abraham Lincoln (AL), and George Washington (GW). The number of battle deaths is rounded to the nearest thousand, so that the prevailing value is 0, even in the case of the Spanish-American War (385 U.S. combat deaths) and George H.W. Bush’s Gulf War (147 U.S. combat deaths).

This is probably the relationship referred to by Singer, though Pinker may show a linear fit, rather than the tighter polynomial fit used here:

It looks bad for the low “IQ” presidents — if you believe Simonton’s estimates of IQ, which you shouldn’t, and if you believe that battle deaths are a bad thing per se, which they aren’t. I will come back to those points. For now, just suspend your well-justified disbelief.

If the relationship for the HST-GWB era were statistically meaningful, it would not change much with the introduction of additional statistics about “IQ” and battle deaths, but it does:




If you buy the brand of snake oil being peddled by Pinker-Singer, you must believe that the “dumbest” and “smartest” presidents are unlikely to get the U.S. into wars that result in a lot of battle deaths, whereas some (but, mysteriously, not all) of the “medium-smart” presidents (Lincoln, Wilson, FDR) are likely to do so.

In any event, if you believe in Pinker-Singer’s snake oil, you must accept the consistent “humpback” relationship that is depicted in the preceding four graphs, rather than the highly selective, one-shot negative relationship of the HST-GWB graph.

More seriously, the relationship in the HST-GWB graph is an evident ploy to discredit certain presidents (especially GWB, I suspect), which is why it covers only the period since WWII. Why not just say that you think GWB is a chimp-like, war-mongering moron and be done with it? Pseudo-statistics of the kind offered up by Pinker-Singer are nothing more than a talking point for those already convinced that Bush=Hitler.

But as long as this silly game is in progress, let us continue it, with a new rule. Let us advance from one to two explanatory variables. The second explanatory variable that strongly suggests itself is political party. And because it is not good practice to omit relevant statistics (a favorite gambit of liars), I estimated an equation based on “IQ” and battle deaths for the 27 men who served as president from the first Republican presidency (Lincoln’s) through the presidency of GWB. The equation looks like this:

U.S. battle deaths (000) “owned” by a president = -80.6 + 0.841 x “IQ” – 31.3 x party (where 0 = Dem, 1 = GOP)

In other words, battle deaths rise at the rate of 841 per IQ point (so much for Pinker-Singer). But there will be fewer deaths with a Republican in the White House (so much for Pinker-Singer’s implied swipe at GWB).
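To make the arithmetic concrete, here is a minimal sketch in Python of the fitted equation above. The coefficients are the ones reported in the text; the function name and the sample “IQ” inputs are mine, chosen purely for illustration:

```python
def predicted_battle_deaths(iq: float, party: int) -> float:
    """Predicted U.S. battle deaths (in thousands) "owned" by a president,
    per the two-variable fit in the text: party is 0 for a Democrat,
    1 for a Republican."""
    return -80.6 + 0.841 * iq - 31.3 * party

# Illustrative inputs (not Simonton's actual estimates): a Democrat and a
# Republican with the same "IQ" of 130.
dem = predicted_battle_deaths(130, 0)   # roughly 28.7 thousand deaths
gop = predicted_battle_deaths(130, 1)   # exactly 31.3 thousand fewer
print(dem, gop)
```

Each additional “IQ” point adds 841 predicted deaths, and a Republican in the White House subtracts 31,300, which is precisely the sort of absurd precision being mocked here.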

All of this is nonsense, of course, for two reasons: Simonton’s estimates of IQ are hogwash, and the number of U.S. battle deaths is a meaningless number, taken by itself.

With regard to the hogwash, Simonton’s estimates of presidents’ IQs put every one of them — including the “dumbest,” U.S. Grant — in the top 2.3 percent of the population. And the mean of Simonton’s estimates puts the average president in the top 0.1 percent (one-tenth of one percent) of the population. That is literally incredible. Good evidence of the unreliability of Simonton’s estimates is found in an entry by Thomas C. Reeves at George Mason University’s History News Network. Reeves is the author of A Question of Character: A Life of John F. Kennedy, the negative reviews of which are evidently the work of JFK idolaters who refuse to be disillusioned by facts. Anyway, here is Reeves:

I’m a biographer of two of the top nine presidents on Simonton’s list and am highly familiar with the histories of the other seven. In my judgment, this study has little if any value. Let’s take JFK and Chester A. Arthur as examples.

Kennedy was actually given an IQ test before entering Choate. His score was 119…. There is no evidence to support the claim that his score should have been more than 40 points higher [i.e., the IQ of 160 attributed to Kennedy by Simonton]. As I described in detail in A Question Of Character [link added], Kennedy’s academic achievements were modest and respectable, his published writing and speeches were largely done by others (no study of Kennedy is worthwhile that downplays the role of Ted Sorensen)….

Chester Alan Arthur was largely unknown before my Gentleman Boss was published in 1975. The discovery of many valuable primary sources gave us a clear look at the president for the first time. Among the most interesting facts that emerged involved his service during the Civil War, his direct involvement in the spoils system, and the bizarre way in which he was elevated to the GOP presidential ticket in 1880. His concealed and fatal illness while in the White House also came to light.

While Arthur was a college graduate, and was widely considered to be a gentleman, there is no evidence whatsoever to suggest that his IQ was extraordinary. That a psychologist can rank his intelligence 2.3 points ahead of Lincoln’s suggests access to a treasure of primary sources from and about Arthur that does not exist.

This historian thinks it impossible to assign IQ numbers to historical figures. If there is sufficient evidence (as there usually is in the case of American presidents), we can call people from the past extremely intelligent. Adams, Wilson, TR, Jefferson, and Lincoln were clearly well above average intellectually. But let us not pretend that we can rank them by tenths of a percentage point or declare that a man in one era stands well above another from a different time and place.

My educated guess is that this recent study was designed in part to denigrate the intelligence of the current occupant of the White House….

That is an excellent guess.

The meaninglessness of battle deaths as a measure of anything — but battle deaths — should be evident. But in case it is not evident, here goes:

  • Wars are sometimes necessary, sometimes not. (I give my views about the wisdom of America’s various wars at this post.) Necessary or not, presidents usually act in accordance with popular and elite opinion about the desirability of a particular war. Imagine, for example, the reaction if FDR had not gone to Congress on December 8, 1941, to ask for a declaration of war against Japan, or if GWB had not sought the approval of Congress for action in Afghanistan.
  • Presidents may have a lot to do with the decision to enter a war, but they have little to do with the external forces that help to shape that decision. GHWB, for example, had nothing to do with Saddam’s decision to invade Kuwait and thereby threaten vital U.S. interests in the Middle East. GWB, to take another example, was not a party to the choices of earlier presidents (GHWB and Clinton) that enabled Saddam to stay in power and encouraged Osama bin Laden to believe that America could be brought to its knees by a catastrophic attack.
  • The number of battle deaths in a war depends on many things outside the control of a particular president; for example, the size and capabilities of enemy forces, the size and capabilities of U.S. forces (which have a lot to do with the decisions of earlier administrations and Congresses), and the scope and scale of a war (again, largely dependent on the enemy).
  • Battle deaths represent personal tragedies, but — in and of themselves — are not a measure of a president’s wisdom or acumen. Whether the deaths were in vain is a separate issue that depends on the aforementioned considerations. To use battle deaths as a single, negative measure of a president’s ability is rank cynicism — the rankness of which is revealed in Pinker’s decision to ignore Lincoln and FDR and their “good” but deadly wars.

To put the last point another way, if the number of battle deaths is a bad thing, Lincoln and FDR should be rotting in hell for the wars that brought an end to slavery and Hitler.
__________
* The numbers of U.S. battle deaths, by war, are available at infoplease.com, “America’s Wars: U.S. Casualties and Veterans”. The deaths are “assigned” to presidents as follows (numbers in parentheses indicate thousands of deaths):

All of the deaths (2) in the War of 1812 occurred on Madison’s watch.

All of the deaths (2) in the Mexican-American War occurred on Polk’s watch.

I count only Union battle deaths (140) during the Civil War; all are “Lincoln’s.” Let the Confederate dead be on the head of Jefferson Davis. This is a gift, of sorts, to Pinker-Singer because if Confederate dead were counted as Lincoln’s, with his high “IQ,” it would make Pinker-Singer’s hypothesis even more ludicrous than it is.

WW is the sole “owner” of WWI battle deaths (53).

Some of the U.S. battle deaths in WWII (292) occurred while HST was president, but Truman was merely presiding over the final months of a war that was almost won when FDR died. Truman’s main role was to hasten the end of the war in the Pacific by electing to drop the A-bombs on Hiroshima and Nagasaki. So FDR gets “credit” for all WWII battle deaths.

The Korean War did not end until after Eisenhower succeeded Truman, but it was “Truman’s war,” so he gets “credit” for all Korean War battle deaths (34). This is another “gift” to Pinker-Singer because Ike’s “IQ” is higher than Truman’s.

Vietnam was “LBJ’s war,” but I’m sure that Singer would not want Nixon to go without “credit” for the battle deaths that occurred during his administration. Moreover, LBJ had effectively lost the Vietnam war through his gradualism, but Nixon chose nevertheless to prolong the agony. So I have shared the “credit” for Vietnam War battle deaths between LBJ (deaths in 1965-68: 29) and RMN (deaths in 1969-73: 17). To do that, I apportioned total Vietnam War battle deaths, as given by infoplease.com, according to the total number of U.S. deaths in each year of the war, 1965-1973.

The wars in Afghanistan and Iraq are “GWB’s wars,” even though Obama has continued them. So I have “credited” GWB with all the battle deaths in those wars, as of May 27, 2011 (5).

The relative paucity of U.S. combat deaths in other post-WWII actions (e.g., Lebanon, Somalia, Persian Gulf) is attested to by “Post-Vietnam Combat Casualties”, at infoplease.com.

A THIRD APPEARANCE BY PINKER

Steven Pinker, whose ignominious outpourings I have addressed twice here, deserves a third strike (which he shall duly be awarded). Pinker’s The Better Angels of Our Nature is cited gleefully by leftists and cockeyed optimists as evidence that human beings, on the whole, are becoming kinder and gentler because of:

  • The Leviathan – The rise of the modern nation-state and judiciary “with a monopoly on the legitimate use of force,” which “can defuse the [individual] temptation of exploitative attack, inhibit the impulse for revenge, and circumvent…self-serving biases.”
  • Commerce – The rise of “technological progress [allowing] the exchange of goods and services over longer distances and larger groups of trading partners,” so that “other people become more valuable alive than dead” and “are less likely to become targets of demonization and dehumanization.”
  • Feminization – Increasing respect for “the interests and values of women.”
  • Cosmopolitanism – The rise of forces such as literacy, mobility, and mass media, which “can prompt people to take the perspectives of people unlike themselves and to expand their circle of sympathy to embrace them.”
  • The Escalator of Reason – An “intensifying application of knowledge and rationality to human affairs,” which “can force people to recognize the futility of cycles of violence, to ramp down the privileging of their own interests over others’, and to reframe violence as a problem to be solved rather than a contest to be won.”

I can tell you that Pinker’s book is hogwash because two very bright leftists — Peter Singer and Will Wilkinson — have strongly and wrongly endorsed some of its key findings. I dispatched Singer earlier. As for Wilkinson, he praises statistics adduced by Pinker that show a decline in the use of capital punishment:

In the face of such a decisive trend in moral culture, we can say a couple different things. We can say that this is just change and says nothing in particular about what is really right or wrong, good or bad. Or we can say this is evidence of moral progress, that we have actually become better. I prefer the latter interpretation for basically the same reasons most of us see the abolition of slavery and the trend toward greater equality between races and sexes as progress and not mere morally indifferent change. We can talk about the nature of moral progress later. It’s tricky. For now, I want you to entertain the possibility that convergence toward the idea that execution is wrong counts as evidence that it is wrong.

I would count convergence toward the idea that execution is wrong as evidence that it is wrong, if that idea were (a) increasingly held by individuals who (b) had arrived at their “enlightenment” uninfluenced by operatives of the state (legislatures and judges), who take it upon themselves to flout popular support of the death penalty. What we have, in the case of the death penalty, is moral regress, not moral progress.

Moral regress because the abandonment of the death penalty puts innocent lives at risk. Capital punishment sends a message, and the message is effective when it is delivered: it deters homicide. And even if it didn’t, it would at least remove killers from our midst, permanently. By what standard of morality can one claim that it is better to spare killers than to protect innocents? For that matter, by what standard of morality is it better to kill innocents in the womb than to spare killers? Proponents of abortion (like Singer and Wilkinson) — who by and large oppose capital punishment — are completely lacking in moral authority.

Returning to Pinker’s thesis that violence has declined, I quote a review at Foseti:

Pinker’s basic problem is that he essentially defines “violence” in such a way that his thesis that violence is declining becomes self-fulfilling. “Violence” to Pinker is fundamentally synonymous with behaviors of older civilizations. On the other hand, modern practices are defined to be less violent than older practices.

A while back, I linked to a story about a guy in my neighborhood who’s been arrested over 60 times for breaking into cars. A couple hundred years ago, this guy would have been killed for this sort of vandalism after he got caught the first time. Now, we feed him and shelter him for a while and then we let him back out to do this again. Pinker defines the new practice as a decline in violence – we don’t kill the guy anymore! Someone from a couple hundred years ago would be appalled that we let the guy continue destroying other peoples’ property without consequence. In the mind of those long dead, “violence” has in fact increased. Instead of a decline in violence, this practice seems to me like a decline in justice – nothing more or less.

Here’s another example: Pinker uses creative definitions to show that the conflicts of the 20th Century pale in comparison to previous conflicts. For example, all the Mongol Conquests are considered one event, even though they cover 125 years. If you lump all these various conquests together and you split up WWI, WWII, Mao’s takeover in China, the Bolshevik takeover of Russia, the Russian Civil War, and the Chinese Civil War (yes, he actually considers this a separate event from Mao), you unsurprisingly discover that the events of the 20th Century weren’t all that violent compared to events in the past! Pinker’s third most violent event is the “Mideast Slave Trade” which he says took place between the 7th and 19th Centuries. Seriously. By this standard, all the conflicts of the 20th Century are related. Is the Russian Revolution or the rise of Mao possible without WWII? Is WWII possible without WWI? By this consistent standard, the 20th Century wars of Communism would have seen the worst conflict by far. Of course, if you fiddle with the numbers, you can make any point you like.

There’s much more to the review, including some telling criticisms of Pinker’s five reasons for the (purported) decline in violence. That the reviewer somehow still wants to believe in the rightness of Pinker’s thesis says more about the reviewer’s optimism than it does about the validity of Pinker’s thesis.

That thesis is fundamentally flawed, as Robert Epstein points out in a review at Scientific American:

[T]he wealth of data [Pinker] presents cannot be ignored—unless, that is, you take the same liberties as he sometimes does in his book. In two lengthy chapters, Pinker describes psychological processes that make us either violent or peaceful, respectively. Our dark side is driven by an evolution-based propensity toward predation and dominance. On the angelic side, we have, or at least can learn, some degree of self-control, which allows us to inhibit dark tendencies.

There is, however, another psychological process—confirmation bias—that Pinker sometimes succumbs to in his book. People pay more attention to facts that match their beliefs than those that undermine them. Pinker wants peace, and he also believes in his hypothesis; it is no surprise that he focuses more on facts that support his views than on those that do not. The SIPRI arms data are problematic, and a reader can also cherry-pick facts from Pinker’s own book that are inconsistent with his position. He notes, for example, that during the 20th century homicide rates failed to decline in both the U.S. and England. He also describes in graphic and disturbing detail the savage way in which chimpanzees—our closest genetic relatives in the animal world—torture and kill their own kind.

Of greater concern is the assumption on which Pinker’s entire case rests: that we look at relative numbers instead of absolute numbers in assessing human violence. But why should we be content with only a relative decrease? By this logic, when we reach a world population of nine billion in 2050, Pinker will conceivably be satisfied if a mere two million people are killed in war that year.

The biggest problem with the book, though, is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Pinker’s belief that violence is on the decline reminds me of “it’s different this time”, a phrase that was on the lips of hopeful stock-pushers, stock-buyers, and pundits during the stock-market bubble of the late 1990s. That bubble ended, of course, in the spectacular crash of 2000.

Predictions about the future of humankind are better left in the hands of writers who see human nature whole, and who are not out to prove that it can be shaped or contained by the kinds of “liberal” institutions that Pinker so obviously favors.

Consider this, from an article by Robert J. Samuelson at The Washington Post:

[T]he Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar. Amid the controversy over leaks from the National Security Agency, this looms as an even bigger downside.

By cyberwarfare, I mean the capacity of groups — whether nations or not — to attack, disrupt and possibly destroy the institutions and networks that underpin everyday life. These would be power grids, pipelines, communication and financial systems, business record-keeping and supply-chain operations, railroads and airlines, databases of all types (from hospitals to government agencies). The list runs on. So much depends on the Internet that its vulnerability to sabotage invites doomsday visions of the breakdown of order and trust.

In a report, the Defense Science Board, an advisory group to the Pentagon, acknowledged “staggering losses” of information involving weapons design and combat methods to hackers (not identified, but probably Chinese). In the future, hackers might disarm military units. “U.S. guns, missiles and bombs may not fire, or may be directed against our own troops,” the report said. It also painted a specter of social chaos from a full-scale cyberassault. There would be “no electricity, money, communications, TV, radio or fuel (electrically pumped). In a short time, food and medicine distribution systems would be ineffective.”

But Pinker wouldn’t count the resulting chaos as violence, as long as human beings were merely starving and dying of various diseases. That violence would ensue, of course, is another story, which is told by John Gray in The Silence of Animals: On Progress and Other Modern Myths. Gray’s book — published 18 months after Better Angels — could be read as a refutation of Pinker’s book, though Gray doesn’t mention Pinker or his book.

The gist of Gray’s argument is faithfully recounted in a review of Gray’s book by Robert W. Merry at The National Interest:

The noted British historian J. B. Bury (1861–1927) … wrote, “This doctrine of the possibility of indefinitely moulding the characters of men by laws and institutions . . . laid a foundation on which the theory of the perfectibility of humanity could be raised. It marked, therefore, an important stage in the development of the doctrine of Progress.”

We must pause here over this doctrine of progress. It may be the most powerful idea ever conceived in Western thought—emphasizing Western thought because the idea has had little resonance in other cultures or civilizations. It is the thesis that mankind has advanced slowly but inexorably over the centuries from a state of cultural backwardness, blindness and folly to ever more elevated stages of enlightenment and civilization—and that this human progression will continue indefinitely into the future…. The U.S. historian Charles A. Beard once wrote that the emergence of the progress idea constituted “a discovery as important as the human mind has ever made, with implications for mankind that almost transcend imagination.” And Bury, who wrote a book on the subject, called it “the great transforming conception, which enables history to define her scope.”

Gray rejects it utterly. In doing so, he rejects all of modern liberal humanism. “The evidence of science and history,” he writes, “is that humans are only ever partly and intermittently rational, but for modern humanists the solution is simple: human beings must in future be more reasonable. These enthusiasts for reason have not noticed that the idea that humans may one day be more rational requires a greater leap of faith than anything in religion.” In an earlier work, Straw Dogs: Thoughts on Humans and Other Animals, he was more blunt: “Outside of science, progress is simply a myth.”

… Gray has produced more than twenty books demonstrating an expansive intellectual range, a penchant for controversy, acuity of analysis and a certain political clairvoyance.

He rejected, for example, Francis Fukuyama’s heralded “End of History” thesis—that Western liberal democracy represents the final form of human governance—when it appeared in this magazine in 1989. History, it turned out, lingered long enough to prove Gray right and Fukuyama wrong….

Though for decades his reputation was confined largely to intellectual circles, Gray’s public profile rose significantly with the 2002 publication of Straw Dogs, which sold impressively and brought him much wider acclaim than he had known before. The book was a concerted and extensive assault on the idea of progress and its philosophical offspring, secular humanism. The Silence of Animals is in many ways a sequel, plowing much the same philosophical ground but expanding the cultivation into contiguous territory mostly related to how mankind—and individual humans—might successfully grapple with the loss of both metaphysical religion of yesteryear and today’s secular humanism. The fundamentals of Gray’s critique of progress are firmly established in both books and can be enumerated in summary.

First, the idea of progress is merely a secular religion, and not a particularly meaningful one at that. “Today,” writes Gray in Straw Dogs, “liberal humanism has the pervasive power that was once possessed by revealed religion. Humanists like to think they have a rational view of the world; but their core belief in progress is a superstition, further from the truth about the human animal than any of the world’s religions.”

Second, the underlying problem with this humanist impulse is that it is based upon an entirely false view of human nature—which, contrary to the humanist insistence that it is malleable, is immutable and impervious to environmental forces. Indeed, it is the only constant in politics and history. Of course, progress in scientific inquiry and in resulting human comfort is a fact of life, worth recognition and applause. But it does not change the nature of man, any more than it changes the nature of dogs or birds. “Technical progress,” writes Gray, again in Straw Dogs, “leaves only one problem unsolved: the frailty of human nature. Unfortunately that problem is insoluble.”

That’s because, third, the underlying nature of humans is bred into the species, just as the traits of all other animals are. The most basic trait is the instinct for survival, which is placed on hold when humans are able to live under a veneer of civilization. But it is never far from the surface. In The Silence of Animals, Gray discusses the writings of Curzio Malaparte, a man of letters and action who found himself in Naples in 1944, shortly after the liberation. There he witnessed a struggle for life that was gruesome and searing. “It is a humiliating, horrible thing, a shameful necessity, a fight for life,” wrote Malaparte. “Only for life. Only to save one’s skin.” Gray elaborates:

Observing the struggle for life in the city, Malaparte watched as civilization gave way. The people the inhabitants had imagined themselves to be—shaped, however imperfectly, by ideas of right and wrong—disappeared. What were left were hungry animals, ready to do anything to go on living; but not animals of the kind that innocently kill and die in forests and jungles. Lacking a self-image of the sort humans cherish, other animals are content to be what they are. For human beings the struggle for survival is a struggle against themselves.

When civilization is stripped away, the raw animal emerges. “Darwin showed that humans are like other animals,” writes Gray in Straw Dogs, expressing in this instance only a partial truth. Humans are different in a crucial respect, captured by Gray himself when he notes that Homo sapiens inevitably struggle with themselves when forced to fight for survival. No other species does that, just as no other species has such a range of spirit, from nobility to degradation, or such a need to ponder the moral implications as it fluctuates from one to the other. But, whatever human nature is—with all of its capacity for folly, capriciousness and evil as well as virtue, magnanimity and high-mindedness—it is embedded in the species through evolution and not subject to manipulation by man-made institutions.

Fourth, the power of the progress idea stems in part from the fact that it derives from a fundamental Christian doctrine—the idea of providence, of redemption….

“By creating the expectation of a radical alteration in human affairs,” writes Gray, “Christianity . . . founded the modern world.” But the modern world retained a powerful philosophical outlook from the classical world—the Socratic faith in reason, the idea that truth will make us free; or, as Gray puts it, the “myth that human beings can use their minds to lift themselves out of the natural world.” Thus did a fundamental change emerge in what was hoped of the future. And, as the power of Christian faith ebbed, along with its idea of providence, the idea of progress, tied to the Socratic myth, emerged to fill the gap. “Many transmutations were needed before the Christian story could renew itself as the myth of progress,” Gray explains. “But from being a succession of cycles like the seasons, history came to be seen as a story of redemption and salvation, and in modern times salvation became identified with the increase of knowledge and power.”

Thus, it isn’t surprising that today’s Western man should cling so tenaciously to his faith in progress as a secular version of redemption. As Gray writes, “Among contemporary atheists, disbelief in progress is a type of blasphemy. Pointing to the flaws of the human animal has become an act of sacrilege.” In one of his more brutal passages, he adds:

Humanists believe that humanity improves along with the growth of knowledge, but the belief that the increase of knowledge goes with advances in civilization is an act of faith. They see the realization of human potential as the goal of history, when rational inquiry shows history to have no goal. They exalt nature, while insisting that humankind—an accident of nature—can overcome the natural limits that shape the lives of other animals. Plainly absurd, this nonsense gives meaning to the lives of people who believe they have left all myths behind.

In The Silence of Animals, Gray explores all this through the works of various writers and thinkers. In the process, he employs history and literature to puncture the conceits of those who cling to the progress idea and the humanist view of human nature. Those conceits, it turns out, are easily punctured when subjected to Gray’s withering scrutiny….

And yet the myth of progress is so powerful in part because it gives meaning to modern Westerners struggling, in an irreligious era, to place themselves in a philosophical framework larger than just themselves….

Much of the human folly catalogued by Gray in The Silence of Animals makes a mockery of the earnest idealism of those who later shaped and molded and proselytized humanist thinking into today’s predominant Western civic philosophy.

RACE AS A SOCIAL CONSTRUCT

David Reich’s hot new book, Who We Are and How We Got Here, is causing a stir in genetic-research circles. Reich, who takes great pains to assure everyone that he isn’t a racist, and who deplores racism, is nevertheless candid about race:

I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”

Groundbreaking advances in DNA sequencing technology have been made over the last two decades. These advances enable us to measure with exquisite accuracy what fraction of an individual’s genetic ancestry traces back to, say, West Africa 500 years ago — before the mixing in the Americas of the West African and European gene pools that were almost completely isolated for the last 70,000 years. With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real….

Self-identified African-Americans turn out to derive, on average, about 80 percent of their genetic ancestry from enslaved Africans brought to America between the 16th and 19th centuries. My colleagues and I searched, in 1,597 African-American men with prostate cancer, for locations in the genome where the fraction of genes contributed by West African ancestors was larger than it was elsewhere in the genome. In 2006, we found exactly what we were looking for: a location in the genome with about 2.8 percent more African ancestry than the average.

When we looked in more detail, we found that this region contained at least seven independent risk factors for prostate cancer, all more common in West Africans. Our findings could fully account for the higher rate of prostate cancer in African-Americans than in European-Americans. We could conclude this because African-Americans who happen to have entirely European ancestry in this small section of their genomes had about the same risk for prostate cancer as random Europeans.

Did this research rely on terms like “African-American” and “European-American” that are socially constructed, and did it label segments of the genome as being probably “West African” or “European” in origin? Yes. Did this research identify real risk factors for disease that differ in frequency across those populations, leading to discoveries with the potential to improve health and save lives? Yes.

While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another.

But whether we like it or not, that line has already been crossed. A recent study led by the economist Daniel Benjamin compiled information on the number of years of education from more than 400,000 people, almost all of whom were of European ancestry. After controlling for differences in socioeconomic background, he and his colleagues identified 74 genetic variations that are over-represented in genes known to be important in neurological development, each of which is incontrovertibly more common in Europeans with more years of education than in Europeans with fewer years of education.

It is not yet clear how these genetic variations operate. A follow-up study of Icelanders led by the geneticist Augustine Kong showed that these genetic variations also nudge people who carry them to delay having children. So these variations may be explaining longer times at school by affecting a behavior that has nothing to do with intelligence.

This study has been joined by others finding genetic predictors of behavior. One of these, led by the geneticist Danielle Posthuma, studied more than 70,000 people and found genetic variations in more than 20 genes that were predictive of performance on intelligence tests.

Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly. And since all traits influenced by genetics are expected to differ across populations (because the frequencies of genetic variations are rarely exactly the same across populations), the genetic influences on behavior and cognition will differ across populations, too.

You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work. Indeed, the study led by Dr. Kong showed that in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century….

So how should we prepare for the likelihood that in the coming years, genetic studies will show that many traits are influenced by genetic variations, and that these traits will differ on average across human populations? It will be impossible — indeed, anti-scientific, foolish and absurd — to deny those differences. [“How Genetics Is Changing Our Understanding of ‘Race’“, The New York Times, March 23, 2018]
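The admixture-scan logic Reich describes (flagging genomic loci where the estimated African-ancestry fraction among cases exceeds the genome-wide average) can be illustrated with a minimal sketch. The data and threshold below are entirely hypothetical; this is a toy reduction of the idea, not Reich’s actual pipeline:

```python
# Toy sketch of an admixture scan (hypothetical data, not Reich's method as
# actually implemented). For each locus, compare the mean African-ancestry
# fraction among cases to the genome-wide mean; loci with a notable excess
# are candidate disease-risk regions.

def admixture_scan(local_ancestry, threshold=0.02):
    """local_ancestry: one list per locus, each giving the African-ancestry
    fraction (0.0-1.0) estimated for every case individual at that locus.
    Returns (locus_index, excess) pairs where excess exceeds `threshold`."""
    locus_means = [sum(fracs) / len(fracs) for fracs in local_ancestry]
    genome_mean = sum(locus_means) / len(locus_means)
    return [(i, m - genome_mean) for i, m in enumerate(locus_means)
            if m - genome_mean > threshold]

# Hypothetical example: three loci, four case individuals each.
data = [
    [0.80, 0.78, 0.82, 0.80],   # locus 0: near the genome-wide average
    [0.79, 0.81, 0.80, 0.80],   # locus 1: near the genome-wide average
    [0.84, 0.85, 0.83, 0.84],   # locus 2: elevated African ancestry
]
print(admixture_scan(data))     # flags locus 2, with roughly a 2.7% excess
```

The 2.8 percent figure Reich reports plays the role of the `excess` value here: a locus where local ancestry departs measurably from the genome-wide baseline.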

Reich engages in a lot of non-scientific wishful thinking about racial differences and how they should be treated by “society” — none of which is in his purview as a scientist. Reich’s forays into psychobabble have been addressed at length by Steve Sailer (here and here) and Gregory Cochran (here, here, here, here, and here). Suffice it to say that Reich is trying in vain to minimize the scientific fact of racial differences that show up crucially in intelligence and rates of violent crime.

The lesson here is that it’s all right to show that race isn’t a social construct as long as you proclaim that it is a social construct. This is known as talking out of both sides of one’s mouth — another manifestation of balderdash.

DIVERSITY IS GOOD, EXCEPT WHEN IT ISN’T

I now invoke Robert Putnam, a political scientist known mainly for his book Bowling Alone: The Collapse and Revival of American Community (2000), in which he

makes a distinction between two kinds of social capital: bonding capital and bridging capital. Bonding occurs when you are socializing with people who are like you: same age, same race, same religion, and so on. But in order to create peaceful societies in a diverse multi-ethnic country, one needs to have a second kind of social capital: bridging. Bridging is what you do when you make friends with people who are not like you, like supporters of another football team. Putnam argues that those two kinds of social capital, bonding and bridging, do strengthen each other. Consequently, with the decline of the bonding capital mentioned above inevitably comes the decline of the bridging capital leading to greater ethnic tensions.

In later work on diversity and trust within communities, Putnam concludes that

other things being equal, more diversity in a community is associated with less trust both between and within ethnic groups….

Even when controlling for income inequality and crime rates, two factors which conflict theory states should be the prime causal factors in declining inter-ethnic group trust, more diversity is still associated with less communal trust.

Lowered trust in areas with high diversity is also associated with:

  • Lower confidence in local government, local leaders and the local news media.
  • Lower political efficacy – that is, confidence in one’s own influence.
  • Lower frequency of registering to vote, but more interest and knowledge about politics and more participation in protest marches and social reform groups.
  • Higher political advocacy, but lower expectations that it will bring about a desirable result.
  • Less expectation that others will cooperate to solve dilemmas of collective action (e.g., voluntary conservation to ease a water or energy shortage).
  • Less likelihood of working on a community project.
  • Less likelihood of giving to charity or volunteering.
  • Fewer close friends and confidants.
  • Less happiness and lower perceived quality of life.
  • More time spent watching television and more agreement that “television is my most important form of entertainment”.

It’s not as if Putnam is a social conservative who is eager to impart such news. To the contrary, as Michael Jonas writes in “The Downside of Diversity“, Putnam’s

findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis … , he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

“Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk,” wrote conservative commentator Ilana Mercer….

After releasing the initial results in 2001, Putnam says he spent time “kicking the tires really hard” to be sure the study had it right. Putnam realized, for instance, that more diverse communities tended to be larger, have greater income ranges, higher crime rates, and more mobility among their residents — all factors that could depress social capital independent of any impact ethnic diversity might have.

“People would say, ‘I bet you forgot about X,’” Putnam says of the string of suggestions from colleagues. “There were 20 or 30 X’s.”

But even after statistically taking them all into account, the connection remained strong: Higher diversity meant lower social capital. In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.”

“People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes….

In a recent study, [Harvard economist Edward] Glaeser and colleague Alberto Alesina demonstrated that roughly half the difference in social welfare spending between the US and Europe — Europe spends far more — can be attributed to the greater ethnic diversity of the US population. Glaeser says lower national social welfare spending in the US is a “macro” version of the decreased civic engagement Putnam found in more diverse communities within the country.

Economists Matthew Kahn of UCLA and Dora Costa of MIT reviewed 15 recent studies in a 2003 paper, all of which linked diversity with lower levels of social capital. Greater ethnic diversity was linked, for example, to lower school funding, census response rates, and trust in others. Kahn and Costa’s own research documented higher desertion rates in the Civil War among Union Army soldiers serving in companies whose soldiers varied more by age, occupation, and birthplace.

Birds of different feathers may sometimes flock together, but they are also less likely to look out for one another. “Everyone is a little self-conscious that this is not politically correct stuff,” says Kahn….

In his paper, Putnam cites the work done by Page and others, and uses it to help frame his conclusion that increasing diversity in America is not only inevitable, but ultimately valuable and enriching. As for smoothing over the divisions that hinder civic engagement, Putnam argues that Americans can help that process along through targeted efforts. He suggests expanding support for English-language instruction and investing in community centers and other places that allow for “meaningful interaction across ethnic lines.”

Some critics have found his prescriptions underwhelming. And in offering ideas for mitigating his findings, Putnam has drawn scorn for stepping out of the role of dispassionate researcher. “You’re just supposed to tell your peers what you found,” says John Leo, senior fellow at the Manhattan Institute, a conservative think tank. [Michael Jonas, “The downside of diversity,” The Boston Globe (boston.com), August 5, 2007]

What is it about academics like Reich and Putnam that they can’t bear to face the very facts they have uncovered? The magic word is “academics”. They are denizens of a milieu in which the facts of life about race, guns, sex, and many other things are routinely suppressed in favor of “hope and change”, and the facts be damned.

ONE MORE BIT OF RACE-RELATED BALDERDASH

I was unaware of the Implicit Association Test (IAT) until a few years ago, when I took a test at YourMorals.Org that purported to measure my implicit racial preferences. The IAT has since been exposed as junk. As John J. Ray puts it:

Psychologists are well aware that people often do not say what they really think.  It is therefore something of a holy grail among them to find ways that WILL detect what people really think. A very popular example of that is the Implicit Associations test (IAT).  It supposedly measures racist thoughts whether you are aware of them or not.  It sometimes shows people who think they are anti-racist to be in fact secretly racist.

I dismissed it as a heap of junk long ago (here and here) but it has remained very popular and is widely accepted as revealing truth.  I am therefore pleased that a very long and thorough article has just appeared which comes to the same conclusion that I did.

The article in question (which has the same title as Ray’s post) is by Jesse Singal. It appeared at Science of Us on January 11, 2017. Here are some excerpts:

Perhaps no new concept from the world of academic psychology has taken hold of the public imagination more quickly and profoundly in the 21st century than implicit bias — that is, forms of bias which operate beyond the conscious awareness of individuals. That’s in large part due to the blockbuster success of the so-called implicit association test, which purports to offer a quick, easy way to measure how implicitly biased individual people are….

Since the IAT was first introduced almost 20 years ago, its architects, as well as the countless researchers and commentators who have enthusiastically embraced it, have offered it as a way to reveal to test-takers what amounts to a deep, dark secret about who they are: They may not feel racist, but in fact, the test shows that in a variety of intergroup settings, they will act racist….

[The] co-creators are Mahzarin Banaji, currently the chair of Harvard University’s psychology department, and Anthony Greenwald, a highly regarded social psychology researcher at the University of Washington. The duo introduced the test to the world at a 1998 press conference in Seattle — the accompanying press release noted that they had collected data suggesting that 90–95 percent of Americans harbored the “roots of unconscious prejudice.” The public immediately took notice: Since then, the IAT has been mostly treated as a revolutionary, revelatory piece of technology, garnering overwhelmingly positive media coverage….

Maybe the biggest driver of the IAT’s popularity and visibility, though, is the fact that anyone can take the test on the Project Implicit website, which launched shortly after the test was unveiled and which is hosted by Harvard University. The test’s architects reported that, by October 2015, more than 17 million individual test sessions had been completed on the website. As will become clear, learning one’s IAT results is, for many people, a very big deal that changes how they view themselves and their place in the world.

Given all this excitement, it might feel safe to assume that the IAT really does measure people’s propensity to commit real-world acts of implicit bias against marginalized groups, and that it does so in a dependable, clearly understood way….

Unfortunately, none of that is true. A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior — even the test’s creators have now admitted as such.

How does IAT work? Singal summarizes:

You sit down at a computer where you are shown a series of images and/or words. First, you’re instructed to hit ‘i’ when you see a “good” term like pleasant, or to hit ‘e’ when you see a “bad” one like tragedy. Then, hit ‘i’ when you see a black face, and hit ‘e’ when you see a white one. Easy enough, but soon things get slightly more complex: Hit ‘i’ when you see a good word or an image of a black person, and ‘e’ when you see a bad word or an image of a white person. Then the categories flip to black/bad and white/good. As you peck away at the keyboard, the computer measures your reaction times, which it plugs into an algorithm. That algorithm, in turn, generates your score.

If you were quicker to associate good words with white faces than good words with black faces, and/or slower to associate bad words with white faces than bad words with black ones, then the test will report that you have a slight, moderate, or strong “preference for white faces over black faces,” or some similar language. You might also find you have an anti-white bias, though that is significantly less common. By the normal scoring conventions of the test, positive scores indicate bias against the out-group, while negative ones indicate bias against the in-group.

The rough idea is that, as humans, we have an easier time connecting concepts that are already tightly linked in our brains, and a tougher time connecting concepts that aren’t. The longer it takes to connect “black” and “good” relative to “white” and “good,” the thinking goes, the more your unconscious biases favor white people over black people.
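The scoring scheme Singal describes can be reduced to a few lines. This is a simplified illustration with hypothetical reaction times; the actual test uses the more elaborate “D-score” algorithm of Greenwald and colleagues, with error penalties and trial filtering that are omitted here:

```python
# Simplified sketch of IAT-style scoring (hypothetical latencies in ms).
# The real test uses a more elaborate "D-score" procedure; this keeps only
# the core idea: a score proportional to the latency gap between the two
# pairing conditions, scaled by the variability of all responses.

from statistics import mean, stdev

def iat_score(compatible_rts, incompatible_rts):
    """compatible_rts: latencies when the pairings match the hypothesized
    association (e.g. white+good / black+bad); incompatible_rts: the
    reversed pairings. A positive score means slower responses on the
    incompatible block, which the test reads as an implicit preference."""
    all_rts = compatible_rts + incompatible_rts
    return (mean(incompatible_rts) - mean(compatible_rts)) / stdev(all_rts)

# Hypothetical subject: somewhat slower on the incompatible block.
compatible = [620, 650, 600, 640, 610]
incompatible = [700, 720, 680, 710, 690]
score = iat_score(compatible, incompatible)
print(round(score, 2))  # positive, so a "preference" would be reported
```

Note what the sketch makes plain: the score is built entirely from reaction-time differences of a few dozen milliseconds, which is why raw processing speed (as the author suggests later) can swamp whatever the test claims to measure.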

Singal continues (at great length) to pile up the mountain of evidence against IAT, and to caution against reading anything into the results it yields.

Having become aware of the debunking of IAT, I went to the website of Project Implicit. When I reached this page, I was surprised to learn that I could not only find out whether I’m a closet racist but also whether I prefer dark or light skin tones, Asians or non-Asians, Trump or a previous president, and several other things or their opposites. I chose to discover my true feelings about Trump vs. a previous president, and was faced with a choice between Trump and Clinton.

What was the result of my several minutes of tapping “e” and “i” on the keyboard of my PC? This:

Your data suggest a moderate automatic preference for Bill Clinton over Donald Trump.

Balderdash! Though Trump is obviously not of better character than Clinton, he’s obviously not of worse character. And insofar as policy goes, the difference between Trump and Clinton is somewhat like the difference between a non-silent Calvin Coolidge and an FDR without the patriotism. (With apologies to the memory of Coolidge, my favorite president.)

What did I learn from the IAT? I must have very good reflexes. A person who processes information rapidly and then almost instantly translates it into a physical response should be able to “beat” the IAT. And that’s probably what I did in the Trump vs. Clinton test.

Perhaps the IAT for racism could be used to screen candidates for fighter-pilot training. Only “non-racists” would be admitted. Anyone who isn’t quick enough to avoid the “racist” label isn’t quick enough to win a dogfight.

OTHER “LIBERAL” DELUSIONS

There are plenty of them under the heading of balderdash. It’s also known as magical thinking, in which “ought” becomes “is” and the forces of nature and human nature can be held in abeyance by edict. The following examples revisit some ground already covered here:

  • Men are unnecessary.
  • Women can do everything that men can do, but it doesn’t work the other way … just because.
  • Mothers can work outside the home without damage to their children.
  • Race is a “social construct”; there is no such thing as intelligence; women and men are mentally and physically equal in all respects; and the under-representation of women and blacks in certain fields is therefore due to rank discrimination (but it’s all right if blacks dominate certain sports and women now far outnumber men on college campuses).
  • A minimum wage can be imposed without an increase in unemployment, a “fact” which can be “proven” only by concocting special cases of limited applicability.
  • Taxes can be raised without discouraging investment and therefore reducing the rate of economic growth.
  • Regulation doesn’t reduce the rate of economic growth or foster “crony capitalism”. There can be “free lunches” all around.
  • Health insurance premiums will go down while the number of mandates is increased.
  • The economy can be stimulated through the action of the Keynesian multiplier, which is nothing but phony math.
  • “Green” programs create jobs (but only because they are inefficient).
  • Every “right” under the sun can be granted without cost (e.g., affirmative action racial-hiring quotas, which penalize blameless whites; the Social Security Ponzi scheme, which burdens today’s workers and cuts into growth-inducing saving).

There’s much more in a different vein here.

BALDERDASH AS EUPHEMISTIC THINKING

Balderdash, as I have sampled it here, isn’t just nonsense — it’s nonsense in the service of an agenda. The agenda is too often the expansion of government power. Those who favor the expansion of government power don’t like to think that it hurts people. (“We’re from the government and we’re here to help.”) This is a refusal to face facts, which is amply if not exhaustively illustrated in the preceding entries.

But there’s a lot more where that comes from; for example:

  • Crippled became handicapped, which became disabled and then differently abled or something-challenged.
  • Stupid became learning disabled, which became special needs (a euphemistic category that houses more than the stupid).
  • Poor became underprivileged, which became economically disadvantaged, which became (though isn’t overtly called) entitled (as in entitled to other people’s money).
  • Colored persons became Negroes, who became blacks, then African-Americans, and now (often) persons of color.

Why do lefties — lovers of big government — persist in varnishing the truth? They are — they insist — strong supporters of science, which is (ideally) the pursuit of truth. Well, that’s because they aren’t really supporters of science (witness their devotion to the “unsettled” science of AGW, among many fabrications). Nor do they really want the truth. They simply want to portray the world as they would like it to be, or to lie about it so that they can strive to reshape it to their liking.

BALDERDASH IN THE SERVICE OF SLAVERY, MODERN STYLE

I will end with this one, which is less conclusive than what has gone before, but which further illustrates the left’s penchant for evading reality in the service of growing government.

Thomas Nagel writes:

Some would describe taxation as a form of theft and conscription as a form of slavery — in fact some would prefer to describe taxation as slavery too, or at least as forced labor. Much might be said against these descriptions, but that is beside the point. For within proper limits, such practices when engaged in by governments are acceptable, whatever they are called. If someone with an income of $2000 a year trains a gun on someone with an income of $100000 a year and makes him hand over his wallet, that is robbery. If the federal government withholds a portion of the second person’s salary (enforcing the laws against tax evasion with threats of imprisonment under armed guard) and gives some of it to the first person in the form of welfare payments, food stamps, or free health care, that is taxation. In the first case it is (in my opinion) an impermissible use of coercive means to achieve a worthwhile end. In the second case the means are legitimate, because they are impersonally imposed by an institution designed to promote certain results. Such general methods of distribution are preferable to theft as a form of private initiative and also to individual charity. This is true not only for reasons of fairness and efficiency, but also because both theft and charity are disturbances of the relations (or lack of them) between individuals and involve their individual wills in a way that an automatic, officially imposed system of taxation does not. [Mortal Questions, “Ruthlessness in Public Life,” pp. 87-88]

How many logical and epistemic errors can a supposedly brilliant philosopher make in one (long) paragraph? Too many:

  • “For within proper limits” means that Nagel is about to beg the question by shaping an answer that fits his idea of proper limits.
  • Nagel then asserts that the use by government of coercive means to achieve the same end as robbery is “legitimate, because [those means] are impersonally imposed by an institution designed to promote certain results.” Balderdash! Nagel’s vision of government as some kind of omniscient, benevolent arbiter is completely at odds with reality.  The “certain results” (redistribution of income) are achieved by functionaries, armed or backed with the force of arms, who themselves share in the spoils of coercive redistribution. Those functionaries act under the authority of bare majorities of elected representatives, who are chosen by bare majorities of voters. And those bare majorities are themselves coalitions of interested parties — hopeful beneficiaries of redistributionist policies, government employees, government contractors, and arrogant statists — who believe, without justification, that forced redistribution is a proper function of government.
  • On the last point, Nagel ignores the sordid history of the unconstitutional expansion of the powers of government. Without justification, he aligns himself with proponents of the “living Constitution.”
  • Nagel’s moral obtuseness is fully revealed when he equates coercive redistribution with “fairness and efficiency,” as if property rights and liberty were of no account.
  • The idea that coercive redistribution fosters efficiency is laughable. It does quite the opposite because it removes resources from productive uses — including job-creating investments. The poor are harmed by coercive redistribution because it drastically curtails economic growth, from which they would benefit as job-holders and (where necessary) recipients of private charity (the resources for which would be vastly greater in the absence of coercive redistribution).
  • Finally (though not exhaustively), Nagel’s characterization of private charity as a “disturbance of the relations … between individuals” is so wrong-headed that it leaves me dumbstruck. Private charity arises from real relations among individuals — from a sense of community and feelings of empathy. It is the “automatic, officially imposed system of taxation” that distorts and thwarts (“disturbs”) the social fabric.

In any event, taxation for the purpose of redistribution is slavery: the subjection of one person to others, namely, agents of the government and the recipients of the taxes extracted from the person who pays them under threat of punishment. It’s slavery without whips and chains, but slavery nevertheless.

Rights, Liberty, the Golden Rule, and Leviathan

Rights arise from voluntary and enduring social relationships. In that respect, they are natural because they represent the accommodations that a people make with each other in order to coexist peacefully and to their mutual benefit. (Natural rights, as I define them, are not the same thing as the kind of “natural rights” that many philosophers, political theorists, mystics, and opportunistic politicians claim to find hovering in human beings like Platonic essences. See this, this, this, and this, for example.)

Natural rights, in sum, are the interpersonal claims that a people agree upon and (mainly) observe in their daily interactions. The claims can be negative (do not kill, except in self-defense) or positive (children must be clothed, fed, and taught about rights). For reasons discussed later, such claims are valid and generally honored even if there isn’t a superior power (a chieftain, monarch, or state apparatus) to enforce them.

Liberty is the condition in which agreed rights are generally observed, and enforced when they are violated. Liberty, in other words, is the condition of peaceful, willing coexistence and its concomitant: beneficially cooperative behavior. Peaceful, willing coexistence does not imply “an absence of constraints, impediments, or interference”, which is a standard definition of liberty. Rather, it implies that there is necessarily a degree of compromise (voluntary constraint) for the sake of beneficially cooperative behavior. Even happy marriages are replete with voluntary constraints on behavior, constraints that enable the partners to enjoy the blessings of union.

That’s all there is to it. Liberty isn’t a nirvana-like state of euphoria; it’s just what everyday life is like when people are able to coexist by their own lights, perhaps under the aegis of a superior power which does nothing but ensure that they are able to do so.

The persistence of natural rights and liberty among a people is fostered primarily by mutual trust, respect, and forbearance. Punishment of violations of rights (and therefore of liberty) helps, too, as long as the punishment is generally agreed upon and applied consistently.

Natural rights, as discussed thus far, are distinct from “rights” (sometimes “natural rights”) that people demand of a superior power. (See, for example, the UN Declaration of Human Rights, which is a wish-list of things that people are “entitled” to.) Those are really privileges. Government can (and sometimes does) recognize and protect truly natural rights, but it doesn’t manufacture them. The Bill of Rights, for example, consists of a hodge-podge of actual rights (e.g., the right to bear arms), and privileges (e.g., protection from self-incrimination). Some of the latter are special dispensations made necessary by the existence of government itself, that is, promises made by the government to protect the people from its superior power.

As mentioned in passing earlier, rights are usually divided into two categories: negative and positive. Negative rights are natural rights that can be exercised without requiring anything of others but reciprocal forbearance[1]. Wikipedia puts it this way:

Adrian has a negative right to x against Clay if and only if Clay is prohibited from acting upon Adrian in some way regarding x…. A case in point, if Adrian has a negative right to life against Clay, then Clay is required to refrain from killing Adrian….

To spin out the example, there is a negative right not to be harmed (killed in this case) as long as Clay is forbidden to kill Adrian, Adrian is forbidden to kill Clay, both are forbidden to kill others, and others are forbidden to kill anyone. This is a widely understood and accepted negative right. But it is not an unconditional right. There are also widely understood and accepted exceptions to it, such as killing in self-defense.

In any event, the textbook explanation of negative rights, such as the one given by Wikipedia, is appealing. But it is simplistic, like John Stuart Mill’s harm principle.

“Negative rights” and “harm”, by themselves, are mere abstractions. It seems obvious that a person shouldn’t be harmed as long as he is doing no harm to others, which is the essence of Wikipedia‘s explanation. But “harm” is the operative word. Harm isn’t an abstraction; it’s a real thing — many real things — with concrete meanings. And those concrete meanings arise from social interactions and the norms born of them.

For example, libertarians consider it a negative right to sell one’s home to another person without interference by one’s neighbors (or the state acting on their behalf). One’s neighbors must forbear intervention, just as the seller must forbear intervention against the sales of the neighbors’ homes. But intervention may be necessary to prevent harm.

The part that libertarians usually get wrong is forbearance. Libertarians assume forbearance. They assume forbearance because they assume away — or simply ignore — the possibility that a voluntary transaction between two parties may result in harm to third parties.

But what if the buyer is an absentee owner who rents rooms to all and sundry (resulting in parking problems, an eyesore property, etc.)? Libertarians reject zoning as an infringement on the negative right of property ownership. So what are put-upon neighbors supposed to do about the absentee landlord who rents rooms to all and sundry? Well, the neighbors can always complain to the city government if things get out of hand, can’t they? Yes, but in the meantime harm will have been done, and the police may not be able to put a stop to it unless the harm actually violates a statute or ordinance that the police and courts are willing and able to enforce without being attacked as racist pigs, or some such thing.

Does the libertarian conception of negative rights have room in it for homeowners’ associations that actually allow neighborhoods to define harm, as it applies to their particular circumstances, and act to prevent it? In my experience, the libertarian conception of negative property rights — thou shalt not interfere in the sale of a house — has become enshrined in statutes and ordinances that de-fang homeowners’ associations, making them powerless to prevent harm by enforcing restrictive covenants (e.g., against renting rooms) that libertarians decry as infringements of negative rights.

The only negative rights worthy of the name are specific rights that are recognized within a voluntary and enduring association of persons. Violations of those rights undermine the fabric of mutual trust and mutual forbearance that enable a people to coexist in beneficial, voluntary cooperation. That — not some imaginary nirvana — is liberty.

By the same token, a voluntary and enduring association of persons can recognize positive rights. That is to say, positive rights — those broadly accepted as part and parcel of peaceful, willing coexistence and its concomitant: beneficially cooperative behavior — are just as much an aspect of liberty as are negative rights. (Doctrinaire libertarians, who aren’t really libertarians, mistakenly decry all positive rights as antithetical to liberty.)

Returning to the Wikipedia article quoted above, and the example of Adrian and Clay,

Adrian has a positive right to x against Clay if and only if Clay is obliged to act upon Adrian in some way regarding x…. [I]f Adrian has a positive right to life against Clay, then Clay is required to act as necessary to preserve the life of Adrian.

Negative and positive rights are compatible with each other in the context of the Golden Rule, or ethic of reciprocity: One should treat others as one would expect others to treat oneself. This is a truly natural law, for reasons I will come to.

The Golden Rule can be expanded into two, complementary sub-rules:

  • Do no harm to others, lest they do harm to you.
  • Be kind and charitable to others, and they will be kind and charitable to you.

The first sub-rule fosters negative rights. The second sub-rule fosters positive rights. But, as discussed earlier, the rights in question are specific — not abstract injunctions — because they are understood and recognized in the context of voluntary and enduring social relationships.

I call the Golden Rule a natural law because it’s neither a logical construct (e.g., the “given-if-then” formulation discussed here) nor a state-imposed one. Its long history and widespread observance (if only vestigial) suggest that it embodies an understanding that arises from the similar experiences of human beings across time and place. The resulting behavioral convention, the ethic of reciprocity, arises from observations about the effects of one’s behavior on that of others and mutual agreement (tacit or otherwise) to reciprocate preferred behavior, in the service of self-interest and empathy.

That is to say, the convention is a consequence of the observed and anticipated benefits of adhering to it. Those benefits accrue not only to the person who complies with the Golden Rule in a particular situation (the actor), but also to the person (or persons) who benefit from compliance (the beneficiary). The consequences of compliance don’t usually redound immediately to the actor, but they redound indirectly over the long-term because the actor (and many more like him) do their part to preserve the convention. It follows that the immediate impetus for observance of the convention is a mixture of two considerations: (a) an understanding of the importance of preserving the convention and (b) empathy on the part of the actor toward the beneficiary.

The Golden Rule will be widely observed within a group only if the members of the group (a) generally agree about the definition of harm, (b) value kindness and charity (in the main), and (c), perhaps most importantly, see that their acts have beneficial consequences. If those conditions are not met, the Golden Rule descends from convention to slogan.

Is the Golden Rule susceptible of varying interpretations across groups, and is it therefore a vehicle for moral relativism? Yes, with qualifications. It’s true that groups vary in their conceptions of permissible behavior. For example, the idea of allowing, encouraging, or aiding the death of old persons is not everywhere condemned. (Many — with whom I wouldn’t choose to coexist voluntarily — embrace it as a concomitant of a government-run or government-regulated health-care “system” that treats the delivery of medical services as matter of rationing.) Infanticide has a long history in many cultures; modern, “enlightened” cultures have simply replaced it with abortion. (More behavior that is beyond the pale of my preferred society.) Slavery is still an acceptable practice in some places, though those enslaved (as in the past) usually are outsiders. Homosexuality has a long history of condemnation, and occasional acceptance. (To be pro-homosexual nowadays — and especially to favor homosexual “marriage” — has joined the litany of “causes” that connote membership in the tribe of “enlightened” “progressives” [a.k.a., “liberals” and leftists], along with being for abortion [i.e., pre-natal infanticide] and against the consumption of fossil fuels — except for one’s McMansion and SUV, of course.)

The foregoing recitation suggests a mixture of reasons for favoring or disfavoring various behaviors, that is, regarding them as beneficial or harmful. Those reasons range from utilitarianism (calculated weighing of costs and benefits) to status-signaling. In between, there are religious and consequentialist reasons for favoring or disfavoring various behaviors. Consequentialist reasoning goes like this: Behavior X can be indulged responsibly and without harm to others, but there is a strong risk that it will not be indulged responsibly, or that it will lead to behavior Y, which has repercussions for others. Therefore, it’s better to put X off-limits, or to severely restrict and monitor it.

Consequentialist reasoning applies to euthanasia (it’s easy to slide from voluntary to involuntary acts, especially when the state controls the delivery of medical care); infanticide and abortion (forms of involuntary euthanasia and signs of disdain for life); homosexuality (a depraved, risky practice — especially among males — that can ensnare impressionable young persons who see it as an “easy” way to satisfy sexual urges); alcohol and drugs (addiction carries a high cost, for the addict, the addict’s family, and sometimes for innocent bystanders). In the absence of governmental edicts to the contrary, long-standing attitudes toward such behaviors would prevail in most places. (Socially and geographically isolated enclaves are welcome to kill themselves off and purify the gene pool.)

The exceptions discussed above to the contrary notwithstanding, there’s a mainstream interpretation of the Golden Rule — one that still holds in many places — which rules out certain kinds of behavior, except in extreme situations, and permits certain other kinds of behavior. There is, in other words, a “core” Golden Rule that comes down to this:

  • Killing is wrong, except in self-defense. (Capital punishment is just that: punishment. It’s also a deterrent to murder. It isn’t “murder,” muddle-headed defenders of baby-murder to the contrary notwithstanding.)
  • Various kinds of unauthorized “takings” are wrong, including theft (outright and through deception). (This explains popular resistance to government “takings”, especially when they are done on behalf of private parties. The view that it’s all right to borrow money from a bank and not repay it arises from the mistaken beliefs that (a) it’s not tantamount to theft and (b) it harms no one because banks can “afford it”.)
  • Libel and slander are wrong because they are “takings” by word instead of deed.
  • It is wrong to turn spouse against spouse, child against parent, or friend against friend. (And yet, such things are commonly portrayed in books, films, and plays as if they are normal occurrences, often desirable ones. And it seems to me that reality increasingly mimics “art”.)
  • It is right to be pleasant and kind to others, even under provocation, because “a mild answer breaks wrath: but a harsh word stirs up fury” (Proverbs 15:1).
  • Charity is a virtue, but it should begin at home, where the need is most certain and the good deed is most likely to have its intended effect. (Leftists turn a virtue into an imposition when they insist that “charity” — as in income redistribution — is a proper job of government.)

None of these observations would be surprising to a person raised in the Judeo-Christian tradition, or even in the less vengeful branches of Islam. The observations would be especially unsurprising to an American who was raised in a rural, small-town, or small-city setting, well removed from a major metropolis, or who was raised in an ethnic enclave in a major metropolis. For it is such persons and, to some extent, their offspring who are the principal heirs and keepers of the Golden Rule in America.

An ardent individualist — particularly an anarcho-capitalist — might insist that social comity can be based on the negative sub-rule, which is represented by the first five items in the “core” list. I doubt it. There’s but a short psychological distance from mean-spiritedness — failing to be kind and charitable — to sociopathy, a preference for harmful acts. Ardent individualists will disagree with me because they view kindness and charity as their business, and no one else’s. They’re right about that, but kindness and charity are nevertheless indispensable to the development of mutual trust among people who are in an enduring social relationship. Without mutual trust, mutual restraint becomes problematic and co-existence becomes a matter of “getting the other guy before he gets you” — a convention that I hereby dub the Radioactive Rule.

Nevertheless, the positive sub-rule, which is represented by the final two items in the “core” list, can be optional for the occasional maverick. An extreme individualist (or introvert or grouch) could be a member in good standing of a society that lives by the Golden Rule. He would be a punctilious practitioner of the negative rule, and would not care that his unwillingness to offer kindness and charity resulted in coldness toward him. Coldness is all he would receive (and want) because, as a punctilious practitioner of the negative rule, his actions wouldn’t necessarily invite harm.

But too many extreme individualists would threaten the delicate balance of self-interested and voluntarily beneficial behavior that’s implied in the Golden Rule. Even if lives and livelihoods did not depend on acts of kindness and charity — and they probably would — mistrust would set in. And from there, it would be a short distance to the Radioactive Rule.

Of course, the delicate balance would be upset if the Golden Rule were violated with impunity. For that reason, it must be backed by sanctions. Non-physical sanctions would range from reprimands to ostracism. For violations of the negative sub-rule, imprisonment and corporal punishment would not be out of the question.

Now comes a dose of reality. Self-governance is possible only for a group of about 25 to 150 persons: the size of a hunter-gatherer band or Hutterite colony. It seems that self-governance breaks down when a group is larger than 150 persons. Why should that happen? Because mutual trust, mutual respect, and mutual forbearance — the things implied in the Golden Rule — depend very much on personal connections. A person who is loath to say a harsh word to an acquaintance, friend, or family member — even when provoked — often waxes abusive toward strangers, especially in this era of e-mail and comment threads, where face-to-face encounters aren’t involved.

More generally, it’s a human tendency to treat family members, friends, and acquaintances differently than strangers; the former are accorded more trust, more cooperation, and more kindness than the latter. Why? Because there’s usually a difference between the consequences of behavior that’s directed toward strangers and the consequences of behavior that’s directed toward persons one knows, lives among, and depends upon for restraint, cooperation, and help. The allure of doing harm without penalty (“getting away with something”) or receiving without giving (“getting something for nothing”) becomes harder to resist as one’s social distance from others increases.

The preference of like for like is derided by libertarians and leftists as tribalism, which is like the pot calling the kettle black. There’s no one who is more tribal than a leftist, who weighs every word spoken by another person to ensure that person’s alignment with the left’s current dogmas. (Libertarians have it easier, inasmuch as most of them are loners by disposition, and thrive on contrariness.) But the preference of like for like is quite rational: Cooperation and help include mutual defense (and concerted attack, in the case of leftists).

When self-governance breaks down, it becomes necessary to spin off a new group or to establish a central power (a state) to establish and enforce rules of behavior (negative and positive). The problem, of course, is that those vested with the power of the state quickly learn to use it to advance their own preferences and interests, and to perpetuate their power by granting favors to those who can keep them in office. It is a rare state that is created for the sole purpose of protecting its citizens from one another (as the referee of last resort) and from outsiders, and rarer still is the state that remains true to such purposes.

In sum, the Golden Rule — as a uniting way of life — is quite unlikely to survive the passage of a group from a self-governing community to a component of a state. Nor does the Golden Rule as a uniting way of life have much chance of revival or survival where the state already dominates. The Golden Rule may operate within non-kinship groups (e.g., parishes, clubs, urban enclaves) by regulating the interactions among the members of such groups. It may have a vestigial effect on face-to-face interactions between stranger and stranger, but that effect arises in part from the fear of giving offense that will be met with hostility or harm, not from a communal bond.

In any event, the dominance of the state distorts behavior. For example, the state may enable and encourage acts (e.g., abortion, homosexuality) that had been discouraged as harmful by group norms. And the state will diminish the ability of members of a group to bestow charity on one another through the loss of income to taxes and the displacement of private charity by state-run schemes that mimic charity (e.g., Social Security).

The all-powerful state destroys liberty, even while sometimes defending it. This is done not just by dictating how people must live their lives, which is bad enough. It is also done by eroding the social bonds that liberty is built upon — the bonds that secure peaceful, willing coexistence and its concomitant: beneficially cooperative behavior.
__________
[1] Here is a summary of negative rights, by Randy Barnett:

A libertarian … favors the rigorous protection of certain individual rights that define the space within which people are free to choose how to act. These fundamental rights consist of (1) the right of private property, which includes the property one has in one’s own person; (2) the right of freedom of contract by which rights are transferred by one person to another; (3) the right of first possession, by which property comes to be owned from an unowned state; (4) the right to defend oneself and others when fundamental rights are being threatened; and (5) the right to restitution or compensation from those who violate another’s fundamental rights. [“Is the Constitution Libertarian?”, Georgetown Public Law Research Paper No. 1432854 (posted at SSRN July 14, 2009), p. 3]

Borrowing from and elaborating on Barnett’s list, I come to the following set of negative rights:

  • freedom from force and fraud (including the right of self-defense against force)
  • property ownership (including the right of first possession)
  • freedom of contract (including contracting to employ/be employed)
  • freedom of association and movement
  • restitution or compensation for violations of the foregoing rights.

This set of negative rights would obtain in a state that devolves political decisions to the level of socially cohesive groups, while serving only as the defender of such rights (in the last resort) against domestic and foreign predators.

Suicide

Suicide has garnered a lot of attention in recent days. As noted in a study by the Centers for Disease Control and Prevention, the rate has been rising steadily since it bottomed out in 2000. I discussed suicide at some length in “Suicidal Despair and the ‘War on Whites’” (June 26, 2017). I have updated a few graphs and a bit of text to accommodate the latest figures. But the bottom line remains unchanged. What is it? The “war on whites” is a red herring. Go there and see for yourself.

Climate Scare Tactics: The Mythical Ever-Rising 30-Year Average

Our local weather Nazi, of whom I have written before, jumped on his high horse yesterday and lectured viewers about the 401 (?) consecutive months of above-average temperatures. He didn’t come out and say it — this time — but his thoughts and prayers are running in the direction of a climatic version of gun confiscation. Just take away those fossil fuels, etc., and Earth will return to its “correct” temperature. In his case, because anything above 75 in Austin is too warm for him, attaining the “correct” temperature would require a return to something like a real ice age.

In any event, the statistic that has the weather Nazi — and other climate hysterics — all a-twitter goes like this. The global temperature for every month since February 1985 has exceeded the rolling, 30-year average for that month. This statistic must be derived from surface thermometer readings, inasmuch as satellite readings didn’t begin until the late 1970s. We know all about those surface thermometer readings: spotty coverage, poor siting (often in locations surrounded by concrete and traffic), missing data, and — worst of all — frequent downward adjustments of historical numbers to make recent decades look hotter than they were. There’s no mention of the “pause” between the El Niños of the late 1990s and mid-2010s, of course.
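For concreteness, the arithmetic behind such a statistic is easy to reproduce. The sketch below uses made-up numbers and a hypothetical `monthly_anomalies` helper to compute each month’s anomaly against a rolling 30-year average of the same calendar month; whatever the count of “above-average” months comes out to, it is only as trustworthy as the underlying readings.

```python
def monthly_anomalies(temps, window_years=30):
    """For each month, anomaly = reading minus the mean of the same
    calendar month over the preceding `window_years` years.
    Months without a full baseline get None.

    `temps` is a flat list of monthly readings, January first.
    """
    anomalies = []
    for i, t in enumerate(temps):
        # Readings for the same calendar month in prior years only.
        prior = temps[i % 12:i:12][-window_years:]
        if len(prior) < window_years:
            anomalies.append(None)  # not enough history for a baseline
        else:
            anomalies.append(t - sum(prior) / window_years)
    return anomalies

# Made-up series: 30 flat years, then one year running 0.5 degrees warm.
temps = [10.0] * (12 * 30) + [10.5] * 12
anoms = monthly_anomalies(temps)
warm_months = sum(1 for a in anoms if a is not None and a > 0)
print(warm_months)  # the final 12 months all exceed their 30-year baselines
```

Note that the choice of baseline window, like the choice of adjustments to the raw readings, is a judgment call that shapes the headline number.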

Here’s something closer to the truth, based on satellite-recorded temperatures in the lower troposphere, from the Global Temperature Report by the Earth System Science Center of the University of Alabama-Huntsville:

Here are the yearly averages:

Three comments:

  • The anomalies are small.
  • There are many negative values before the onset of the major El Niño that began late in 2014 and lasted until mid-2016. (A negative value means that the reading for a month was below the 30-year average for that month.)
  • The effects of that El Niño are wearing off.

Note to local weather Nazi: Give it a rest.


Related posts:
AGW: The Death Knell
Not-So-Random Thoughts (XIV) (second item)
AGW in Austin?
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II)
Hurricane Hysteria
Much Ado about the Unknown and Unknowable
Hot Is Better Than Cold: A Small Case Study

Justice Thomas on “Masterpiece Cakeshop”

It is well known by now that cake maker Jack Phillips, proprietor of Masterpiece Cakeshop in the Denver area, prevailed in an opinion written by Justice Kennedy. At issue were the Colorado Civil Rights Commission’s actions in assessing Phillips’s reasons for declining to make a cake for a same-sex couple’s wedding celebration. The commission’s actions violated the free exercise clause of the First Amendment. Specifically, in Kennedy’s words:

The Commission gave “every appearance,” of adjudicating [Phillips’s] religious objection based on a negative normative “evaluation of the particular justification” for his objection and the religious grounds for it, but government has no role in expressing or even suggesting whether the religious ground for Phillips’ conscience-based objection is legitimate or illegitimate. The inference here is thus that Phillips’ religious objection was not considered with the neutrality required by the Free Exercise Clause. The State’s interest could have been weighed against Phillips’ sincere religious objections in a way consistent with the requisite religious neutrality that must be strictly observed. But the official expressions of hostility to religion in some of the commissioners’ comments were inconsistent with that requirement, and the Commission’s disparate consideration of Phillips’ case compared to the cases of the other bakers suggests the same.

This is a narrow ruling, as many commentators have observed, in that it does not address the fundamental issue of the right of Phillips (or anyone similarly situated) to refuse to express views contrary to his beliefs — religious or not.

Justice Thomas, in a concurring opinion (joined by Justice Gorsuch), gets it right:

The First Amendment, applicable to the States through the Fourteenth Amendment, prohibits state laws that abridge the “freedom of speech.” When interpreting this command, this Court has distinguished between regulations of speech and regulations of conduct. The latter generally do not abridge the freedom of speech, even if they impose “incidental burdens” on expression….

Although public-accommodations laws generally regulate conduct, particular applications of them can burden protected speech. When a public-accommodations law “ha[s] the effect of declaring . . . speech itself to be the public accommodation,” the First Amendment applies with full force…. When [a Massachusetts] law required the sponsor of a St. Patrick’s Day parade to include a parade unit of gay, lesbian, and bisexual Irish-Americans, the Court unanimously held that the law violated the sponsor’s right to free speech. Parades are “a form of expression,” this Court explained, and the application of the public-accommodations law “alter[ed] the expressive content” of the parade by forcing the sponsor to add a new unit. The addition of that unit compelled the organizer to “bear witness to the fact that some Irish are gay, lesbian, or bisexual”; “suggest . . . that people of their sexual orientation have as much claim to unqualified social acceptance as heterosexuals”; and imply that their participation “merits celebration.” While this Court acknowledged that the unit’s exclusion might have been “misguided, or even hurtful,” ibid., it rejected the notion that governments can mandate “thoughts and statements acceptable to some groups or, indeed, all people” as the “antithesis” of free speech….

The parade . . . was an example of what this Court has termed “expressive conduct.” This Court has long held that “the Constitution looks beyond written or spoken words as mediums of expression,” and that “[s]ymbolism is a primitive but effective way of communicating ideas.” Thus, a person’s “conduct may be ‘sufficiently imbued with elements of communication to fall within the scope of the First and Fourteenth Amendments.’” Applying this principle, the Court has recognized a wide array of conduct that can qualify as expressive, including nude dancing, burning the American flag, flying an upside-down American flag with a taped-on peace sign, wearing a military uniform, wearing a black armband, conducting a silent sit-in, refusing to salute the American flag, and flying a plain red flag….

Phillips’ creation of custom wedding cakes is expressive. The use of his artistic talents to create a well-recognized symbol that celebrates the beginning of a marriage clearly communicates a message — certainly more so than nude dancing….

States cannot punish protected speech because some group finds it offensive, hurtful, stigmatic, unreasonable, or undignified. “If there is a bedrock principle underlying the First Amendment, it is that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.”… If the only reason a public-accommodations law regulates speech is “to produce a society free of . . . biases” against the protected groups, that purpose is “decidedly fatal” to the law’s constitutionality, “for it amounts to nothing less than a proposal to limit speech in the service of orthodox expression.”…

[T]he fact that this Court has now decided Obergefell v. Hodges [does not] somehow diminish Phillips’ right to free speech. [As CJ Roberts wrote in his dissenting opinion in Obergefell,] “It is one thing . . . to conclude that the Constitution protects a right to same-sex marriage; it is something else to portray everyone who does not share [that view] as bigoted” and unentitled to express a different view. This Court is not an authority on matters of conscience, and its decisions can (and often should) be criticized. The First Amendment gives individuals the right to disagree about the correctness of Obergefell and the morality of same-sex marriage. [The majority opinion in] Obergefell itself emphasized that the traditional understanding of marriage “long has been held—and continues to be held—in good faith by reasonable and sincere people here and throughout the world.” If Phillips’ continued adherence to that understanding makes him a minority after Obergefell, that is all the more reason to insist that his speech be protected….

In Obergefell, I warned that the Court’s decision would “inevitabl[y] . . . come into conflict” with religious liberty, “as individuals . . . are confronted with demands to participate in and endorse civil marriages between same-sex couples.” This case proves that the conflict has already emerged. Because the Court’s decision vindicates Phillips’ right to free exercise, it seems that religious liberty has lived to fight another day. But, in future cases, the freedom of speech could be essential to preventing Obergefell from being used [in Justice Alito’s words] to “stamp out every vestige of dissent” and “vilify Americans who are unwilling to assent to the new orthodoxy.”

That should have been the majority opinion.


Related posts:
The Writing on the Wall
How to Protect Property Rights and Freedom of Association and Expression
The Beginning of the End of Liberty in America
Marriage: Privatize It and Revitalize It
Equal Protection in Principle and Practice
Freedom of Speech and the Long War for Constitutional Governance
Freedom of Speech: Getting It Right

Revisiting the Laffer Curve

Among the claims made in favor of the Tax Cuts and Jobs Act of 2017 was that the resulting tax cuts would pay for themselves. Thus the Laffer curve returned briefly to prominence, after having been deployed to support the Reagan and Bush tax cuts of 1981 and 2001.

The idea behind the Laffer curve is straightforward. Taxes inhibit economic activity, that is, the generation of output and income. Tax-rate reductions therefore encourage work, which yields higher incomes. Higher incomes mean that there is more saving from which to finance growth-producing capital investment. Lower tax rates also make investment more attractive by increasing the expected return on capital investments. Lower tax rates therefore stimulate economic output by encouraging work and investment (supply-side economics). Under the right conditions, lower tax rates may generate enough additional income to yield an increase in tax revenue.

I believe that there are conditions under which the Laffer curve works as advertised. But so what? The Laffer curve focuses attention on the wrong economic variable: tax revenue. The economic variables that really matter — or that should matter — are the real rate of growth and the income available to Americans after taxes. More (real) economic growth means higher (real) income, across the board. More government spending means lower (real) income; the Keynesian multiplier is a cruel myth.

A new Laffer curve is in order, one that focuses on the effects of taxation on economic growth, and thus on the aggregate output of products and services available to consumers.

Let us begin at the beginning, with this depiction of the Laffer curve (via Forbes):
[Figure: a depiction of the Laffer curve, showing a growth-maximizing tax rate below the revenue-maximizing rate]

This is an unusually sophisticated depiction of the curve, in that it shows a growth-maximizing tax rate which is lower than the revenue-maximizing rate. It also shows that the growth-maximizing rate is greater than zero, for a good reason.

With real taxes (i.e., government spending) at zero or close to it, the rule of law would break down and the economy would be a shambles. But government spending above that required to maintain the rule of law (i.e., adequate policing, administration of justice, and national defense) interferes with the efficient operation of markets, both directly (by pulling resources out of productive use) and indirectly (by burdensome regulation financed by taxes).

Thus a tax rate higher than that required to sustain the rule of law[1] leads to a reduction in the rate of (real) economic growth because of disincentives to work and invest. A reduction in the rate of growth pushes GDP below its potential level. Further, the effect is cumulative. A reduction in GDP means a reduction in investment, which means a reduction in future GDP, and on and on.

I will quantify the Laffer curve in two steps. First, I will estimate the tax rate at which revenue is maximized, taking the simplistic view that changes in the tax rate do not change the rate of economic growth. I will draw on Christina D. Romer and David H. Romer’s “The Macroeconomic Effects of Tax Changes: Estimates Based on a New Measure of Fiscal Shocks” (American Economic Review, June 2010, pp. 763-801).

The Romers estimate the effects of exogenous changes in taxes on GDP. (“Exogenous” here means tax changes made for reasons other than the current state of the economy: for example, tax cuts aimed at promoting long-run growth, as opposed to tax increases triggered by economic growth.) Here is their key finding:

Figure 4 summarizes the estimates by showing the implied effect of a tax increase of one percent of GDP on the path of real GDP (in logarithms), together with the one-standard-error bands. The effect is steadily down, first slowly and then more rapidly, finally leveling off after ten quarters. The estimated maximum impact is a fall in output of 3.0 percent. This estimate is overwhelmingly significant (t = –3.5). The two-standard-error confidence interval is (–4.7%,–1.3%). In short, tax increases appear to have a very large, sustained, and highly significant negative impact on output. Since most of our exogenous tax changes are in fact reductions, the more intuitive way to express this result is that tax cuts have very large and persistent positive output effects. [pp. 781-2]

The Romers assess the effects of tax cuts over a period of only 12 quarters (3 years). Some of the resulting growth in GDP during that period takes the form of greater spending on capital investments, the payoff from which usually takes more than 3 years to realize. So a tax cut of 1 percent of GDP yields more than a 3-percent rise in GDP over the longer run. But let’s keep it simple and use the relationship obtained by the Romers: a 1-percent tax cut (as a percentage of GDP) results in a 3-percent rise in GDP.

With that number in hand, and knowing the effective tax rate (33 percent of GDP in 2017[2]), it is then easy to compute the short-run effects of changes in the effective tax rate on GDP, after-tax GDP, and tax revenue:
[Figure 1: Short-run effects of the effective tax rate on GDP, after-tax GDP, and tax revenue]


Effective tax revenue represents the dollar amount extracted from the economy through government spending at the stated percentage of GDP. (Spending includes transfer payments, which take from those who produce and give to those who do not.) Effective tax rate represents the dollar amount extracted from the economy, divided by GDP at the given tax rate. (GDP is based on the Romers’ estimate of the marginal effect of a change in the tax rate.)

It is a coincidence that tax revenue is maximized at the current (2017) effective tax rate of 33 percent. The coincidence occurs because, according to the Romers, every $1 change in tax revenue (or government spending that draws resources from the real economy) yields a $3 change in GDP, at the margin. If the marginal rate of return were lower than 3:1, the revenue-maximizing rate would be greater than 33 percent. If the marginal rate of return were higher than 3:1, the revenue-maximizing rate would be less than 33 percent.
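For the curious, here is a minimal sketch of that arithmetic in Python. It takes the Romers’ estimate as a linear relationship (GDP changes by $3 for every $1 change in effective taxes relative to the 2017 baseline); the function names are mine, and the detailed table behind the computations isn’t reproduced here:

```python
GDP_0 = 1.0   # 2017 GDP, normalized to 1
R_0 = 0.33    # 2017 effective tax rate (share of GDP)
M = 3.0       # Romer-Romer result: $1 of taxes moves GDP by $3

def gdp(rate, m=M):
    # Short-run GDP relative to 2017, holding the growth rate fixed:
    # each percentage point of GDP cut from effective taxes adds m
    # percentage points to GDP (and vice versa for increases).
    return GDP_0 * (1.0 - m * (rate - R_0))

def revenue(rate, m=M):
    # Effective tax revenue: the stated share of the resulting GDP.
    return rate * gdp(rate, m)

# Setting dR/dr = 0 gives the revenue-maximizing rate, (1 + m*R_0) / (2m).
r_star = (1.0 + M * R_0) / (2.0 * M)   # about 0.33 when m = 3
```

With a 2:1 marginal return the maximizing rate rises to about 41.5 percent; with 4:1 it falls to about 29 percent. That is the point about marginal rates of return lower or higher than 3:1.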

In any event, the focus on tax revenue is entirely misplaced. What really matters, given that the prosperity of Americans is (or should be) of paramount interest, is GDP and especially after-tax GDP. Both would rise markedly in response to marginal cuts in real taxes (i.e., government spending). Democrats don’t want to hear that, of course, because they want government to decide how Americans spend the money that they earn. The idea that a far richer America would need far less government — subsidies, nanny-state regulations, etc. — frightens them.

It gets better (or worse, if you’re a big-government fan) when looking at the long-run effects of lower government spending on the rate of growth. I am speaking of the Rahn curve, which I estimate here. Holding other things the same, every percentage-point reduction in the real tax rate (government spending as a fraction of GDP) means an increase of 0.35 percentage point in the rate of real GDP growth. This is a long-run relationship because it takes time to convert some of the tax reduction to investment, and then to reap the additional output generated by the additional investment. It also takes time for workers to respond to the incentive of lower taxes by adding to their skills, working harder, and working in more productive jobs.
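Holding everything else constant, the relationship just stated can be written as a one-liner. This is a sketch of my Rahn-curve estimate, with both rates expressed in percent:

```python
def growth_rate(tax_rate_pct):
    # Long-run real GDP growth, in percent per year, implied by the
    # Rahn-curve estimate: 0.35 percentage point of additional growth
    # for each percentage-point cut in government spending as a share
    # of GDP, from a 2.8 percent base at the current 33 percent rate.
    return 2.8 + 0.35 * (33.0 - tax_rate_pct)
```

At the current 33 percent rate this returns the base rate of 2.8 percent; at lower rates it implies much faster growth, which is what drives the long-run results.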

This graph depicts the long-run effects of changes in the effective tax rate, taking into account changes in the real growth rate from a base of 2.8 percent (the year-over-year rate for the most recent fiscal quarter):
[Figure 2: Long-run effects of the effective tax rate on GDP, after-tax GDP, and tax revenue]

Note that the same real tax revenue would be realized at an effective tax rate of 13 percent of GDP. At that rate, GDP would rise to 2.5 times its 2017 value (instead of 1.6 times as shown in Figure 1), and after-tax GDP would rise to 3.3 times its 2017 value (instead of 2.1 times as shown in Figure 1).
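Those multiples are easy to check. Taking the long-run GDP multiple of 2.5 at a 13-percent rate as given (the horizon behind it is not restated here), the revenue and after-tax comparisons follow:

```python
R_2017, GDP_2017 = 0.33, 1.0   # 2017 effective tax rate and GDP (normalized)
R_ALT, GDP_ALT = 0.13, 2.5     # lower rate and the long-run GDP multiple it yields

revenue_2017 = R_2017 * GDP_2017          # 0.330
revenue_alt = R_ALT * GDP_ALT             # 0.325, essentially the same real revenue

after_tax_2017 = GDP_2017 * (1 - R_2017)  # 0.670
after_tax_alt = GDP_ALT * (1 - R_ALT)     # 2.175
ratio = after_tax_alt / after_tax_2017    # the after-tax multiple
```

The after-tax multiple computes to about 3.2, in line with the 3.3 cited above, given rounding of the 2.5 GDP multiple.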

The real Laffer curve — the one that people ought to pay attention to — is the Rahn curve. Holding everything else constant, here is the relationship between the real growth rate and the effective tax rate:

[Figure 3: The real rate of GDP growth vs. the effective tax rate]

At the current effective tax rate — 33 percent of GDP — the economy is limping along at about one-third of its potential growth. That is actually good news, inasmuch as the real growth rate dipped perilously close to 1 percent several times during the Obama administration, even after the official end of the Great Recession.

But it will take many years of spending cuts (relative to GDP, at least) and deregulation to push growth back to where it was in the decades immediately after World War II. Five percent isn’t out of the question.
__________

1. Total government spending, when transfer payments were negligible, amounted to between 5 and 10 percent of GDP between the Civil War and the Great Depression (Series F216-225, “Percent Distribution of National Income or Aggregate Payments, by Industry, in Current Prices: 1869-1968,” in Chapter F, National Income and Wealth, Historical Statistics of the United States, Colonial Times to 1970: Part 1). The cost of an adequate defense is a lot higher than it was in those relatively innocent times. Defense spending now accounts for about 3.5 percent of GDP. An increase to 5 percent wouldn’t render the U.S. invulnerable, but it would do a lot to deter potential adversaries. So at 10 percent of GDP, government spending on policing, the administration of justice, and defense — and nothing else — should be more than adequate to sustain the rule of law.

2. The effective tax rate on GDP in 2017 was 33.4 percent. That number represents total government expenditures (line 37 of BEA Table 3.1), divided by GDP (line 1 of BEA Table 1.15). The nominal tax rate on GDP was 30 percent; that is, government receipts (line 34 of BEA Table 3.1) accounted for 30 percent of GDP. (The BEA tables are accessible here.) I use the effective tax rate in this analysis because it truly represents the direct costs levied on the economy by government. (The indirect cost of regulatory activity adds about $2 trillion, bringing the total effective tax to 44 percent.)

Selected Writings about Intelligence

I have treated intelligence many times; for example:

Positive Rights and Cosmic Justice: Part IV
Race and Reason: The Achievement Gap — Causes and Implications
“Wading” into Race, Culture, and IQ
The Harmful Myth of Inherent Equality
Bigger, Stronger, and Faster — But Not Quicker?
The IQ of Nations
Some Notes about Psychology and Intelligence
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
More about Intelligence
Not-So-Random Thoughts (XXI), fifth item
Intelligence and Intuition
Intelligence, Personality, Politics, and Happiness
Intelligence As a Dirty Word

The material below consists entirely of quotations from cited sources. The quotations are consistent with and confirm several points made in the earlier posts:

  • Intelligence has a strong genetic component; it is heritable.
  • Race is a real manifestation of genetic differences among sub-groups of human beings. Those subgroups are not only racial but also ethnic in character.
  • Intelligence therefore varies by race and ethnicity, though it is influenced by environment.
  • Specifically, intelligence varies in the following way: There are highly intelligent persons of all races and ethnicities, but the proportion of highly intelligent persons is highest among Ashkenazi Jews, followed in order by East Asians, Northern Europeans, Hispanics (of European/Amerindian descent), and sub-Saharan Africans — and the American descendants of each group.
  • Males are disproportionately represented among highly intelligent persons, relative to females. Males have greater quantitative skills (including spatio-temporal aptitude) relative to females; whereas, females have greater verbal skills than males.
  • Intelligence is positively correlated with attractiveness, health, and longevity.
  • The Flynn effect (rising IQ) is a transitory effect brought about by environmental factors (e.g., better nutrition) and practice (e.g., the learning and application of technical skills). The Woodley effect is (probably) a long-term dysgenic effect among people whose survival and reproduction depend more on technology (devised by a relatively small portion of the populace) than on the ability to cope with environmental threats (i.e., intelligence).

I have moved the supporting material to a new page: “Intelligence”.

Freedom of Speech: Getting It Right

Congress shall make no law … abridging the freedom of speech….

Constitution of the United States, Amendment I

* * *

[T]he sole end for which mankind are warranted, individually or collectively in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others….

If all mankind minus one, were of one opinion, and only one person were of the contrary opinion, mankind would be no more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind.

John Stuart Mill, On Liberty (1859), Chapter I and Chapter II

* * *

[T]he character of every act depends upon the circumstances in which it is done. The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic. It does not even protect a man from an injunction against uttering words that may have all the effect of force. The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent. It is a question of proximity and degree.

Oliver Wendell Holmes Jr., Schenck v. United States (1919)

* * *

To justify suppression of free speech there must be reasonable ground to fear that serious evil will result if free speech is practiced. There must be reasonable ground to believe that the danger apprehended is imminent. There must be reasonable ground to believe that the evil to be prevented is a serious one.

Louis D. Brandeis, Whitney v. People of State of California (1927),
joined by Holmes

* * *

The First Amendment has been systematically misapplied for the past 100 years, thanks mainly to Holmes and Brandeis. Mill’s generalizations are fatuous nonsense. Here is a palate-cleanser:

[O]nly where advocacy of and organization for an overthrow of government is deemed to be a “clear and present danger” can such advocacy or organization be curbed. Which is somewhat like waiting to shoot at an enemy armed with a long-range rifle until you are able to see the whites of his eyes. Or, perhaps more aptly in the 21st century, waiting until a terrorist strikes before acting against him. Which is too late, of course, and impossible in the usual case of suicide-cum-terror.

And therein lies the dangerous folly of free-speech absolutism….

The First Amendment, in the hands of the Supreme Court, has become inimical to the civil and state institutions that enable liberty….

[Mill’s harm principle] is empty rhetoric….

Harm must be defined. And its definition must arise from voluntarily evolved social norms. Such norms evince and sustain the mutual trust, respect, forbearance, and voluntary aid that — taken together — foster willing, peaceful coexistence and beneficially cooperative behavior. And what is liberty but willing, peaceful coexistence and beneficially cooperative behavior?

Behavior is shaped by social norms. Those norms once were rooted in the Ten Commandments and time-tested codes of behavior. They weren’t nullified willy-nilly in accordance with the wishes of “activists,” as amplified through the megaphone of the mass media, and made law by the Supreme Court. What were those norms? Here are some of the most important ones:

Marriage is a union of one man and one woman. Nothing else is marriage, despite legislative, executive, and judicial decrees that substitute brute force for the wisdom of the ages.

Marriage comes before children. This is not because people are pure at heart, but because it is the responsible way to start life together and to ensure that one’s children enjoy a stable, nurturing home life.

Marriage is until “death do us part.” Divorce is a recourse of last resort, not an easy way out of marital and familial responsibilities or the first recourse when one spouse disappoints or angers the other.

Children are disciplined — sometimes spanked — when they do wrong. They aren’t given long, boring, incomprehensible lectures about why they’re doing wrong. Why not? Because they usually know they’re doing wrong and are just trying to see what they can get away with.

Drugs are taken for the treatment of actual illnesses, not for recreational purposes.

Income is earned, not “distributed.” Persons who earn a lot of money are to be respected. If you envy them to the point of wanting to take their money, you’re a pinko-commie-socialist (no joke).

People should work, save, and pay for their own housing. The prospect of owning one’s own home, by dint of one’s own labor, is an incentive to work hard and to advance oneself through the acquisition of marketable skills.

Welfare is a gift that one accepts as a last resort; it is not a right or an entitlement, and it is not bestowed on persons with convenient disabilities….

A mother who devotes time and effort to the making of a good home and the proper rearing of her children is a pillar of civilized society. Her life is to be celebrated, not condemned as “a waste.”

Homosexuality is a rare, aberrant kind of behavior. (And that was before AIDS proved it to be aberrant.) It’s certainly not a “lifestyle” to be celebrated and shoved down the throats of all who object to it.

Privacy is a constrained right. It doesn’t trump moral obligations, among which are the obligations to refrain from spreading a deadly disease and to preserve innocent life.

Addiction isn’t a disease; it’s a surmountable failing….

Justice is a dish best served hot, so that would-be criminals can connect the dots between crime and punishment. Swift and sure punishment is the best deterrent of crime. Capital punishment is the ultimate deterrent because an executed killer can’t kill again.

Peace is the result of preparedness for war; lack of preparedness invites war.

The list isn’t exhaustive, but it’s certainly representative. The themes are few and simple: respect others, respect tradition, restrict government to the defense of society from predators foreign and domestic. The result is liberty: A regime of mutually beneficial coexistence based on mutual trust and respect. That’s all it takes — not big government bent on dictating new norms just because it can.

But by pecking away at social norms that underlie mutual trust and respect, “liberals” have sundered the fabric of civilization….

The right “peaceably to assemble, and to petition the Government for a redress of grievances” has become the right to assemble a mob, disrupt the lives of others, destroy the property of others, injure and kill others, and (usually) suffer no consequences for doing so — if you are a leftist or a member of one of the groups patronized by the left, that is.

But that’s not the end of it. There’s a reverse slippery-slope effect when it comes to ideas opposed by the left. There are, for example, speech codes at government-run universities; hate-crime laws, which effectively punish speech that offends a patronized group; and penalties in some States for opposing same-sex “marriage”….

In sum, there is no longer such a thing as the kind of freedom of speech intended by the Framers of the Constitution. There is on the one hand license for “speech” that subverts and flouts civilizing social norms — the norms that underlie liberty. There is on the other hand a growing tendency to suppress speech that supports civilizing social norms.

“Freedom of Speech and the Long War for Constitutional Governance”,
Politics and Prosperity

* * *

See also:
Rethinking the Constitution: “Freedom of Speech, and of the Press”
Abortion and the Fourteenth Amendment
Privacy Is Not Sacred
The Contemporary Meaning of the Bill of Rights: First Amendment
How to Protect Property Rights and Freedom of Association and Expression
The Beginning of the End of Liberty in America
There’s More to It Than Religious Liberty
Equal Protection in Principle and Practice
Academic Freedom, Freedom of Speech, and the Demise of Civility
Preemptive (Cold) Civil War
The Framers, Mob Rule, and a Fatal Error
The Constitution: Myths and Realities

“This Has to Stop”

That’s a typical reaction to the latest (but, sadly, not last) mass shooting at a school (or anywhere else). What is the point of saying “this has to stop”? To express one’s outrage? It’s safe to assume that anyone who has an ounce of feeling for other people is outraged by mass shootings.

No, the point of it is virtue-signaling. But that’s all there is to it. Where’s the beef — the “solution” to the problem? Is it to tighten laws about access to guns, when the already tight laws aren’t being enforced well enough, and couldn’t be, given the imperfections in human institutions? Is it to stop making “assault rifles” and large magazines when there are already so many in circulation that it won’t matter if no more are made? (Will there be an equally ridiculous and futile ban on the manufacture of knives and materials that can be made to explode?) Is the “solution” to clamp down on gun and ammunition sellers, period, when there are so many of them operating in the black market that it wouldn’t deter anyone who is serious about committing crimes?

Or is the “solution” to confiscate all firearms and ammunition (when they are volunteered or readily found), leaving law-abiding citizens at the mercy of those who scoff at the law? Yes, that must be it. Because it would be possible to confiscate millions of firearms and hundreds of millions of rounds of ammunition. And the resulting piles of guns and bullets would make an impressive showing on TV and in news photos. But it would be all for show. Except that the law-abiding Americans who turned in their guns and ammo would thenceforth be defenseless against the army of thugs and criminals that would remain at large.

What has to stop is the cultural erosion that has made almost routine something that was rare more than 50 years ago: mass murder. Mass murder isn’t happening because there are “too many” guns out there; America has been well armed since before the Revolutionary War. It’s happening because an increasing fraction of the population lacks a strong conscience, upbringing in an intact family, and strict discipline.


Related reading:
Gilbert T. Sewall, “How We Defined Deviancy Down and Got a Culture of Violence“, The American Conservative, May 22, 2018
Brandon J. Weichert, “Maybe America Should Ban Guns“, The American Spectator, May 24, 2018 (Weichert’s real target is moral decay, which the left has encouraged and abetted.)


Related posts:
Mass Murder: Reaping What Was Sown
Utilitarianism (and Gun Control) vs. Liberty

The Constitution: Myths and Realities

I have posted The Constitution: Myths and Realities at Realities. This very long article reworks and consolidates many posts at Politics & Prosperity. It’s worth your time if you haven’t thought critically about the role of the States in the creation of the Constitution, the legality of secession, and much more, including a strong argument that Americans aren’t morally bound by the Constitution.

The article runs 15,000 words, but still omits much relevant material from this blog. Thus the links to 21 posts in the pingbacks at the bottom of the article. Follow the links there for complementary and supplementary readings.

Moral Relativism from the Times

The headline in The New York Times says it all: “As U.S. Demands Nuclear Disarmament, It Moves to Expand Its Own Arsenal”. As if an enlightened policy would be to disarm the U.S. first and hope (without hope) that other countries would follow suit.

I wonder if the authors of the piece really gave any thought to the matter. If they had, they might have concluded that it would be better for the U.S. to have a monopoly on nuclear weapons so that (a) no one could threaten the U.S. or its citizens’ overseas interests with a nuclear attack and (b) the U.S. could more easily protect its citizens and their overseas interests. It’s like having your favorite team ahead 10-0 going into the bottom of the 9th inning, instead of being tied 1-1. But the U.S., of course, isn’t the left’s favorite team.

The “reporters” who work for the Times — like their “liberal” brethren throughout the U.S. — have swallowed the poison pill of moral relativism and transnationalism:

Mindless internationalism equates sovereignty with jingoism, protectionism, militarism, and other deplorable “isms”. It ignores or denies the hard reality that Americans and their legitimate overseas interests are threatened by nationalistic rivalries and anti-Western fanaticism. “Transnationalism” is just a “soft” form of aggression; it would erode American values from the inside out, though American leftists hardly need any help from their foreign allies.

In the real world of powerful international rivals and determined, resourceful fanatics, the benefits afforded Americans by our (somewhat eroded) constitutional contract — most notably the enjoyment of civil liberties, the blessings of free markets, and the protections of a common defense — are inseparable from and dependent upon the sovereignty of the United States. To cede that sovereignty for the sake of mindless internationalism is to risk the complete loss of the benefits promised by the Constitution.

None of that matters to a “liberal”. Better the U.S. should be blackmailed by a tin-pot dictatorship than it should spend money to insure against blackmail by any power, small or large. The U.S. is such an awful place, after all, so rife with sins of the left’s imagining. That’s why the leftists moved to Canada and Europe — I wish.

What’s With the Pink?

Women are no different than men, right? They can do everything men can do, right? (Wrong, but I’ll let it pass for now.)

Why, then, do feminists and their eunuchs among the male of the species insist on using pink to signify femininity?

Pink when it’s time to remind everyone about breast cancer. (As if reminders were needed.)

Pink to protest the elevation to the presidency of the worst male chauvinist since Bill Clinton, who was the worst one since LBJ, who was the worst one since FDR, and so on. (There should be retroactive protests of most of the male-chauvinist presidents from Harding onward, but that won’t happen because most of them were Democrats.)

Now comes pink for Mother’s (or is it Mothers’ or Mothers?) Day; thus:

After finishing a column for tomorrow I turned on the Rays game with the sound muted so I could monitor it while reading. What I found was disturbing. No, not that the Rays were trailing 17-1. That may be disturbing in New York or Boston, not even surprising here. But I see that, in honor of Mother’s Day, Rays and Orioles players are kitted out in a motley assortment of pink shoes, pink hats, pink batting gloves, pink undershirts, pink sweat-bands, even, God help us, pink bats. So I switched over to the Marlins/Braves game. Same desecration there, which I gather is MLB-wide today….

[S]urely Major League Baseball could have found a way to give a tip of the baseball cap to moms without the indignity of putting grown men in pink outfits. I just can’t watch it. I’m sure even plenty of moms, especially players’ moms, are put off by this marketing overkill. How could the players’ pit-bull agents have allowed this? Isn’t there anything in contemporary contracts about dignity? Where’s the take-no-prisoners players’ union? [Larry Thornberry, “It’s Undignified and I’ll Have None of It“, The Spectacle Blog, May 13, 2018]

Eunuchs, as I said.

The wearing of pink reminds people of the differences between the sexes (genders, in neo-speak), so how can it be allowed and encouraged? Well, it’s okay if pinkness is imposed on men (males). But heaven forbid that a girl (female) baby should be dressed in pink. Sexism! Toxic masculinity! Rank stereotyping! Call 911, it must be against the law!

Trump: The Consequential President

Ed Rogers, writing in The Washington Post on May 10, offers some back-handed praise of Donald Trump and his presidency:

For the Trump administration, the absence of disaster usually has to suffice as good news. Well, I wouldn’t say President Trump is on a roll, but he has had several good days.

Specifically, the outcome of Tuesday’s Senate primaries made it more likely that the GOP will retain control of the Senate, the clean break with the Iran deal can be considered a bold display of resolve, and two judges have fanned back special counsel Robert S. Mueller III — perhaps curbing his overreach. Progress toward an agreement with North Korea seems to be proceeding quickly. In fact, Secretary of State Mike Pompeo secured the release of three Americans on Wednesday who had been held prisoner, and President Trump announced he will meet with Kim Jong Un in Singapore on June 12. Regarding North Korea, Jeff Greenfield wrote in a Politico piece titled “Thinking the Unthinkable: What if Trump Succeeds?” last week that recognizing all of Trump’s flaws provides “all the more reason to retain a sense of perspective; to be able to consider seriously the proposition that this misbegotten president has somehow achieved an honest-to-God diplomatic success.”

Then there are the recent polls from Reuters-Ipsos, Gallup, CBS and CNN which show that the president’s job approval is ticking up. The unemployment rate is at an 18-year low; according to the National Federation of Independent Business, not only are record levels of small businesses reporting profit growth, but also the Small Business Optimism Index continues to sustain record-high levels. Americans have confidence in Trump’s handling of the economy. And at least for the time being, even the generic ballot is moving in Trump’s favor.

In addition, a few of the president’s critics are stumbling. The mainstream media did themselves real harm with the debacle of this year’s White House Correspondents Dinner, and Trump tormentor New York Attorney General Eric Schneiderman was forced to resign following allegations of repeated abuse of multiple women….

… Yet the Trump presidency could be an exploding cigar. Just as you begin to settle in and get used to it, the whole thing could blow apart.

To state the obvious, Trump is his own worst enemy — and he won’t change. Feckless Democrats won’t bring him down, Republicans have acquiesced, much of the media has become annoying background noise, and Mueller doesn’t seem to have a silver bullet. Only Trump can destroy Trump.

A correspondent of mine had some incisive things to say about the state of affairs:

I think Trump is not only consequential, but also significant. To me, in this context, consequential means changing important things from the way they had been. Significant means historically noteworthy. I think he will be the most significant president since Ronald Reagan. Interestingly, both Trump and Reagan followed presidents who were not significant, leaving little legacy to mark their terms in office. If Trump were to be impeached and awarded the Nobel Peace Prize, remote possibilities, he would be significant 100 years from now. He is, and will be, significant for grabbing a political party and making it his party, even though he is not a politician. TR, also a Nobel winner, did the same.

Trump may also be significant for being unsavory and getting away with it.

My reaction follows:

Reagan accomplished three consequential things, in my view. First, he made old-fashioned conservatism somewhat respectable, though he was and still is reviled on the left for having done so. Second, his determined effort to rebuild the armed forces — to call the bluff of the USSR — was probably the main cause of the Soviet surrender in the Cold War. Third, his political support of Volcker’s tight-money policies, coupled with the tax-rate cuts he pushed through Congress, led to the taming of inflation and a resumption of strong economic growth after years of “malaise”.

Thus far, only 16 months into his presidency, Trump has done three consequential things. First, he has nominated a conservative justice to the Supreme Court (though this didn’t change the balance on the Court) and a slew of district and appellate court judges, who seem to be solid conservatives. (There haven’t been any howls of outrage from the conservative sector of the internet.) Second, he has changed the image of American defense and foreign policy from defeatism (clearly the upshot of Obama’s “leading from behind”) to something like Reaganesque doggedness. (In tandem with that, he has backed the enlargement of the defense budget, though not yet, I believe, on a Reaganesque scale.) Third, he has deliberately (and somewhat effectively, as far as I know) pushed for a rollback of regulations that he views as especially harmful to the economy. His stance on immigration is loud and controversial, but it remains to be seen whether it will be consequential.

Maybe I’ve missed some important things, but my bottom line is agreement with my correspondent. It is entirely possible that by the end of Trump’s (first?) term the U.S. legal system will have shifted sharply toward a literal reading of the Constitution; the U.S. will not be in danger of military or political eclipse by Russia and/or China; membership in the nuclear club will not have expanded; trouble-makers like Iran and North Korea will have been “tamed”; and the rate of economic growth will be at its highest since the end of World War II, with a concomitant reduction in the real unemployment rate (much of which is still hidden in a low labor-force participation rate) and a somewhat higher (but not economically debilitating) rate of inflation.

If all or most of that happens — a big if — it will cement the political realignment in the country that was sparked by Trump’s candidacy. The Democrat party will increasingly be the home of affluent, well-educated whites (managers, aspiring managers, academics, techies). Blacks will still be there for the Dems, though not in their former numbers, now that they are beginning to learn three things: Trump will not send them to concentration camps; white Democrats take them for granted while talking down to them; and blacks have done worse, not better, since Democrats began to throw money and special privileges at them. Hispanics will still be there for the Dems, perhaps in higher numbers than before because of Trump’s perceived “racism”. But the “blue collar” classes and regions will turn increasingly Red. Thus the Midwest, despite Blue enclaves in the big cities, will shift back toward the GOP. The South will remain Red, with the exception of Virginia and perhaps North Carolina, which are becoming extensions of the Northeast (though the Northeast itself will be less reliably Democrat because of the blue-collar shift). The Left Coast will remain reliably on the left, but the push to split California and liberate its conservatives will grow. If it succeeds, the GOP will become even stronger in Congress and in the electoral college. Regardless of what happens in California, the new GOP will be stronger politically than it has been at any time since World War II.

All of that could go by the wayside if there’s a real war involving the U.S., a recession, or a scandal beyond the known fact of Trump’s dalliances (i.e., an actual crime of consequence, not the payoff to Stormy). But barring such things, there will be a new GOP, and it will be stronger than the old one for some years to come.

As for Trump’s personal life, if things go nearly as well as they might, it will merit an asterisk in history books. Balanced historians (they’re hard to come by) will simply note that Trump was one of many presidents who couldn’t keep his pants zipped up, but that he succeeded in spite of it. They might even note that (among men, at least) there is a strong connection between sexual and political drive. That last observation, though, will be out of bounds in the new Victorian era that is descending upon us.

The Kennedy Retirement: Hope Springs Eternal

Law professor and blogger Tom Smith (The Right Coast) quotes from and comments on yet another speculative piece about the (hoped for) retirement of Justice Anthony Kennedy:

The Washington rumor mill is churning with speculation about whether Justice Anthony Kennedy will retire at the end of the Supreme Court’s term next month.

The rumors seem to pop up annually in recent years. But with Kennedy’s 30th year on the high court passing in February and the justice nearing 82, the whispers about his future seem to be growing louder.

via www.washingtonexaminer.com

But how will the country endure without its chief moral arbiter? At every turn, Justice Kennedy has been there to make the final, incoherent distinction between right and wrong, between popular and unpopular, between what strange and incomprehensible thing the Law seems to say and what the murk at the heart of his conscience demands, at least for now.

Somebody should write something about this — the making of uber-political decisions on the basis of law-like rhetoric, which everybody knows is just politics, but which everyone agrees should be cloaked as law, while still knowing it is politics. Maybe this is a good thing? Keeps the lid on and all that? But no one has practiced this craft (?), art (?), or rubbishy self-indulgence (?) more semi-artfully than Justice Kennedy. He’s the un-Bork, the un-Ginsburg. He’s what you get.

I couldn’t possibly have put it that well. Kennedy has been fairly consistent in his use of judicial power to undermine civilizing social norms and the rule of law.

There is a canard, which I have read many times during the past few years, that Supreme Court justices tend to retire during the tenure of a president who is of the same party as the president who nominated them. This is the kind of balderdash that becomes “knowledge” among reporters and pundits who can’t be bothered to look up the facts.

Well, I have looked up the facts, and here’s what they tell me about the 34 justices* who have resigned or retired since 1900:

  • Half of them (17) left office under a president of the same party as the president who nominated them. The last of these was Sandra Day O’Connor, who was nominated by Reagan and retired 12 years ago, during the presidency of G.W. Bush.
  • Nine others are Democrat appointees who retired with a Republican in the White House. The last of these was Thurgood Marshall, who was nominated by LBJ and retired 27 years ago, during the presidency of G.H.W. Bush. Marshall’s retirement was like a gift from heaven because it resulted in the nomination and (painful) confirmation of Clarence Thomas, a faithful constitutionalist.
  • The remaining eight were Republican nominees who retired with a Democrat in the White House. Three of the last four justices to retire are in this category: Harry Blackmun (author of the infamous Roe v. Wade decision), nominated by Nixon and retired under Clinton; David Souter (another RINO), nominated by G.H.W. Bush and retired under Obama; and John Paul Stevens (the biggest RINO in captivity), nominated by Gerald Ford and retired under Obama.

It would be poetic justice (pun intended) if Kennedy were to retire during Trump’s presidency, to be replaced by someone in the mold of Alito, Thomas, or Gorsuch.

Here’s the big picture, a plot of retirements by year and their effect on the nominal balance of party affiliations on the Supreme Court:


__________
* Here’s the chronological list of retirements, with the name of each retiring justice, the name of the president who nominated him (and year of accession to the Court), the name of the president at the time of the justice’s retirement (and year of retirement), and the effect of the retirement on the nominal party alignment of the Court:

Charles Evans Hughes – Taft 1910 – Wilson 1916 (GOP to Dem)
John Hessin Clarke – Wilson 1916 – Harding 1922 (Dem to GOP)
William Rufus Day – T. Roosevelt 1903 – Harding 1922 (Same)
William Howard Taft – Harding 1921 – Hoover 1930 (Same)
Oliver Wendell Holmes Jr. – T. Roosevelt 1902 – Hoover 1932 (Same)
Willis Van Devanter – Taft 1911 – F. Roosevelt 1937 (GOP to Dem)
George Sutherland – Harding 1922 – F. Roosevelt 1938 (GOP to Dem)
Louis Dembitz Brandeis – Wilson 1916 – F. Roosevelt 1939 (Same)
James Clark McReynolds – Wilson 1914 – F. Roosevelt 1941 (Same)
Charles Evans Hughes – Hoover 1930 – F. Roosevelt 1941 (GOP to Dem)
James Francis Byrnes – F. Roosevelt 1941 – F. Roosevelt 1942 (Same)
Owen Josephus Roberts – Hoover 1930 – Truman 1945 (GOP to Dem)
Robert Houghwout Jackson – F. Roosevelt 1941 – Eisenhower 1954 (Dem to GOP)
Sherman Minton – Truman 1949 – Eisenhower 1956 (Dem to GOP)
Stanley Forman Reed – F. Roosevelt 1938 – Eisenhower 1957 (Dem to GOP)
Harold Hitz Burton – Truman 1945 – Eisenhower 1958 (Dem to GOP)
Felix Frankfurter – F. Roosevelt 1939 – Kennedy 1962 (Same)
Arthur Joseph Goldberg – Kennedy 1962 – L. Johnson 1965 (Same)
Thomas Campbell Clark – Truman 1949 – L. Johnson 1967 (Same)
Abraham Fortas – L. Johnson 1965 – Nixon 1969 (Dem to GOP)
Earl Warren – Eisenhower 1954 – Nixon 1969 (Same)
Hugo Lafayette Black – F. Roosevelt 1937 – Nixon 1971 (Dem to GOP)
John Marshall Harlan II – Eisenhower 1955 – Nixon 1971 (Same)
William Orville Douglas – F. Roosevelt 1939 – Ford 1975 (Dem to GOP)
Potter Stewart – Eisenhower 1959 – Reagan 1981 (Same)
Warren Earl Burger – Nixon 1969 – Reagan 1986 (Same)
Lewis Franklin Powell Jr. – Nixon 1972 – Reagan 1987 (Same)
William Joseph Brennan Jr. – Eisenhower 1957 – Bush I 1990 (Same)
Thurgood Marshall – L. Johnson 1967 – Bush I 1991 (Dem to GOP)
Byron Raymond White – Kennedy 1962 – Clinton 1993 (Same)
Harry Andrew Blackmun – Nixon 1970 – Clinton 1994 (GOP to Dem)
Sandra Day O’Connor – Reagan 1981 – Bush II 2006 (Same)
David Hackett Souter – Bush I 1990 – Obama 2009 (GOP to Dem)
John Paul Stevens – Ford 1975 – Obama 2010 (GOP to Dem)

The list includes Charles Evans Hughes twice. He first joined the Court in 1910, and resigned in 1916 to run for the presidency as a Republican. Hughes was then nominated as chief justice in 1930, to succeed William Howard Taft. Taft was the only person to have served both as President of the United States and as a justice of the Supreme Court.
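The three-way split reported above (17 same-party, 9 Democrat appointees retiring under a Republican, 8 Republican appointees retiring under a Democrat) can be checked mechanically against the chronological list. A minimal sketch, assuming only the party affiliations implied by the list (R for a Republican president, D for a Democrat), with each pair giving the nominating president’s party and the incumbent’s party at retirement, in list order:

```python
from collections import Counter

# (nominating president's party, party of president at retirement),
# one pair per retirement in the chronological list above (McReynolds
# corrected to his 1914 nomination by Wilson).
pairs = [
    ("R", "D"), ("D", "R"), ("R", "R"), ("R", "R"), ("R", "R"), ("R", "D"),
    ("R", "D"), ("D", "D"), ("D", "D"), ("R", "D"), ("D", "D"), ("R", "D"),
    ("D", "R"), ("D", "R"), ("D", "R"), ("D", "R"), ("D", "D"), ("D", "D"),
    ("D", "D"), ("D", "R"), ("R", "R"), ("D", "R"), ("R", "R"), ("D", "R"),
    ("R", "R"), ("R", "R"), ("R", "R"), ("R", "R"), ("D", "R"), ("D", "D"),
    ("R", "D"), ("R", "R"), ("R", "D"), ("R", "D"),
]

def label(nominator, incumbent):
    # Classify each retirement by the party relationship.
    if nominator == incumbent:
        return "same party"
    return ("Dem appointee under GOP" if nominator == "D"
            else "GOP appointee under Dem")

tally = Counter(label(n, i) for n, i in pairs)
# Expected tally: 17 same party, 9 Dem appointee under GOP,
# 8 GOP appointee under Dem -- i.e., only half fit the "canard".
print(dict(tally))
```

The point of the exercise is simply that the claim fails for half of the 34 cases, which is what one would expect if retirement timing were roughly indifferent to the incumbent’s party.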

Rage on the Left

Back in the days when I was a “liberal” I was put off by such things as the rioting and looting in the aftermath of the assassination of MLK, the riotous behavior of anti-war protesters (even though I was against the Vietnam War as it was being fought), the filthy-speech movement, the occupation of campus buildings, and the many protest marches that blocked traffic and often led to violence.

Why? Because my “liberalism” wasn’t a sign of inner rage on my part. It was a mark of my belief — a wrong belief as I have come to understand — that “society” can be organized rationally (i.e., by government). My “liberalism” wasn’t based on irrational rage. I was therefore put off by the kinds of things mentioned above because they were external manifestations of rage. You could see it in the faces, speech, and body language of the rioters and protesters.

And you can see it in the faces, speech, and body language of today’s leftists. Always demanding things because — in their view — the world doesn’t match up to their idea of perfection. They take it personally, and it shows.

Today’s leftists are tantrum-throwers, just as were their forebears in the 1960s and 1970s. And today’s leftists are more dangerous than were their forebears because they are much more influential. For one thing, there are many tantrum-throwers of yore in positions of power who are sympathetic to their emotional descendants, and who share their solipsistic view of the world: They want what they want, they want it now, and they will do anything to get it.


Related posts:
Asymmetrical (Ideological) Warfare
The Left’s Agenda
The Left and Its Delusions
The Spoiled Children of Capitalism
The Culture War
Ruminations on the Left in America
Superiority
Whiners
God-Like Minds
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
The Left and Violence
The Left and Evergreen State: Reaping What Was Sown
Leftism
Leftism As Crypto-Fascism: The Google Paradigm
What Is Going On? A Stealth Revolution
Utopianism, Leftism, and Dictatorship
Abortion, the “Me” Generation, and the Left
Whence Polarization?
Social Norms, the Left, and Social Disintegration
Can Left and Right Be Reconciled?

Can Left and Right Be Reconciled?

TWO DIMENSIONS OF POLITICAL THOUGHT

The political views of left and right* should be understood as ideological and psychological phenomena. Left and right aren’t distinguished just by what people think, but more deeply by why people think as they do. Some people just see the world differently than others. And that fundamental difference is reinforced and magnified by identifying with a particular political camp, imbibing the views that issue from it, and seeking out evidence for those views to the exclusion of contrary evidence (confirmation bias).

Why is the key to the irreconcilability of hard leftism and staunch conservatism. What people think matters, too, but it is a less definitive discriminator between left and right because what people think is more malleable.

“WHAT” IS A MOVEABLE FEAST

What people think is influenced heavily by family, friends, neighbors, church, club, co-workers, professional colleagues, and so on. The urge to belong and the need for approval have a lot to do with what one says to others. The need for cognitive consonance pushes people in the direction of “believing” what they say. Thus it is easy to say what meets with the approval of one’s key social groups, to move one’s opinions as the opinions of the groups move, to believe that those opinions are correct, to seize on supporting “evidence” (anecdotes, slanted news, etc.), and to reject information that doesn’t support one’s opinions.

An introvert is more likely to seek facts — or what he takes to be facts — than to be swayed by groupthink in forming his views. By the same token, it is probably easier for an introvert to change his views than it is for an extravert to do so. In any event, a person who is open to new ideas, and whose social milieu changes in character, may find that his views evolve with time. He may also be struck by an insight (“mugged by reality”) to the same effect.

There is also the kind of person who is temperamentally unsuited to the political views that he holds as a matter of social conditioning. That kind of person, unlike the person whose views are matched to his temperament, will be more open to alternative ideas and to insights that may reshape his views.

Overlaid on social influences are signals emitted by authoritative sources. For many persons, the morality of a particular behavior (e.g., divorce, abortion, same-sex “marriage”) depends on how that behavior is depicted in news and entertainment media, or is treated as a matter of law.

Though a person who is temperamentally predisposed to conservatism, or leftism, is unlikely to switch sides for any of the reasons discussed thus far, there is what I call the “squishy center” of the electorate that swings many an election — and thus government policy.

For example, every week since the first inauguration of Barack Obama, Rasmussen Reports** has asked 2,500 likely voters whether they see the country as going in the “right direction” or being on the “wrong track”. During Obama’s tenure, the percentage of respondents saying “right direction” ranged from 13 to 43; the percentages for “wrong track” ranged from 51 to 80. If voters were consistent, a majority would have said “right direction” and a minority would have said “wrong track” since the inauguration of Donald Trump. But “right direction” has garnered only 29 to 47 percent thus far in Trump’s presidency, while “wrong track” is still almost always in the majority, at 47 to 65 percent.

Here’s my interpretation: Hard leftists said “right direction” when Obama was in the White House; staunch conservatives have been saying “right direction” since Trump moved into the White House; and the squishy center has all the while been swinging from one side to the other, depending on passing events.

Scraping away the squishy center, I estimate that about one-third of the electorate is hard left and about one-third is staunchly conservative; thus:

I don’t mean to minimize the importance of what people think. Bandwagon effects are powerful politically. I am convinced, for example, that Justice Kennedy’s 5-4 majority opinion in favor of same-sex “marriage” (Obergefell v. Hodges) signaled to the squishy center that being on the “right side of history” means siding with the libertines of the left against long-standing social norms.

Obergefell v. Hodges certainly emboldened the hard left. As I put it on the day of Justice Kennedy’s fateful ruling,

for every person who insists on exercising his rights, there will be at least as many (and probably more) who will be cowed, shamed, and forced by the state into silence and compliance with the new dispensation. And the more who are cowed, shamed, and forced into silence and compliance, the fewer who will assert their rights. Thus will the vestiges of liberty vanish.

Just look at the increasingly anti-male, anti-white, anti-conservative, anti-free-speech behavior on the part of Facebook, Google, the other left-dominated social media, and much of academia. It has gone from threatening to frightening in the past three years.

GETTING TO WHY: A PRELIMINARY EXPLANATION

There is something deeper than social conformity at work among the hard left and staunch right. That something rules out reconciliation.

My earlier attempt at pinpointing the essential difference between left and right is here. I say, in part, that

“Liberals” are more neurotic than conservatives. That is, “liberals” have a “tendency to experience unpleasant emotions easily, such as anger, anxiety, depression, and vulnerability.”…

Anxious persons are eager to sacrifice better but less certain outcomes — the fruits of liberty — for “safe” ones. Anxious persons project their anxieties onto others, and put their trust in exploitative politicians who play on their anxieties even if they don’t share them. This combination of anxieties and power-lust yields “social safety net” programs and regulations aimed at reducing risks and deterring risk-taking. At the same time, American “liberals” — being spoiled children of capitalism — have acquired a paradoxical aversion to the very things that would ensure their security: swift and sure domestic justice, potent and demonstrably ready armed forces.

Conservatives tend toward conscientiousness more than liberals do; that is, they “display self-discipline, act dutifully, and strive for achievement against measures or outside expectations.” (This paper summarizes previous research and arrives at the same conclusion about the positive correlation between conscientiousness and conservatism.) In other words, conservatives (by which I don’t mean yahoos) gather relevant facts, think things through, assess the risks involved in various courses of action, and choose to take risks (or not) accordingly. When conservatives choose to take risks, they do so after providing for the possibility of failure (e.g., through insurance and cash reserves). Confident, self-reliant conservatives are hindered by governmental intrusions imposed at the behest of anxious “liberals.” All that conservatives need from government is protection from domestic and foreign predators. What they get from government is too little protection and too much interference.

A DEEPER LOOK AT WHY

My hypothesis is consistent with that of Stephen Messenger (who blogs at The Independent Whig). Messenger’s hypothesis, which builds on the work of Jonathan Haidt, is spelled out in a recent article at Quillette, “Towards a Cognitive Theory of Politics“. Here’s some of it:

In brief, my theory holds that the political Left and Right are best understood as psychological profiles featuring different combinations of ‘moral foundations’ … and cognitive style…. To define ideologies in terms of beliefs, values, etc., is to confuse cause and effect.

Moral foundations are evolved psychological mechanisms of social perception, subconscious intuitive cognition, and conscious reasoning described by Haidt in The Righteous Mind….

Haidt allows that there are probably many moral foundations, but he has focused his efforts on identifying the most powerful. He’s identified six so far, summarized as follows in The Righteous Mind on pages 178-179 unless otherwise noted:

  • Care/Harm (sensitivity to signs of suffering and need)
  • Fairness/Cheating (sensitivity to indications that another person is likely to be a good or bad partner for collaboration and reciprocal altruism)
  • Liberty/Oppression (sensitivity to, and resentment of, attempted domination)
  • Loyalty/Betrayal (sensitivity to signs that another person is (or is not) a team player)
  • Authority/Subversion (sensitivity to signs of rank or status, and to signs that other people are (or are not) behaving properly given their position)
  • Sanctity/Degradation (sensitivity to pathogens, parasites, and other threats that spread by physical touch or proximity)….

He calls the first three foundations the “individualizing” foundations because their main emphasis is on the autonomy and well-being of the individual. The latter three are “binding” foundations because they help individuals form cooperative groups for the mutual benefit of all members….

Cognitive styles … are ways of thinking; operating systems, if you will, like Windows and iOS, that process information received from the social environment. There are two predominant cognitive styles, traced through 2,400 years of human history by Arthur Herman in his book The Cave and the Light: Plato and Aristotle and the Struggle for the Soul of Western Civilization, in which Plato and Aristotle serve as metaphors for each, summarized in the following two short passages:

Despite their differences, Plato and Aristotle agreed on many things.  They both stressed the importance of reason as our guide for understanding and shaping the world. Both believed that our physical world is shaped by certain eternal forms that are more real than matter. The difference was that Plato’s forms existed outside matter, whereas Aristotle’s forms were unrealizable without it. (p. 61)

The twentieth century’s greatest ideological conflicts do mark the violent unfolding of a Platonist versus Aristotelian view of what it means to be free and how reason and knowledge ultimately fit into our lives (p.539-540)

Plato thought that everything in the real world is but a pale imitation of its ideal self, and it is the role of the enlightened among us to help us see the ideal and to help steer society toward it. This is the style of thinking behind RFK’s “I dream things that never were and ask ‘Why not?’” John Lennon’s “Imagine,” President Obama’s “Fundamentally Transform,” and even Woodrow Wilson’s progressivism.

Aristotle agreed that we should always strive to improve the human condition, but argued that the real world in which we live sets practical limits on what’s achievable. The human mind is not infinitely capable, nor is human nature infinitely malleable. If we’re not mindful of such limitations, or if we try to ‘fix’ them, our good intentions can end up doing more harm than good and lead us down the proverbial road to hell.

These two cognitive styles can be thought of, respectively, as WEIRD (Western, Educated, Industrialized, Rich, and Democratic) and holistic. In The Righteous Mind, Haidt describes the peculiarities of WEIRD individuals, as follows:

WEIRD people think more analytically (detaching the focal object from its context, assigning it to a category, and then assuming that what’s true about the category is true about the object). (p. 113)

[WEIRD thinkers tend to] see a world full of separate objects rather than relationships. (p. 113)

Putting this all together, it makes sense that WEIRD philosophers since Kant and Mill have mostly generated moral systems that are individualistic, rule-based, and universalist. (p. 113-114)

Worldwide, this kind of thinking is a statistical outlier because most people and cultures think holistically. Holistic thinkers tend to see a world full of relationships rather than objects, and they have a stronger tendency toward consilience. As Haidt explains:

When holistic thinkers in a non-WEIRD culture write about morality, we get something more like the Analects of Confucius, a collection of aphorisms and anecdotes that can’t be reduced to a single rule. (p. 114)

WEIRD Platonic rationalism and holistic Aristotelian empiricism can be thought of as the two ends of a spectrum of cognitive styles. Few people are at the extremes; most are somewhere in between.

The psychological profiles of Left and Right differ in the degree to which they tend to favor the cognitive styles and the moral foundations. A series of studies of cognitive styles has found that “liberals think more analytically (more WEIRD) than conservatives”:

[L]iberals think more analytically (an element of WEIRD thought) than moderates and conservatives. Study 3 replicates this finding in the very different political culture of China, although it held only for people in more modernized urban centers. These results suggest that liberals and conservatives in the same country think as if they were from different cultures.

Haidt’s studies of moral foundations show that liberals tend to employ the individualizing foundations and, of those, mostly the care/harm foundation, whereas conservatives tend to use all of them equally. There’s no conservative foundation that’s not also a liberal foundation but, for all practical purposes, half of the conservative foundations are unavailable to liberal social cognition. The graphic below comes from Haidt’s TED Talk [link added], and it shows that this pattern holds true in every culture studied on every continent, suggesting it is a human universal.

‘Ingroup’ stands in for the ‘Loyalty/Betrayal’ foundation. The ‘Liberty/Oppression’ foundation, added to the first 5 foundations later by Haidt and his researchers, is absent.

….

In sum, the liberal psychological profile tends toward the Platonic cognitive style combined with the three-foundation moral matrix.  The conservative profile leans toward the Aristotelian cognitive style with the all-foundation moral matrix. The libertarian profile seems to be made up of the Aristotelian style combined with a moral matrix that emphasizes liberty/oppression more than the other foundations. [Ed. note: So-called libertarians are like realists who view the world through a pinhole instead of a picture window.]

As I have argued before, concepts like liberty, equality, justice, and fairness take on different—even mutually exclusive—meanings depending on which psychological profile is interpreting them. The Left’s bias toward outcome-based conceptions of ‘positive’ liberty seems to follow naturally from its profile of Platonic rationalism focused on the moral foundation of care. The Right’s tendency to favor process-based conceptions of ‘negative’ liberty follows from its profile of Aristotelian empiricism in combination with all of the moral foundations.

It’s almost as if Left and Right are speaking different languages, in which each uses the same words but attaches starkly different meanings to them. Both sides agree that liberty is a great thing, but because neither side realizes that their understanding of it is different from that of the other they talk past one another, or worse, assume their opponent is stupid, ignorant, or wicked due to the failure to grasp concepts that in their own minds are self-evident.

The American economist and social theorist Thomas Sowell describes the way these two profiles have played out in the real world since the late 1700s in his book A Conflict of Visions: Ideological Origins of Political Struggles. Liberal psychology is reflected by thinkers like Godwin, Condorcet, Mill, Laski, Voltaire, Paine, Holbach, Saint-Simon, Robert Owen, and G.B. Shaw. The conservative profile is seen in the likes of Smith, Burke, Hamilton, Malthus, Hayek, and Hobbes.

A Cognitive Theory of Politics can help us to improve our understanding historical events. For example, Sowell observes that the liberal ‘vision,’ or psychological profile, can be seen as the engine of the French Revolution. Jonathan Haidt made the same observation in a lecture he gave at the Stanford University Center for Compassion and Altruism Research (CCARE) entitled “When Compassion Leads to Sacrilege.” In contrast, Sowell argues that the American founding was a fundamentally conservative movement. A reading of The Federalist Papers through the lens of the Cognitive Theory of Politics bears him out, and Burke—who supported the American Revolution but opposed the French Revolution—would probably agree….

… The political polarization of America described by Charles Murray in his book Coming Apart is best understood as a self-sorting of the population based primarily on cognitive styles.

SYNTHESIS AND CONCLUSION

Putting it all together, leftists are attached mainly to the moral foundation of care/harm because of their neuroticism. But they pursue security for themselves and those to whom they are neurotically attached — various “victim” groups — by seizing upon idealized solutions. The apotheosis of those idealized solutions is big government, which has the magical power — in the left’s idealization of it — to right all wrongs without a misstep. (Defense is excluded from the magical thinking of the left because the need to defend the nation implies that America is worth defending, but it isn’t — to the leftist — because it falls so far short of his idea of perfection. Defense is also exempted because it draws resources from the things that would make America more perfect in the fascistic mind: socialized medicine, a guaranteed income, free day-care, free college for all, and on and on.)

Staunch conservatives, on the other hand, know that government is flawed because its leaders and minions are fallible human beings. Further, it is impossible for government to possess all of the information required to make better decisions than people can make for themselves through mutually beneficial cooperation. That cooperation occurs in the myriad institutions of civil society, which include but are far from limited to markets for the exchange of products and services. Staunch conservatives — who can also be called right-minarchists or libertarian conservatives — therefore decry the expansion of government power beyond that which is required to protect civil society from domestic and foreign predators.

Messenger, despite those fundamental differences, is hopeful about a reconciliation between left and right:

A Cognitive Theory of Politics offers a new lens through which we can better understand human history and more clearly see ourselves and each other. Using this tool, we can better understand how we got to where we are, what’s happening to us now, and the available paths forward. A more accurate, science-based, universal understanding of the ‘Social Animal’ (humans) by the social animal might break the language barrier between Left and Right and provide a common foundation of knowledge from which productive debate can ensue.

I disagree. Hard leftists and staunch conservatives are “wired” differently, as Haidt has shown, and as Messenger has acknowledged.

The staunch conservative sees civil society as a whole, understands that it is unitary, knows that it is self-correcting because people learn from experience, and accepts its outcomes as the best that can be attained in a real world of real people.

The leftist can’t see the forest for the trees. He sees particular outcomes that displease him, and is willing to use the power of government to rearrange civil society in an effort to “remedy” those outcomes. He doesn’t understand, or care, that the results will be worse: a weaker economy, fewer jobs for those most in need of them, more racial tension, more broken families, and so on, up to and including an irremediably polarized nation.

Moreover, because leftists are at bottom self-centered, they cannot tolerate dissent. Dissent from a leftist regime is ultimately dealt with by suppression and violence. What we see now on campuses and in social media is merely a foretaste of what will happen if the left succeeds in its aim of seizing firm control of America. All else will follow from that.

This leads to an obvious conclusion: Left and right — the hard left and staunch conservatism, in particular — are irreconcilable. They are in fact locked in a death-struggle over the future of America. The squishy center is along for the ride, and will change its tune and allegiance opportunistically, in the hope that it will end up on the “right side of history”.
__________
* Given the actual stances of those who are usually identified as “left” and “right”, there is absurdity in a conventional characterization of the left-right political spectrum like this:

Generally, the left-wing is characterized by an emphasis on “ideas such as freedom, equality, fraternity, rights, progress, reform and internationalism”, while the right-wing is characterized by an emphasis on “notions such as authority, hierarchy, order, duty, tradition, reaction and nationalism”.

The truth of the matter is almost 180 degrees from the caricature presented above. But Wikipedia is the source, so what do you expect?

I have explained many times (e.g., here) that the left is fascistic, while the right — excluding its yahoo component and some of its so-called libertarian component — is liberty-loving. (Liberty is properly defined as an attainable modus vivendi rather than an imaginary nirvana). So the real question of the title should be: Can American fascism and (true) anti-fascism be reconciled?

But I have refrained from using the “f” word, despite its lexical accuracy, and stuck with “left” and “right” despite the erroneous association of conservatism (i.e., the right) with authoritarianism (i.e., fascism). Just remember that “right” is often used to mean “correct”, and if anything is correct when it comes to striving for liberty, it is conservatism.

** I cite Rasmussen Reports because of its good track record — here and here, for example. Its polls are usually more favorable toward Republicans. Though the polls are generally accurate, they are out of step with the majority of polls, which are biased toward Democrats. This has caused Rasmussen Reports to be labeled “Republican-leaning”, as if the other polls weren’t “Democrat-leaning”. There is a parallel with the labeling of Fox News as a “conservative” outlet (though it isn’t always), while the other major TV news outlets laughably claim to be neutral.


Related posts:
Libertarian-Conservatives Are from the Earth, Liberals Are from the Moon
The Worriers
More about the Worrying Classes
Refuting Rousseau and His Progeny
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
Human Nature, Liberty, and Rationalism
Society and the State
Liberty and Society
Tolerance on the Left
The Eclipse of “Old America”
Genetic Kinship and Society
“We the People” and Big Government
The Culture War
Getting Liberty Wrong
The Barbarians Within and the State of the Union
The Beginning of the End of Liberty in America
Turning Points
There’s More to It Than Religious Liberty
Equal Protection in Principle and Practice
Social Justice vs. Liberty
Economically Liberal, Socially Conservative
The Left and “the People”
Why Conservatives Shouldn’t Compromise
Liberal Nostrums
The Harm Principle Revisited: Mill Conflates Society and State
Liberty and Social Norms Re-examined
Equality
Academic Freedom, Freedom of Speech, and the Demise of Civility
Self-Made Victims
Leftism
Leftism As Crypto-Fascism: The Google Paradigm
What Is Going On? A Stealth Revolution
Disposition and Ideology
How’s Your (Implicit) Attitude?
Down the Memory Hole
“Why Can’t We All Just Get Along?”
“Tribalists”, “Haters”, and Psychological Projection
Mass Murder: Reaping What Was Sown
Andrew Sullivan Almost Gets It
Utopianism, Leftism, and Dictatorship
Pronoun Profusion
“Democracy” Thrives in Darkness — and Liberty Withers
Preemptive (Cold) Civil War
My View of Mill, Endorsed
The Framers, Mob Rule, and a Fatal Error
Abortion, the “Me” Generation, and the Left
Abortion Q and A
Whence Polarization?
Negative Rights, Etc.
Social Norms, the Left, and Social Disintegration
The Lesson of Alfie Evans
Order vs. Authority

More Evidence against College for Everyone

Here’s a datum:

My eldest grandchild is 23 years old. He’s a bright, articulate lad, but far more interested in doing than in reading. He has been working since he graduated from (home) high school, but not without a purpose in mind. Last fall, he enrolled in a course to learn a trade that he has always wanted to pursue. He passed the course with flying colors, quickly got a good job as a result, and from that job moved into the kind of job that he has long sought. He is happy, and I am happy for him.

But that’s not all. His job, though technically demanding, is “blue collar”. When I was his age, freshly equipped with a B.A. and some graduate school, I moved into the world of “white collar” work as an entry-level analyst at a government-sponsored think-tank in the D.C. area. Hot stuff, right?

Well, converting my starting salary to an hourly rate and adjusting it for inflation, I was making just about what my grandson is making now. But since graduating from high school he has been earning and saving money instead of cluttering a college campus. And he owns a pickup truck. When I started at the think-tank, I might have had a few hundred dollars in a checking account. And I couldn’t afford a car until I had worked for several months.
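The comparison above rests on a simple two-step calculation: convert an annual salary to an hourly rate, then scale past dollars to present dollars using a price-index ratio. A minimal sketch of that arithmetic, with hypothetical figures (the salary and CPI values below are illustrative assumptions, not the actual numbers behind the comparison):

```python
def hourly_rate(annual_salary, hours_per_year=2080):
    """Convert an annual salary to an hourly rate (2,080 = 40 hrs/week x 52 weeks)."""
    return annual_salary / hours_per_year

def inflation_adjust(amount, cpi_then, cpi_now):
    """Scale a past dollar amount into present dollars via a CPI ratio."""
    return amount * (cpi_now / cpi_then)

# Hypothetical example: a $9,000 starting salary in the late 1960s,
# scaled with illustrative CPI values (not official figures).
then_hourly = hourly_rate(9000)  # about $4.33/hour in then-current dollars
now_hourly = inflation_adjust(then_hourly, cpi_then=36.7, cpi_now=251.1)
print(f"{now_hourly:.2f}")
```

The same ratio method works with any consistent price index; the choice of index (CPI-U, PCE, etc.) changes the result somewhat, which is why such comparisons are approximate.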

Will my grandson eventually make as much money as I was able to make by feeding at the public trough? Given his ambition and foresight there’s no reason he can’t make a lot more than I did — and by doing things that people are actually willing to pay for instead of siphoning the U.S. Treasury.

College not only isn’t for everyone, it’s for almost no one. As I said seven years ago,

[w]hen I entered college [in 1958], I was among the 28 percent of high-school graduates then attending college. It was evident to me that about half of my college classmates didn’t belong in an institution of higher learning. Despite that, the college-enrollment rate among high-school graduates has since doubled.

Which means that only about one-fourth (or less) of today’s high-school graduates are really college material. That’s not a rap against them. It’s a rap against the insane idea of college for almost everyone. That would be a huge burden on taxpayers, a shameful misdirection of talent, and a massive drain on the economic potential of the nation.


Related posts:
The Higher-Education Bubble
Is College for Everyone?
The Dumbing-Down of Public Schools
College Is for Almost No One
About Those “Underpaid” Teachers

Order vs. Authority

I am an orderly person: organized, neat, a planner. As an orderly person, I have no problem with the idea of living in a community where one’s property must conform to certain standards: the color of house paint, style of siding, height of grass, prompt removal of empty trash bins from the street, only guests’ cars parked in the street (and not overnight), garage door closed when garage isn’t in use, etc.

I know people who object to such rules, and consider them authoritarian. But the occupant of a community with strict environmental standards knows (or should know) what he’s getting into. Living in a regime of strict environmental standards as a matter of choice doesn’t signify a preference for authoritarianism, it signifies a preference for neatness. I, for one, have no desire to push other people around; leave me alone and I’ll leave you alone.

Oddly, though, the people I know who express disdain for communities with strict environmental standards like to think of themselves as “libertarian”. But they are not; they are “liberals” who have a strong preference for authoritarianism, that is, pushing other people around (e.g., Obamacare, “green” regulations). It’s just that, like most people, they don’t like to be pushed around. There’s no better word for such people than “hypocrite”.

The Lesson of Alfie Evans

Alfie Evans, though he is probably doomed to die because of his physical ailment, deserves a better end than the one that the government of Britain has decreed for him. The tone of this post is relatively calm compared to the black rage that I feel on behalf of Alfie and his parents.

When the state becomes your health-care provider, the state can kill you without benefit of a trial. It would be known as involuntary euthanasia were it not for the fact that the act of murder is passive rather than active.

When the state controls the output of a product or service, that product or service must be rationed. The state has no other way to respond to consumer demand. It is not a business competing for customers; drawing on available resources to respond to demand; or taking risks that may yield a profit (the reward for success) or a loss (the penalty for failure). It is just a machine dictating the rate at which the products and services under its control will be provided, and — with the help of algorithms and favoritism — determining for whom they will be provided. The state certainly doesn’t create supply in response to demand. In fact, it stifles supply with its often-ridiculously low reimbursement rates and onerous regulations. The state has no business being in business. It certainly has no business being in the health-care business.

But when, like Alfie’s parents, you challenge the state’s authority in such matters, you can’t expect compassion or flexibility. The rules are the rules, and a relaxation of the rules would call into question the authority upon which the state relies to maintain its monopoly power. If Alfie Evans were allowed treatment in another country, what would that say about the state of health care in Britain? Well, what it would say is what observant people around the world have known for decades: Britain’s National Health Service is a crime, wrapped in a bureaucracy, inside a pseudo-egalitarian facade.

Socialized medicine is of a piece with other examples of magical thinking, which abounds on the left; for example:

  • There can be a single-payer system of health care without “death panels”. (The case at hand.)
  • Women can do everything that men can do, but it doesn’t work the other way … just because.
  • Mothers can work outside the home without damage to their children.
  • Race is a “social construct”; there is no such thing as intelligence; women and men are mentally and physically equal in all respects; and the under-representation of women and blacks in certain fields is therefore due to rank discrimination (but it’s all right if blacks dominate certain sports and women now far outnumber men on college campuses).
  • A minimum wage can be imposed without an increase in unemployment.
  • Taxes can be raised without discouraging investment and therefore reducing the rate of economic growth.
  • Peace can be had without preparedness for war. (A reality that most British leaders ignored in the 1930s, despite Churchill’s warnings.)
  • Regulation doesn’t reduce the rate of economic growth and foster “crony capitalism”. There can be “free lunches” all around.
  • Health insurance premiums will go down while the number of mandates is increased.
  • The economy can be stimulated through the action of the Keynesian multiplier, which is nothing but phony math.
  • “Green” programs create jobs (but only because they are inefficient).
  • Every “right” under the sun can be granted without cost (e.g., affirmative action racial-hiring quotas, which penalize blameless whites; the Social Security Ponzi scheme, which burdens today’s workers and cuts into growth-inducing saving).

Why do such fallacies persist, and so often dictate state action? To round out the psychological profile of leftism, one must add to magical thinking the closely related nirvana fallacy (hypothetical perfect is always better than feasible reality); large doses of neurotic hysteria (e.g., the overpopulation fears of Paul Ehrlich, the AGW hoax of Al Gore et al.); and adolescent rebellion (e.g., the post-election tantrum-riots of 2016).

The rhetoric of leftism — when it is not downright hateful toward non-leftists — has wide appeal because to adopt it for one’s own and to echo it is to make oneself feel kind, caring, generous — and powerful — at a stroke. It matters not whether the policies that flow from leftist rhetoric actually make others better off. The important things, to a leftist, are how he feels about himself and how others perceive him.

It is easy for a leftist to seem kinder, more caring, and more generous than his conservative and libertarian brethren because a leftist focuses on intentions rather than consequences. No matter that the consequences of leftist dogma could match their stated intentions only if Santa Claus or the Tooth Fairy ruled the world.

In the leftist’s imagination, of course, government is Santa Claus or the Tooth Fairy. Government, despite the fact that it consists of venal and fallible humans, somehow (in the leftist’s imagination) wields powers that enable it to make “good” things happen with the stroke of a pen and at no cost. Or only at the expense of the despised “rich”, even though most of them, it seems, are elite leftists.


Related reading:

David French, “Alfie Evans Foreshadows a Dark American Future“, National Review, April 26, 2018
Ramesh Ponnuru, “Alfie Evans and Libertarianism“, National Review, May 8, 2018 (Ponnuru quotes Michael Cannon of Cato Institute. Cannon’s remarks remind me why I rejected Cato’s brand of faux libertarianism and find little difference between it and leftism.)


Related posts:
Rationing and Health Care
The Perils of Nannyism: The Case of Obamacare
More about the Perils of Obamacare
Health-Care Reform: The Short of It
Asymmetrical (Ideological) Warfare
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
Ruminations on the Left in America
God-Like Minds
An Addendum to (Asymmetrical) Ideological Warfare
The Left and Violence
Leftism
Leftism As Crypto-Fascism: The Google Paradigm
“Tribalists”, “Haters”, and Psychological Projection
Utopianism, Leftism, and Dictatorship
Social Norms, the Left, and Social Disintegration