The Balderdash Chronicles

Balderdash is nonsense, to put it succinctly. Less succinctly, balderdash is stupid or illogical talk; senseless rubbish. Rather thoroughly, it is

balls, bull, rubbish, shit, rot, crap, garbage, trash, bunk, bullshit, hot air, tosh, waffle, pap, cobblers, bilge, drivel, twaddle, tripe, gibberish, guff, moonshine, claptrap, hogwash, hokum, piffle, poppycock, bosh, eyewash, tommyrot, horsefeathers, or buncombe.

I have encountered innumerable examples of balderdash in my 35 years of full-time work, 14 subsequent years of blogging, and many overlapping years as an observer of the political scene. This essay documents some of the worst balderdash that I have come across.

THE LIMITS OF SCIENCE

Science (or what too often passes for it) generates an inordinate amount of balderdash. Consider an article in The Christian Science Monitor: “Why the Universe Isn’t Supposed to Exist”, which reads in part:

The universe shouldn’t exist — at least according to a new theory.

Modeling of conditions soon after the Big Bang suggests the universe should have collapsed just microseconds after its explosive birth, the new study suggests.

“During the early universe, we expected cosmic inflation — this is a rapid expansion of the universe right after the Big Bang,” said study co-author Robert Hogan, a doctoral candidate in physics at King’s College in London. “This expansion causes lots of stuff to shake around, and if we shake it too much, we could go into this new energy space, which could cause the universe to collapse.”

Physicists draw that conclusion from a model that accounts for the properties of the newly discovered Higgs boson particle, which is thought to explain how other particles get their mass; faint traces of gravitational waves formed at the universe’s origin also inform the conclusion.

Of course, there must be something missing from these calculations.

“We are here talking about it,” Hogan told Live Science. “That means we have to extend our theories to explain why this didn’t happen.”

No kidding!

Though there’s much more to come, this example should tell you all that you need to know about the fallibility of scientists. If you need more examples, consider these.

MODELS LIE WHEN LIARS MODEL

Not that there’s anything wrong with being wrong, but there’s a great deal wrong with seizing on a transitory coincidence between two variables (CO2 emissions and “global” temperatures in the late 1900s) and spurring a massively wrong-headed “scientific” mania — the mania of anthropogenic global warming.

What it comes down to is modeling, which is simply a way of baking one’s assumptions into a pseudo-scientific mathematical concoction. Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example: by omitting certain variables in favor of others; by focusing on data for a selected period of time (and not testing the results against all the data); by adjusting data without fully explaining or justifying the basis for the adjustment; by using proxies for missing data without examining the biases that result from the use of particular proxies.
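To make the period-selection gambit concrete, here is a minimal sketch in Python. Everything in it is simulated noise — no real data — and the after-the-fact window search is the act of mining:

```python
import numpy as np

# A minimal sketch of one data-torture technique named above: selecting
# the period that yields the desired result. The series is pure noise
# (simulated, not real observations); searching after the fact for the
# steepest 30-year window manufactures a "trend" that the full record
# does not support.
rng = np.random.default_rng(1)
years = np.arange(1900, 2000)
series = rng.standard_normal(years.size)

slope = lambda i: np.polyfit(years[i:i + 30], series[i:i + 30], 1)[0]
best = max(range(years.size - 30), key=slope)

print(f"trend over the full record : {np.polyfit(years, series, 1)[0]:+.4f} per year")
print(f"steepest 30-year 'trend'   : {slope(best):+.4f} per year (starting {years[best]})")
```

The same search could, of course, be run for the steepest downward window; the point is that the window is chosen after looking at the data.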

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed? (A bare-bones version of the year-omission check is sketched below.)
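The year-omission check might look like the following sketch; the series is a simulated placeholder, not any particular model’s data:

```python
import numpy as np

# A minimal sensitivity check: refit a simple trend model with and without
# the final year of data and see how much the estimate moves. The "data"
# here are simulated for illustration only.
rng = np.random.default_rng(2)
years = np.arange(1980, 2015)
obs = 0.01 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

slope_all = np.polyfit(years, obs, 1)[0]
slope_less = np.polyfit(years[:-1], obs[:-1], 1)[0]
change = 100 * abs(slope_less - slope_all) / abs(slope_all)
print(f"slope, all years       : {slope_all:+.4f}")
print(f"slope, one year dropped: {slope_less:+.4f} ({change:.1f}% change)")
```

A model whose headline result swings noticeably under so small a perturbation has no business generating “proofs” or “predictions.”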

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a general explanation of the sham, see this.

SCIENCE VS. SCIENTISM: STEVEN PINKER’S BALDERDASH

The examples that I’ve adduced thus far (and most of those that follow) demonstrate a mode of thought known as scientism: the application of the tools and language of science to create a pretense of knowledge.

No less a personage than Steven Pinker defends scientism in “Science Is Not Your Enemy”. Actually, Pinker doesn’t overtly defend scientism, which is indefensible; he just redefines it to mean science:

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.

After that slippery performance, it’s all smooth sailing — or so Pinker thinks — because all he has to do is point out all the good things about science. And if scientism=science, then scientism is good, right?

Wrong. Scientism remains indefensible, and there’s a lot of scientism in what passes for science. Pinker says this, for example:

The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation.

There is nothing new in this, as Pinker admits by adverting to Madison. Nor was the understanding of human nature “submerged” except in the writings of scientistic social “scientists”. We ordinary mortals were never fooled. Moreover, Pinker’s idea of scientific political science seems to be data-dredging:

With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively.

As explained here, data-dredging is about as scientistic as it gets:

When enough hypotheses are tested, it is virtually certain that some falsely appear statistically significant, since every data set with any degree of randomness contains some spurious correlations. Researchers using data mining techniques, if they are not careful, can be easily misled by these apparently significant results, even though they are mere artifacts of random variation.
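The arithmetic of that quotation is easy to verify. Here is a minimal sketch, again built from nothing but random noise:

```python
import numpy as np

# Data-dredging in miniature: correlate one noise series with 200 other
# noise series. At the conventional 5-percent significance level, roughly
# 10 "significant" correlations should appear by chance alone.
rng = np.random.default_rng(0)
n_obs, n_vars = 30, 200
y = rng.standard_normal(n_obs)
X = rng.standard_normal((n_obs, n_vars))

crit = 0.361  # approximate critical |r| for n = 30, two-tailed p = 0.05
r = np.array([np.corrcoef(y, X[:, j])[0, 1] for j in range(n_vars)])
n_spurious = int((np.abs(r) > crit).sum())
print(f"{n_spurious} 'significant' correlations out of {n_vars}, from pure noise")
```

Report only the survivors, and the noise looks like a finding.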

Turning to the humanities, Pinker writes:

[T]here can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate [sic] and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of a progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

What on earth is Pinker talking about? This is over-the-top bafflegab worthy of Professor Irwin Corey. But because it comes from the keyboard of a noted (self-promoting) academic, we are meant to take it seriously.

Yes, art, culture, and society are products of human brains. So what? Poker is, too, and it’s a lot more amenable to explication by the mathematical tools of science. But the successful application of those tools depends on traits that are more art than science (e.g., bluffing, spotting “tells”, and avoiding “tells”).

More “explanatory depth” in the humanities means a deeper pile of B.S. Great art, literature, and music aren’t concocted formulaically. If they could be, modernism and postmodernism wouldn’t have yielded mountains of trash.

Oh, I know: It will be different next time. As if the tools of science are immune to misuse by obscurantists, relativists, and practitioners of political correctness. Tell it to those climatologists who dare to challenge the conventional wisdom about anthropogenic global warming. Tell it to the “sub-human” victims of the Third Reich’s medical experiments and gas chambers.

Pinker anticipates this kind of objection:

At a 2011 conference, [a] colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

But the Tuskegee study was only a one-time failure in the sense that it was the only Tuskegee study. As a type of failure — the misuse of science (witting and unwitting) — it goes hand-in-hand with the advance of scientific knowledge. Should science be abandoned because of that? Of course not. But the hard fact is that science, qua science, is powerless against human nature.

Pinker plods on by describing ways in which science can contribute to the visual arts, music, and literary scholarship:

The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

I wonder how Rembrandt and the Impressionists (among other pre-moderns) managed to create visual art of such evident excellence without relying on the kinds of scientific mechanisms invoked by Pinker. I wonder what music scholars would learn about excellence in composition that isn’t already evident in the general loathing of audiences for most “serious” modern and contemporary music.

As for literature, great writers know instinctively and through self-criticism how to tell stories that realistically depict character, social psychology, culture, conflict, and all the rest. Scholars (and critics), at best, can acknowledge what rings true and has dramatic or comedic merit. Scientistic pretensions in scholarship (and criticism) may result in promotions and raises for the pretentious, but they do not add to the sum of human enjoyment — which is the real test of literature.

Pinker inveighs against critics of scientism (science, in Pinker’s vocabulary) who cry “reductionism” and “simplification”. With respect to the former, Pinker writes:

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

It is reductionist to explain a complex happening in terms of a deeper principle when that principle fails to account for the complex happening. Pinker obscures that essential point by offering a silly and irrelevant example about World War I. This bit of misdirection is unsurprising, given Pinker’s foray into reductionism, The Better Angels of Our Nature: Why Violence Has Declined, discussed later.

As for simplification, Pinker says:

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic.

Pinker again dodges the issue. Simplification is simplistic when the “general principles” fail to account adequately for the phenomenon in question.

Much of the problem arises because of a simple fact that is too often overlooked: Scientists, for the most part, are human beings with a particular aptitude for pattern-seeking and the manipulation of abstract ideas. They can easily get lost in such pursuits and fail to notice that their abstractions have taken them a long way from reality (e.g., Einstein’s special theory of relativity).

In sum, scientists are human and fallible. It is in the best tradition of science to distrust their scientific claims and to dismiss their non-scientific utterances.

ECONOMICS: PHYSICS ENVY AT WORK

Economics is rife with balderdash cloaked in mathematics. Economists who rely heavily on mathematics like to say (and perhaps even believe) that mathematical expression is more precise than mere words. But, as Arnold Kling points out in “An Important Emerging Economic Paradigm”, mathematical economics is a language of faux precision, which is useful only when applied to well-defined, narrow problems. It can’t address the big issues — such as economic growth — which depend on variables, such as the rule of law and social norms, that defy mathematical expression and quantification.

I would go a step further and argue that mathematical economics borders on obscurantism. It’s a cult whose followers speak an arcane language not only to communicate among themselves but to obscure the essentially bankrupt nature of their craft from others. Mathematical expression actually hides the assumptions that underlie it. It’s far easier to identify and challenge the assumptions of “literary” economics than it is to identify and challenge the assumptions of mathematical economics.

I daresay that this is true even for persons who are conversant in mathematics. They may be able to manipulate easily the equations of mathematical economics, but they are able to do so without grasping the deeper meanings — the assumptions and complexities — hidden by those equations. In fact, the ease of manipulating the equations gives them a false sense of mastery of the underlying, real concepts.

Much of the economics profession is nevertheless dedicated to the protection and preservation of the essential incompetence of mathematical economists. This is from “An Important Emerging Economic Paradigm”:

One of the best incumbent-protection rackets going today is for mathematical theorists in economics departments. The top departments will not certify someone as being qualified to have an advanced degree without first subjecting the student to the most rigorous mathematical economic theory. The rationale for this is reminiscent of fraternity hazing. “We went through it, so should they.”

Mathematical hazing persists even though there are signs that the prestige of math is on the decline within the profession. The important Clark Medal, awarded to the most accomplished American economist under the age of 40, has not gone to a mathematical theorist since 1989.

These hazing rituals can have real consequences. In medicine, the controversial tradition of long work hours for medical residents has come under scrutiny over the last few years. In economics, mathematical hazing is not causing immediate harm to medical patients. But it probably is working to the long-term detriment of the profession.

The hazing ritual in economics has at least two real and damaging consequences. First, it discourages entry into the economics profession by persons who, like Kling, can discuss economic behavior without resorting to the sterile language of mathematics. Second, it leads to economics that’s irrelevant to the real world — and dead wrong.

How wrong? Economists are notoriously bad at constructing models that adequately predict near-term changes in GDP. That task should be easier than sorting out the microeconomic complexities of the labor market.

Take Professor Ray Fair, for example. Professor Fair teaches macroeconomic theory, econometrics, and macroeconometric models at Yale University. He has been plying his trade since 1968, first at Princeton, then at M.I.T., and (since 1974) at Yale. Those are big-name schools, so I assume that Prof. Fair is a big name in his field.

Well, since 1983, Prof. Fair has been forecasting changes in real GDP over the next four quarters. He has made 80 such forecasts based on a model that he has undoubtedly tweaked over the years. The current model is here. His forecasting track record is here. How has he done? Here’s how:

1. The median absolute error of his forecasts is 30 percent.

2. The mean absolute error of his forecasts is 70 percent. (A sketch of how these error statistics are computed follows this list.)

3. His forecasts are rather systematically biased: too high when real four-quarter GDP growth is less than 4 percent; too low when real four-quarter GDP growth is greater than 4 percent.

4. His forecasts have grown generally worse — not better — with time.
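For concreteness, here is how error statistics of that kind are computed. The forecast/actual pairs below are hypothetical stand-ins, not Prof. Fair’s numbers:

```python
import numpy as np

# Hypothetical forecast/actual pairs for four-quarter real GDP growth (%).
# Placeholders to show the computation, not Prof. Fair's actual record.
forecast = np.array([3.0, 2.5, 4.2, 1.0, 3.8, 2.0])
actual   = np.array([2.0, 3.1, 3.0, 2.2, 1.5, 2.6])

abs_pct_error = np.abs(forecast - actual) / np.abs(actual) * 100
print(f"median absolute error: {np.median(abs_pct_error):.0f}%")
print(f"mean absolute error  : {np.mean(abs_pct_error):.0f}%")

# Bias check: a positive mean error means the forecasts run too high.
bias = np.mean(forecast - actual)
print(f"mean error: {bias:+.2f} percentage points")
```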

Prof. Fair is still at it. And his forecasts continue to grow worse with time:

[Graph: Fair-model forecasting errors vs. time]
This and later graphs pertaining to Prof. Fair’s forecasts were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing; 8 percent of the errors exceed 200 percent.

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

[Graph: Fair-model estimated vs. actual growth rate]

It fails the most important test; that is, it doesn’t reflect the downward trend in economic growth:

[Graph: Fair-model year-over-year growth, estimated and actual]

THE INVISIBLE ELEPHANT IN THE ROOM

Professor Fair and his prognosticating ilk are pikers compared with John Maynard Keynes and his disciples. The Keynesian multiplier is the fraud of all frauds, not just in economics but in politics, where it is too often invoked as an excuse for taking money from productive uses and pouring it down the rathole of government spending.

The Keynesian (fiscal) multiplier is defined as

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier. Please go to “Killing the Keynesian Multiplier” for a detailed explanation of the phony math and a derivation of the true multiplier, which is decidedly negative. Here’s the short version:

  • The phony math involves the use of an accounting identity that can be manipulated in many ways, to “prove” many things. But the accounting identity doesn’t express an operational (or empirical) relationship between a change in government spending and a change in GDP.
  • The true value of the multiplier isn’t 5 (a common mathematical estimate), 1.5 (a common but mistaken empirical estimate used for government purposes), or any positive number. The true value represents the negative relationship between the change in government spending (including transfer payments) as a fraction of GDP and the change in the rate of real GDP growth. Specifically, where F represents government spending as a fraction of GDP,

a rise in F from 0.24 to 0.33 (the actual change from 1947 to 2007) would reduce the real rate of economic growth by 3.1 percentage points. The real rate of growth from 1947 to 1957 was 4 percent. Other things being the same, the rate of growth would have dropped to 0.9 percent in the period 2008-2017. It actually dropped to 1.4 percent, which is within the standard error of the estimate.

  • That kind of drop makes a huge difference in the incomes of Americans. In 10 years, GDP rises by almost 50 percent when the rate of growth is 4 percent, but only by 15 percent when the rate of growth is 1.4 percent (see the arithmetic below). Think of the tens of millions of people who would be living in comfort rather than squalor were it not for Keynesian balderdash, which turns reality on its head in order to promote big government.
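For reference, here is the standard textbook manipulation that yields a multiplier of 5 — assuming, for illustration, a marginal propensity to consume of 0.8 — followed by the compound-growth arithmetic behind the last bullet:

\[
Y = C + I + G, \qquad C = cY \;\Longrightarrow\; Y = \frac{I + G}{1 - c}, \qquad \frac{\Delta Y}{\Delta G} = \frac{1}{1 - c} = \frac{1}{1 - 0.8} = 5 .
\]

\[
(1.04)^{10} \approx 1.48 \quad \text{(almost 50 percent)}, \qquad (1.014)^{10} \approx 1.15 \quad \text{(about 15 percent)}.
\]

The first display is exactly the sort of identity-juggling that “proves” the multiplier; the second is plain compounding.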

MANAGEMENT “SCIENCE”

A hot new item in management “science” a few years ago was the Candle Problem. Graham Morehead describes the problem and discusses its broader, “scientifically” supported conclusions:

The Candle Problem was first presented by Karl Duncker. Published posthumously in 1945, “On problem solving” describes how Duncker provided subjects with a candle, some matches, and a box of tacks. He told each subject to affix the candle to a cork board wall in such a way that when lit, the candle won’t drip wax on the table below (see figure at right). Can you think of the answer?

The only answer that really works is this: (1) dump the tacks out of the box, (2) tack the box to the wall, and (3) light the candle and affix it atop the box as if it were a candle-holder. Incidentally, the problem was much easier to solve if the tacks weren’t in the box at the beginning. When the tacks were in the box, the participant saw it only as a tack-box, not as something they could use to solve the problem. This phenomenon is called “functional fixedness.”

Sam Glucksberg added a fascinating twist to this finding in his 1962 paper, “Influence of strength of drive on functional fixedness and perceptual recognition” (Journal of Experimental Psychology, 1962, Vol. 63, No. 1, 36-41). He studied the effect of financial incentives on solving the candle problem. To one group he offered no money. To the other group he offered an amount of money for solving the problem fast.

Remember, there are two candle problems. Let the “Simple Candle Problem” be the one where the tacks are outside the box — no functional fixedness. The solution is straightforward. Here are the results for those who solved it:

Simple Candle Problem Mean Times:

  • WITHOUT a financial incentive: 4.99 min
  • WITH a financial incentive: 3.67 min

Nothing unexpected here. This is a classical incentivization effect anybody would intuitively expect.

Now, let “In-Box Candle Problem” refer to the original description where the tacks start off in the box.

In-Box Candle Problem Mean Times:

  • WITHOUT a financial incentive: 7.41 min
  • WITH a financial incentive: 11.08 min

How could this be? The financial incentive made people slower? It gets worse — the slowness increases with the incentive. The higher the monetary reward, the worse the performance! This result has been repeated many times since the original experiment.

Glucksberg and others have shown this result to be highly robust. Daniel Pink calls it a legally provable “fact.” How should we interpret the above results?

When your employees have to do something straightforward, like pressing a button or manning one stage in an assembly line, financial incentives work. It’s a small effect, but they do work. Simple jobs are like the simple candle problem.

However, if your people must do something that requires any creative or critical thinking, financial incentives hurt. The In-Box Candle Problem is the stereotypical problem that requires you to think “Out of the Box” (you knew that was coming, didn’t you?). Whenever people must think out of the box, offering them a monetary carrot will keep them in that box.

A monetary reward will help your employees focus. That’s the point. When you’re focused you are less able to think laterally. You become dumber. This is not the kind of thing we want if we expect to solve the problems that face us in the 21st century.

All of this is found in a video (to which Morehead links), wherein Daniel Pink (an author and journalist whose actual knowledge of science and business appears to be close to zero) expounds the lessons of the Candle Problem. Pink displays his (no-doubt-profitable) conviction that the Candle Problem and related “science” reveal (a) the utter bankruptcy of capitalism and (b) the need to replace managers with touchy-feely gurus (like himself, I suppose). That Pink has worked for two of the country’s leading anti-capitalist airheads — Al Gore and Robert Reich — should tell you all that you need to know about Pink’s real agenda.

Here are my reasons for sneering at Pink and his ilk:

1. I have been there and done that. That is to say, as a manager, I lived through (and briefly bought into) the touchy-feely fads of the ’80s and ’90s. Think In Search of Excellence, The One Minute Manager, The Seven Habits of Highly Effective People, and so on. What did anyone really learn from those books and the lectures and workshops based on them? A perceptive person would have learned that it is easy to make up plausible stories about the elements of success, and having done so, it is possible to make a lot of money peddling those stories. But the stories are flawed because (a) they are based on exceptional cases; (b) they attribute success to qualitative assessments of behaviors that seem to be present in those exceptional cases; and (c) they do not properly account for the surrounding (and critical) circumstances that really led to success, among which are luck and rare combinations of personal qualities (e.g., high intelligence, perseverance, people-reading skills). In short, Pink and his predecessors are guilty of reductionism and the post hoc ergo propter hoc fallacy.

2. Also at work is an undue generalization about the implications of the Candle Problem. It may be true that workers will perform better — at certain kinds of tasks (very loosely specified) — if they are not distracted by incentives that are related to the performance of those specific tasks. But what does that have to do with incentives in general? Not much, because the Candle Problem is unlike any work situation that I can think of. Tasks requiring creativity are not performed under deadlines of a few minutes; tasks requiring creativity are (usually) assigned to persons who have demonstrated a creative flair, not to randomly picked subjects; most work, even in this day, involves the routine application of protocols and tools that were designed to produce a uniform result of acceptable quality; it is the design of protocols and tools that requires creativity, and that kind of work is not done under the kind of artificial constraints found in the Candle Problem.

3. The Candle Problem, with its anti-incentive “lesson”, is therefore inapplicable to the real world, where incentives play a crucial and positive role:

  • The profit incentive leads firms to invest resources in the development and/or production of things that consumers are willing to buy because those things satisfy wants at the right price.
  • Firms acquire resources to develop and produce things by bidding for those resources, that is, by offering monetary incentives to attract the resources required to make the things that consumers are willing to buy.
  • The incentives (compensation) offered to workers of various kinds (from scientists with doctorates to burger-flippers) are generally commensurate with the contributions made by those workers to the production of things of value to consumers, and to the value placed on those things by consumers.
  • Workers agree to the terms and conditions of employment (including compensation) before taking a job. The incentive for most workers is to keep a job by performing adequately over a sustained period — not by demonstrating creativity in a few minutes. Some workers (but not a large fraction of them) are striving for performance-based commissions, bonuses, and profit-sharing distributions. But those distributions are based on performance over a sustained period, during which the striving workers have plenty of time to think about how they can perform better.
  • Truly creative work is done, for the most part, by persons who are hired for such work on the basis of their credentials (education, prior employment, test results). Their compensation is based on their credentials, initially, and then on their performance over a sustained period. If they are creative, they have plenty of psychological space in which to exercise and demonstrate their creativity.
  • On-the-job creativity — the improvement of protocols and tools by workers using them — does not occur under conditions of the kind assumed in the Candle Problem. Rather, on-the-job creativity flows from actual work and insights about how to do the work better. It happens when it happens, and has nothing to do with artificial time constraints and monetary incentives to be “creative” within those constraints.
  • Pink’s essential pitch is that incentives can be replaced by offering jobs that yield autonomy (self-direction), mastery (the satisfaction of doing difficult things well), and purpose (the satisfaction of contributing to the accomplishment of something important). Well, good luck with that, but I (and millions of other consumers) want what we want, and if workers want to make a living they will just have to provide what we want, not what turns them on. Yes, there is a lot to be said for autonomy, mastery, and purpose, but there is also a lot to be said for getting a paycheck. And, contrary to Pink’s implication, getting a paycheck does not rule out autonomy, mastery, and purpose — where those happen to go with the job.

Pink and company’s “insights” about incentives and creativity are 180 degrees off-target. McDonald’s could use the Candle Problem to select creative burger-flippers who will perform well under tight deadlines because their compensation is unrelated to the creativity of their burger-flipping. McDonald’s customers should be glad that McDonald’s has taken creativity out of the picture by reducing burger-flipping to the routine application of protocols and tools.

In summary:

  • The Candle Problem is an interesting experiment, and probably valid with respect to the performance of specific tasks against tight deadlines. I think the results apply whether the stakes are money or any other kind of prize. The experiment illustrates the “choke” factor, and nothing more profound than that.
  • I question whether the experiment applies to the usual kind of incentive (e.g., a commission or bonus), where the “incentee” has ample time (months, years) for reflection and research that will enable him to improve his performance and attain a bigger commission or bonus (which usually isn’t an all-or-nothing arrangement).
  • There’s also the dissimilarity of the Candle Problem — which involves more-or-less randomly chosen subjects, working against an artificial deadline — and actual creative thinking — usually involving persons who are experts (even if the expertise is as mundane as ditch-digging), working against looser deadlines or none at all.

PARTISAN POLITICS IN THE GUISE OF PSEUDO-SCIENCE

There’s plenty of it to go around, but this one is a whopper. Peter Singer outdoes his usual tendentious self in this review of Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. In the course of the review, Singer writes:

Pinker argues that enhanced powers of reasoning give us the ability to detach ourselves from our immediate experience and from our personal or parochial perspective, and frame our ideas in more abstract, universal terms. This in turn leads to better moral commitments, including avoiding violence. It is just this kind of reasoning ability that has improved during the 20th century. He therefore suggests that the 20th century has seen a “moral Flynn effect, in which an accelerating escalator of reason carried us away from impulses that lead to violence” and that this lies behind the long peace, the new peace, and the rights revolution. Among the wide range of evidence he produces in support of that argument is the tidbit that since 1946, there has been a negative correlation between an American president’s I.Q. and the number of battle deaths in wars involving the United States.

Singer does not give the source of the IQ estimates on which Pinker relies, but the supposed correlation points to a discredited piece of historiometry by Dean Keith Simonton. Simonton jumps through various hoops to assess the IQs of every president from Washington to Bush II — to one decimal place. That is a feat on a par with reconstructing the final thoughts of Abel, ere Cain slew him.

Before I explain the discrediting of Simonton’s obviously discreditable “research”, there is some fun to be had with the Pinker-Singer story of presidential IQ (Simonton-style) for battle deaths. First, of course, there is the convenient cutoff point of 1946. Why 1946? Well, it enables Pinker-Singer to avoid the inconvenient fact that the Civil War, World War I, and World War II happened while the presidency was held by three men who (in Simonton’s estimation) had high IQs: Lincoln, Wilson, and FDR.

The next several graphs depict best-fit relationships between Simonton’s estimates of presidential IQ and the U.S. battle deaths that occurred during each president’s term of office.* The presidents, in order of their appearance in the titles of the graphs, are Harry S Truman (HST), George W. Bush (GWB), Franklin Delano Roosevelt (FDR), (Thomas) Woodrow Wilson (WW), Abraham Lincoln (AL), and George Washington (GW). The number of battle deaths is rounded to the nearest thousand, so that the prevailing value is 0, even in the case of the Spanish-American War (385 U.S. combat deaths) and George H.W. Bush’s Gulf War (147 U.S. combat deaths).

This is probably the relationship referred to by Singer, though Pinker may show a linear fit, rather than the tighter polynomial fit used here:

[Graph: U.S. battle deaths vs. presidential “IQ”, HST through GWB]

It looks bad for the low “IQ” presidents — if you believe Simonton’s estimates of IQ, which you shouldn’t, and if you believe that battle deaths are a bad thing per se, which they aren’t. I will come back to those points. For now, just suspend your well-justified disbelief.

If the relationship for the HST-GWB era were statistically meaningful, it would not change much with the introduction of additional statistics about “IQ” and battle deaths, but it does:

[Graphs: the same best-fit relationship, recomputed as FDR, WW, AL, and GW are successively added]

If you buy the brand of snake oil being peddled by Pinker-Singer, you must believe that the “dumbest” and “smartest” presidents are unlikely to get the U.S. into wars that result in a lot of battle deaths, whereas some (but, mysteriously, not all) of the “medium-smart” presidents (Lincoln, Wilson, FDR) are likely to do so.

In any event, if you believe in Pinker-Singer’s snake oil, you must accept the consistent “humpback” relationship that is depicted in the preceding four graphs, rather than the highly selective, one-shot negative relationship of the HST-GWB graph.

More seriously, the relationship in the HST-GWB graph is an evident ploy to discredit certain presidents (especially GWB, I suspect), which is why it covers only the period since WWII. Why not just say that you think GWB is a chimp-like, war-mongering moron and be done with it? Pseudo-statistics of the kind offered up by Pinker-Singer is nothing more than a talking point for those already convinced that Bush=Hitler.

But as long as this silly game is in progress, let us continue it, with a new rule. Let us advance from one to two explanatory variables. The second explanatory variable that strongly suggests itself is political party. And because it is not good practice to omit relevant statistics (a favorite gambit of liars), I estimated an equation based on “IQ” and battle deaths for the 27 men who served as president from the first Republican presidency (Lincoln’s) through the presidency of GWB. The equation looks like this:

U.S. battle deaths (000) “owned” by a president =

-80.6 + 0.841 x “IQ” – 31.3 x party (where 0 = Dem, 1 = GOP)

In other words, battle deaths rise at the rate of 841 per IQ point (so much for Pinker-Singer). But there will be fewer deaths with a Republican in the White House (so much for Pinker-Singer’s implied swipe at GWB).
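Here is a hypothetical sketch of how such a two-variable fit is produced — ordinary least squares on an intercept, “IQ,” and a party dummy. The six rows of data are placeholders, not the 27 presidents actually used:

```python
import numpy as np

# Hypothetical sketch: OLS fit of battle deaths (thousands) on estimated
# "IQ" and a party dummy (0 = Dem, 1 = GOP). The rows below are
# placeholders, not Simonton's estimates or the actual death counts.
iq     = np.array([150.0, 132.5, 140.0, 145.1, 130.0, 155.0])
party  = np.array([  1.0,   1.0,   0.0,   0.0,   1.0,   0.0])
deaths = np.array([140.0,   0.0,  53.0, 292.0,   0.0,  34.0])

X = np.column_stack([np.ones_like(iq), iq, party])  # intercept, "IQ", party
coefs, *_ = np.linalg.lstsq(X, deaths, rcond=None)
b0, b_iq, b_party = coefs
print(f"deaths (000) = {b0:.1f} + {b_iq:.3f} x IQ + {b_party:.1f} x party")
```

With so few observations and a mostly-zero dependent variable, such a fit is exactly as fragile as the text suggests — which is rather the point.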

All of this is nonsense, of course, for two reasons: Simonton’s estimates of IQ are hogwash, and the number of U.S. battle deaths is a meaningless number, taken by itself.

With regard to the hogwash, Simonton’s estimates of presidents’ IQs put every one of them — including the “dumbest,” U.S. Grant — in the top 2.3 percent of the population, the equivalent of an IQ of 130 or higher, two standard deviations above the mean. And the mean of Simonton’s estimates puts the average president in the top 0.1 percent (one-tenth of one percent) of the population. That is literally incredible. Good evidence of the unreliability of Simonton’s estimates is found in an entry by Thomas C. Reeves at George Mason University’s History News Network. Reeves is the author of A Question of Character: A Life of John F. Kennedy, the negative reviews of which are evidently the work of JFK idolators who refuse to be disillusioned by facts. Anyway, here is Reeves:

I’m a biographer of two of the top nine presidents on Simonton’s list and am highly familiar with the histories of the other seven. In my judgment, this study has little if any value. Let’s take JFK and Chester A. Arthur as examples.

Kennedy was actually given an IQ test before entering Choate. His score was 119…. There is no evidence to support the claim that his score should have been more than 40 points higher [i.e., the IQ of 160 attributed to Kennedy by Simonton]. As I described in detail in A Question Of Character [link added], Kennedy’s academic achievements were modest and respectable, his published writing and speeches were largely done by others (no study of Kennedy is worthwhile that downplays the role of Ted Sorensen)….

Chester Alan Arthur was largely unknown before my Gentleman Boss was published in 1975. The discovery of many valuable primary sources gave us a clear look at the president for the first time. Among the most interesting facts that emerged involved his service during the Civil War, his direct involvement in the spoils system, and the bizarre way in which he was elevated to the GOP presidential ticket in 1880. His concealed and fatal illness while in the White House also came to light.

While Arthur was a college graduate, and was widely considered to be a gentleman, there is no evidence whatsoever to suggest that his IQ was extraordinary. That a psychologist can rank his intelligence 2.3 points ahead of Lincoln’s suggests access to a treasure of primary sources from and about Arthur that does not exist.

This historian thinks it impossible to assign IQ numbers to historical figures. If there is sufficient evidence (as there usually is in the case of American presidents), we can call people from the past extremely intelligent. Adams, Wilson, TR, Jefferson, and Lincoln were clearly well above average intellectually. But let us not pretend that we can rank them by tenths of a percentage point or declare that a man in one era stands well above another from a different time and place.

My educated guess is that this recent study was designed in part to denigrate the intelligence of the current occupant of the White House….

That is an excellent guess.

The meaninglessness of battle deaths as a measure of anything — but battle deaths — should be evident. But in case it is not evident, here goes:

  • Wars are sometimes necessary, sometimes not. (I give my views about the wisdom of America’s various wars at this post.) Necessary or not, presidents usually act in accordance with popular and elite opinion about the desirability of a particular war. Imagine, for example, the reaction if FDR had not gone to Congress on December 8, 1941, to ask for a declaration of war against Japan, or if GWB had not sought the approval of Congress for action in Afghanistan.
  • Presidents may have a lot to do with the decision to enter a war, but they have little to do with the external forces that help to shape that decision. GHWB, for example, had nothing to do with Saddam’s decision to invade Kuwait and thereby threaten vital U.S. interests in the Middle East. GWB, to take another example, was not a party to the choices of earlier presidents (GHWB and Clinton) that enabled Saddam to stay in power and encouraged Osama bin Laden to believe that America could be brought to its knees by a catastrophic attack.
  • The number of battle deaths in a war depends on many things outside the control of a particular president; for example, the size and capabilities of enemy forces, the size and capabilities of U.S. forces (which have a lot to do with the decisions of earlier administrations and Congresses), and the scope and scale of a war (again, largely dependent on the enemy).
  • Battle deaths represent personal tragedies, but — in and of themselves — are not a measure of a president’s wisdom or acumen. Whether the deaths were in vain is a separate issue that depends on the aforementioned considerations. To use battle deaths as a single, negative measure of a president’s ability is rank cynicism — the rankness of which is revealed in Pinker’s decision to ignore Lincoln and FDR and their “good” but deadly wars.

To put the last point another way, if the number of battle deaths is a bad thing, Lincoln and FDR should be rotting in hell for the wars that brought an end to slavery and Hitler.
__________
* The numbers of U.S. battle deaths, by war, are available at infoplease.com, “America’s Wars: U.S. Casualties and Veterans”. The deaths are “assigned” to presidents as follows (numbers in parentheses indicate thousands of deaths):

All of the deaths (2) in the War of 1812 occurred on Madison’s watch.

All of the deaths (2) in the Mexican-American War occurred on Polk’s watch.

I count only Union battle deaths (140) during the Civil War; all are “Lincoln’s.” Let the Confederate dead be on the head of Jefferson Davis. This is a gift, of sorts, to Pinker-Singer, because if Confederate dead were counted as Lincoln’s, his high “IQ” would make Pinker-Singer’s hypothesis even more ludicrous than it is.

WW is the sole “owner” of WWI battle deaths (53).

Some of the U.S. battle deaths in WWII (292) occurred while HST was president, but Truman was merely presiding over the final months of a war that was almost won when FDR died. Truman’s main role was to hasten the end of the war in the Pacific by electing to drop the A-bombs on Hiroshima and Nagasaki. So FDR gets “credit” for all WWII battle deaths.

The Korean War did not end until after Eisenhower succeeded Truman, but it was “Truman’s war,” so he gets “credit” for all Korean War battle deaths (34). This is another “gift” to Pinker-Singer because Ike’s “IQ” is higher than Truman’s.

Vietnam was “LBJ’s war,” but I’m sure that Singer would not want Nixon to go without “credit” for the battle deaths that occurred during his administration. Moreover, LBJ had effectively lost the Vietnam war through his gradualism, but Nixon chose nevertheless to prolong the agony. So I have shared the “credit” for Vietnam War battle deaths between LBJ (deaths in 1965-68: 29) and RMN (deaths in 1969-73: 17). To do that, I apportioned total Vietnam War battle deaths, as given by infoplease.com, according to the total number of U.S. deaths in each year of the war, 1965-1973.
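A sketch of that apportionment, with placeholder annual figures rather than the infoplease.com numbers:

```python
# Hypothetical sketch of the proportional apportionment described above:
# split total Vietnam War battle deaths between LBJ (1965-68) and RMN
# (1969-73) according to each year's share of total U.S. deaths.
# All figures below are placeholders, in thousands.
total_battle_deaths = 47.4
deaths_by_year = {1965: 2.0, 1966: 6.0, 1967: 11.0, 1968: 17.0,
                  1969: 12.0, 1970: 6.0, 1971: 2.4, 1972: 0.6, 1973: 0.3}

overall = sum(deaths_by_year.values())
lbj_share = sum(v for y, v in deaths_by_year.items() if y <= 1968) / overall
print(f"LBJ: {total_battle_deaths * lbj_share:.0f}k, "
      f"RMN: {total_battle_deaths * (1 - lbj_share):.0f}k")
```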

The wars in Afghanistan and Iraq are “GWB’s wars,” even though Obama has continued them. So I have “credited” GWB with all the battle deaths in those wars, as of May 27, 2011 (5).

The relative paucity of U.S. combat deaths in other post-WWII actions (e.g., Lebanon, Somalia, Persian Gulf) is attested to by “Post-Vietnam Combat Casualties”, at infoplease.com.

A THIRD APPEARANCE BY PINKER

Steven Pinker, whose ignominious outpourings I have addressed twice here, deserves a third strike (which he shall duly be awarded). Pinker’s The Better Angels of Our Nature is cited gleefully by leftists and cockeyed optimists as evidence that human beings, on the whole, are becoming kinder and gentler because of:

  • The Leviathan – the rise of the modern nation-state and judiciary “with a monopoly on the legitimate use of force,” which “can defuse the [individual] temptation of exploitative attack, inhibit the impulse for revenge, and circumvent…self-serving biases”;
  • Commerce – the rise of “technological progress [allowing] the exchange of goods and services over longer distances and larger groups of trading partners,” so that “other people become more valuable alive than dead” and “are less likely to become targets of demonization and dehumanization”;
  • Feminization – increasing respect for “the interests and values of women”;
  • Cosmopolitanism – the rise of forces such as literacy, mobility, and mass media, which “can prompt people to take the perspectives of people unlike themselves and to expand their circle of sympathy to embrace them”;
  • The Escalator of Reason – an “intensifying application of knowledge and rationality to human affairs,” which “can force people to recognize the futility of cycles of violence, to ramp down the privileging of their own interests over others’, and to reframe violence as a problem to be solved rather than a contest to be won.”

I can tell you that Pinker’s book is hogwash because two very bright leftists — Peter Singer and Will Wilkinson — have strongly and wrongly endorsed some of its key findings. I dispatched Singer earlier. As for Wilkinson, he praises statistics adduced by Pinker that show a decline in the use of capital punishment:

In the face of such a decisive trend in moral culture, we can say a couple different things. We can say that this is just change and says nothing in particular about what is really right or wrong, good or bad. Or we can say this is evidence of moral progress, that we have actually become better. I prefer the latter interpretation for basically the same reasons most of us see the abolition of slavery and the trend toward greater equality between races and sexes as progress and not mere morally indifferent change. We can talk about the nature of moral progress later. It’s tricky. For now, I want you to entertain the possibility that convergence toward the idea that execution is wrong counts as evidence that it is wrong.

I would count convergence toward the idea that execution is wrong as evidence that it is wrong, if that idea were (a) increasingly held by individuals who (b) had arrived at their “enlightenment” uninfluenced by operatives of the state (legislatures and judges), who take it upon themselves to flout popular support of the death penalty. What we have, in the case of the death penalty, is moral regress, not moral progress.

Moral regress because the abandonment of the death penalty puts innocent lives at risk. Capital punishment sends a message, and the message is effective when it is delivered: it deters homicide. And even if it didn’t, it would at least remove killers from our midst, permanently. By what standard of morality can one claim that it is better to spare killers than to protect innocents? For that matter, by what standard of morality is it better to kill innocents in the womb than to spare killers? Proponents of abortion (like Singer and Wilkinson) — who by and large oppose capital punishment — are completely lacking in moral authority.

Returning to Pinker’s thesis that violence has declined, I quote a review at Foseti:

Pinker’s basic problem is that he essentially defines “violence” in such a way that his thesis that violence is declining becomes self-fulfilling. “Violence” to Pinker is fundamentally synonymous with the behaviors of older civilizations. On the other hand, modern practices are defined to be less violent than older practices.

A while back, I linked to a story about a guy in my neighborhood who’s been arrested over 60 times for breaking into cars. A couple hundred years ago, this guy would have been killed for this sort of vandalism after he got caught the first time. Now, we feed him and shelter him for a while and then we let him back out to do this again. Pinker defines the new practice as a decline in violence – we don’t kill the guy anymore! Someone from a couple hundred years ago would be appalled that we let the guy continue destroying other peoples’ property without consequence. In the mind of those long dead, “violence” has in fact increased. Instead of a decline in violence, this practice seems to me like a decline in justice – nothing more or less.

Here’s another example: Pinker uses creative definitions to show that the conflicts of the 20th Century pale in comparison to previous conflicts. For example, all the Mongol Conquests are considered one event, even though they cover 125 years. If you lump all these various conquests together, and you split up WWI, WWII, Mao’s takeover in China, the Bolshevik takeover of Russia, the Russian Civil War, and the Chinese Civil War (yes, he actually considers this a separate event from Mao), you unsurprisingly discover that the events of the 20th Century weren’t all that violent compared to events in the past! Pinker’s third most violent event is the “Mideast Slave Trade”, which he says took place between the 7th and 19th Centuries. Seriously. By this standard, all the conflicts of the 20th Century are related. Is the Russian Revolution or the rise of Mao possible without WWII? Is WWII possible without WWI? By a consistent standard, the 20th Century wars of Communism would constitute the worst conflict by far. Of course, if you fiddle with the numbers, you can make any point you like.

There’s much more to the review, including some telling criticisms of Pinker’s five reasons for the (purported) decline in violence. That the reviewer somehow still wants to believe in the rightness of Pinker’s thesis says more about the reviewer’s optimism than it does about the validity of Pinker’s thesis.

That thesis is fundamentally flawed, as Robert Epstein points out in a review at Scientific American:

[T]he wealth of data [Pinker] presents cannot be ignored—unless, that is, you take the same liberties as he sometimes does in his book. In two lengthy chapters, Pinker describes psychological processes that make us either violent or peaceful, respectively. Our dark side is driven by an evolution-based propensity toward predation and dominance. On the angelic side, we have, or at least can learn, some degree of self-control, which allows us to inhibit dark tendencies.

There is, however, another psychological process—confirmation bias—that Pinker sometimes succumbs to in his book. People pay more attention to facts that match their beliefs than those that undermine them. Pinker wants peace, and he also believes in his hypothesis; it is no surprise that he focuses more on facts that support his views than on those that do not. The SIPRI arms data are problematic, and a reader can also cherry-pick facts from Pinker’s own book that are inconsistent with his position. He notes, for example, that during the 20th century homicide rates failed to decline in both the U.S. and England. He also describes in graphic and disturbing detail the savage way in which chimpanzees—our closest genetic relatives in the animal world—torture and kill their own kind.

Of greater concern is the assumption on which Pinker’s entire case rests: that we look at relative numbers instead of absolute numbers in assessing human violence. But why should we be content with only a relative decrease? By this logic, when we reach a world population of nine billion in 2050, Pinker will conceivably be satisfied if a mere two million people are killed in war that year.

The biggest problem with the book, though, is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Pinker’s belief that violence is on the decline reminds me of “it’s different this time”, a phrase that was on the lips of hopeful stock-pushers, stock-buyers, and pundits during the stock-market bubble of the late 1990s. That bubble ended, of course, in the spectacular crash of 2000.

Predictions about the future of humankind are better left in the hands of writers who see human nature whole, and who are not out to prove that it can be shaped or contained by the kinds of “liberal” institutions that Pinker so obviously favors.

Consider this, from an article by Robert J. Samuelson at The Washington Post:

[T]he Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar. Amid the controversy over leaks from the National Security Agency, this looms as an even bigger downside.

By cyberwarfare, I mean the capacity of groups — whether nations or not — to attack, disrupt and possibly destroy the institutions and networks that underpin everyday life. These would be power grids, pipelines, communication and financial systems, business record-keeping and supply-chain operations, railroads and airlines, databases of all types (from hospitals to government agencies). The list runs on. So much depends on the Internet that its vulnerability to sabotage invites doomsday visions of the breakdown of order and trust.

In a report, the Defense Science Board, an advisory group to the Pentagon, acknowledged “staggering losses” of information involving weapons design and combat methods to hackers (not identified, but probably Chinese). In the future, hackers might disarm military units. “U.S. guns, missiles and bombs may not fire, or may be directed against our own troops,” the report said. It also painted a specter of social chaos from a full-scale cyberassault. There would be “no electricity, money, communications, TV, radio or fuel (electrically pumped). In a short time, food and medicine distribution systems would be ineffective.”

But Pinker wouldn’t count the resulting chaos as violence, as long as human beings were merely starving and dying of various diseases. That violence would ensue, of course, is another story, which is told by John Gray in The Silence of Animals: On Progress and Other Modern Myths. Gray’s book — published  18 months after Better Angels — could be read as a refutation of Pinker’s book, though Gray doesn’t mention Pinker or his book.

The gist of Gray’s argument is faithfully recounted in a review of Gray’s book by Robert W. Merry at The National Interest:

The noted British historian J. B. Bury (1861–1927) … wrote, “This doctrine of the possibility of indefinitely moulding the characters of men by laws and institutions . . . laid a foundation on which the theory of the perfectibility of humanity could be raised. It marked, therefore, an important stage in the development of the doctrine of Progress.”

We must pause here over this doctrine of progress. It may be the most powerful idea ever conceived in Western thought—emphasizing Western thought because the idea has had little resonance in other cultures or civilizations. It is the thesis that mankind has advanced slowly but inexorably over the centuries from a state of cultural backwardness, blindness and folly to ever more elevated stages of enlightenment and civilization—and that this human progression will continue indefinitely into the future…. The U.S. historian Charles A. Beard once wrote that the emergence of the progress idea constituted “a discovery as important as the human mind has ever made, with implications for mankind that almost transcend imagination.” And Bury, who wrote a book on the subject, called it “the great transforming conception, which enables history to define her scope.”

Gray rejects it utterly. In doing so, he rejects all of modern liberal humanism. “The evidence of science and history,” he writes, “is that humans are only ever partly and intermittently rational, but for modern humanists the solution is simple: human beings must in future be more reasonable. These enthusiasts for reason have not noticed that the idea that humans may one day be more rational requires a greater leap of faith than anything in religion.” In an earlier work, Straw Dogs: Thoughts on Humans and Other Animals, he was more blunt: “Outside of science, progress is simply a myth.”

… Gray has produced more than twenty books demonstrating an expansive intellectual range, a penchant for controversy, acuity of analysis and a certain political clairvoyance.

He rejected, for example, Francis Fukuyama’s heralded “End of History” thesis—that Western liberal democracy represents the final form of human governance—when it appeared in this magazine in 1989. History, it turned out, lingered long enough to prove Gray right and Fukuyama wrong….

Though for decades his reputation was confined largely to intellectual circles, Gray’s public profile rose significantly with the 2002 publication of Straw Dogs, which sold impressively and brought him much wider acclaim than he had known before. The book was a concerted and extensive assault on the idea of progress and its philosophical offspring, secular humanism. The Silence of Animals is in many ways a sequel, plowing much the same philosophical ground but expanding the cultivation into contiguous territory mostly related to how mankind—and individual humans—might successfully grapple with the loss of both metaphysical religion of yesteryear and today’s secular humanism. The fundamentals of Gray’s critique of progress are firmly established in both books and can be enumerated in summary.

First, the idea of progress is merely a secular religion, and not a particularly meaningful one at that. “Today,” writes Gray in Straw Dogs, “liberal humanism has the pervasive power that was once possessed by revealed religion. Humanists like to think they have a rational view of the world; but their core belief in progress is a superstition, further from the truth about the human animal than any of the world’s religions.”

Second, the underlying problem with this humanist impulse is that it is based upon an entirely false view of human nature—which, contrary to the humanist insistence that it is malleable, is immutable and impervious to environmental forces. Indeed, it is the only constant in politics and history. Of course, progress in scientific inquiry and in resulting human comfort is a fact of life, worth recognition and applause. But it does not change the nature of man, any more than it changes the nature of dogs or birds. “Technical progress,” writes Gray, again in Straw Dogs, “leaves only one problem unsolved: the frailty of human nature. Unfortunately that problem is insoluble.”

That’s because, third, the underlying nature of humans is bred into the species, just as the traits of all other animals are. The most basic trait is the instinct for survival, which is placed on hold when humans are able to live under a veneer of civilization. But it is never far from the surface. In The Silence of Animals, Gray discusses the writings of Curzio Malaparte, a man of letters and action who found himself in Naples in 1944, shortly after the liberation. There he witnessed a struggle for life that was gruesome and searing. “It is a humiliating, horrible thing, a shameful necessity, a fight for life,” wrote Malaparte. “Only for life. Only to save one’s skin.” Gray elaborates:

Observing the struggle for life in the city, Malaparte watched as civilization gave way. The people the inhabitants had imagined themselves to be—shaped, however imperfectly, by ideas of right and wrong—disappeared. What were left were hungry animals, ready to do anything to go on living; but not animals of the kind that innocently kill and die in forests and jungles. Lacking a self-image of the sort humans cherish, other animals are content to be what they are. For human beings the struggle for survival is a struggle against themselves.

When civilization is stripped away, the raw animal emerges. “Darwin showed that humans are like other animals,” writes Gray in Straw Dogs, expressing in this instance only a partial truth. Humans are different in a crucial respect, captured by Gray himself when he notes that Homo sapiens inevitably struggle with themselves when forced to fight for survival. No other species does that, just as no other species has such a range of spirit, from nobility to degradation, or such a need to ponder the moral implications as it fluctuates from one to the other. But, whatever human nature is—with all of its capacity for folly, capriciousness and evil as well as virtue, magnanimity and high-mindedness—it is embedded in the species through evolution and not subject to manipulation by man-made institutions.

Fourth, the power of the progress idea stems in part from the fact that it derives from a fundamental Christian doctrine—the idea of providence, of redemption….

“By creating the expectation of a radical alteration in human affairs,” writes Gray, “Christianity . . . founded the modern world.” But the modern world retained a powerful philosophical outlook from the classical world—the Socratic faith in reason, the idea that truth will make us free; or, as Gray puts it, the “myth that human beings can use their minds to lift themselves out of the natural world.” Thus did a fundamental change emerge in what was hoped of the future. And, as the power of Christian faith ebbed, along with its idea of providence, the idea of progress, tied to the Socratic myth, emerged to fill the gap. “Many transmutations were needed before the Christian story could renew itself as the myth of progress,” Gray explains. “But from being a succession of cycles like the seasons, history came to be seen as a story of redemption and salvation, and in modern times salvation became identified with the increase of knowledge and power.”

Thus, it isn’t surprising that today’s Western man should cling so tenaciously to his faith in progress as a secular version of redemption. As Gray writes, “Among contemporary atheists, disbelief in progress is a type of blasphemy. Pointing to the flaws of the human animal has become an act of sacrilege.” In one of his more brutal passages, he adds:

Humanists believe that humanity improves along with the growth of knowledge, but the belief that the increase of knowledge goes with advances in civilization is an act of faith. They see the realization of human potential as the goal of history, when rational inquiry shows history to have no goal. They exalt nature, while insisting that humankind—an accident of nature—can overcome the natural limits that shape the lives of other animals. Plainly absurd, this nonsense gives meaning to the lives of people who believe they have left all myths behind.

In The Silence of Animals, Gray explores all this through the works of various writers and thinkers. In the process, he employs history and literature to puncture the conceits of those who cling to the progress idea and the humanist view of human nature. Those conceits, it turns out, are easily punctured when subjected to Gray’s withering scrutiny….

And yet the myth of progress is so powerful in part because it gives meaning to modern Westerners struggling, in an irreligious era, to place themselves in a philosophical framework larger than just themselves….

Much of the human folly catalogued by Gray in The Silence of Animals makes a mockery of the earnest idealism of those who later shaped and molded and proselytized humanist thinking into today’s predominant Western civic philosophy.

RACE AS A SOCIAL CONSTRUCT

David Reich‘s hot new book, Who We Are and How We Got Here, is causing a stir in genetic-research circles. Reich, who takes great pains to assure everyone that he isn’t a racist, and who deplores racism, is nevertheless candid about race:

I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”

Groundbreaking advances in DNA sequencing technology have been made over the last two decades. These advances enable us to measure with exquisite accuracy what fraction of an individual’s genetic ancestry traces back to, say, West Africa 500 years ago — before the mixing in the Americas of the West African and European gene pools that were almost completely isolated for the last 70,000 years. With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real….

Self-identified African-Americans turn out to derive, on average, about 80 percent of their genetic ancestry from enslaved Africans brought to America between the 16th and 19th centuries. My colleagues and I searched, in 1,597 African-American men with prostate cancer, for locations in the genome where the fraction of genes contributed by West African ancestors was larger than it was elsewhere in the genome. In 2006, we found exactly what we were looking for: a location in the genome with about 2.8 percent more African ancestry than the average.

When we looked in more detail, we found that this region contained at least seven independent risk factors for prostate cancer, all more common in West Africans. Our findings could fully account for the higher rate of prostate cancer in African-Americans than in European-Americans. We could conclude this because African-Americans who happen to have entirely European ancestry in this small section of their genomes had about the same risk for prostate cancer as random Europeans.

Did this research rely on terms like “African-American” and “European-American” that are socially constructed, and did it label segments of the genome as being probably “West African” or “European” in origin? Yes. Did this research identify real risk factors for disease that differ in frequency across those populations, leading to discoveries with the potential to improve health and save lives? Yes.

While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another.

But whether we like it or not, that line has already been crossed. A recent study led by the economist Daniel Benjamin compiled information on the number of years of education from more than 400,000 people, almost all of whom were of European ancestry. After controlling for differences in socioeconomic background, he and his colleagues identified 74 genetic variations that are over-represented in genes known to be important in neurological development, each of which is incontrovertibly more common in Europeans with more years of education than in Europeans with fewer years of education.

It is not yet clear how these genetic variations operate. A follow-up study of Icelanders led by the geneticist Augustine Kong showed that these genetic variations also nudge people who carry them to delay having children. So these variations may be explaining longer times at school by affecting a behavior that has nothing to do with intelligence.

This study has been joined by others finding genetic predictors of behavior. One of these, led by the geneticist Danielle Posthuma, studied more than 70,000 people and found genetic variations in more than 20 genes that were predictive of performance on intelligence tests.

Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly. And since all traits influenced by genetics are expected to differ across populations (because the frequencies of genetic variations are rarely exactly the same across populations), the genetic influences on behavior and cognition will differ across populations, too.

You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work. Indeed, the study led by Dr. Kong showed that in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century….

So how should we prepare for the likelihood that in the coming years, genetic studies will show that many traits are influenced by genetic variations, and that these traits will differ on average across human populations? It will be impossible — indeed, anti-scientific, foolish and absurd — to deny those differences. [“How Genetics Is Changing Our Understanding of ‘Race’“, The New York Times, March 23, 2018]
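
The kind of scan Reich describes is conceptually simple: estimate, locus by locus, the fraction of ancestry deriving from a given source population, and flag regions where that fraction departs from the genome-wide average. Here is a toy sketch in Python; the data are simulated and the numbers merely echo those in the excerpt, so this illustrates the arithmetic, not Reich’s actual pipeline.

    import numpy as np

    # Toy admixture scan: flag genomic regions where mean local ancestry in a
    # case group exceeds the genome-wide average. All data here are simulated.
    rng = np.random.default_rng(0)
    n_cases, n_loci = 1597, 1000        # 1,597 cases, echoing the 2006 study
    genome_wide_mean = 0.80             # ~80% average West African ancestry

    # Simulated per-locus ancestry fractions, centered on the genome-wide mean
    ancestry = rng.beta(16, 4, size=(n_cases, n_loci))

    excess = ancestry.mean(axis=0) - genome_wide_mean
    candidates = np.where(excess > 0.028)[0]   # ~2.8 points, as in the excerpt
    # With null (no-signal) data, expect zero flags; a real risk region would
    # stand out against this background.
    print(f"{candidates.size} loci exceed the genome-wide average by >2.8 points")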

Reich engages in a lot of non-scientific wishful thinking about racial differences and how they should be treated by “society” — none of which is in his purview as a scientist. Reich’s forays into psychobabble have been addressed at length by Steve Sailer (here and here) and Gregory Cochran (here, here, here, here, and here). Suffice it to say that Reich is trying in vain to minimize the scientific fact of racial differences that show up crucially in intelligence and rates of violent crime.

The lesson here is that it’s all right to show that race isn’t a social construct as long as you proclaim that it is a social construct. This is known as talking out of both sides of one’s mouth — another manifestation of balderdash.

DIVERSITY IS GOOD, EXCEPT WHEN IT ISN’T

I now invoke Robert Putnam, a political scientist known mainly for his book Bowling Alone: The Collapse and Revival of American Community (2000), in which he

makes a distinction between two kinds of social capital: bonding capital and bridging capital. Bonding occurs when you are socializing with people who are like you: same age, same race, same religion, and so on. But in order to create peaceful societies in a diverse multi-ethnic country, one needs to have a second kind of social capital: bridging. Bridging is what you do when you make friends with people who are not like you, like supporters of another football team. Putnam argues that those two kinds of social capital, bonding and bridging, do strengthen each other. Consequently, with the decline of the bonding capital mentioned above inevitably comes the decline of the bridging capital leading to greater ethnic tensions.

In later work on diversity and trust within communities, Putnam concludes that

other things being equal, more diversity in a community is associated with less trust both between and within ethnic groups….

Even when controlling for income inequality and crime rates, two factors which conflict theory states should be the prime causal factors in declining inter-ethnic group trust, more diversity is still associated with less communal trust.

Lowered trust in areas with high diversity is also associated with:

  • Lower confidence in local government, local leaders and the local news media.
  • Lower political efficacy – that is, confidence in one’s own influence.
  • Lower frequency of registering to vote, but more interest and knowledge about politics and more participation in protest marches and social reform groups.
  • Higher political advocacy, but lower expectations that it will bring about a desirable result.
  • Less expectation that others will cooperate to solve dilemmas of collective action (e.g., voluntary conservation to ease a water or energy shortage).
  • Less likelihood of working on a community project.
  • Less likelihood of giving to charity or volunteering.
  • Fewer close friends and confidants.
  • Less happiness and lower perceived quality of life.
  • More time spent watching television and more agreement that “television is my most important form of entertainment”.

It’s not as if Putnam is a social conservative who is eager to impart such news. To the contrary, as Michael Jonas writes in “The Downside of Diversity“, Putnam’s

findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis … , he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

“Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk,” wrote conservative commentator Ilana Mercer….

After releasing the initial results in 2001, Putnam says he spent time “kicking the tires really hard” to be sure the study had it right. Putnam realized, for instance, that more diverse communities tended to be larger, have greater income ranges, higher crime rates, and more mobility among their residents — all factors that could depress social capital independent of any impact ethnic diversity might have.

“People would say, ‘I bet you forgot about X,’” Putnam says of the string of suggestions from colleagues. “There were 20 or 30 X’s.”

But even after statistically taking them all into account, the connection remained strong: Higher diversity meant lower social capital. In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.”

“People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes….

In a recent study, [Harvard economist Edward] Glaeser and colleague Alberto Alesina demonstrated that roughly half the difference in social welfare spending between the US and Europe — Europe spends far more — can be attributed to the greater ethnic diversity of the US population. Glaeser says lower national social welfare spending in the US is a “macro” version of the decreased civic engagement Putnam found in more diverse communities within the country.

Economists Matthew Kahn of UCLA and Dora Costa of MIT reviewed 15 recent studies in a 2003 paper, all of which linked diversity with lower levels of social capital. Greater ethnic diversity was linked, for example, to lower school funding, census response rates, and trust in others. Kahn and Costa’s own research documented higher desertion rates in the Civil War among Union Army soldiers serving in companies whose soldiers varied more by age, occupation, and birthplace.

Birds of different feathers may sometimes flock together, but they are also less likely to look out for one another. “Everyone is a little self-conscious that this is not politically correct stuff,” says Kahn….

In his paper, Putnam cites the work done by Page and others, and uses it to help frame his conclusion that increasing diversity in America is not only inevitable, but ultimately valuable and enriching. As for smoothing over the divisions that hinder civic engagement, Putnam argues that Americans can help that process along through targeted efforts. He suggests expanding support for English-language instruction and investing in community centers and other places that allow for “meaningful interaction across ethnic lines.”

Some critics have found his prescriptions underwhelming. And in offering ideas for mitigating his findings, Putnam has drawn scorn for stepping out of the role of dispassionate researcher. “You’re just supposed to tell your peers what you found,” says John Leo, senior fellow at the Manhattan Institute, a conservative think tank. [Michael Jonas, “The downside of diversity,” The Boston Globe (boston.com), August 5, 2007]
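
For readers unfamiliar with what “statistically taking into account” means, the standard tool is multiple regression: the confounders enter the model alongside diversity, so the diversity coefficient estimates its association with trust while the other variables are held fixed. A minimal sketch in Python, using simulated community-level data (not Putnam’s), looks like this:

    import numpy as np
    import statsmodels.api as sm

    # Simulated community data: trust depends on diversity and on confounders
    # such as crime; "controlling for" crime and size means estimating the
    # diversity effect with those variables held constant.
    rng = np.random.default_rng(1)
    n = 500
    diversity = rng.uniform(0, 1, n)
    log_size = rng.normal(10, 1, n)                   # log of population
    crime = 0.5 * diversity + rng.normal(0, 0.2, n)   # correlated confounder
    trust = 0.8 - 0.4 * diversity - 0.1 * crime + rng.normal(0, 0.1, n)

    X = sm.add_constant(np.column_stack([diversity, log_size, crime]))
    fit = sm.OLS(trust, X).fit()
    print(fit.params)   # [intercept, diversity, log_size, crime]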

What is it about academics like Reich and Putnam that makes them unable to face the very facts they have uncovered? The magic word is “academics”. They are denizens of a milieu in which the facts of life about race, guns, sex, and many other things are routinely suppressed in favor of “hope and change”, and the facts be damned.

ONE MORE BIT OF RACE-RELATED BALDERDASH

I was unaware of the Implicit Association Test (IAT) until a few years ago, when I took a test at YourMorals.Org that purported to measure my implicit racial preferences. The IAT has since been exposed as junk. As John J. Ray puts it:

Psychologists are well aware that people often do not say what they really think.  It is therefore something of a holy grail among them to find ways that WILL detect what people really think. A very popular example of that is the Implicit Associations test (IAT).  It supposedly measures racist thoughts whether you are aware of them or not.  It sometimes shows people who think they are anti-racist to be in fact secretly racist.

I dismissed it as a heap of junk long ago (here and here) but it has remained very popular and is widely accepted as revealing truth.  I am therefore pleased that a very long and thorough article has just appeared which comes to the same conclusion that I did.

The article in question (which has the same title as Ray’s post) is by Jesse Singal. It appeared at Science of Us on January 11, 2017. Here are some excerpts:

Perhaps no new concept from the world of academic psychology has taken hold of the public imagination more quickly and profoundly in the 21st century than implicit bias — that is, forms of bias which operate beyond the conscious awareness of individuals. That’s in large part due to the blockbuster success of the so-called implicit association test, which purports to offer a quick, easy way to measure how implicitly biased individual people are….

Since the IAT was first introduced almost 20 years ago, its architects, as well as the countless researchers and commentators who have enthusiastically embraced it, have offered it as a way to reveal to test-takers what amounts to a deep, dark secret about who they are: They may not feel racist, but in fact, the test shows that in a variety of intergroup settings, they will act racist….

[The] co-creators are Mahzarin Banaji, currently the chair of Harvard University’s psychology department, and Anthony Greenwald, a highly regarded social psychology researcher at the University of Washington. The duo introduced the test to the world at a 1998 press conference in Seattle — the accompanying press release noted that they had collected data suggesting that 90–95 percent of Americans harbored the “roots of unconscious prejudice.” The public immediately took notice: Since then, the IAT has been mostly treated as a revolutionary, revelatory piece of technology, garnering overwhelmingly positive media coverage….

Maybe the biggest driver of the IAT’s popularity and visibility, though, is the fact that anyone can take the test on the Project Implicit website, which launched shortly after the test was unveiled and which is hosted by Harvard University. The test’s architects reported that, by October 2015, more than 17 million individual test sessions had been completed on the website. As will become clear, learning one’s IAT results is, for many people, a very big deal that changes how they view themselves and their place in the world.

Given all this excitement, it might feel safe to assume that the IAT really does measure people’s propensity to commit real-world acts of implicit bias against marginalized groups, and that it does so in a dependable, clearly understood way….

Unfortunately, none of that is true. A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior — even the test’s creators have now admitted as much.

How does the IAT work? Singal summarizes:

You sit down at a computer where you are shown a series of images and/or words. First, you’re instructed to hit ‘i’ when you see a “good” term like pleasant, or to hit ‘e’ when you see a “bad” one like tragedy. Then, hit ‘i’ when you see a black face, and hit ‘e’ when you see a white one. Easy enough, but soon things get slightly more complex: Hit ‘i’ when you see a good word or an image of a black person, and ‘e’ when you see a bad word or an image of a white person. Then the categories flip to black/bad and white/good. As you peck away at the keyboard, the computer measures your reaction times, which it plugs into an algorithm. That algorithm, in turn, generates your score.

If you were quicker to associate good words with white faces than good words with black faces, and/or slower to associate bad words with white faces than bad words with black ones, then the test will report that you have a slight, moderate, or strong “preference for white faces over black faces,” or some similar language. You might also find you have an anti-white bias, though that is significantly less common. By the normal scoring conventions of the test, positive scores indicate bias against the out-group, while negative ones indicate bias against the in-group.

The rough idea is that, as humans, we have an easier time connecting concepts that are already tightly linked in our brains, and a tougher time connecting concepts that aren’t. The longer it takes to connect “black” and “good” relative to “white” and “good,” the thinking goes, the more your unconscious biases favor white people over black people.
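
To make the scoring mechanics concrete, here is a minimal sketch, in Python, of the kind of computation Singal describes: the difference in mean reaction times between the two pairings, scaled by their pooled standard deviation. The actual scoring algorithm includes error-trial penalties and outlier handling that are omitted here, and the latencies below are invented for illustration.

    import numpy as np

    def iat_d_score(compatible_ms, incompatible_ms):
        """Crude IAT-style score: positive values mean slower responses on
        the "incompatible" block (e.g., black/good, white/bad), which the
        test reads as an implicit preference for the "compatible" pairing."""
        compatible_ms = np.asarray(compatible_ms, dtype=float)
        incompatible_ms = np.asarray(incompatible_ms, dtype=float)
        pooled_sd = np.concatenate([compatible_ms, incompatible_ms]).std(ddof=1)
        return (incompatible_ms.mean() - compatible_ms.mean()) / pooled_sd

    # A fast, consistent responder compresses the latency gap and produces a
    # near-zero score: speed and consistency, not attitudes, drive the number.
    print(iat_d_score([420, 433, 428, 441], [422, 431, 430, 439]))  # ~0.0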

Singal continues (at great length) to pile up the mountain of evidence against IAT, and to caution against reading anything into the results it yields.

Having become aware of the debunking of the IAT, I went to the website of Project Implicit. When I reached this page, I was surprised to learn that I could not only find out whether I’m a closet racist but also whether I prefer dark or light skin tones, Asians or non-Asians, Trump or a previous president, and several other things or their opposites. I chose to discover my true feelings about Trump vs. a previous president, and was faced with a choice between Trump and Clinton.

What was the result of my several minutes of tapping “e” and “i” on the keyboard of my PC? This:

Your data suggest a moderate automatic preference for Bill Clinton over Donald Trump.

Balderdash! Though Trump is obviously not of better character than Clinton, he’s obviously not of worse character. And insofar as policy goes, the difference between Trump and Clinton is somewhat like the difference between a non-silent Calvin Coolidge and an FDR without the patriotism. (With apologies to the memory of Coolidge, my favorite president.)

What did I learn from the IAT? I must have very good reflexes. A person who processes information rapidly and then almost instantly translates it into a physical response should be able to “beat” the IAT. And that’s probably what I did in the Trump vs. Clinton test.

Perhaps the IAT for racism could be used to screen candidates for fighter-pilot training. Only “non-racists” would be admitted. Anyone who isn’t quick enough to avoid the “racist” label isn’t quick enough to win a dogfight.

OTHER “LIBERAL” DELUSIONS

There are plenty of them under the heading of balderdash. It’s also known as magical thinking, in which “ought” becomes “is” and the forces of nature and human nature can be held in abeyance by edict. The following examples revisit some ground already covered here:

  • Men are unnecessary.
  • Women can do everything that men can do, but it doesn’t work the other way … just because.
  • Mothers can work outside the home without damage to their children.
  • Race is a “social construct”; there is no such thing as intelligence; women and men are mentally and physically equal in all respects; and the under-representation of women and blacks in certain fields is therefore due to rank discrimination (but it’s all right if blacks dominate certain sports and women now far outnumber men on college campuses).
  • A minimum wage can be imposed without an increase in unemployment, a “fact” which can be “proven” only by concocting special cases of limited applicability.
  • Taxes can be raised without discouraging investment and therefore reducing the rate of economic growth.
  • Regulation doesn’t reduce the rate of economic growth or foster “crony capitalism”. There can be “free lunches” all around.
  • Health insurance premiums will go down while the number of mandates is increased.
  • The economy can be stimulated through the action of the Keynesian multiplier, which is nothing but phony math.
  • “Green” programs create jobs (but only because they are inefficient).
  • Every “right” under the sun can be granted without cost (e.g., affirmative action racial-hiring quotas, which penalize blameless whites; the Social Security Ponzi scheme, which burdens today’s workers and cuts into growth-inducing saving).

There’s much more in a different vein here.

BALDERDASH AS EUPHEMISTIC THINKING

Balderdash, as I have sampled it here, isn’t just nonsense — it’s nonsense in the service of an agenda. The agenda is too often the expansion of government power. Those who favor the expansion of government power don’t like to think that it hurts people. (“We’re from the government and we’re here to help.”) This is a refusal to face facts, which is amply if not exhaustively illustrated in the preceding entries.

But there’s a lot more where that comes from; for example:

  • Crippled became handicapped, which became disabled and then differently abled or something-challenged.
  • Stupid became learning disabled, which became special needs (a euphemistic category that houses more than the stupid).
  • Poor became underprivileged, which became economically disadvantaged, which became (though isn’t overtly called) entitled (as in entitled to other people’s money).
  • Colored persons became Negroes, who became blacks, then African-Americans, and now (often) persons of color.

Why do lefties — lovers of big government — persist in varnishing the truth? They are — they insist — strong supporters of science, which is (ideally) the pursuit of truth. Well, that’s because they aren’t really supporters of science (witness their devotion to the “unsettled” science of AGW, among many fabrications). Nor do they really want the truth. They simply want to portray the world as they would like it to be, or to lie about it so that they can strive to reshape it to their liking.

BALDERDASH IN THE SERVICE OF SLAVERY, MODERN STYLE

I will end with this one, which is less conclusive than what has gone before, but which further illustrates the left’s penchant for evading reality in the service of growing government.

Thomas Nagel writes:

Some would describe taxation as a form of theft and conscription as a form of slavery — in fact some would prefer to describe taxation as slavery too, or at least as forced labor. Much might be said against these descriptions, but that is beside the point. For within proper limits, such practices when engaged in by governments are acceptable, whatever they are called. If someone with an income of $2000 a year trains a gun on someone with an income of $100000 a year and makes him hand over his wallet, that is robbery. If the federal government withholds a portion of the second person’s salary (enforcing the laws against tax evasion with threats of imprisonment under armed guard) and gives some of it to the first person in the form of welfare payments, food stamps, or free health care, that is taxation. In the first case it is (in my opinion) an impermissible use of coercive means to achieve a worthwhile end. In the second case the means are legitimate, because they are impersonally imposed by an institution designed to promote certain results. Such general methods of distribution are preferable to theft as a form of private initiative and also to individual charity. This is true not only for reasons of fairness and efficiency, but also because both theft and charity are disturbances of the relations (or lack of them) between individuals and involve their individual wills in a way that an automatic, officially imposed system of taxation does not. [Mortal Questions, “Ruthlessness in Public Life,” pp. 87-88]

How many logical and epistemic errors can a supposedly brilliant philosopher make in one (long) paragraph? Too many:

  • “For within proper limits” means that Nagel is about to beg the question by shaping an answer that fits his idea of proper limits.
  • Nagel then asserts that the use by government of coercive means to achieve the same end as robbery is “legitimate, because [those means] are impersonally imposed by an institution designed to promote certain results.” Balderdash! Nagel’s vision of government as some kind of omniscient, benevolent arbiter is completely at odds with reality.  The “certain results” (redistribution of income) are achieved by functionaries, armed or backed with the force of arms, who themselves share in the spoils of coercive redistribution. Those functionaries act under the authority of bare majorities of elected representatives, who are chosen by bare majorities of voters. And those bare majorities are themselves coalitions of interested parties — hopeful beneficiaries of redistributionist policies, government employees, government contractors, and arrogant statists — who believe, without justification, that forced redistribution is a proper function of government.
  • On the last point, Nagel ignores the sordid history of the unconstitutional expansion of the powers of government. Without justification, he aligns himself with proponents of the “living Constitution.”
  • Nagel’s moral obtuseness is fully revealed when he equates coercive redistribution with “fairness and efficiency,” as if property rights and liberty were of no account.
  • The idea that coercive redistribution fosters efficiency is laughable. It does quite the opposite because it removes resources from productive uses — including job-creating investments. The poor are harmed by coercive redistribution because it drastically curtails economic growth, from which they would benefit as job-holders and (where necessary) recipients of private charity (the resources for which would be vastly greater in the absence of coercive redistribution).
  • Finally (though not exhaustively), Nagel’s characterization of private charity as a “disturbance of the relations … among individuals” is so wrong-headed that it leaves me dumbstruck. Private charity arises from real relations among individuals — from a sense of community and feelings of empathy. It is the “automatic, officially imposed system of taxation” that distorts and thwarts (“disturbs”) the social fabric.

In any event, taxation for the purpose of redistribution is slavery: the subjection of one person to others, namely, agents of the government and the recipients of the taxes extracted from the person who pays them under threat of punishment. It’s slavery without whips and chains, but slavery nevertheless.

Luck: The Loser’s Excuse

If you can’t think of a good reason why someone is more successful than you, blame it on luck. That’s the moral of this story:

Don’t you look at rich people and find too many of them, well, dull?

Don’t you listen to rich people and think: “What have they got that I haven’t? Other than money?”

In fact, doesn’t it astonish you a little that you know so much, see so much, and can do so much, yet you really don’t have much money at all?

A new study offers you a reason for your lack of wealth.

It’s one that’s going to hurt.

The study, entitled “Talent vs Luck: The Role of Randomness in Success and Failure,” looked at people over a 40-year period.

Alessandro Pluchino of the University of Catania in Italy and his colleagues created a computer model of talent.

I can’t imagine that was easy or, to every mind, entirely satisfying.

After all, one person’s idea of talent is another person’s idea of Simon Cowell.

Still, Pluchino and friends mapped such apparent basics as intelligence, skill, and ability in various fields.

They then looked at people over a 40-year period, discerned what sort of things had happened to them, and compared that with how wealthy they had become.

They discovered that the conventional distribution of wealth — 20 percent of humanity enjoys 80 percent of the wealth — held true.

But then they offered painful words.

They still hurt, even though we know they’re true: “The maximum success never coincides with the maximum talent, and vice-versa.”

Never.

It’s galling, isn’t it, to look at some of the relatively talentless quarterwits who bathe in untold piles of lucre?

“So what is it that makes the difference?” I hear you pant, with an agonized grimace.

Are you ready for this?

“Our simulation clearly shows that such a factor is just pure luck,” say the researchers.

That kind of crap-thinking underlies Barack Hussein Obama’s infamous statement, “You didn’t build that”, which I dissected here. It’s just another justification for income redistribution, also known as the punishment of success.

Sure, success involves some degree of luck. But it’s not blind luck. One doesn’t succeed by being near the bottom of the talent heap in a given field. Nor does one succeed by sitting on the sidelines, that is, by hiding one’s talent under a bushel.

It is inconceivable that the authors of the study in question found a way to summarize intelligence, knowledge, skill, and effort in a single field of endeavor, let alone a large number of fields. In fact, they didn’t do that. (BHO would be right in this instance.) The authors simulated “reality” without the benefit of data. That’s a good thing; otherwise, they would have been guilty of manufacturing a lot of data about things that are difficult or impossible to quantify. The “empirical” justification of the results consists of anecdotal evidence.

The bottom line: The results of the simulations reflect the assumptions underlying the authors’ model — not reality. A key assumption is that the model of success accounts for all relevant variables. When outcomes favor the less-intelligent, less-talented, etc., over the more-intelligent, more-talented, etc., this is attributed to luck. But that is just another assumption. In fact, “unexpected” outcomes simply reflect the vagaries of sampling from ersatz probability distributions. This is the kind of study that should be hidden under a bushel, and forgotten.
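
To see how such a model manufactures its conclusion, consider a toy version, in Python, of the setup described in the quoted article: normally distributed talent, a multiplicative sequence of lucky and unlucky events, and capital that doubles or halves accordingly. The parameters are invented; the point is that the skewed wealth distribution, and the mismatch between top talent and top wealth, are consequences of the assumed random multiplicative process, not discoveries about the world.

    import numpy as np

    # Toy "talent vs. luck" simulation: capital evolves multiplicatively
    # through random events; a lucky event pays off only with probability
    # equal to talent. All parameters are invented for illustration.
    rng = np.random.default_rng(42)
    n_agents, n_steps = 1000, 80               # 80 half-year steps = 40 years
    talent = np.clip(rng.normal(0.6, 0.1, n_agents), 0, 1)
    capital = np.full(n_agents, 10.0)

    for _ in range(n_steps):
        event = rng.uniform(size=n_agents)
        lucky = (event < 0.03) & (rng.uniform(size=n_agents) < talent)
        unlucky = event > 0.97
        capital[lucky] *= 2
        capital[unlucky] /= 2

    top20_share = np.sort(capital)[-n_agents // 5:].sum() / capital.sum()
    print(f"Top 20% hold {top20_share:.0%} of all capital")
    print(f"Talent of wealthiest agent: {talent[capital.argmax()]:.2f}")

By construction, the wealthiest agent is the one who drew the longest run of lucky events; a modeler can then “discover” that fact and attribute success to luck.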

The authors’ obvious agenda is to push for rewards based on something other than actual accomplishment: theoretical rather than actual merit. What institution has the power to make that happen? It goes without saying in the article, but you can be sure that there will be plenty of support for the idea of using government to detect and eliminate “luck”. (Shades of affirmative action, “diversity” programs, etc.)

As I have said, “luck” is mainly an excuse and rarely an explanation. Attributing outcomes to “luck” is an easy way of belittling success when it accrues to a rival. “White privilege” and “patriarchy” are in the same category as “luck”.


Related posts:
Moral Luck
Fooled by Non-Randomness
Randomness Is Over-Rated
Luck-Egalitarianism and Moral Luck
Luck and Baseball, One More Time
More about Luck and Baseball
Obama’s Big Lie
Pseudoscience, “Moneyball,” and Luck
Diminishing Marginal Utility and the Redistributive Urge
Taleb’s Ruinous Rhetoric
Pattern-Seeking
Babe Ruth and the Hot-Hand Hypothesis

Recommended Reading

Leftism, Political Correctness, and Other Lunacies (Dispatches from the Fifth Circle Book 1)


On Liberty: Impossible Dreams, Utopian Schemes (Dispatches from the Fifth Circle Book 2)


We the People and Other American Myths (Dispatches from the Fifth Circle Book 3)


Americana, Etc.: Language, Literature, Movies, Music, Sports, Nostalgia, Trivia, and a Dash of Humor (Dispatches from the Fifth Circle Book 4)

Religion, Creation, and Morality

I was trying to find a way into Keith Burgess-Jackson’s eponymous blog, which seems to have been closed to public view since he defended Roy Moore’s courtship of a 14-year-old person. (Perhaps Moore might have been cut some slack by a segment of the vast left-wing conspiracy had the person been a male.) My search for an entry has thus far been futile, but I did come across an intriguing item produced by Burgess-Jackson last August: A Taxonomy of Religions.

In it, KBJ (as I will refer to him hereinafter) claims that there are religious atheists (e.g., Jains and Buddhists) as well as irreligious ones. Atheistic religions (a jarring term) seem to be religions that prescribe moral codes and ways of life (e.g., non-violence, meditation, moderation), but which don’t posit a god or gods that created and in some way control the universe.

What’s interesting (to me) about KBJ’s taxonomy is what isn’t there. The god or gods of KBJ’s taxonomy are “personal” — a conscious being or conscious beings who deliberately created the universe, exert continuous control of it, and even communicate with human beings after some fashion.

What isn’t there, other than atheism, which denies a creator? It’s the view that there was a creator — a force of unknown qualities — which put the universe in motion but may no longer have anything to do with it. That kind of creator would be impersonal and therefore uncommunicative with the objects of its creation. This position is classical deism, as I understand it.

The only substantive difference between this hard deism (as it’s sometimes called) and non-religious atheism, as I see it, is the question of creation. A hard deist believes in the necessity of a creator. The atheist rejects the necessity and holds that the universe just is — full stop.

But if a hard deist sees no role for the creator after the creation, isn’t he really just an atheist? After all, if the creation is over and done with, and the creator no longer has anything to do with the universe, that’s tantamount to saying that we’re on our own, as an atheist would say.

But a deist could subscribe to the view that the creator not only set the universe in motion, but also did so by design. That is, things like light, the behavior of matter-energy, etc., didn’t arise by accident but are part of a deliberate effort to give order to what would otherwise be randomness.

If one accepts that view, one becomes a kind of soft deist who sees a semi-personal god behind the creation — something more than just a brute force. This position resembles the stance of the late Antony Flew, an English philosopher:

In a 2004 interview … , Flew, then 81 years old, said that he had become a deist. In the article Flew states that he has renounced his long-standing espousal of atheism by endorsing a deism of the sort that Thomas Jefferson advocated (“While reason, mainly in the form of arguments to design, assures us that there is a God, there is no room either for any supernatural revelation of that God or for any transactions between that God and individual human beings”).

I take the view that a creator is a logical necessity:

1. In the material universe, cause precedes effect.

2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.

3. The existence of the universe therefore implies a separate, uncaused cause.
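
Rendered in quasi-formal notation (the symbolization is mine and merely illustrative), with M(x) for “x is material” and C(y, x) for “y causes x”, the argument runs:

    \begin{align*}
    &\text{P1 (causality):} && \forall x\,\bigl(M(x) \rightarrow \exists y\,(C(y,x) \wedge y \neq x)\bigr)\\
    &\text{P2 (totality):} && U \text{ comprises everything } x \text{ such that } M(x)\\
    &\text{C (uncaused cause):} && \exists y\,\bigl(C(y,U) \wedge \neg M(y)\bigr)
    \end{align*}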

Elaborating:

1. There is necessarily a creator of the universe, which comprises all that exists in “nature”.

2. The creator is not part of nature; that is, he stands apart from his creation and is neither of its substance nor governed by its laws. (I use “he” as a term of convenience, not to suggest that the creator is some kind of human or animate being, as we know such beings.)

3. The creator designed the universe, if not in detail then in its parameters. The parameters are what we know as matter-energy (substance) and its various forms, motions, and combinations (the laws that govern the behavior of matter-energy).

4. The parameters determine everything that is possible in the universe. But they do not necessarily dictate precisely the unfolding of events in the universe. Randomness and free will are evidently part of the creator’s design.

5. The human mind and its ability to “do science” — to comprehend the laws of nature through observation and calculation — are artifacts of the creator’s design.

6. Two things probably cannot be known through science: the creator’s involvement in the unfolding of natural events; the essential character of the substance on which the laws of nature operate.

I would add a third unknowable thing: whether morality is implicit in the creation, and if so, what it comprises. But if it is implicit in the creator’s design, it is discernible (if imperfectly).

How and by whom is it discernible? By clerics, reasoning from religious texts? By philosophers, reasoning from the nature of human beings and the discoveries of science?

Here is my view: If morality is something ingrained in human beings by their nature, as dictated by the “terms” of the creation, it reveals itself in widely accepted and practiced norms, such as the Golden Rule.

Such norms precede their codification by philosophers and holy men. They reflect what “works“, that is, what makes for a cohesive and productive society.

Positive law — legislative, executive, and judicial edicts — often undoes those norms and undermines society. The U.S. Supreme Court’s 5-4 decision in Obergefell v. Hodges is but one glaring example.


Related posts:
The Golden Rule and the State
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
In Defense of Marriage
Probability, Existence, and Creation
The Golden Rule as Beneficial Learning
The Atheism of the Gaps
The Myth That Same-Sex “Marriage” Causes No Harm
Why Conservatism Works
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
The Limits of Science (II)
The Limits of Science, Illustrated by Scientists
The Precautionary Principle and Pascal’s Wager
Fine-Tuning in a Wacky Wrapper
Natural Law and Natural Rights Revisited
Beating Religion with the Wrong End of the Stick
The Fragility of Knowledge

A (Long) Footnote about Science

In “Deduction, Induction, and Knowledge” I make a case that knowledge (as opposed to belief) can only be inductive, that is, limited to specific facts about particular phenomena. It’s true that a hypothesis or theory about a general pattern of relationships (e.g., the general theory of relativity) can be useful, and even necessary. As I say at the end of “Deduction…”, the fact that a general theory can’t be proven

doesn’t — and shouldn’t — stand in the way of acting as if we possess general knowledge. We must act as if we possess general knowledge. To do otherwise would result in stasis, or analysis-paralysis.

Which doesn’t mean that a general theory should be accepted just because it seems plausible. Some general theories — such as global climate models (GCMs) — are easily falsified. They persist only because pseudo-scientists and true believers refuse to abandon them. (There is no such thing as “settled science”.)

Neil Lock, writing at Watts Up With That?, offers this perspective on inductive vs. deductive thinking:

Bottom up thinking is like the way we build a house. Starting from the ground, we work upwards, using what we’ve done already as support for what we’re working on at the moment. Top down thinking, on the other hand, starts out from an idea that is a given. It then works downwards, seeking evidence for the idea, or to add detail to it, or to put it into practice….

The bottom up thinker seeks to build, using his senses and his mind, a picture of the reality of which he is a part. He examines, critically, the evidence of his senses. He assembles this evidence into percepts, things he perceives as true. Then he pulls them together and generalizes them into concepts. He uses logic and reason to seek understanding, and he often stops to check that he is still on the right lines. And if he finds he has made an error, he tries to correct it.

The top down thinker, on the other hand, has far less concern for logic or reason, or for correcting errors. He tends to accept new ideas only if they fit his pre-existing beliefs. And so, he finds it hard to go beyond the limitations of what he already knows or believes. [“‘Bottom Up’ versus ‘Top Down’ Thinking — On Just about Everything“, October 22, 2017]

(I urge you to read the whole thing, in which Lock applies the top down-bottom up dichotomy to a broad range of issues.)

Lock overstates the distinction between the two modes of thought. A lot of “bottom up” thinkers derive general hypotheses from their observations about particular events. But — and this is a big “but” — they are also amenable to revising their hypotheses when they encounter facts that contradict them. The best scientists are bottom-up and top-down thinkers whose beliefs are based on bottom-up thinking.

General hypotheses are indispensable guides to “everyday” living. Some of them (e.g., fire burns, gravity causes objects to fall) are such reliable guides that it’s foolish to assume their falsity. Nor does it take much research to learn, for example, that there are areas within a big city where violent crime is rampant. A prudent person — even a “liberal” one — will therefore avoid those areas.

There are also general patterns — now politically incorrect to mention — with respect to differences in physical, psychological, and intellectual traits and abilities between men and women and among races. (See this, this, and this, for example.) These patterns explain disparities in achievement, but they are ignored by true believers who would wish away the underlying causes and penalize those who are more able (in a relevant dimension) for the sake of ersatz equality. The point is that a good many people — perhaps most people — studiously ignore facts of some kind in order to preserve their cherished beliefs about themselves and the world around them.

Which brings me back to science and scientists. Scientists, for the most part, are human beings with a particular aptitude for pattern-seeking and the manipulation of abstract ideas. They can easily get lost in such pursuits and fail to notice that their abstractions have taken them a long way from reality (e.g., Einstein’s special theory of relativity).

This is certainly the case in physics, where scientists admit that the standard model of sub-atomic physics “proves” that the universe shouldn’t exist. (See Andrew Griffin, “The Universe Shouldn’t Exist, Scientists Say after Finding Bizarre Behaviour of Anti-Matter“, The Independent, October 23, 2017.) It is most certainly the case in climatology, where many pseudo-scientists have deployed hopelessly flawed models in the service of policies that would unnecessarily cripple the economy of the United States.

As I say here,

scientists are human and fallible. It is in the best tradition of science to distrust their claims and to dismiss their non-scientific utterances.

Non-scientific utterances are not only those which have nothing to do with a scientist’s field of specialization, but also those that are based on theories which derive from preconceptions more than facts. It is scientific to admit lack of certainty. It is unscientific — anti-scientific, really — to proclaim certainty about something so little understood as the origin of the universe or Earth’s climate.


Related posts:
Hemibel Thinking
The Limits of Science
The Thing about Science
Science in Politics, Politics in Science
Global Warming and the Liberal Agenda
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
Science’s Anti-Scientific Bent
“Warmism”: The Myth of Anthropogenic Global Warming
Modeling Is Not Science
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
Pinker Commits Scientism
AGW: The Death Knell
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II)
Is Science Self-Correcting?
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
Bayesian Irrationality
Mettenheim on Einstein’s Relativity
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable

Deduction, Induction, and Knowledge

Syllogism:

All Greek males are bald.

Herodotus is a Greek male.

Therefore, Herodotus is bald.

The conclusion is false because Herodotus isn’t bald, at least not as he is portrayed.

Moreover, the conclusion depends on a premise — all Greek males are bald — which can’t be known with certainty. The disproof of the premise by a single observation exemplifies the Humean-Popperian view of the scientific method. A scientific proposition is one that can be falsified — contradicted by observed facts. If a proposition isn’t amenable to falsification, it is non-scientific.

In the Humean-Popperian view, a general statement such as “all Greek males are bald” can never be proven. (The next Greek male to come into view may have a full head of hair.) In this view, knowledge consists only of the accretion of discrete facts. General statements are merely provisional inferences based on what has been observed, and cannot be taken as definitive statements about what has not been observed.
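
The asymmetry is easy to state formally. Here is a minimal sketch in Lean (the formalization is mine, offered purely as an illustration; nothing like it appears in Hume or Popper):

    -- One counterexample refutes a universal claim: given a single
    -- object a for which P fails, "all x satisfy P" must be false.
    theorem one_counterexample_refutes {α : Type} (P : α → Prop)
        (a : α) (ha : ¬ P a) : ¬ ∀ x, P x :=
      fun hall => ha (hall a)

No theorem runs in the other direction: no finite parade of bald Greek males entails that all Greek males are bald.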

Is there a way to prove a general statement about a class of things by showing that there is something about such things which necessitates the truth of the general statement? That approach begs the question. The “something about such things” can be discovered only by observation of a finite number of such things. The unobserved things are still lurking out of view, and any of them might not possess the “something” that is characteristic of the observed things.

All general statements about things, their characteristics, and their relationships are therefore provisional. This inescapable truth has been dressed up in the guise of inductive probability, which is a fancy way of saying the same thing.

Not all is lost, however. If it weren’t for provisional knowledge about such things as heat and gravity, many more human beings would succumb to the allure of flames and cliffs, and man would never have stood on the Moon. If it weren’t for provisional knowledge about the relationship between matter and energy, nuclear power and nuclear weapons wouldn’t exist. And on and on.

The Humean-Popperian view is properly cautionary, but it doesn’t — and shouldn’t — stand in the way of acting as if we possess general knowledge. We must act as if we possess general knowledge. To do otherwise would result in stasis, or analysis-paralysis.

Altruism, Self-Interest, and Voting

From a previous post:

I am reading and generally enjoying Darwinian Fairytales: Selfish Genes, Errors of Heredity and Other Fables of Evolution by the late Australian philosopher, David Stove. I say generally enjoying because in Essay 6, which I just finished reading, Stove goes off the rails.

The title of Essay 6 is “Tax and the Selfish Girl, Or Does ‘Altruism’ Need Inverted Commas?”. Stove expends many words in defense of altruism as it is commonly thought of: putting others before oneself….

… Stove’s analysis of altruism is circular: He parades examples of what he considers altruistic conduct, and says that because there is such conduct there must be altruism.

I went on to quote an earlier post of mine in which I make a case against altruism, as Stove and many others understand it.

Stove’s attempt to distinguish altruism from self-interest resurfaces in Essay 8, “‘He Ain’t Heavy, He’s my Brother,’ or Altruism and Shared Genes”:

And then, think how easy it is, and always has been, to convince many people of the selfish theory of human nature. It is quite pathetically easy. All it takes, as Joseph Butler pointed out nearly three centuries ago, is a certain coarseness of mind on the part of those to be convinced; though a little bad character on either part is certainly a help. You offer people two propositions: “No one can act voluntarily except in his own interests,” and “No one can act voluntarily except from some interest of his own.” The second is a trivial truth, while the first is an outlandish falsity. But what proportion of people can be relied on to notice any difference in meaning between the two? Experience shows very few. And a man will find it easier to mistake the false proposition for the evidently true one, the more willing he is to believe that everyone is as bad as himself, or to belittle the human species in general.

Therein lies the source of Stove’s confusion. Restating his propositions, he says it is false to believe that a person always acts voluntarily in his own interest, while it is (trivially) true to believe that a person always acts voluntarily from an interest of his own.

If a man’s interest of his own is to save his drowning child, because he loves the child, how is that different from acting in his own interest? There is “a part of himself” — to put it colloquially — which recoils at the thought of his child’s death. Whether that part is love, empathy, or instinct is of no consequence. The man who acts to save his drowning child does so because he can’t bear to contemplate the death of his child.

In sum, there is really no difference between acting in one’s own interest and acting from an interest of one’s own.

It isn’t my aim to denigrate acts that are called altruistic. With more such acts, the world would be a better place in which to live. But the veneration of acts that are called altruistic is a backhanded way of denigrating acts that are called selfish. Among such acts is profit-seeking, which “liberals” hold in contempt as a selfish act. But it is not, as Adam Smith pointed out a long time ago:

It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages. [An Inquiry into the Nature and Causes of the Wealth of Nations, 1776]

The moral confusion of “liberals” (Stove wasn’t one) about matters of self-interest is revealed in their condescension toward working-class people who vote Republican. I have pointed this out in several posts (e.g., here and here). Keith Stanovich takes up the cause in “Were Trump Voters Irrational?” (Quillette, September 28, 2017):

Instrumental rationality—the optimization of the individual’s goal fulfillment—means behaving in the world so that you get what you most want…. More technically, the model of rational judgment used by decision scientists is one in which a person chooses options based on which option has the largest expected utility…. [U]tility refers to the good that accrues when people achieve their goals….

More important for discussions of voter rationality, however, is that utility does not just mean monetary value…. For instance, people gain utility from holding and expressing specific beliefs and values. Failing to realize this is the source of much misunderstanding about voting behavior….

Failure to appreciate these nuances in rational choice theory is behind the charge that the Trump voters were irrational. A common complaint about them among Democratic critics is that they were voting against their own interests. A decade ago, this was the theme of Thomas Frank’s popular book What’s the Matter with Kansas? and it has recurred frequently since. The idea is that lower income people who vote Republican (not necessarily for Trump—most of these critiques predate the 2016 election) are voting against their interests because they would receive more government benefits if they voted Democratic….

[L]eftists never seem to see how insulting this critique of Republican voters is. Their failure to see the insult illustrates precisely what they get wrong in evaluating the rationality of the Trump voters. Consider that these What’s the Matter with Kansas? critiques are written by highly educated left-wing pundits, professors, and advocates…. The stance of the educated progressive making the What’s the Matter with Kansas? argument seems to be that: “no one else should vote against their monetary interests, but it’s not irrational for me to do so, because I am enlightened.”
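
Stanovich’s point about non-monetary utility is easy to make concrete. The sketch below is mine, and the figures are invented; it assumes only the standard expected-utility rule he describes, in which each payoff is weighted by its probability:

    # Hypothetical expected-utility comparison; all figures are invented.
    # The monetary payoff accrues only if one's vote is decisive; the
    # expressive payoff (holding and voicing one's values) is certain.
    def expected_utility(p_decisive, monetary_gain, expressive_value):
        return p_decisive * monetary_gain + expressive_value

    P_DECISIVE = 1e-7  # a single vote almost never decides an election

    # Vote "for one's wallet": a $5,000 government benefit, no expressive value.
    u_wallet = expected_utility(P_DECISIVE, 5000, 0)  # 0.0005

    # Vote "for one's values": no monetary gain, a modest expressive payoff.
    u_values = expected_utility(P_DECISIVE, 0, 10)    # 10.0

    print(u_wallet < u_values)  # True

Because the probability of casting the decisive vote is minuscule, even a small expressive payoff swamps the expected monetary payoff, which is why the charge of “voting against one’s interests” dissolves.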

As I say here,

it never ceases to amaze the left that so many of “the people” turn their backs on a leftist (Democrat) candidate in favor of the (perceived) Republican rightist. Why is that? One reason, which became apparent in the recent presidential election, is that a lot of “the people” don’t believe that the left is their “voice” or that it rules on their behalf.

A lot of “the people” believe, correctly, that the left despises “the people” and is bent on dictating to them. Further, a lot of “the people” also believe, correctly, that the left’s dictatorial methods are not really designed with “the people” in mind. Rather, they are intended to favor certain groups of people — those deemed “victims” by the left — and to advance pet schemes (e.g., urban rail, “green” energy, carbon-emissions reductions, Obamacare) despite the fact that they are unnecessary, inefficient, and economically destructive.

It comes as a great shock to the left that so many of “the people” see the left for what it is: doctrinaire, unfair, and dictatorial. Why, they ask, would “the people” vote against their own interest by rejecting Democrats and electing Republicans? The answer is that a lot of “the people” are smart enough to see that the left does not represent them and does not act in their interest.


Related posts:
A Leftist’s Lament
Leftist Condescension
Altruism, One More Time
The Left and “the People”

Altruism, One More Time

I am reading and generally enjoying Darwinian Fairytales: Selfish Genes, Errors of Heredity and Other Fables of Evolution by the late Australian philosopher, David Stove. I say generally enjoying because in Essay 6, which I just finished reading, Stove goes off the rails.

The title of Essay 6 is “Tax and the Selfish Girl, Or Does ‘Altruism’ Need Inverted Commas?”. Stove expends many words in defense of altruism as it is commonly thought of: putting others before oneself. He also expends some words (though not many) in defense of taxation as an altruistic act.

Stove, whose writing is refreshingly informal instead of academically stilted, is fond of calling things “ridiculous” and “absurd”. Well, Essay 6 is both of those things. Stove’s analysis of altruism is circular: He parades examples of what he considers altruistic conduct, and says that because there is such conduct there must be altruism.

His target is a position that I have taken, and still hold despite Essay 6. My first two essays about altruism are here and here. I will quote a third essay, in which I address philosopher Jason Brennan’s defense of altruism:

What about Brennan’s assertion that he is genuinely altruistic because he doesn’t merely want to avoid bad feelings, but wants to help his son for his son’s sake? That’s called empathy. But empathy is egoistic. Even strong empathy — the ability to “feel” another person’s pain or anguish — is “felt” by the empathizer. It is the empathizer’s response to the other person’s pain or anguish.

Brennan inadvertently makes that point when he invokes sociopathy:

Sociopaths don’t care about other people for their own sake—they view them merely as instruments. Sociopaths don’t feel guilt for failing to help others.

The difference between a sociopath and a “normal” person is found in caring (feeling). But caring (feeling) is something that the I does — or fails to do, if the I is a sociopath. I = ego:

the “I” or self of any person; a thinking, feeling, and conscious being, able to distinguish itself from other selves.

I am not deprecating the kind of laudable act that is called altruistic. I am simply trying to point out what should be an obvious fact: Human beings necessarily act in their own interests, though their own interests often coincide with the interests of others for emotional reasons (e.g., love, empathy), as well as practical ones (e.g., loss of income or status because of the death of a patron).

It should go without saying that the world would be a better place if it had fewer sociopaths in it. Voluntary, mutually beneficial relationships are more than merely transactional; they thrive on the mutual trust and respect that arise from social bonds, including the bonds of love and affection.

Where Stove goes off the rails is with his claim that the existence of classes of people like soldiers, priests, and doctors is evidence of altruism. (NB: Stove was an atheist, so his inclusion of priests isn’t any kind of defense of religion.)

People become soldiers, priests, and doctors for various reasons, including (among many non-altruistic things) a love of danger (soldiers), a desire to control the lives of others (soldiers, priests, and doctors), an intellectual challenge that has nothing to do with caring for others (doctors), earning a lot of money (doctors), prestige (high-ranking soldiers, priests, and doctors), and job security (priests and doctors). Where’s the altruism in any of that?

Where Stove really goes off the rails is with his claim that redistributive taxation is evidence of altruism. As if human beings live in monolithic societies (like ant colonies), where the will of one is the will of all. And as if government represents the “will of the people”, when all it represents is the will of a small number of people who have been granted the power to govern by garnering a bare minority of votes cast by a minority of the populace, by their non-elected bureaucratic agents, and by (mostly) non-elected judges.


The Fragility of Knowledge

A recent addition to the collection of essays at “Einstein’s Errors” relies mainly on Christoph von Mettenheim’s Popper versus Einstein. One of Mettenheim’s key witnesses for the prosecution of Einstein’s special theory of relativity (STR) is Alfred Tarski, a Polish-born logician and mathematician. According to Mettenheim, Tarski showed

that all the axioms of geometry [upon which STR is built] are in fact nominalistic definitions, and therefore have nothing to do with truth, but only with expedience. [p. 86]

Later:

Tarski has demonstrated that logical and mathematical inferences can never yield an increase of empirical information because they are based on nominalistic definitions of the most simple terms of our language. We ourselves give them their meaning and cannot, therefore, get out of them anything but what we ourselves have put into them. They are tautological in the sense that any information contained in the conclusion must also have been contained in the premises. This is why logic and mathematics alone can never lead to scientific discoveries. [p. 100]

Mettenheim refers also to Alfred North Whitehead, a great English mathematician and philosopher who preceded Tarski. I am reading Whitehead’s Science and the Modern World thanks to my son, who recently wrote about it. I had heretofore only encountered the book in bits and snatches. I will have more to say about it in future posts. For now, I am content to quote this relevant passage, which presages Tarski’s theme and goes beyond it:

Thought is abstract; and the intolerant use of abstractions is the major vice of the intellect. This vice is not wholly corrected by the recurrence to concrete experience. For after all, you need only attend to those aspects of your concrete experience which lie within some limited scheme. There are two methods for the purification of ideas. One of them is dispassionate observation by means of the bodily senses. But observation is selection. [p. 18]

More to come.

The “Public Goods” Myth

The argument for the provision of public goods by the state goes like this:

People will free ride on a public good like a clean atmosphere because they can benefit from it without contributing to it. Mimi will enjoy more breathable air when others switch to a Prius even if she doesn’t drive one herself. So the state is justified as a means of forcing people like Mimi to contribute: for instance, by creating laws that penalize pollution….

Standard models predict that public goods will be underprovided because of free riding. Public goods are non-excludable, meaning that you cannot be excluded from enjoying them even if you didn’t contribute to them. Public goods are also non-rivalrous, meaning that my enjoyment of the good doesn’t subtract from yours. Here’s an example. A storm threatens to flood the river, a flood that would destroy your town. If the townspeople join together to build a levee with sandbags, the town will be spared. However, your individual contribution won’t make or break the effort. The levee is a public good. If it prevents the flood, your house will be saved whether or not you helped stack the sandbags. And the levee will protect the entire town, so protecting your house doesn’t detract from the protection afforded to other houses.

It’s typically assumed that people won’t voluntarily contribute to public goods like the levee. Your individual contribution is inconsequential, and if the levee does somehow get provided, you enjoy its protection whether or not you helped. You get the benefit without paying the costs. So the self-interested choice is to watch Netflix on your couch while your neighbors hurt their backs lugging sandbags around. The problem is, your neighbors have the exact same incentive to stay home—if enough others contribute to the levee, they’ll enjoy the benefits whether or not they contributed themselves. Consequently, no one has an incentive to contribute to the levee. As a result of this free-rider problem, the town will flood even though the flood is bad for everyone. [Christopher Freiman, Unequivocal Justice, 2017]
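
The “standard model” that Freiman describes can be written down as a simple game. The sketch below is mine, with invented payoffs; it illustrates only the textbook prediction, not anything peculiar to Freiman’s book:

    # The levee game, with hypothetical payoffs. The levee gets built
    # if at least THRESHOLD residents contribute sandbag labor.
    THRESHOLD = 50
    BENEFIT = 100  # value of a saved house (non-rivalrous, non-excludable)
    COST = 10      # the back-ache cost of lugging sandbags

    def payoff(i_contribute, n_others):
        built = n_others + (1 if i_contribute else 0) >= THRESHOLD
        return (BENEFIT if built else 0) - (COST if i_contribute else 0)

    for n_others in (0, 30, 49, 50, 80):
        print(n_others, payoff(True, n_others), payoff(False, n_others))
    # Only when exactly 49 others contribute does my sandbag pay (90 vs. 0);
    # in every other case staying home yields at least as much.

Since no one expects to be the decisive fiftieth contributor, the model predicts that everyone stays home and the town floods. As the next paragraph argues, that tidy prediction is contradicted by readily observable behavior.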

The idea is that private entities won’t provide certain things because there will be too many free riders. And yet, people do buy Priuses and similar cars, and do volunteer in emergencies, and do commit myriad acts of kindness and generosity without compensation (other than psychic). These contrary and readily observable facts should be enough to discredit public-goods theory. But I shall continue with a critical look at key terms and assumptions.

What is a public good? It’s a good that’s “underprovided”. What does that mean? It means that someone who believes that a certain good should be provided in a certain quantity at a certain price is dissatisfied with the actual quantity and/or price at which the good is provided (or not provided).

Who is that someone? Whoever happens to believe that a certain good should be provided at a certain price. Or, more likely, that it should be provided “free” by government. There are many advocates of universal health care, for example, who are certain that health care is underprovided, and that it should be made available freely to anyone who “needs” it. They are either ignorant of the track record of socialized medicine in Canada and Britain, or are among the many (usually leftists) who prefer hope to experience.

What is a free rider, and why is it bad to be a free rider? A free rider is someone who benefits from the provision and use of goods for which he (the free rider) doesn’t pay. There are free riders all around us, all the time. Any product, service, or activity that yields positive externalities is a boon to many persons who don’t buy the product or service, or engage in the activity. (Follow the link in the preceding sentence for a discussion and examples of positive externalities.) But people do buy products and services that yield positive externalities, and companies do stay in business by providing such products and services.

In sum, “free rider” is a scare term invoked for the purpose of justifying government-provided public goods. Why government-provided? Because that way the goods will be “free” to many users of them, and “the rich” will be taxed to provide the goods, of course. (“Free” is an illusion. See this.)

Health care — which people long paid for out of their own pockets or which was supported by voluntary charity — is demonstrably not a public good. If anything, the more that government has come to dominate the provision of health care (including its provision through insurance), the more costly it has become. The rising cost has served to justify greater government involvement in health care, which has further driven up the cost, etc., etc., etc. That’s what happens when government provides a so-called public good.

What about defense? As I say here,

given the present arrangement of the tax burden, those who have the most to gain from defense and justice (classic examples of “public goods”) already support a lot of free riders and “cheap riders.” Given the value of defense and justice to the orderly operation of the economy, it is likely that affluent Americans and large corporations — if they weren’t already heavily taxed — would willingly form syndicates to provide defense and justice. Most of them, after all, are willing to buy private security services, despite the taxes they already pay….

… It may nevertheless be desirable to have a state monopoly on police and justice — but only on police and justice, and only because the alternatives are a private monopoly of force, on the one hand, or a clash of warlords, on the other hand.

The environment? See this and this. Global warming? See this, and follow the links therein.

All in all, the price of “free” government goods is extremely high; government taketh away far more than it giveth. With a minimal government restricted to the defense of citizens against force and fraud there would be far fewer people in need of “public goods” and far, far more private charity available to those few who need it.


Related posts:
A Short Course in Economics
Addendum to a Short Course in Economics
Monopoly: Private Is Better than Public
Voluntary Taxation
What Free-Rider Problem?
Regulation as Wishful Thinking
Merit Goods, Positive Rights, and Cosmic Justice
More about Merit Goods
Don’t Just Stand There, “Do Something”

Mettenheim on Einstein’s Relativity

I have added “Mettenheim on Einstein’s Relativity – Part I” to “Einstein’s Errors“. The new material draws on Part I of Christoph von Mettenheim’s Popper versus Einstein: On the Philosophical Foundations of Physics (Tübingen: Mohr Siebeck, 1998). Mettenheim strikes many telling blows against STR. These go to the heart of STR and Einstein’s view of science:

[T]o Einstein the axiomatic method of Euclidean geometry was the method of all science; and the task of the scientist was to find those fundamental truths from which all other statements of science could then be derived by purely logical inference. He explicitly said that the step from geometry to physics was to be achieved by simply adding to the axioms of Euclidean geometry one single further axiom, namely the sentence

Regarding the possibilities of their position solid physical bodies will behave like the bodies of Euclidean geometry.

Popper versus Einstein, p. 30

*     *     *

[T]he theory of relativity as Einstein stated it was a mathematical theory. To him the logical necessity of his theory served as an explanation of its results. He believed that nature itself will observe the rules of logic. His words were that

experience of course remains the sole criterion of the serviceability of a mathematical construction for physics, but the truly creative principle resides in mathematics.

Popper versus Einstein, pp. 61-62

*     *     *

There’s much, much more. Go there and see for yourself.

Another Case of Cultural Appropriation

Maverick Philosopher makes an excellent case for cultural appropriation. I am here to make a limited case against it.

There is an eons-old tradition that marriage is a union of man and woman, one shared by all religions and ethnicities until yesterday, on the time-scale of human existence. Then along came some homosexual “activists” and their enablers (mainly leftists, always in search of “victims”), to claim that homosexuals can marry.

This claim ignores the biological and deep social basis of marriage, which is the procreative pairing of male and female and the resulting formation of the basic social unit: the biologically bonded family.

Homosexual “marriage” is, by contrast, a wholly artificial conception. It is the ultimate act of cultural appropriation. Its artificiality is underscored by the fact that a homosexual “marriage” seems to consist of two “wives” or two “husbands”, in a rather risible bow to traditional usage. Why not “wusbands” or “hives”?


Related posts:
In Defense of Marriage
The Myth That Same-Sex “Marriage” Causes No Harm
Getting “Equal Protection” Right
Equal Protection in Principle and Practice

Quantum Mechanics and Free Will

Physicist Adam Frank, in “Minding Matter” (Aeon, March 13, 2017), visits subjects that I have approached from several angles in various posts. Frank addresses the manifestation of brain activity — more properly, the activity of the central nervous system (CNS) — which is known as consciousness. But there’s a lot more to CNS activity than that. What it all adds up to is generally called “mind”, which has conscious components (things we are aware of, including being aware of being aware) and subconscious components (things that go on in the background that we might or might not become aware of).

In the traditional (non-mystical) view, each person’s mind is separate from the minds of other persons. Mind (or the concepts, perceptions, feelings, memories, etc. that comprise it) therefore defines self. I am my self (i.e., not you) because my mind is a manifestation of my body’s CNS, which isn’t physically linked to yours.

With those definitional matters in hand, Frank’s essay can be summarized and interpreted as follows:

According to materialists, mind is nothing more than a manifestation of CNS activity.

The underlying physical properties of the CNS are unknown because the nature of matter is unknown.

Matter, whatever it is, doesn’t behave in billiard-ball fashion, where cause and effect are tightly linked.

Instead, according to quantum mechanics, matter has probabilistic properties that supposedly rule out strict cause-and-effect relationships. The act of measuring matter resolves the uncertainty, but in an unpredictable way.

Mind is therefore a mysterious manifestation of quantum-mechanical processes. One’s state of mind is affected by how one “samples” those processes, that is, by one’s deliberate, conscious attempt to use one’s CNS in formulating the mind’s output (e.g., thoughts and interpretations of the world around us).

Because of the ability of mind to affect mind (“mind over matter”), it is more than merely a passive manifestation of the physical state of one’s CNS. It is, rather, a meta-state — a physical state that is created by “mental” processes that are themselves physical.

In sum, mind really isn’t immaterial. It’s just a manifestation of poorly understood material processes that can be influenced by the possessor of a mind. It’s the ultimate self-referential system, a system that can monitor and change itself to some degree.

None of this means that human beings lack free will. In fact, the complexity of mind argues for free will. This is from a 12-year-old post of mine:

Suppose I think that I might want to eat some ice cream. I go to the freezer compartment and pull out an unopened half-gallon of vanilla ice cream and an unopened half-gallon of chocolate ice cream. I can’t decide between vanilla, chocolate, some of each, or none. I ask a friend to decide for me by using his random-number generator, according to rules of his creation. He chooses the following rules:

  • If the random number begins in an odd digit and ends in an odd digit, I will eat vanilla.
  • If the random number begins in an even digit and ends in an even digit, I will eat chocolate.
  • If the random number begins in an odd digit and ends in an even digit, I will eat some of each flavor.
  • If the random number begins in an even digit and ends in an odd digit, I will not eat ice cream.

Suppose that the number generated by my friend begins in an even digit and ends in an even digit: the choice is chocolate. I act accordingly.

I didn’t inevitably choose chocolate because of events that led to the present state of my body’s chemistry, which might otherwise have dictated my choice. That is, I broke any link between my past and my choice about a future action. I call that free will.

I suspect that our brains are constructed in such a way as to produce the same kind of result in many situations, though certainly not in all situations. That is, we have within us the equivalent of an impartial friend and an (informed) decision-making routine, which together enable us to exercise something we can call free will.
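
The decision procedure in the quoted rules is mechanical enough to code. Here is a minimal sketch in Python (the first-digit/last-digit convention comes from the rules above; everything else is my own scaffolding):

    # The ice-cream decision rules, as quoted above.
    import random

    def choose_flavor(number):
        digits = str(number)
        first_odd = int(digits[0]) % 2 == 1
        last_odd = int(digits[-1]) % 2 == 1
        if first_odd and last_odd:
            return "vanilla"
        if not first_odd and not last_odd:
            return "chocolate"
        if first_odd and not last_odd:
            return "some of each"
        return "no ice cream"  # even first digit, odd last digit

    # The impartial friend and his random-number generator:
    print(choose_flavor(random.randint(10, 99999)))

The point survives the coding: the choice is driven by an input that is independent of the chooser’s prior bodily state.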

This rudimentary metaphor is consistent with the quantum nature of the material that underlies mind. But I don’t believe that free will depends on quantum mechanics. I believe that there is a part of mind — a part with a physical location — which makes independent judgments and arrives at decisions based on those judgments.

To extend the ice-cream metaphor, I would say that my brain’s executive function, having become aware of my craving for ice cream, taps my knowledge (memory) of snacks on hand, or directs the part of my brain that controls my movements to look in the cupboard and freezer. My executive function, having determined that my craving isn’t so urgent that I will drive to a grocery store, then compiles the available options and chooses the one that seems best suited to the satisfaction of my craving at that moment. It may be ice cream, or it may be something else. If it is ice cream, it will consult my “taste preferences” and choose between the flavors then available to me.

Given the ways in which people are seen to behave, it seems obvious that the executive function, like consciousness, is on a “different circuit” from other functions (memory, motor control, autonomic responses, etc.), just as the software programs that drive my computer’s operations are functionally separate from the data stored on the hard drive and in memory. The software programs would still be on my computer even if I erased all the data on my hard drive and in memory. So, too, would my executive function (and consciousness) remain even if I lost all memory of everything that happened to me before I awoke this morning.

Given this separateness, there should be no question that a person has free will. That is why I can sometimes resist a craving for ice cream. That is why most people are often willing and able to overcome urges, from eating candy to smoking a cigarette to punching a jerk.

Conditioning, which leads to addiction, makes it hard to resist urges — sometimes nigh unto impossible. But the ability of human beings to overcome conditioning, even severe addictions, argues for the separateness of the executive function from other functions. In short, it argues for free will.


Related posts:
Free Will: A Proof by Example?
Free Will, Crime, and Punishment
Mind, Cosmos, and Consciousness
“Feelings, Nothing More than Feelings”
Hayek’s Anticipatory Account of Consciousness
Is Consciousness an Illusion?

Special Relativity

I have removed my four posts about special relativity and incorporated them in a new page, “Einstein’s Errors.” I will update that page occasionally rather than post about special relativity, which is rather “off the subject” for this blog.

Beating Religion with the Wrong End of the Stick

A leftist personage emits a Quotation of the Day, which I receive second-hand from a centrist personage. Here is today’s QOTD:

An interesting coincidence of events, suggesting a certain theme….

Caedite eos. Novit enim Dominus qui sunt eius.

– Arnaud Amalric (d. 1225) (at the siege of Béziers in 1209 during
the Albigensian Crusade, when asked which of the townspeople to spare)

(Kill them all. For the Lord knoweth them that are His.)

A fanatic is a man that does what he thinks the Lord would do if He knew the facts of the case.

– Finley Peter Dunne (1867-1936) (Mr. Dooley’s Opinions, “Casual Observations”)

The most dangerous madmen are those created by religion, and … people whose aim is to disrupt society always know how to make good use of them on occasion.

– Denis Diderot (1713-1784) (Conversations with a Christian Lady)

Throughout human history, the apostles of purity, those who have claimed to possess a total explanation, have wrought havoc among mere mixed-up human beings.

– Salman Rushdie (b. 1948) (“In Good Faith,”
Independent on Sunday, London, 4 February 1990)

Is uniformity [of religious opinion] attainable? Millions of innocent men, women, and children, since the introduction of Christianity, have been burnt, tortured, fined, and imprisoned; yet we have not advanced one inch toward uniformity. What has been the effect of coercion? To make one half the world fools, and the other half hypocrites.

– Thomas Jefferson (1743-1826) (Notes on the State of Virginia, Query 17)

Subject opinion to coercion: whom will you make your inquisitors? Fallible men; men governed by bad passions, by private as well as public reasons.

– Ibid.

(Yes, today is the 274th anniversary of the birth of Thomas Jefferson, 3rd president of these United States and a fervent believer in liberty of conscience and the separation of church and state – which is why he is often excoriated in right-wing religious circles today. But – mirabile dictu – it is also the 498th anniversary of the birth of Catherine de’ Medici (1519-1589), daughter of Lorenzo (but not “the Magnificent”) de’ Medici, who became the queen of France’s King Henry II and was deeply implicated in the St. Bartholomew’s Day Massacre (1572), in which thousands of French Protestants were slaughtered in their beds. The event was timed to coincide with the wedding of the (Huguenot) Henry of Navarre, who (perhaps not surprisingly) later converted to Catholicism (“Paris is worth a mass.”) after succeeding to the throne as Henry IV in 1589. But wait! There’s more! On this date in 1598, Henry promulgated the Toleration Edict of Nantes, which protected freedom of belief in France, ended the Wars of Religion, and gave Protestants some measure of government influence – at least until Louis XIV revoked it in 1685, which forced thousands of Protestants to flee the country. One is reminded irresistibly of the comment of Lucretius (ca. 94-55 B.C.) in De Rerum Natura:

Tantum religio potuit suadere malorum.

(So much wrong could religion induce.)

True then; true today. Aren’t historical connections fascinating?)

The author of QOTD grasps the wrong end of the stick, as he often does. Religion doesn’t make fanatics; it attracts them (but far from exclusively) — just as the “religions” of communism, socialism (including Hitler’s version), and progressivism do, and with much greater frequency.

I doubt that the number of murders committed in the name of religion amounts to one-tenth of the number of murders committed by three notable anti-religionists: Hitler (yes, Hitler), Stalin, and Mao.

Natural Law and Natural Rights Revisited

An esteemed correspondent took exception to my statement in “Natural Law, Natural Rights, and the Real World” that I “don’t accept the broad outlines of natural law and natural rights,” which I had summarized thus:

Natural law is about morality, that is, right and wrong. Natural rights are about the duties and obligations that human beings owe to each other. Believers in natural law claim to start with the nature of human beings, then derive from that nature the “laws” of morality. Believers in natural rights claim to start with the nature of human beings, then derive from that nature the inalienable “rights” of human beings.

A natural law would be something like this: It is in the nature of human beings to seek life and to avoid death. A natural right would be something like this: Given that it is natural for human beings to seek life and avoid death, every human being has the right to life.

The correspondent later sent me a copy of Hadley Arkes’s essay “A Natural Law Manifesto” (Claremont Review of Books, Fall 2011, pp. 43-49). There’s an online version of the essay (with a slightly different opening sentence) at the website of The James Wilson Institute on Natural Rights and the American Founding, which I’ll quote from in the course of this post.

I don’t lightly dismiss natural law and natural rights. Many proponents of those concepts are on the side of liberty and against statism, which makes me their natural ally. As I say in “Natural Law, Natural Rights, and the Real World,” my problem with the concepts is their malleability. It is too easy to claim to know specifically what is and isn’t in accordance with natural law and natural rights, and it is too easy to issue vague generalizations about rights — generalizations that collapse easily under the weight of specification.

Consider the UN’s Universal Declaration of Human Rights, which rights are declared to be inalienable (i.e., natural). (The Declaration’s 30 articles comprise 48 such rights.) Quotations from the Declaration are followed by my comments in italics:

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. What is arbitrary? One person’s “arbitrary” will be another person’s “lawful,” and there will be endless quibbles about where to draw lines.

(1) Everyone has the right to freedom of movement and residence within the borders of each state.
(2) Everyone has the right to leave any country, including his own, and to return to his country. Everyone, even including criminals and terrorists? And if “everyone” is qualified by criteria of criminality, there will be endless quibbles about those criteria.

Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief, and freedom, either alone or in community with others and in public or private, to manifest his religion or belief in teaching, practice, worship and observance. But what if the practice of a religion includes the commission of terrorist acts?

Everyone, as a member of society, has the right to social security and is entitled to realization, through national effort and international co-operation and in accordance with the organization and resources of each State, of the economic, social and cultural rights indispensable for his dignity and the free development of his personality. The qualification about the “organization and resources of each State” speaks volumes about the relative nature of entitlements. But left unsaid is the nature of the “right” by which some are taxed to provide “social security” for others. Is there no natural right to the full enjoyment of the fruits of one’s own labors? I would think that there would be such a natural right, if there were any natural rights.

Everyone who works has the right to just and favourable remuneration ensuring for himself and his family an existence worthy of human dignity, and supplemented, if necessary, by other means of social protection. See the preceding comment.

Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control. Ditto.

It goes on and on like that. And the UN’s litany of “rights” is surely one that millions or even billions of people would claim to be “natural rights” which inhere in them as human beings. Certainly in the United States almost every Democrat, most independents, and a large fraction of Republicans would agree that such rights are “natural” or God-given or just plain obvious. And many of them would put up a good argument for their position.

If the Declaration of Human Rights seems too easy a target, consider abortion. Arkes and I are in agreement about the wrongness of abortion. He says this in his essay:

[T]he differences in jural perspective that I’m marking off here may have their most profound effect as they reach the most central question that the law may ever reach: who counts as a human person—who counts as the kind of being whose injuries matter? It was the question raised as President Bill Clinton vetoed the bill on partial birth abortion and expressed the deepest concern for the health of the woman denied that procedure. Of that other being present in the surgery, the one whose head was being punctured and the contents sucked out—the assault on the health of that being made no impression on Clinton. The harms didn’t register because the sufferer of the harms did not count in this picture.

But in raising questions of this kind, a jurisprudence with our [natural law] perspective would pose the question insistently: what is the ground of principle on which the law may remove a whole class of human beings from the circle of rights-bearing beings who may be subject to the protections of the law?

The “ground of reason,” though I hesitate to call it that, is the libertarian doctrine of self-ownership (which is tautologous). The child in the womb is dependent on the mother for its life. It is therefore up to the mother to decide whether the “demands” of the child in the womb should take precedence over other aspects of her life, including the remote possibility that bearing a child will kill her.

My objection to abortion is both empathic and prudential. Empathically, I can’t countenance what amounts to the brutal murder of an innocent human being for what is, in almost every case, a matter of convenience. Prudentially, abortion is a step down a slippery slope that leads to involuntary euthanasia. It puts the state on the wrong side of its only legitimate function, which is to protect the lives, liberty, and property of the citizenry.

In any event, Arkes’s essay is as much an attack on jurisprudence that scorns natural law as it is an explanation and defense of natural law. In that vein, Arkes says this:

I come then today, perhaps in the style of Edmund Burke, to make An Appeal from the Old Jurisprudence to the New: from the old jurisprudence, which relied on natural law as a matter of course, to a new conservative jurisprudence that has not only been resistant to natural law, but scorns it. At one level, some of the conservative jurists insist that their concern is merely prudential: Justice Antonin Scalia will say that he esteems the notion of natural law but the problem is there is no agreement on the content of natural law. Far better, he argues, that we simply concentrate on the text of the Constitution, or where the text is silent, on the way in which the text was “originally understood” by the men who framed and ratified it.

Justice Scalia’s key point — there is no agreement on the content of natural law — is underscored by two letters to the editor of the Claremont Review of Books, and Arkes’s reply to those letters (all found here). The writers take issue with Arkes’s pronouncements about the certainty of natural law. The crux of Arkes’s long and argumentative reply is that there are truths that may not be known to all people, but the truths nevertheless exist.

That attitude has two possible bases. The first is that Arkes is setting himself up as a member of the cognoscenti who knows what natural law is and is therefore qualified to reveal it to the ignorant. The second possibility, and the one that Arkes seems to prefer, is that reasonable people will ferret out the natural law. For example, here is a comment and reply about the 14th Amendment:

Max Hocutt: Arkes’s discussion of the 14th Amendment raises a very difficult question: its contemporaries believed mixed-race marriage to be contrary to nature. On the basis of what definition of nature is Arkes confident they were mistaken?

Arkes: It is quite arguable in this vein that the framers of the 14th Amendment did not understand the implications of their own principles when they insisted that nothing in that amendment would be at odds with the laws that barred marriage across racial lines. On the other hand, Mr. Hocutt may want to argue that there was no inconsistency, that there may be some kind of argument in prudence, or perhaps even a racial principle, that could make it justified to bar marriage across racial lines. Well, it is quite possible to have that argument. And the only way of having the “argument”—the only thing that makes it an argument—is that there are standards of reason to which we can appeal to judge the soundness, the truth or falsity, of these reasons.

Clearly, Arkes believes that the “standards of reason” will result in a declaration that the 14th Amendment allows interracial marriage, even if the amendment’s framers didn’t intend that outcome. But Arkes concedes that there is an argument to be had. And that is why Justice Scalia (and I, and many others) say that there is no agreement on the content of natural law, and therefore no agreement as to the rights that ought to be considered “natural” because they flow from natural law.

For example, there is eloquent disagreement with Arkes’s views in Timothy Sandefur’s review of Arkes’s Constitutional Illusions and Anchoring Truths. Notably, Sandefur is also a proponent of natural rights, and I have sparred with him on the subject.

Endless arguments about natural law and natural rights will lead nowhere because even reasonable people will disagree about human nature and the rights that inhere in human beings, if any. In “Evolution, Human Nature, and ‘Natural Rights’,” I explain at length why human beings do not have inherent (i.e., inalienable or “natural”) rights, at least not in the way that Arkes would have it. In the end, I take my stand on negative rights and the Golden Rule:

The following observations set the stage for my explanation:

1. “Natural rights” inhere in a particular way; that is, according to Randy Barnett, they “do not proscribe how rights-holders ought to act towards others. Rather they describe how others ought to act towards rights-holders.” In other words, the thing (for want of a better word) that arises from my nature is not a set of negative rights that I own; rather, it is an inclination or imperative to treat others as if they have negative rights. To put it crudely, I am wired to leave others alone as long as they leave me alone; others are wired to leave me alone as long as I leave them alone.

2. The idea of being inclined or compelled to “act toward” is more plausible than the idea that “natural rights” inhere in their holders. It is so because “act toward” suggests that we learn that it is a good thing (for us) to leave others alone, and not that each of us has a soul or psyche on which is indelibly inscribed a right to be left alone.

3. That leads to the question of how one learns to leave others alone as he is left alone by them. Is it by virtue of evolution or by virtue of socialization? And if the learning is evolutionary, why does it seem not to be universal; that is, why is it so routinely ignored?

4. The painful truth that vast numbers of human beings — past and present — have not acted and do not act as if there are “natural rights” suggests that the notion of “natural rights” is of little practical consequence. It may sometimes serve as a rallying point for political action, but with mixed results. Consider, for example, the contrast between the American Revolution, with its Declaration of Independence, and the French Revolution, with its Déclaration des droits de l’Homme et du Citoyen.

5. Even if humans are wired to leave others alone as they are left alone, it is evident that they are not wired exclusively in that way.

And now, for my natural (but not biologically deterministic) explanation. It comes from my post, “The Golden Rule and the State“:

I call the Golden Rule a natural law because it’s neither a logical construct … nor a state-imposed one. Its long history and widespread observance (if only vestigial) suggest that it embodies an understanding that arises from the similar experiences of human beings across time and place. The resulting behavioral convention, the ethic of reciprocity, arises from observations about the effects of one’s behavior on that of others and mutual agreement (tacit or otherwise) to reciprocate preferred behavior, in the service of self-interest and empathy. That is to say, the convention is a consequence of the observed and anticipated benefits of adhering to it.

The Golden Rule implies the acceptance of negative rights as a way of ensuring peaceful (and presumably fruitful) human coexistence. But, as I point out, there is a “positive” side to the Golden Rule:

[It] can be expanded into two, complementary sub-rules:

  • Do no harm to others, lest they do harm to you.
  • Be kind and charitable to others, and they will be kind and charitable to you.

The first sub-rule — the negative one — is compatible with the idea of negative rights, but it doesn’t demand them. The second sub-rule — the positive one — doesn’t yield positive rights because it’s a counsel to kindness and charity, not a command….

An ardent individualist — particularly an anarcho-capitalist — might insist that social comity can be based on the negative sub-rule… I doubt it. There’s but a short psychological distance from mean-spiritedness — failing to be kind and charitable — to sociopathy, a preference for harmful acts…. [K]indness and charity are indispensable to the development of mutual trust among people who live in close proximity, without the protective cover of an external agency (e.g., the state). Without mutual trust, mutual restraint becomes problematic and co-existence becomes a matter of “getting the other guy before he gets you” — a convention that I hereby dub the Radioactive Rule.

The Golden Rule is beneficial even where the state affords “protective cover,” because the state cannot be everywhere all the time. The institutions of civil society are essential to harmonious and productive coexistence. Where those institutions are strong, the state’s role (at least with respect to internal order) becomes less important. Conversely, where the state is especially intrusive, it usurps and displaces the institutions of civil society, leading to the breakdown of the Golden Rule, that is, to a kind of vestigial observance that, in the main, extends only to persons joined by social connections.

In sum, the Golden Rule represents a social compromise that reconciles the various natural imperatives of human behavior (envy, combativeness, meddlesomeness, etc.). Even though human beings have truly natural proclivities, those proclivities do not dictate the existence of “natural rights.” They certainly do not dictate “natural rights” that are solely the negative rights of libertarian doctrine. To the extent that negative rights prevail, it is as part and parcel of the “bargain” that is embedded in the Golden Rule; that is, they are honored not because of their innateness in humans but because of their beneficial consequences.
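
The claim that the convention is sustained by its observed and anticipated benefits has a standard game-theoretic illustration: the iterated prisoner’s dilemma, in which reciprocating cooperation outperforms unconditional predation. The sketch below uses the textbook payoffs; it is my own illustration, not something drawn from the posts quoted above:

    # Iterated prisoner's dilemma: reciprocity vs. the "Radioactive Rule".
    PAYOFF = {  # (my move, other's move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponent_history):
        # Cooperate first; thereafter reciprocate the other's last move.
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        return "D"

    def play(strat_a, strat_b, rounds=100):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strat_a(hist_b), strat_b(hist_a)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual restraint
    print(play(always_defect, always_defect))  # (100, 100): mutual predation
    print(play(tit_for_tat, always_defect))    # (99, 104): a one-round gain,
                                               # then a low-payoff rut

Reciprocators prosper together; mutual predators impoverish each other; and a predator gains only a single round’s advantage over a reciprocator before both settle into the low-payoff rut. The convention pays, which is why it persists.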

Finally:

Among those of us who agree about the proper scope of rights, should the provenance of those rights matter? I think not. The assertion that there are “natural rights” (“inalienable rights”) makes for resounding rhetoric, but (a) it is often misused in the service of positive rights and (b) it makes no practical difference in a world where power routinely accrues to those who make the something-for-nothing promises of positive rights.

The real challenge for the proponents of negative rights — of liberty, in other words — is to overthrow the regulatory-welfare state’s “soft despotism” and nullify its vast array of positive rights. Libertarians, classical liberals, and libertarian-minded conservatives ought to unite around that effort, rather than divide on the provenance of negative rights.

Given the broad range of disagreement about the meaning of the Constitution and the content of natural law, neither will necessarily lead to judicial outcomes of which both Arkes and I approve. What really matters is whether or not judges are conservative in the sense that they are committed to the peaceful, voluntary evolution and exercise of social and economic relationships. Conservative judges of that stripe will more reliably use the words of the Constitution to protect and preserve the voluntary institutions of civil society and the salutary traditions that emerge from them. It is, after all, the Constitution that judges are sworn to support and defend, not amorphous conceptions of natural law and natural rights. As I say in “How Libertarians Ought to Think about the Constitution,” the document “may be a legal fiction, but … it’s a useful fiction when its promises of liberty can be redeemed.”

Arkes’s complaints about Justice Scalia and other strict constitutionalists exemplify the adage that “perfect is the enemy of good.” The real alternative to Scalia and others similarly inclined isn’t a lineup of judges committed to Arkes’s particular view of natural law and natural rights. The real alternative to Scalia and others similarly inclined is a Court packed with the likes of Douglas, Warren, Brennan, Blackmun, Stevens, Kennedy, Souter, Breyer, Ginsburg, Sotomayor, and Kagan — to name (in chronological order) only the worst in a long list of egregious appointments to the Supreme Court since the New Deal.

I prefer the good — reliably conservative justices like Scalia, Thomas, and Alito — to the impossible perfection sought by Hadley Arkes.


Related posts:
The Real Constitution: I
Negative Rights
Negative Rights, Social Norms, and the Constitution
Rights, Liberty, the Golden Rule, and the Legitimate State
The Real Constitution and Civil Disobedience
“Natural Rights” and Consequentialism
Positivism, “Natural Rights,” and Libertarianism
What Are “Natural Rights”?
The Golden Rule and the State
Evolution, Human Nature, and “Natural Rights”
The Golden Rule as Beneficial Learning
Human Nature, Liberty, and Rationalism
Libertarianism and Morality
Libertarianism and Morality: A Footnote
Merit Goods, Positive Rights, and Cosmic Justice
More about Merit Goods
Liberty, Negative Rights, and Bleeding Hearts
Liberty as a Social Construct: Moral Relativism?
The Futile Search for “Natural Rights”
How Libertarians Ought to Think about the Constitution
More About Social Norms and Liberty
Liberty and Social Norms Re-examined
Natural Law, Natural Rights, and the Real World

The Intransitivity of Political Philosophy

Rachel Lu, in an excellent post at The Public Discourse (“How I Learned to Stop Worrying and Love the Libertarian Atheists,” April 5, 2017), writes:

Undergraduates like communism and libertarianism for the same reasons they like utilitarianism and the categorical imperative. These theories are expansive in their reach, claiming to explain every aspect of the universe from the Milky Way to marriage….

Economy notwithstanding, I see low buy-in theories as a poor value. Like cheap appliances, they look neat in the packaging. Once you start trying to use them, it becomes clear that they’re riddled with bugs. When a political or moral view is grounded in just a few conceptually simple premises, the fleshed-out picture never turns out to be either satisfying or plausible….

My few abortive efforts to read Ayn Rand never got very far. Compared to the ancients and medievals, she seemed utterly plebeian, stomping all over subtle realities in clunky too-large boots. That just sealed my conviction that libertarians were simplistic dunderheads who couldn’t handle the complexity of real life….

… When I first ventured into the political sphere, it quickly became evident that libertarians were far more numerous there. They were a genuinely diverse lot, not fitting all my stereotypes. Some offered astute and fairly subtle social critiques. Some combined Hayekian political ideas with more robust moral views, making for a more interesting blend of influences than I had seen in the academy. I lightened up a little on libertarians….

Have I now repented of my grim assessment of libertarianism? Not entirely. I do still think that most libertarians (serious devotees of Rand, for instance) are metaphysically impoverished to some extent….

In the introduction to God and Man at Yale, William F. Buckley expresses gratitude for the help of Albert J. Nock, whom he describes as “a fine essayist whose thought turned on a single spit: all the reasons why one should be distrustful of state activity, round, and round, and round again.”

This is a wonderful description of a type I know well. Libertarians do indeed obsess over the negative ramifications of government interference. It can become exasperating, and at one time it seemed to me like a serious limitation. If your life’s overwhelming obsession is getting Uncle Sam off your back, you may find yourself thin on ideas for what to do with that cherished liberty.

Still, when a mind relentlessly works on a particular set of questions, it may unearth some useful things. Many libertarians (Milton Friedman, for instance) are genuinely brilliant at working through the potential negative ramifications of government involvement in human life….

There is certainly more to human life than repelling the advances of aggressive government. Still, in modern times, the growth of Leviathan does in fact pose a very significant threat to human thriving.

So far, so good. Lu has nailed the kind of simplistic libertarianism of which I long ago became intolerant, to the point that I have rejected the libertarian label.

Lu turns to Trump:

[T]he “Trumpian skeptic” room just kept getting emptier, and emptier, and still emptier. In the end, there was only one group of fellow travelers who reliably proved impervious to the Trumpian allure. They were my old friends, the libertarian atheists….

Obviously, I am generalizing; I still know a great many anti-Trump religious conservatives. I also do not wish to imply that all people who supported Trump, even in a limited way, should be seen as sellouts or opportunists. I understand why some reluctantly voted for Trump, despite grave concerns about his character. Nonetheless, it did really seem that a great many people whom I once viewed as “like-minded” (religious conservatives and intellectuals of a broadly Aristotelian bent) were, in a sense, seduced by Trump. It was excruciating to watch. Most people started tentatively with a “lesser evils” argument, but soon their justifications and even mannerisms made clear that they had given him, not just their votes, but also an alarming measure of loyalty, trust, and even love. Of course, many people had very legitimate concerns about the judiciary, the left’s cultural aggression, and so forth. None of that can fully explain the enthusiasm, which drew people into a complicity that went far beyond what pragmatic concerns alone could justify. The traditionalists felt the tug of Trump’s cultural nostalgia. Also, of course, they hated the political left.

And there you have it: Traditional conservatives oppose simplistic libertarianism; simplistic libertarians oppose Trump (to put it mildly); therefore, traditional conservatives should oppose Trump. But not all of them do. Why not? Because real life isn’t reducible to logic. Logic, in this case, is trumped (pun intended) by hatred for the political left, which seems (with a great deal of justification) to pose a far greater threat to liberty and prosperity than Trumpism (whatever that is).

Not-So-Random Thoughts (XX)

An occasional survey of web material that’s related to subjects about which I’ve posted. Links to the other posts in this series may be found at “Favorite Posts,” just below the list of topics.

In “The Capitalist Paradox Meets the Interest-Group Paradox,” I quote from Frédéric Bastiat’s “What Is Seen and What Is Not Seen”:

[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.

This might also be called the law of unintended consequences. It explains why so much “liberal” legislation is passed: the benefits are focused on a particular group and are obvious (if overestimated); the costs are borne by taxpayers in general, many of whom fail to see that the sum of “liberal” legislation is a huge tax bill.

Ross Douthat understands:

[There is a] new paper, just released through the National Bureau of Economic Research, that tries to look at the Affordable Care Act in full. Its authors find, as you would expect, a substantial increase in insurance coverage across the country. What they don’t find is a clear relationship between that expansion and, again, public health. The paper shows no change in unhealthy behaviors (in terms of obesity, drinking and smoking) under Obamacare, and no statistically significant improvement in self-reported health since the law went into effect….

[T]he health and mortality data [are] still important information for policy makers, because [they] indicate[] that subsidies for health insurance are not a uniquely death-defying and therefore sacrosanct form of social spending. Instead, they’re more like other forms of redistribution, with costs and benefits that have to be weighed against one another, and against other ways to design a safety net. Subsidies for employer-provided coverage crowd out wages, Medicaid coverage creates benefit cliffs and work disincentives…. [“Is Obamacare a Lifesaver?” The New York Times, March 29, 2017]

So does Roy Spencer:

In a theoretical sense, we can always work to make the environment “cleaner”, that is, reduce human pollution. So, any attempts to reduce the EPA’s efforts will be viewed by some as just cozying up to big, polluting corporate interests. As I heard one EPA official state at a conference years ago, “We can’t stop making the environment ever cleaner”.

The question no one is asking, though, is “But at what cost?”

It was relatively inexpensive to design and install scrubbers on smokestacks at coal-fired power plants to greatly reduce sulfur emissions. The cost was easily absorbed, and electricity rates were not increased that much.

The same is not true of carbon dioxide emissions. Efforts to remove CO2 from combustion byproducts have been extremely difficult, expensive, and with little hope of large-scale success.

There is a saying: don’t let perfect be the enemy of good enough.

In the case of reducing CO2 emissions to fight global warming, I could discuss the science which says it’s not the huge problem it’s portrayed to be — how warming is only progressing at half the rate forecast by those computerized climate models which are guiding our energy policy; how there have been no obvious long-term changes in severe weather; and how nature actually enjoys the extra CO2, with satellites now showing a “global greening” phenomenon with its contribution to increases in agricultural yields.

But it’s the economics which should kill the Clean Power Plan and the alleged Social “Cost” of Carbon. Not the science.

There is no reasonable pathway by which we can meet more than about 20% of global energy demand with renewable energy…the rest must come mostly from fossil fuels. Yes, renewable energy sources are increasing each year, usually because rate payers or taxpayers are forced to subsidize them by the government or by public service commissions. But global energy demand is rising much faster than renewable energy sources can supply. So, for decades to come, we are stuck with fossil fuels as our main energy source.

The fact is, the more we impose high-priced energy on the masses, the more it will hurt the poor. And poverty is arguably the biggest threat to human health and welfare on the planet. [“Trump’s Rollback of EPA Overreach: What No One Is Talking About,” Roy Spencer, Ph.D. [blog], March 29, 2017]

*     *     *

I mentioned the Benedict Option in “Independence Day 2016: The Way Ahead,” quoting Bruce Frohnen in tacit agreement:

[Rod] Dreher has been writing a good deal, of late, about what he calls the Benedict Option, by which he means a tactical withdrawal by people of faith from the mainstream culture into religious communities where they will seek to nurture and strengthen the faithful for reemergence and reengagement at a later date….

The problem with this view is that it underestimates the hostility of the new, non-Christian society [e.g., this and this]….

Leaders of this [new, non-Christian] society will not leave Christians alone if we simply surrender the public square to them. And they will deny they are persecuting anyone for simply applying the law to revoke tax exemptions, force the hiring of nonbelievers, and even jail those who fail to abide by laws they consider eminently reasonable, fair, and just.

Exactly. John Horvat II makes the same point:

For [Dreher], the only response that still remains is to form intentional communities amid the neo-barbarians to “provide an unintentional political witness to secular culture,” which will overwhelm the barbarian by the “sheer humanity of Christian compassion, and the image of human dignity it honors.” He believes that setting up parallel structures inside society will serve to protect and preserve Christian communities under the new neo-barbarian dispensation. We are told we should work with the political establishment to “secure and expand the space within which we can be ourselves and our own institutions” inside an umbrella of religious liberty.

However, barbarians don’t like parallel structures; they don’t like structures at all. They don’t co-exist well with anyone. They don’t keep their agreements or respect religious liberty. They are not impressed by the holy lives of the monks whose monastery they are plundering. You can trust barbarians to always be barbarians. [“Is the Benedict Option the Answer to Neo-Barbarianism?” Crisis Magazine, March 29, 2017]

As I say in “The Authoritarianism of Modern Liberalism, and the Conservative Antidote,”

Modern liberalism attracts persons who wish to exert control over others. The stated reasons for exerting control amount to “because I know better” or “because it’s good for you (the person being controlled)” or “because ‘social justice’ demands it.”

Leftists will not countenance a political arrangement that allows anyone to escape the state’s grasp — unless, of course, the state is controlled by the “wrong” party, in which case leftists (or many of them) would like to exercise their own version of the Benedict Option. See “Polarization and De Facto Partition.”

*     *     *

Theodore Dalrymple understands the difference between terrorism and accidents:

Statistically speaking, I am much more at risk of being killed when I get into my car than when I walk in the streets of the capital cities that I visit. Yet this fact, no matter how often I repeat it, does not reassure me much; the truth is that one terrorist attack affects a society more deeply than a thousand road accidents….

Statistics tell me that I am still safe from it, as are all my fellow citizens, individually considered. But it is precisely the object of terrorism to create fear, dismay, and reaction out of all proportion to its volume and frequency, to change everyone’s way of thinking and behavior. Little by little, it is succeeding. [“How Serious Is the Terrorist Threat?” City Journal, March 26, 2017]

Which reminds me of several things I’ve written, beginning with this entry from “Not-So-Random Thoughts (VI)”:

Cato’s loony libertarians (on matters of defense) once again trot out Herr Doktor Professor John Mueller. He writes:

We have calculated that, for the 12-year period from 1999 through 2010 (which includes 9/11, of course), there was one chance in 22 million that an airplane flight would be hijacked or otherwise attacked by terrorists. (“Serial Innumeracy on Homeland Security,” Cato@Liberty, July 24, 2012)

Mueller’s “calculation” consists of a recitation of known terrorist attacks pre-Benghazi and speculation about the status of Al-Qaeda. Note to Mueller: It is the unknown unknowns that kill you. I refer Herr Doktor Professor to “Riots, Culture, and the Final Showdown” and “Mission Not Accomplished.”

See also my posts “Getting It All Wrong about the Risk of Terrorism” and “A Skewed Perspective on Terrorism.”

*     *     *

This is from my post “A Reflection on the Greatest Generation”:

The Greatest tried to compensate for their own privations by giving their children what they, the parents, had never had in the way of material possessions and “fun”. And that is where the Greatest Generation failed its children — especially the Baby Boomers — in large degree. A large proportion of Boomers grew up believing that they should have whatever they want, when they want it, with no strings attached. Thus many of them divorced, drank, and used drugs almost wantonly….

The Greatest Generation — having grown up believing that FDR was a secular messiah, and having learned comradeship in World War II — also bequeathed us governmental self-indulgence in the form of the welfare-regulatory state. Meddling in others’ affairs seems to be a predilection of the Greatest Generation, a predilection that the Millennials may be shrugging off.

We owe the Greatest Generation a great debt for its service during World War II. We also owe the Greatest Generation a reprimand for the way it raised its children and kowtowed to government. Respect forbids me from delivering the reprimand, but I record it here, for the benefit of anyone who has unduly romanticized the Greatest Generation.

There’s more in “The Spoiled Children of Capitalism”:

This is from Tim [of Angle’s] “The Spoiled Children of Capitalism”:

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it….

I have long shared Tim’s assessment of the Boomer generation. Among the corroborating data are my sister and my wife’s sister and brother — Boomers all….

Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs.”…

Now comes this:

According to writer and venture capitalist Bruce Gibney, baby boomers are a “generation of sociopaths.”

In his new book, he argues that their “reckless self-indulgence” is in fact what set the example for millennials.

Gibney describes boomers as “acting without empathy, prudence, or respect for facts – acting, in other words, as sociopaths.”

And he’s not the first person to suggest this.

Back in 1976, journalist Tom Wolfe dubbed the young adults then coming of age the “Me Generation” in the New York Times, which is a term now widely used to describe millennials.

But the baby boomers grew up in a very different climate to today’s young adults.

When the generation born after World War Two were starting to make their way in the world, it was a time of economic prosperity.

“For the first half of the boomers particularly, they came of age in a time of fairly effortless prosperity, and they were conditioned to think that everything gets better each year without any real effort,” Gibney explained to The Huffington Post.

“So they really just assume that things are going to work out, no matter what. That’s unhelpful conditioning.

“You have 25 years where everything just seems to be getting better, so you tend not to try as hard, and you have much greater expectations about what society can do for you, and what it owes you.”…

Gibney puts forward the argument that boomers – specifically white, middle-class ones – tend to have genuine sociopathic traits.

He backs up his argument with mental health data which appears to show that this generation have more anti-social characteristics than others – lack of empathy, disregard for others, egotism and impulsivity, for example. [Rachel Hosie, “Baby Boomers Are a Generation of Sociopaths,” Independent, March 23, 2017]

That’s what I said.