Thinking about Thinking — and Other Things: Desiderata As Beliefs

This is the fifth post in a series. (The previous posts are here, here, here, and here.) This post, like its predecessors, will leave you hanging. But despair not: the series will come to a point — eventually. In the meantime, enjoy the ride.

How many things does a human being believe because he wants to believe them, and not because there is compelling evidence to support his beliefs? Here is a small sample of what must be an extremely long list:

There is a God. (1a)

There is no God. (1b)

There is a Heaven. (2a)

There is no Heaven. (2b)

Jesus Christ was the Son of God. (3a)

Jesus Christ, if he existed, was a mere mortal. (3b)

Marriage is the eternal union, blessed by God, of one man and one woman. (4a)

Marriage is a civil union, authorized by the state, of one or more consenting adults (or not) of any gender, as the participants in the marriage so define themselves to be. (4b)

All human beings should have equal rights under the law, and those rights should encompass not only negative rights (e.g., the right not to be murdered) but also positive rights (e.g., the right to a minimum wage). (5a)

Human beings are, at bottom, feral animals and cannot therefore be expected to abide always by artificial constructs, such as equal rights under the law. Accordingly, there will always be persons who use the law (or merely brute force) to set themselves above other persons. (5b)

The rise in global temperatures over the past 170 years has been caused primarily by a greater concentration of carbon dioxide in the atmosphere, an increase that has been caused by human activity – and especially by the burning of fossil fuels. This rise, if it isn’t brought under control, will make human existence far less bearable and prosperous than it has been in recent human history. (6a)

The rise in global temperatures over the past 170 years has not been uniform across the globe, and has not been in lockstep with the rise in the concentration of atmospheric carbon dioxide. The temperatures of recent decades, and the rate at which they are supposed to have risen, are not unprecedented in the long view of Earth’s history, and may therefore be due to conditions that have not been given adequate consideration by believers in anthropogenic global warming (e.g., natural shifts in ocean currents that have different effects on various regions of Earth, the effects of cosmic radiation on cloud formation as influenced by solar activity and the position of the solar system and the galaxy with respect to other objects in the universe, the shifting of Earth’s magnetic field, and the movement of Earth’s tectonic plates and its molten core). In any event, the models of climate change have been falsified against measured temperatures (even when the temperature record has been adjusted to support the models). And predictions of catastrophe do not take into account the beneficial effects of warming (e.g., lower mortality rates, longer growing seasons), whatever causes it, or the ability of technology to compensate for undesirable effects at a much lower cost than the economic catastrophe that would result from preemptive reductions in the use of fossil fuels. (6b)

Not one of those assertions, even the ones that seem to be supported by facts, is true beyond a reasonable doubt. I happen to believe 1a (with some significant qualifications about the nature of God), 2b, 3b (given my qualified version of 1a), a modified version of 4a (monogamous, heterosexual marriage is socially and economically preferable, regardless of its divine blessing or lack thereof), 5a (but only with negative rights) and 5b, and 6b. But I cannot “prove” that any of my beliefs is the correct one, nor should anyone believe that anyone can “prove” such things.

Take the belief that all persons are created equal. No one who has eyes, ears, and a minimally functioning brain believes that all persons are created equal. Abraham Lincoln, the Great Emancipator, didn’t believe it:

On September 18, 1858 at Charleston, Illinois, Lincoln told the assembled audience:

I am not, nor ever have been, in favor of bringing about in any way the social and political equality of the white and black races, that I am not, nor ever have been, in favor of making voters or jurors of negroes, nor of qualifying them to hold office, nor to intermarry with white people; and I will say in addition to this that there is a physical difference between the white and black races which I believe will forever forbid the two races living together on terms of social and political equality … I will add to this that I have never seen, to my knowledge, a man, woman, or child who was in favor of producing a perfect equality, social and political, between negroes and white men….

This was before Lincoln was elected president and before the outbreak of the Civil War, but Lincoln’s speeches, writings, and actions after these events continued to reflect this point of view about race and equality.

African American abolitionist Frederick Douglass, for his part, remained very skeptical about Lincoln’s intentions and program, even after the president issued a preliminary emancipation proclamation in September 1862.

Douglass had good reason to mistrust Lincoln. On December 1, 1862, one month before the scheduled issuing of an Emancipation Proclamation, the president offered the Confederacy another chance to return to the Union and preserve slavery for the foreseeable future. In his annual message to Congress, Lincoln recommended a constitutional amendment which, if it had passed, would have been the Thirteenth Amendment to the Constitution.

The amendment proposed gradual emancipation that would not be completed for another thirty-seven years, taking slavery in the United States into the twentieth century; compensation, not for the enslaved, but for the slaveholder; and the expulsion, supposedly voluntary but essentially a new Trail of Tears, of formerly enslaved Africans to the Caribbean, Central America, and Africa….

Douglass’ suspicions about Lincoln’s motives and actions once again proved to be legitimate. On December 8, 1863, less than a month after the Gettysburg Address, Abraham Lincoln offered full pardons to Confederates in a Proclamation of Amnesty and Reconstruction that has come to be known as the 10 Percent Plan.

Self-rule in the South would be restored when 10 percent of the “qualified” voters according to “the election law of the state existing immediately before the so-called act of secession” pledged loyalty to the union. Since blacks could not vote in these states in 1860, this was not to be government of the people, by the people, for the people, as promised in the Gettysburg Address, but a return to white rule.

It is unnecessary, though satisfying, to read Charles Murray’s account in Human Diversity of the broad range of inherent differences in intelligence and other traits that are associated with the sexes, various genetic groups of geographic origin (sub-Saharan Africans, East Asians, etc.), and various ethnic groups (e.g., Ashkenazi Jews).

But even if all persons are not created equal, either mentally or physically, aren’t they equal under the law? If you believe that, you might just as well believe in the tooth fairy. As it says in 5b,

Human beings are, at bottom, feral animals and cannot therefore be expected to abide always by artificial constructs, such as equal rights under the law. Accordingly, there will always be persons who use the law (or merely brute force) to set themselves above other persons.

Yes, it’s only a hypothesis, but one for which there is ample evidence in the history of mankind. It is confirmed by every instance of theft, murder, armed aggression, scorched-earth warfare, mob violence as catharsis, bribery, election fraud, gratuitous cruelty, and so on into the night.

And yet, human beings (Americans especially) persist in believing tooth-fairy stories about the inevitable triumph of good over evil, self-correcting science, and the emergence of truth from the marketplace of ideas. Balderdash, all of it.

But desiderata become beliefs. And beliefs are what bind people – or make enemies of them.

Michael Oakeshott, Rationalism, and America’s Present Condition

Michael Oakeshott (1901-1990), an English philosopher and political theorist, is remembered by Alberto Mingardi in a post at Econlib:

He was a self-confident thinker who did not search for others’ approval. He had an impressive career but somehow outside the mainstream, was remembered by friends (like Ken Minogue) as a splendid friend who cared about friendship deeply, eschewed honors, and was happy to retire in Dorset and lead a country life. In a beautiful article on Oakeshott, Gertrude Himmelfarb commented that he was “the political philosopher who has so modest a view of the task of political philosophy, the intellectual who is so reluctant a producer of intellectual goods, the master who does so little to acquire or cultivate disciples” and then all of these features perfectly fit in his character.

Oakeshott strongly influenced my view of conservatism, as you will see if you read any of the many posts in which I quote him or refer to his expositions of conservatism and critiques of rationalism. I drew heavily on Oakeshott’s analysis of rationalism in my prescient post of ten years ago about same-sex “marriage” and the destruction of civilizing social norms. Here is the post in its entirety, followed by my retrospective commentary (in italics):

Judge Vaughn Walker’s recent decision in Perry v. Schwarzenegger, which manufactures a constitutional right to same-sex marriage, smacks of Rationalism. Judge Walker distorts and sweeps aside millennia of history when he writes:

The right to marry has been historically and remains the right to choose a spouse and, with mutual consent, join together and form a household. Race and gender restrictions shaped marriage during eras of race and gender inequality, but such restrictions were never part of the historical core of the institution of marriage. Today, gender is not relevant to the state in determining spouses’ obligations to each other and to their dependents. Relative gender composition aside, same-sex couples are situated identically to opposite-sex couples in terms of their ability to perform the rights and obligations of marriage under California law. Gender no longer forms an essential part of marriage; marriage under law is a union of equals.

Judge Walker thereby secures his place in the Rationalist tradition. A Rationalist, as Michael Oakeshott explains,

stands … for independence of mind on all occasions, for thought free from obligations to any authority save the authority of ‘reason’. His circumstances in the modern world have made him contentious; he is the enemy of authority, of prejudice, of the merely traditional, customary or habitual. His mental attitude is at once sceptical and optimistic: sceptical, because there is no opinion, no habit, no belief, nothing so firmly rooted or so widely held that he hesitates to question it and to judge it by what he calls his ‘reason’; optimistic, because the Rationalist never doubts the power of his ‘reason’ … to determine the worth of a thing, the truth of an opinion or the propriety of an action. Moreover, he is fortified by a belief in a ‘reason’ common to all mankind, a common power of rational consideration…. But besides this, which gives the Rationalist a touch of intellectual equalitarianism, he is something also of an individualist, finding it difficult to believe that anyone who can think honestly and clearly will think differently from himself….

…And having cut himself off from the traditional knowledge of his society, and denied the value of any education more extensive than a training in a technique of analysis, he is apt to attribute to mankind a necessary inexperience in all the critical moments of life, and if he were more self-critical he might begin to wonder how the race had ever succeeded in surviving. (“Rationalism in Politics,” pp. 5-7, as republished in Rationalism in Politics and Other Essays)

At the heart of Rationalism is the view that “a problem” can be analyzed and “solved” as if it were separate and apart from the fabric of life.  On this point, I turn to John Kekes:

Traditions do not stand alone: they overlap, and the problems of one are often resolved in terms of another. Most traditions have legal, moral, political, aesthetic, stylistic, managerial, and a multitude of other aspects. Furthermore, people participating in a tradition bring with them beliefs, values, and practices from other traditions in which they also participate. Changes in one tradition, therefore, are likely to produce changes in others; they are like waves that reverberate throughout the other traditions of a society. (“The Idea of Conservatism“)

Edward Feser puts it this way:

Tradition, being nothing other than the distillation of centuries of human experience, itself provides the surest guide to determining the most rational course of action. Far from being opposed to reason, reason is inseparable from tradition, and blind without it. The so-called enlightened mind thrusts tradition aside, hoping to find something more solid on which to make its stand, but there is nothing else, no alternative to the hard earth of human experience, and the enlightened thinker soon finds himself in mid-air…. But then, was it ever truly a love of reason that was in the driver’s seat in the first place? Or was it, rather, a hatred of tradition? Might the latter have been the cause of the former, rather than, as the enlightened pose would have it, the other way around? (“Hayek and Tradition”)

Same-sex marriage will have consequences that most libertarians and “liberals” are unwilling to consider. Although it is true that traditional, heterosexual unions have their problems, those problems have been made worse, not better, by the intercession of the state. (The loosening of divorce laws, for example, signaled that marriage was to be taken less seriously, and so it has been.) Nevertheless, the state — pursuant to Judge Walker’s decision — may create new problems for society by legitimating same-sex marriage, thus signaling that traditional marriage is just another contractual arrangement in which any combination of persons may participate.

Heterosexual marriage — as Jennifer Roback Morse explains — is a primary and irreplaceable civilizing force. The recognition of homosexual marriage by the state will undermine that civilizing force. The state will be saying, in effect, “Anything goes. Do your thing. The courts, the welfare system, and the taxpayer — above all — will ‘pick up the pieces.’” And so it will go.

In Morse’s words:

The new idea about marriage claims that no structure should be privileged over any other. The supposedly libertarian subtext of this idea is that people should be as free as possible to make their personal choices. But the very nonlibertarian consequence of this new idea is that it creates a culture that obliterates the informal methods of enforcement. Parents can’t raise their eyebrows and expect children to conform to the socially accepted norms of behavior, because there are no socially accepted norms of behavior. Raised eyebrows and dirty looks no longer operate as sanctions on behavior slightly or even grossly outside the norm. The modern culture of sexual and parental tolerance ruthlessly enforces a code of silence, banishing anything remotely critical of personal choice. A parent, or even a peer, who tries to tell a young person that he or she is about to do something incredibly stupid runs into the brick wall of the non-judgmental social norm. (“Marriage and the Limits of Contract“)

The state’s signals are drowning out the signals that used to be transmitted primarily by voluntary social institutions: family, friendship, community, church, and club. Accordingly, I do not find it a coincidence that loud, loutish, crude, inconsiderate, rude, and foul behaviors have become increasingly prominent features of “social” life in America. Such behaviors have risen in parallel with the retreat of most authority figures in the face of organized violence by “protestors” and looters; with the rise of political correctness; with the perpetuation of the New Deal and its successor, the Great Society; with the erosion of swift and sure justice in favor of “rehabilitation” and “respect for life” (but not for potential victims of crime); and with the legal enshrinement of infanticide and buggery as acceptable (and even desirable) practices.

Thomas Sowell puts it this way:

One of the things intellectuals [his Rationalists] have been doing for a long time is loosening the bonds that hold a society together. They have sought to replace the groups into which people have sorted themselves with groupings created and imposed by the intelligentsia. Ties of family, religion, and patriotism, for example, have long been treated as suspect or detrimental by the intelligentsia, and new ties that intellectuals have created, such as class — and more recently “gender” — have been projected as either more real or more important….

Under the influence of the intelligentsia, we have become a society that rewards people with admiration for violating its own norms and for fragmenting that society into jarring segments. In addition to explicit denigrations of their own society for its history or current shortcomings, intellectuals often set up standards for their society which no society has ever met or is likely to meet.

Calling those standards “social justice” enables intellectuals to engage in endless complaints about the particular ways in which society fails to meet their arbitrary criteria, along with a parade of groups entitled to a sense of grievance, exemplified in the “race, class and gender” formula…. (Intellectuals and Society, pp. 303, 305)

And so it will go —  barring a sharp, conclusive reversal of Judge Walker and the movement he champions.

There was no sharp or conclusive reversal — quite the contrary, in fact. And the forces of rationalism have only grown stronger in the past decade. Witness the broadly supported movement to renounce America’s history for the sake of virtue-signaling, to blame “racism” for the government-inflicted and self-inflicted failings of blacks, to suppress religion, to undermine the economy in the name of a pseudo-science (“climate science”), and to destroy the social and economic lives of tens of millions of Americans by responding hysterically to the ever-changing and often unfounded findings of “science”.

Thinking about Thinking — and Other Things: Evolution

This is the second post in a series. (The first post is here.) This post, like its predecessor, will leave you hanging. But despair not, the series will come to a point — eventually. In the meantime, enjoy the ride.

Evolution is simply change in organic (living) objects. Evolution, as a subject of scientific inquiry, is an attempt to explain how humans (and other animals) came to be what they are today.

Evolution (as a discipline) is as much scientism as it is science. Scientism, according to thefreedictionary.com, is “the uncritical application of scientific or quasi-scientific methods to inappropriate fields of study or investigation.” When scientists proclaim truths instead of propounding hypotheses, they are guilty of practicing scientism. Two notable scientistic scientists are Richard Dawkins and Peter Singer. It is unsurprising that Dawkins and Singer are practitioners of scientism. Both are strident atheists, and strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side.

Dawkins, Singer, and many other scientistic atheists share an especially “religious” view of evolution. In brief, they seem to believe that evolution rules out God. Evolution rules out nothing. Evolution may be true in outline, but it does not bear close inspection. On that point, I turn to David Gelernter’s “Giving Up Darwin” (Claremont Review of Books, Spring 2019):

Darwin himself had reservations about his theory, shared by some of the most important biologists of his time. And the problems that worried him have only grown more substantial over the decades. In the famous “Cambrian explosion” of around half a billion years ago, a striking variety of new organisms—including the first-ever animals—pop up suddenly in the fossil record over a mere 70-odd million years. This great outburst followed many hundreds of millions of years of slow growth and scanty fossils, mainly of single-celled organisms, dating back to the origins of life roughly three and a half billion years ago.

Darwin’s theory predicts that new life forms evolve gradually from old ones in a constantly branching, spreading tree of life. Those brave new Cambrian creatures must therefore have had Precambrian predecessors, similar but not quite as fancy and sophisticated. They could not have all blown out suddenly, like a bunch of geysers. Each must have had a closely related predecessor, which must have had its own predecessors: Darwinian evolution is gradual, step-by-step. All those predecessors must have come together, further back, into a series of branches leading down to the (long ago) trunk.

But those predecessors of the Cambrian creatures are missing. Darwin himself was disturbed by their absence from the fossil record. He believed they would turn up eventually. Some of his contemporaries (such as the eminent Harvard biologist Louis Agassiz) held that the fossil record was clear enough already, and showed that Darwin’s theory was wrong. Perhaps only a few sites had been searched for fossils, but they had been searched straight down. The Cambrian explosion had been unearthed, and beneath those Cambrian creatures their Precambrian predecessors should have been waiting—and weren’t. In fact, the fossil record as a whole lacked the upward-branching structure Darwin predicted.

The trunk was supposed to branch into many different species, each species giving rise to many genera, and towards the top of the tree you would find so much diversity that you could distinguish separate phyla—the large divisions (sponges, mosses, mollusks, chordates, and so on) that comprise the kingdoms of animals, plants, and several others—take your pick. But, as [David] Berlinski points out, the fossil record shows the opposite: “representatives of separate phyla appearing first followed by lower-level diversification on those basic themes.” In general, “most species enter the evolutionary order fully formed and then depart unchanged.” The incremental development of new species is largely not there. Those missing pre-Cambrian organisms have still not turned up. (Although fossils are subject to interpretation, and some biologists place pre-Cambrian life-forms closer than others to the new-fangled Cambrian creatures.)

Some researchers have guessed that those missing Precambrian precursors were too small or too soft-bodied to have made good fossils. Meyer notes that fossil traces of ancient bacteria and single-celled algae have been discovered: smallness per se doesn’t mean that an organism can’t leave fossil traces—although the existence of fossils depends on the surroundings in which the organism lived, and the history of the relevant rock during the ages since it died. The story is similar for soft-bodied organisms. Hard-bodied forms are more likely to be fossilized than soft-bodied ones, but many fossils of soft-bodied organisms and body parts do exist. Precambrian fossil deposits have been discovered in which tiny, soft-bodied embryo sponges are preserved—but no predecessors to the celebrity organisms of the Cambrian explosion.

This sort of negative evidence can’t ever be conclusive. But the ever-expanding fossil archives don’t look good for Darwin, who made clear and concrete predictions that have (so far) been falsified—according to many reputable paleontologists, anyway. When does the clock run out on those predictions? Never. But any thoughtful person must ask himself whether scientists today are looking for evidence that bears on Darwin, or looking to explain away evidence that contradicts him. There are some of each. Scientists are only human, and their thinking (like everyone else’s) is colored by emotion.

Yes, emotion, the thing that colors thought. Emotion is something that humans and other animals have. If Darwin and his successors are correct, emotion must be a faculty that improves the survival and reproductive fitness of a species.

But that can’t be true, because emotion is the spark that lights murder, genocide, and war. World War II alone is said to have occasioned the deaths of some seventy to eighty-five million humans. Prominent among those killed were six million Ashkenazi Jews, members of a distinctive branch of humanity whose members (on average) are significantly more intelligent than other branches, and who have contributed beneficially to science, literature, and the arts (especially music).

The evil by-products of emotion – such as the near-extermination of peoples (Ashkenazi Jews among them) – should cause one to doubt that the persistence of a trait in the human population means that the trait is beneficial to survival and reproduction.

David Berlinski, in The Devil’s Delusion: Atheism and Its Scientific Pretensions, addresses the lack of evidence for evolution before striking down the notion that persistent traits are necessarily beneficial:

At the very beginning of his treatise Vertebrate Paleontology and Evolution, Robert Carroll observes quite correctly that “most of the fossil record does not support a strictly gradualistic account” of evolution. A “strictly gradualistic” account is precisely what Darwin’s theory demands: It is the heart and soul of the theory….

In a research survey published in 2001, and widely ignored thereafter, the evolutionary biologist Joel Kingsolver reported that in sample sizes of more than one thousand individuals, there was virtually no correlation between specific biological traits and either reproductive success or survival. “Important issues about selection,” he remarked with some understatement, “remain unresolved.”

Of those important issues, I would mention prominently the question whether natural selection exists at all.

Computer simulations of Darwinian evolution fail when they are honest and succeed only when they are not. Thomas Ray has for years been conducting computer experiments in an artificial environment that he has designated Tierra. Within this world, a shifting population of computer organisms meet, mate, mutate, and reproduce.

Sandra Blakeslee, writing for The New York Times, reported the results under the headline “Computer ‘Life Form’ Mutates in an Evolution Experiment: Natural Selection Is Found at Work in a Digital World.”

Natural selection found at work? I suppose so, for as Blakeslee observes with solemn incomprehension, “the creatures mutated but showed only modest increases in complexity.” Which is to say, they showed nothing of interest at all. This is natural selection at work, but it is hardly work that has worked to intended effect.

What these computer experiments do reveal is a principle far more penetrating than any that Darwin ever offered: There is a sucker born every minute….

“Contemporary biology,” [Daniel Dennett] writes, “has demonstrated beyond all reasonable doubt that natural selection— the process in which reproducing entities must compete for finite resources and thereby engage in a tournament of blind trial and error from which improvements automatically emerge— has the power to generate breathtakingly ingenious designs” (italics added).

These remarks are typical in their self-enchanted self-confidence. Nothing in the physical sciences, it goes without saying— right?— has been demonstrated beyond all reasonable doubt. The phrase belongs to a court of law. The thesis that improvements in life appear automatically represents nothing more than Dennett’s conviction that living systems are like elevators: If their buttons are pushed, they go up. Or down, as the case may be. Although Darwin’s theory is very often compared favorably to the great theories of mathematical physics on the grounds that evolution is as well established as gravity, very few physicists have been heard observing that gravity is as well established as evolution. They know better and they are not stupid….

The greater part of the debate over Darwin’s theory is not in service to the facts. Nor to the theory. The facts are what they have always been: They are unforthcoming. And the theory is what it always was: It is unpersuasive. Among evolutionary biologists, these matters are well known. In the privacy of the Susan B. Anthony faculty lounge, they often tell one another with relief that it is a very good thing the public has no idea what the research literature really suggests.

“Darwin?” a Nobel laureate in biology once remarked to me over his bifocals. “That’s just the party line.”

In the summer of 2007, Eugene Koonin, of the National Center for Biotechnology Information at the National Institutes of Health, published a paper entitled “The Biological Big Bang Model for the Major Transitions in Evolution.”

The paper is refreshing in its candor; it is alarming in its consequences. “Major transitions in biological evolution,” Koonin writes, “show the same pattern of sudden emergence of diverse forms at a new level of complexity” (italics added). Major transitions in biological evolution? These are precisely the transitions that Darwin’s theory was intended to explain. If those “major transitions” represent a “sudden emergence of new forms,” the obvious conclusion to draw is not that nature is perverse but that Darwin was wrong….

Koonin is hardly finished. He has just started to warm up. “In each of these pivotal nexuses in life’s history,” he goes on to say, “the principal ‘types’ seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate ‘grades’ or intermediate forms between different types are detectable.”…

[H]is views are simply part of a much more serious pattern of intellectual discontent with Darwinian doctrine. Writing in the 1960s and 1970s, the Japanese mathematical biologist Motoo Kimura argued that on the genetic level— the place where mutations take place— most changes are selectively neutral. They do nothing to help an organism survive; they may even be deleterious…. Kimura was perfectly aware that he was advancing a powerful argument against Darwin’s theory of natural selection. “The neutral theory asserts,” he wrote in the introduction to his masterpiece, The Neutral Theory of Molecular Evolution, “that the great majority of evolutionary changes at the molecular level, as revealed by comparative studies of protein and DNA sequences, are caused not by Darwinian selection but by random drift of selectively neutral or nearly neutral mutations” (italics added)….

Writing in the Proceedings of the National Academy of Sciences, the evolutionary biologist Michael Lynch observed that “Dawkins’s agenda has been to spread the word on the awesome power of natural selection.” The view that results, Lynch remarks, is incomplete and therefore “profoundly misleading.” Lest there be any question about Lynch’s critique, he makes the point explicitly: “What is in question is whether natural selection is a necessary or sufficient force to explain the emergence of the genomic and cellular features central to the building of complex organisms.”
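Kimura’s “random drift of selectively neutral … mutations,” quoted above, lends itself to a simple illustration. The following toy Wright-Fisher simulation (a minimal sketch with made-up parameters, not drawn from any of the quoted sources) tracks an allele that confers no fitness advantage whatsoever; its frequency wanders, and it is eventually fixed or lost, by sampling chance alone:

```python
import random

random.seed(42)

def wright_fisher(pop_size=100, start_freq=0.5, max_gens=10_000):
    """Track a selectively neutral allele until it fixes or is lost.

    Each generation is a binomial resampling of the previous one:
    there are no fitness differences at all, only sampling chance
    ("drift"). Returns the final allele count and generations elapsed.
    """
    count = int(pop_size * start_freq)
    gens = 0
    while 0 < count < pop_size and gens < max_gens:
        p = count / pop_size
        # Each of pop_size offspring inherits the allele with
        # probability equal to its current frequency.
        count = sum(1 for _ in range(pop_size) if random.random() < p)
        gens += 1
    return count, gens

# Starting at 50% frequency, a neutral allele should fix in roughly
# half of all runs and vanish in the other half, purely by chance.
fixed = sum(1 for _ in range(200) if wright_fisher()[0] == 100)
print(f"neutral allele fixed in {fixed} of 200 runs (expected ~100)")
```

The point of the sketch is only that substantial evolutionary change at this level requires no selection at all, which is Kimura’s claim.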

Survival and reproduction depend on many traits. A particular trait, considered in isolation, may seem to be helpful to the survival and reproduction of a group. But that trait may not be among the particular collection of traits that is most conducive to the group’s survival and reproduction. If that is the case, the trait will become less prevalent.

Alternatively, if the trait is an essential member of the collection that is conducive to survival and reproduction, it will survive. But its survival depends on the other traits. The fact that X is a “good trait” does not, in itself, ensure the proliferation of X. And X will become less prevalent if other traits become more important to survival and reproduction.

In any event, it is my view that genetic fitness for survival has become almost irrelevant in places like the United States. The rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.

In fact, there is a supportable hypothesis that humans in cosseted realms (i.e., the West) are, on average, becoming less intelligent. But, first, it is necessary to explain why it seemed for a while that humans were becoming more intelligent.

David Robson is on the case:

When the researcher James Flynn looked at [IQ] scores over the past century, he discovered a steady increase – the equivalent of around three points a decade. Today, that has amounted to 30 points in some countries.

Although the cause of the Flynn effect is still a matter of debate, it must be due to multiple environmental factors rather than a genetic shift.

Perhaps the best comparison is our change in height: we are 11cm (around 5 inches) taller today than in the 19th Century, for instance – but that doesn’t mean our genes have changed; it just means our overall health has changed.

Indeed, some of the same factors may underlie both shifts. Improved medicine, reducing the prevalence of childhood infections, and more nutritious diets, should have helped our bodies to grow taller and our brains to grow smarter, for instance. Some have posited that the increase in IQ might also be due to a reduction of the lead in petrol, which may have stunted cognitive development in the past. The cleaner our fuels, the smarter we became.

This is unlikely to be the complete picture, however, since our societies have also seen enormous shifts in our intellectual environment, which may now train abstract thinking and reasoning from a young age. In education, for instance, most children are taught to think in terms of abstract categories (whether animals are mammals or reptiles, for instance). We also lean on increasingly abstract thinking to cope with modern technology. Just think about a computer and all the symbols you have to recognise and manipulate to do even the simplest task. Growing up immersed in this kind of thinking should allow everyone [hyperbole alert] to cultivate the skills needed to perform well in an IQ test….

[Psychologist Robert Sternberg] is not alone in questioning whether the Flynn effect really represented a profound improvement in our intellectual capacity, however. James Flynn himself has argued that it is probably confined to some specific reasoning skills. In the same way that different physical exercises may build different muscles – without increasing overall “fitness” – we have been exercising certain kinds of abstract thinking, but that hasn’t necessarily improved all cognitive skills equally. And some of those other, less well-cultivated, abilities could be essential for improving the world in the future.

Here comes the best part:

You might assume that the more intelligent you are, the more rational you are, but it’s not quite this simple. While a higher IQ correlates with skills such as numeracy, which is essential to understanding probabilities and weighing up risks, there are still many elements of rational decision making that cannot be accounted for by a lack of intelligence.

Consider the abundant literature on our cognitive biases. Something that is presented as “95% fat-free” sounds healthier than “5% fat”, for instance – a phenomenon known as the framing bias. It is now clear that a high IQ does little to help you avoid this kind of flaw, meaning that even the smartest people can be swayed by misleading messages.

People with high IQs are also just as susceptible to the confirmation bias – our tendency to only consider the information that supports our pre-existing opinions, while ignoring facts that might contradict our views. That’s a serious issue when we start talking about things like politics.

Nor can a high IQ protect you from the sunk cost bias – the tendency to throw more resources into a failing project, even if it would be better to cut your losses – a serious issue in any business. (This was, famously, the bias that led the British and French governments to continue funding Concorde planes, despite increasing evidence that it would be a commercial disaster.)

Highly intelligent people are also not much better at tests of “temporal discounting”, which require you to forgo short-term gains for greater long-term benefits. That’s essential, if you want to ensure your comfort for the future.

Besides a resistance to these kinds of biases, there are also more general critical thinking skills – such as the capacity to challenge your assumptions, identify missing information, and look for alternative explanations for events before drawing conclusions. These are crucial to good thinking, but they do not correlate very strongly with IQ, and do not necessarily come with higher education. One study in the USA found almost no improvement in critical thinking throughout many people’s degrees.

Given these looser correlations, it would make sense that the rise in IQs has not been accompanied by a similarly miraculous improvement in all kinds of decision making.

So much for the bright people who promote and pledge allegiance to socialism and its various manifestations (e.g., the Green New Deal, and Medicare for All). So much for the bright people who suppress speech with which they disagree because it threatens the groupthink that binds them.

Robson also discusses evidence of dysgenic effects in IQ:

Whatever the cause of the Flynn effect, there is evidence that we may have already reached the end of this era – with the rise in IQs stalling and even reversing. If you look at Finland, Norway and Denmark, for instance, the turning point appears to have occurred in the mid-90s, after which average IQs dropped by around 0.2 points a year. That would amount to a seven-point difference between generations.

Psychologist (and intelligence specialist) James Thompson has addressed dysgenic effects at his blog on the website of The Unz Review. In particular, he had a lot to say about the work of an intelligence researcher named Michael Woodley. Here’s a sample from a post by Thompson:

We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.

Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with say, your grandparents.

Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.

It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”.  This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.

So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong?

From a later post by Thompson:

The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

Woodley’s claim is based on a set of papers written since 2013, which have been recently reviewed by [Matthew] Sarraf.

The review is unusual, to say the least. It is rare to read so positive a judgment on a young researcher’s work, and it is extraordinary that one researcher has changed the debate about ability levels across generations, and all this in a few years since starting publishing in psychology.

The table in that review which summarizes the main findings is shown below. As you can see, the range of effects is very variable, so my rough estimate of 1 point per decade is a stab at calculating a median. It is certainly less than the Flynn Effect in the 20th Century, though it may now be part of the reason for the falling of that effect, now often referred to as a “negative Flynn effect”….

Here are the findings which I have arranged by generational decline (taken as 25 years).

  • Colour acuity, over 20 years (0.8 generation) 3.5 drop/decade.
  • 3D rotation ability, over 37 years (1.5 generations) 4.8 drop/decade.
  • Reaction times, females only, over 40 years (1.6 generations) 1.8 drop/decade.
  • Working memory, over 85 years (3.4 generations) 0.16 drop/decade.
  • Reaction times, over 120 years (4.8 generations) 0.57-1.21 drop/decade.
  • Fluctuating asymmetry, over 160 years (6.4 generations) 0.16 drop/decade.

Either the measures are considerably different, and do not tap the same underlying loss of mental ability, or the drop is unlikely to be caused by dysgenic decrements from one generation to another. Bar massive dying out of populations, changes do not come about so fast from one generation to the next. The drops in ability are real, but the reason for the falls are less clear. Gathering more data sets would probably clarify the picture, and there is certainly cause to argue that on various real measures there have been drops in ability. Whether this is dysgenics or some other insidious cause is not yet clear to me.…

My view is that whereas formerly the debate was only about the apparent rise in ability, discussions are now about the co-occurrence of two trends: the slowing down of the environmental gains and the apparent loss of genetic quality. In the way that James Flynn identified an environmental/cultural effect, Michael Woodley has identified a possible genetic effect, and certainly shown that on some measures we are doing less well than our ancestors.

How will they be reconciled? Time will tell, but here is a prediction. I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.

Here’s my hypothesis: The less-intelligent portions of the populace are breeding faster than the more-intelligent portions. As I said earlier, the rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.

Thinking about Thinking … and Other Things: Time, Existence, and Science

This is the first post in a series. It will leave you hanging. But despair not, the series will come to a point — eventually. In the meantime, enjoy the ride.

Before we can consider time and existence, we must consider whether they are illusions.

Regarding time, there’s a reasonable view that nothing exists but the present — the now — or, rather, an infinite number of nows. In the conventional view, one now succeeds another, which creates the illusion of the passage of time. In the view of some physicists, however, all nows exist at once, and we merely perceive a sequential slice of all the nows. Inasmuch as there seems to be general agreement as to the contents of the slice, the only evidence that many nows exist in parallel is claims about such phenomena as clairvoyance, visions, and co-location. I won’t wander into that thicket.

A problem with the conventional view of time is that not everyone perceives the same now at the same time. Well, not according to Einstein’s special theory of relativity, at least. A problem with the view that all nows exist at once (sometimes called the block-universe view) is that it’s purely a mathematical concoction. Unless you’re a clairvoyant, visionary, or the like.

Oh, wait, the special theory of relativity is also a mathematical concoction. Further, it doesn’t really show that not everyone perceives the same now at the same time. The key to special relativity — the Lorentz transformation — enables one to reconcile the various nows; that is, to be a kind of omniscient observer. So, in effect, there really is a now.
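For reference, the Lorentz transformation invoked here can be stated compactly. For two inertial frames in standard configuration (relative velocity v along the x-axis), the time and position coordinates of an event transform as:

```latex
t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad
x' = \gamma\,(x - vt), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

The relativity of simultaneity enters through the vx/c² term: two events with the same t but different x get different t′ in the moving frame. Yet because the transformation is invertible, an observer who knows v can always recompute any other frame’s coordinates from his own, which is the sense in which the various nows can be reconciled.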

This leads to the question of what distinguishes one now from another now. The answer is change. If things didn’t change, there would be only a now, not an infinite series of them. More precisely, if things didn’t seem to change, time would seem to stand still. This is another way of saying that a succession of nows creates the illusion of the passage of time.

What happens between one now and the next now? Change, not the passage of time. What we think of as the passage of time is really an artifact of change.

Time is really nothing more than the counting of events that supposedly occur at set intervals — the “ticking” of an atomic clock, for example. I say supposedly because there’s no absolute measure of time against which one can calibrate the “ticking” of an atomic clock, or any other kind of clock.

In summary: Clocks don’t measure time. Clocks merely change (“tick”) at supposedly regular intervals, and those intervals are used in the representation of other things, such as the speed of an automobile or the duration of a 100-yard dash.

Time is an illusion. Or, if that conclusion bothers you, let’s just say that time is an ephemeral quality that depends on change.

Change is real. But change in what — of what does reality consist?

There are two basic views of reality. One of them, posited by Bishop Berkeley and his followers, is that the only reality is that which goes on in one’s own mind. But that’s just another way of saying that humans don’t perceive the external world directly. Rather, it is perceived second-hand, through the senses that detect external phenomena and transmit signals to the brain, which is where “reality” is formed.

There is an extreme version of the Berkeleyan view: Everything perceived is only a kind of dream or illusion. But even a dream or illusion is something, not nothing, so there is some kind of existence.

The sensible view, held by most humans (even most scientists), is that there is an objective reality out there, beyond the confines of one’s mind. How can so many people agree about the existence of certain things (e.g., Cleveland) unless there’s something out there?

Over the ages, scientists have been able to describe objective reality in ever more minute detail. But what is it? What is the stuff of which it consists? No one knows or is likely ever to know. All we know is that stuff changes, and those changes give rise to what we call time.

The big question is how things came to exist. This has been debated for millennia. There are two schools of thought:

Things just exist and have always existed.

Things can’t come into existence on their own, so some non-thing must have caused things to exist.

The second option leaves open the question of how the non-thing came into existence, and can be interpreted as a variant of the first option; that is, some non-thing just exists and has always existed.

How can the big question be resolved? It can’t be resolved by facts or logic. If it could be, there would be wide agreement about the answer. (Not perfect agreement because a lot of human beings are impervious to facts and logic.) But there isn’t and never will be wide agreement.

Why is that? Can’t scientists someday trace the existence of things – call it the universe – back to a source? Isn’t that what the Big Bang Theory is all about? No and no. If the universe has always existed, there’s no source to be tracked down. And if the universe was created by a non-thing, how can scientists detect the non-thing if they’re only equipped to deal with things?

The Big Bang Theory posits a definite beginning, at a more or less definite point in time. But even if the theory is correct, it doesn’t tell us how that beginning began. Did things start from scratch, and if they did, what caused them to do so? And maybe they didn’t; maybe the Big Bang was just the result of the collapse of a previous universe, which was the result of a previous one, etc., etc., etc., ad infinitum.

Some scientists who think about such things (most of them, I suspect) don’t believe that the universe was created by a non-thing. But they don’t believe it because they don’t want to believe it. The much smaller number of similar scientists who believe that the universe was created by a non-thing hold that belief because they want to hold it.

That’s life in the world of science, just as it is in the world of non-science, where believers, non-believers, and those who can’t make up their minds find all kinds of ways in which to rationalize what they believe (or don’t believe), even though they know less than scientists do about the universe.

Let’s just accept that and move on to another big question: What is it that exists?  It’s not “stuff” as we usually think of it – like mud or sand or water droplets. It’s not even atoms and their constituent particles. Those are just convenient abstractions for what seem to be various manifestations of electromagnetic forces, or emanations thereof, such as light.

But what are electromagnetic forces? And what does their behavior (to be anthropomorphic about it) have to do with the way that things like planets, stars, and galaxies move in relation to one another? There are lots of theories, but none of them has as yet gained wide acceptance by scientists. And even if one theory does gain wide acceptance, there’s no telling how long before it’s supplanted by a new theory.

That’s the thing about science: It’s a process, not a particular result. Human understanding of the universe offers a good example. Here’s a short list of beliefs about the universe that were considered true by scientists, and then rejected:

Thales (c. 620 – c. 530 BC): The Earth rests on water.

Anaximenes (c. 540 – c. 475 BC): Everything is made of air.

Heraclitus (c. 540 – c. 450 BC): All is fire.

Empedocles (c. 493 – c. 435 BC): There are four elements: earth, air, fire, and water.

Democritus (c. 460 – c. 370 BC): Atoms (basic elements of nature) come in an infinite variety of shapes and sizes.

Aristotle (384 – 322 BC): Heavy objects must fall faster than light ones. The universe is a series of crystalline spheres that carry the sun, moon, planets, and stars around Earth.

Ptolemy (90 – 168 AD): Ditto the Earth-centric universe, with a mathematical description.

Copernicus (1473 – 1543): The planets revolve around the sun in perfectly circular orbits.

Brahe (1546 – 1601): The planets revolve around the sun, but the sun and moon revolve around Earth.

Kepler (1571 – 1630): The planets revolve around the sun in elliptical orbits, and their trajectory is governed by magnetism.

Newton (1642 – 1727): The course of the planets around the sun is determined by gravity, which is a force that acts at a distance. Light consists of corpuscles; ordinary matter is made of larger corpuscles. Space and time are absolute and uniform.

Rutherford (1871 – 1937), Bohr (1885 – 1962), and others: The atom has a center (nucleus), which consists of two elemental particles, the neutron and proton.

Einstein (1879 – 1955): The universe is neither expanding nor shrinking.

That’s just a small fraction of the mistaken and incomplete theories that have held sway in the field of physics. There are many more such mistakes and lacunae in the other natural sciences: biology, chemistry, and earth science — each of which, like physics, has many branches. And in all of the branches there are many unresolved questions. For example, the Standard Model of particle physics, despite its complexity, is known to be incomplete. And it is thought (by some) to be unduly complex; that is, there may be a simpler underlying structure waiting to be discovered.

Given all of this, it is grossly presumptuous to claim that climate science – to take a salient example — is “settled” when the phenomena that it encompasses are so varied, complex, often poorly understood, and often given short shrift (e.g., the effects of solar radiation on the intensity of cosmic radiation reaching Earth, which affects low-level cloud formation, which affects atmospheric temperature and precipitation).

Anyone who says that any aspect of science is “settled” is either ignorant, stupid, or freighted with a political agenda. Anyone who says that “science is real” is merely parroting an empty slogan.

Matt Ridley (quoted by Judith Curry) explains:

In a lecture at Cornell University in 1964, the physicist Richard Feynman defined the scientific method. First, you guess, he said, to a ripple of laughter. Then you compute the consequences of your guess. Then you compare those consequences with the evidence from observations or experiments. “If [your guess] disagrees with experiment, it’s wrong. In that simple statement is the key to science. It does not make a difference how beautiful the guess is, how smart you are, who made the guess or what his name is…it’s wrong….

In general, science is much better at telling you about the past and the present than the future. As Philip Tetlock of the University of Pennsylvania and others have shown, forecasting economic, meteorological or epidemiological events more than a short time ahead continues to prove frustratingly hard, and experts are sometimes worse at it than amateurs, because they overemphasize their pet causal theories….

Peer review is supposed to be the device that guides us away from unreliable heretics. Investigations show that peer review is often perfunctory rather than thorough; often exploited by chums to help each other; and frequently used by gatekeepers to exclude and extinguish legitimate minority scientific opinions in a field.

Herbert Ayres, an expert in operations research, summarized the problem well several decades ago: “As a referee of a paper that threatens to disrupt his life, [a professor] is in a conflict-of-interest position, pure and simple. Unless we’re convinced that he, we, and all our friends who referee have integrity in the upper fifth percentile of those who have so far qualified for sainthood, it is beyond naive to believe that censorship does not occur.” Rosalyn Yalow, winner of the Nobel Prize in medicine, was fond of displaying the letter she received in 1955 from the Journal of Clinical Investigation noting that the reviewers were “particularly emphatic in rejecting” her paper.

The health of science depends on tolerating, even encouraging, at least some disagreement. In practice, science is prevented from turning into religion not by asking scientists to challenge their own theories but by getting them to challenge each other, sometimes with gusto.

As I said, there is no such thing as “settled science”. Real science is a vast realm of unsettled uncertainty. Newton put it thus:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

Certainty is the last refuge of a person whose mind is closed to new facts and new ways of looking at old facts.

How uncertain is the real world, especially the world of events yet to come? Consider a simple, three-parameter model in which event C depends on the occurrence of event B, which depends on the occurrence of event A; in which the value of the outcome is the summation of the values of the events that occur; and in which the value of each event is binary — 1 if it happens, 0 if it doesn’t. Even in a simple model like that, there is a wide range of possible outcomes; thus:

A doesn’t occur (B and C therefore don’t occur) = 0.

A occurs but B fails to occur (and C therefore doesn’t occur) = 1.

A occurs, B occurs, but C fails to occur = 2.

A occurs, B occurs, and C occurs = 3.

Even when A occurs, subsequent events (or non-events) will yield final outcomes ranging in value from 1 to 3. A factor of 3 is a big deal. It’s why .300 hitters make millions of dollars a year and .100 hitters sell used cars.
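The outcome model above is small enough to enumerate exhaustively. Here is a minimal sketch (the function name and the 0/1 encoding are mine, chosen to match the model, not anything in the original):

```python
from itertools import product

def outcome_value(a, b, c):
    """Value of the chained events: B can occur only if A did, C only if B did.

    Each argument is 1 if the event would occur, 0 if not; the dependency
    chain zeroes out any event whose prerequisite failed.
    """
    b = b and a   # B actually occurs only if A occurred
    c = c and b   # C actually occurs only if B occurred
    return a + b + c

# Enumerate all 2^3 raw possibilities and collect the distinct outcome values.
values = sorted({outcome_value(a, b, c) for a, b, c in product([0, 1], repeat=3)})
print(values)  # [0, 1, 2, 3]
```

Even with only three binary events, the conditional values span the full range 0 through 3 — e.g., `outcome_value(1, 0, 1)` is 1, because C cannot contribute once B has failed, no matter what C “would have” done.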

Let’s leave it at that and move on.

The Iraq War in Retrospect

The Iraq War has been called many things, “immoral” being among the leading adjectives for it. Was it altogether immoral? Was it immoral to remain in favor of the war after it was (purportedly) discovered that Saddam Hussein didn’t have an active program for the production of weapons of mass destruction? Or was the war simply misdirected from its proper — and moral — purpose: the service of Americans’ interests by stabilizing the Middle East? I address those and other questions about the war in what follows.

THE WAR-MAKING POWER AND ITS PURPOSE

The sole justification for the United States government is the protection of Americans’ interests. Those interests are spelled out broadly in the Preamble to the Constitution: justice, domestic tranquility, the common defense, the general welfare, and the blessings of liberty.

Contrary to leftist rhetoric, the term “general welfare” in the Preamble (and in Article I, Section 8) doesn’t grant broad power to the national government to do whatever it deems to be “good”. “General welfare” — general well-being, not the well-being of particular regions or classes — is merely one of the intended effects of the enumerated and limited powers granted to the national government by conventions of the States.

One of the national government’s specified powers is the making of war. In the historical context of the adoption of the Constitution, it is clear that the purpose of the war-making power is to defend Americans and their legitimate interests: liberty generally and, among other things, the free flow of trade between American and foreign entities. The war-making power carries with it the implied power to do harm to foreigners in the course of waging war. I say that because the Framers, many of whom fought for independence from Britain, knew from experience that war, of necessity, must sometimes cause damage to the persons and property of non-combatants.

In some cases, the only way to serve the interests of Americans is to inflict deliberate damage on non-combatants. That was the case, for example, when U.S. air forces dropped atomic bombs on Hiroshima and Nagasaki to force Japan’s surrender and avoid the deaths and injuries of perhaps a million Americans. Couldn’t Japan have been “quarantined” instead, once its forces had been driven back to the homeland? Perhaps, but at great cost to Americans. Luckily, in those days American leaders understood that the best way to ensure that an enemy didn’t resurrect its military power was to defeat it unconditionally and to occupy its homeland. You will have noticed that as a result, Germany and Japan are no longer military threats to the U.S., whereas Iraq remained one after the Gulf War of 1990-1991 because Saddam wasn’t deposed. Russia, which the U.S. didn’t defeat militarily — only symbolically — is resurgent militarily. China, which wasn’t even defeated symbolically in the Cold War, is similarly resurgent, and bent on regional if not global hegemony, necessarily to the detriment of Americans’ interests. To paraphrase: There is no substitute for unconditional military victory.

That is a hard and unfortunate truth, but it eludes many persons, especially those of the left. They suffer under dual illusions, namely, that the Constitution is an outmoded document and that “world opinion” trumps the Constitution and the national sovereignty created by it. Neither illusion is shared by Americans who want to live in something resembling liberty and to enjoy the advantages pertaining thereto, including prosperity.

CASUS BELLI

The invasion of Iraq in 2003 by the armed forces of the U.S. government (and those of other nations) had explicit and implicit justifications. The explicit justifications for the U.S. government’s actions are spelled out in the Authorization for Use of Military Force Against Iraq of 2002 (AUMF). It passed the House by a vote of 296 – 133 and the Senate by a vote of 77 – 23, and was signed into law by President George W. Bush on October 16, 2002.

There are some who focus on the “weapons of mass destruction” (WMD) justification, which figures prominently in the “whereas” clauses of the AUMF. But the war, as it came to pass when Saddam failed to respond to legitimate demands spelled out in the AUMF, had a broader justification than whatever Saddam was (or wasn’t) doing with WMD. The final “whereas” puts it succinctly: it is in the national security interests of the United States to restore international peace and security to the Persian Gulf region.

An unstated but clearly understood implication of “peace and security in the Persian Gulf region” was the security of the region’s oil supply against Saddam’s capriciousness. The mantra “no blood for oil” to the contrary notwithstanding, it is just as important to defend the livelihoods of Americans as it is to defend their lives — and in many instances it comes to the same thing.

In sum, I disregard the WMD rationale for the Iraq War. The real issue is whether the war secured the stability of the Persian Gulf region (and the Middle East in general). And if it didn’t, why did it fail to do so?

ROADS TAKEN AND NOT TAKEN

One can only speculate about what might have happened in the absence of the Iraq War. For instance, how many more Iraqis might have been killed and tortured by Saddam’s agents? How many more terrorists might have been harbored and financed by Saddam? How long might it have taken him to re-establish his WMD program or build a nuclear weapons program? Saddam, who started it all with the invasion of Kuwait, wasn’t a friend of the U.S. or the West in general. The U.S. isn’t the world’s policeman, but the U.S. government has a moral obligation to defend the interests of Americans, preemptively if necessary.

By the same token, one can only speculate about what might have happened if the U.S. government had prosecuted the war differently than it did, which was “on the cheap”. There weren’t enough boots on the ground to maintain order in the way that it was maintained by the military occupations in Germany and Japan after World War II. Had there been, there wouldn’t have been a kind of “civil war” or general chaos in Iraq after Saddam was deposed. (It was those things, as much as the supposed absence of a WMD program, that turned many Americans against the war.)

Speculation aside, I supported the invasion of Iraq, the removal of Saddam, and the rout of Iraq’s armed forces with the following results in mind:

  • A firm military occupation of Iraq, for some years to come.
  • The presence in Iraq and adjacent waters and airspace of U.S. forces in enough strength to control Iraq and deter misadventures by other nations in the region (e.g., Iran and Syria) and prospective interlopers (e.g., Russia).
  • Israel’s continued survival and prosperity under the large shadow cast by U.S. forces in the region.
  • Secure production and shipment of oil from Iraq and other oil-producing nations in the region.

All of that would have happened but for (a) too few boots on the ground (later remedied in part by the “surge”); (b) premature “nation-building”, which helped to stir up various factions in Iraq; (c) Obama’s premature surrender, which he was shamed into reversing; and (d) Obama’s deal with Iran, with its bundles of cash and blind-eye enforcement that supported Iran’s rearmament and growing boldness in the region. (The idea that Iraq, under Saddam, had somehow contained Iran is baloney; Iran was contained only until its threat to go nuclear found a sucker in Obama.)

In sum, the war was only a partial success because (once again) U.S. leaders failed to wage it fully and resolutely. This was due in no small part to incessant criticism of the war, stirred up and sustained by Democrats and the media.

WHO HAD THE MORAL HIGH GROUND?

In view of the foregoing, the correct answer is: the U.S. government, or those of its leaders who approved, funded, planned, and executed the war with the aim of bringing peace and security to the Persian Gulf region for the sake of Americans’ interests.

The moral high ground was shared by those Americans who, understanding the war’s justification on grounds broader than WMD, remained steadfast in support of the war despite the tumult and shouting that arose from its opponents.

There were Americans whose support of the war was based on the claim that Saddam had or was developing WMD, and whose support ended or became less ardent when WMD seemed not to be in evidence. I wouldn’t presume to judge them harshly for withdrawing their support, but I would judge them myopic for basing it solely on the WMD predicate. And I would judge them harshly if they joined the outspoken opponents of the war, whose opposition I address below.

What about those Americans who supported the war simply because they believed that President Bush and his advisers “knew what they were doing” or out of a sense of patriotism? That is to say, they had no particular reason for supporting the war other than a general belief that its successful execution would be a “good thing”. None of those Americans deserves moral approbation or moral blame. They simply had better things to do with their lives than to parse the reasons for going to war and for continuing it. And it is no one’s place to judge them for not having wasted their time in thinking about something that was beyond their ability to influence. (See the discussion of “public opinion” below.)

What about those Americans who publicly opposed the war, either from the beginning or later? I cannot fault all of them for their opposition — and certainly not those who considered the costs (human and monetary) and deemed them not worth the possible gains.

But there were (and are) others whose opposition to the war was and is problematic:

  • Critics of the apparent absence of an active WMD program in Iraq, who seized on the WMD justification and ignored (or failed to grasp) the war’s broader justification.
  • Political opportunists who simply wanted to discredit President Bush and his party, which included most Democrats (eventually), effete elites generally, and particularly most members of the academic-media-information technology complex.
  • An increasingly large share of the impressionable electorate who could not (and cannot) resist a bandwagon.
  • The young, whose reflexive pro-peace/anti-war posturing leads them to oppose “the establishment” loudly and often violently.

The moral high ground isn’t gained by misguided criticism, posturing, joining a bandwagon, or hormonal emotionalism.

WHAT ABOUT “PUBLIC OPINION”?

Suppose you had concluded that the Iraq War was wrong because the WMD justification seemed to have been proven false as the war went on. Perhaps even worse than false: a fraud perpetrated by officials of the Bush administration, if not by the president himself, to push Congress and “public opinion” toward support for an invasion of Iraq.

If your main worry about Iraq, under Saddam, was the possibility that WMD would be used against Americans, the apparent falsity of the WMD claim — perhaps fraudulent falsity — might well have turned you against the war. Suppose that there were many millions of Americans like you, whose initial support of the war turned to disillusionment as evidence of an active WMD program failed to materialize. Would voicing your opinion on the matter have helped to end the war? Did you have a moral obligation to voice your opinion? And, in any event, should wars be ended because of “public opinion”? I will try to answer those questions in what follows.

The strongest case to be made for the persuasive value of voicing one’s opinion might be found in the median-voter theorem. According to Wikipedia, the median-voter theorem

“states that ‘a majority rule voting system will select the outcome most preferred by the median voter’”….

The median voter theorem rests on two main assumptions, with several others detailed below. The theorem is assuming [sic] that voters can place all alternatives along a one-dimensional political spectrum. It seems plausible that voters could do this if they can clearly place political candidates on a left-to-right continuum, but this is often not the case as each party will have its own policy on each of many different issues. Similarly, in the case of a referendum, the alternatives on offer may cover more than one issue. Second, the theorem assumes that voters’ preferences are single-peaked, which means that voters have one alternative that they favor more than any other. It also assumes that voters always vote, regardless of how far the alternatives are from their own views. The median voter theorem implies that voters have an incentive to vote for their true preferences. Finally, the median voter theorem applies best to a majoritarian election system.

The article later specifies seven assumptions underlying the theorem. None of the assumptions is satisfied in the real world of American politics. And piling on assumptions never favors the truth of a proposition; if all of them must hold, as is the case here, the proposition can be wrong in that many more ways.

There is a weak form of the theorem, which says that

the median voter always casts his or her vote for the policy that is adopted. If there is a median voter, his or her preferred policy will beat any other alternative in a pairwise vote.

That still leaves the crucial assumption that voters are choosing between two options. This is superficially true in the case of a two-person race for office or a yes-no referendum. But, even then, a binary option usually masks non-binary ramifications that voters take into account.

In any case, it is trivially true to say that the preference of the median voter foretells the outcome of a binary election, if the outcome is decided by majority vote and there isn’t a complicating factor like the electoral college. One could say, with equal banality, that the stronger man wins the weight-lifting contest, the outcome of which determines who is the stronger man.
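For the curious, the weak form is easy to see in a toy simulation — a sketch under exactly the idealized assumptions (a one-dimensional spectrum, single-peaked preferences, full turnout) that the real world fails to deliver. Each simulated voter backs whichever of two positions is closer to his ideal point, and the median ideal point beats every challenger in a pairwise majority vote:

```python
import random
import statistics

random.seed(0)
# Ideal points of 101 voters on a 0-100 left-right spectrum.
voters = [random.uniform(0, 100) for _ in range(101)]
median = statistics.median(voters)

def pairwise_winner(x, y, voters):
    """Majority vote between positions x and y, with single-peaked
    preferences: each voter backs the position closer to his ideal point."""
    votes_x = sum(1 for v in voters if abs(v - x) < abs(v - y))
    votes_y = sum(1 for v in voters if abs(v - x) > abs(v - y))
    return x if votes_x > votes_y else y

# The median position wins against every challenger on the spectrum,
# because a majority of ideal points always lies on the median's side.
for challenger in range(0, 101, 5):
    assert pairwise_winner(median, challenger, voters) == median
```

Which is just the weight-lifting contest again: the simulation “proves” only what its single-peaked, one-dimensional assumptions build in.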

Why am I giving so much attention to the median-voter theorem? Because, according to a blogger whose intellectual prowess I respect, if enough Americans believe a policy of the U.S. government to be wrong, the policy might well be rescinded if the responsible elected officials (or, presumably, their prospective successors) believe that the median voter wants the policy rescinded. How would that work?

The following summary of the blogger’s case is what I gleaned from his original post on the subject and several comments and replies. I have inserted parenthetical commentary throughout.

  • The pursuit of the Iraq War after the WMD predicate for it was (seemingly) falsified — hereinafter policy X — was immoral because X led unnecessarily to casualties, devastation, and other costs. (As discussed above, there were other predicates for X and other consequences of X, some of them good, but they don’t seem to matter to the blogger.)
  • Because X was immoral (in the blogger’s reckoning), X should have been rescinded.
  • Rescission would have (might have?/should have?) occurred through the operation of the median-voter theorem if enough persons had made known their opposition to X. (How might the median-voter theorem have applied when X wasn’t on a ballot? See below.)
  • Any person who had taken the time to consider X (taking into account only the WMD predicate and unequivocally bad consequences) could only have deemed it immoral. (The blogger originally excused persons who deemed X proper, but later made a statement equivalent to the preceding sentence. This is a variant of “heads, I win; tails, you lose”.)
  • Having deemed X immoral, a person (i.e., a competent, adult American) would have been morally obliged to make known his opposition to X. Even if the person didn’t know of the spurious median-voter theorem, his opposition to X (which wasn’t on a ballot) would somehow have become known and counted (perhaps in a biased opinion poll conducted by an entity opposed to X) and would therefore have helped to move the median stance of the (selectively) polled fragment of the populace toward opposition to X, whereupon X would be rescinded, according to the median-voter theorem. (Or perhaps vociferous opposition, expressed in public protests, would be reported by the media — especially by those already opposed to X — as indicative of public opinion, whether or not it represented a median view of X.)
  • Further, any competent, adult American who didn’t bother to take the time to evaluate X would have been morally complicit in the continuation of X. (This must be the case because the blogger says so, without knowing each person’s assessment of the slim chance that his view of the matter would affect X, or the opportunity costs of evaluating X and expressing his view of it.)
  • So the only moral course of action, according to the blogger, was for every competent, adult American to have taken the time to evaluate X (in terms of the WMD predicate), to have deemed it immoral (there being no other choice given the constraint just mentioned), and to have made known his opposition to the policy. (This despite the fact that most competent, adult Americans know viscerally or from experience that the median-voter theorem is hooey — more about that below — and that it would therefore have been a waste of their time to get worked up about a policy that wasn’t unambiguously immoral. Further, they were and are rightly reluctant to align themselves with howling mobs and biased media — even by implication, as in a letter to the editor — in protest of a policy that wasn’t unambiguously immoral.)
  • Then, X (which wasn’t on a ballot) would have been rescinded, pursuant to the median-voter theorem (or, properly, the outraged/vociferous-pollee/protester-biased pollster/media theorem). (Except that X wasn’t, in fact, rescinded despite massive outpourings of outrage by small fractions of the populace, which were gleefully reflected in biased polls and reported by biased media. Nor was it rescinded by implication when President Bush was up for re-election — he won. It might have been rescinded by implication when Bush was succeeded by Obama — an opponent of X — but there were many reasons other than X for Obama’s victory: mainly the financial crisis, McCain’s lame candidacy, and a desire by many voters to signal — to themselves, at least — their non-racism by voting for Obama. And X wasn’t doing all that badly at the time of Obama’s election because of the troop “surge” authorized by Bush. Further, Obama’s later attempt to rescind X had consequences that caused him to reverse his attempted rescission, regardless of any lingering opposition to X.)

What about other salient, non-ballot issues? Does “public opinion” make a difference? Sometimes yes, sometimes no. Obamacare, for example, was widely opposed until it was enacted by Congress and signed into law by Obama. It suddenly became popular because much of the populace wants to be on the “winning side” of an issue. (So much for the moral value of public opinion.) Similarly, abortion was widely deemed to be immoral until the Supreme Court legalized it. Suddenly, it began to become acceptable according to “public opinion”. I could go on and on, but you get the idea: Public opinion often follows policy rather than leading it, and its moral value is dubious in any event.

But what about cases where government policy shifted in the aftermath of widespread demonstrations and protests? Did demonstrations and protests lead to the enactment of the Civil Rights Acts of the 1960s? Did they cause the U.S. government to surrender, in effect, to North Vietnam? No and no. From where I sat — and I was a politically aware, voting-age, adult American of the “liberal” persuasion at the time of those events — public opinion had little effect on the officials who were responsible for the Civil Rights Acts or the bug-out from Vietnam.

The civil-rights movement of the 1950s and 1960s and the anti-war movement of the 1960s and 1970s didn’t yield results until years after their inception. And those results didn’t (at the time, at least) represent the views of most Americans who (I submit) were either indifferent or hostile to the advancement of blacks and to the anti-patriotic undertones of the anti-war movement. In both cases, mass protests were used by the media (and incited by the promise of media attention) to shame responsible officials into acting as media elites wanted them to.

Further, it is a mistake to assume that the resulting changes in law (writ broadly to include policy) were necessarily good changes. The stampede to enact civil-rights laws in the 1960s, which hinged not so much on mass protests as on LBJ’s “white guilt” and powers of persuasion, resulted in the political suppression of an entire region, the loss of property rights, and the denial of freedom of association. (See, for example, Christopher Caldwell’s “The Roots of Our Partisan Divide”, Imprimis, February 2020.)

The bug-out from Vietnam foretold the U.S. government’s fecklessness in the Iran hostage crisis; the withdrawal of U.S. forces from Lebanon after the bombing of the Marine barracks there; the failure of G.H.W. Bush to depose Saddam when it would have been easy to do so; the legalistic response to the World Trade Center bombing; the humiliating affair in Somalia; Clinton’s failure to take out Osama bin Laden; Clinton’s tepid response to Saddam’s provocations; nation-building (vice military occupation) in Iraq; and Obama’s attempt to pry defeat from the jaws of something resembling victory in Iraq.

All of that, and more, is symptomatic of the influence that “liberal” elites came to exert on American foreign and defense policy after World War II. Public opinion has been a side show, and protestors have been useful idiots to the cause of “liberal internationalism”, that is, the surrender of Americans’ economic and security interests for the sake of various rapprochements toward “allies” who scorn America when it veers ever so slightly from the road to serfdom, and enemies — Russia and China — who have never changed their spots, despite “liberal” wishful thinking. Handing America’s manufacturing base to China in the name of free trade is of a piece with all the rest.

IN CONCLUSION . . .

It is irresponsible to call a policy immoral without evaluating all of its predicates and consequences. One might as well call the Allied leaders of World War II immoral because they chose war — with all of its predictably terrible consequences — rather than abject surrender.

It is fatuous to ascribe immorality to anyone who was supportive of or indifferent to the war. One might as well ascribe immorality to the economic and political ignoramuses who failed to see that FDR’s policies would prolong the Great Depression, that Social Security and its progeny (Medicare and Medicaid) would become entitlements that paved the way for the central government’s commandeering of vast portions of the economy, or that the so-called social safety net would discourage work and permanently depress economic growth in America.

If I were in the business of issuing moral judgments about the Iraq War, I would condemn the strident anti-war faction for its perfidy.

Bleeding Heart Libertarians (the Blog): Good Riddance

Ist kaputt. Why is it good riddance? See this post and follow the links, most of which lead to posts critical of Bleeding Heart Libertarians.

Bleeding Heart Libertarians (the Blog): A Bibliography of Related Posts

A recent post at Policy of Truth by its proprietor, Irfan Khawaja, prompted me to compile a list of all of the posts that I have written about some of the blog posts and bloggers at Bleeding Heart Libertarians. Though Khawaja and I disagree about a lot, I believe that we agree about the fatuousness of bleeding-heart libertarianism. (BTW, Khawaja’s flaming valedictory, on a different subject, is worth a read.)

Here’s the bibliography, arranged chronologically from March 9, 2011, to September 11, 2014:

The Meaning of Liberty
Peter Presumes to Preach
Positive Liberty vs. Liberty
More Social Justice
On Self-Ownership and Desert
The Killing of bin Laden and His Ilk
In Defense of Subjectivism
The Folly of Pacifism, Again
What Is Libertarianism?
Why Stop at the Death Penalty?
What Is Bleeding-Heart Libertarianism?
The Morality of Occupying Public Property
The Equal-Protection Scam and Same-Sex Marriage
Liberty, Negative Rights, and Bleeding Hearts
Bleeding-Heart Libertarians = Left-Statists
Enough with the Bleeding Hearts Already
Not Guilty of Libertarian Purism
Obama’s Big Lie
Bleeding-Heart Libertarians = Left-Statists (Redux)
Egoism and Altruism
A Case for Redistribution Not Made

“It’s Tough to Make Predictions, Especially about the Future”

A lot of people have said it, or something like it, though probably not Yogi Berra, to whom it’s often attributed.

Here’s another saying, which is also apt here: History does not repeat itself. The historians repeat one another.

I am accordingly amused by something called cliodynamics, which is discussed at length by Amanda Rees in “Are There Laws of History?” (Aeon, May 2020). The Wikipedia article about cliodynamics describes it as

a transdisciplinary area of research integrating cultural evolution, economic history/cliometrics, macrosociology, the mathematical modeling of historical processes during the longue durée [the long term], and the construction and analysis of historical databases. Cliodynamics treats history as science. Its practitioners develop theories that explain such dynamical processes as the rise and fall of empires, population booms and busts, spread and disappearance of religions. These theories are translated into mathematical models. Finally, model predictions are tested against data. Thus, building and analyzing massive databases of historical and archaeological information is one of the most important goals of cliodynamics.

I won’t dwell on the methods of cliodynamics, which involve making up numbers about various kinds of phenomena and then making up models which purport to describe, mathematically, the interactions among the phenomena. Underlying it all is the practitioner’s broad knowledge of historical events, which he converts (with the proper selection of numerical values and mathematical relationships) into such things as the Kondratiev wave, a post-hoc explanation of a series of arbitrarily denominated and subjectively measured economic eras.

In sum, if you seek patterns you will find them, but pattern-making (modeling) is not science. (There’s a lot more here.)

Here’s a simple demonstration of what’s going on with cliodynamics. Using the RANDBETWEEN function of Excel, I generated two columns of random numbers ranging in value from 0 to 1,000, with 1,000 numbers in each column. I designated the values in the left column as x variables and the numbers in the right column as y variables. I then arbitrarily chose the first 10 pairs of numbers and plotted them:

As it turns out, the relationship, loose as it seems, yields a two-tailed p-value of 0.21 — in the language of statistics, a correlation at least this strong would arise from chance alone about 21 percent of the time.

Of course, the relationship is due entirely to chance because it’s the relationship between two sets of random numbers. So much for statistical tests of “significance”.
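Anyone can replicate the demonstration without Excel. Here is a sketch in Python — my stand-in for RANDBETWEEN, so a different seed will yield a different subsample correlation and the 21-percent figure won’t reproduce exactly — using a permutation test to get the two-tailed p-value without any distributional assumptions:

```python
import random

random.seed(1)
# 1,000 (x, y) pairs of independent random integers on [0, 1000],
# mimicking Excel's RANDBETWEEN(0, 1000).
xs = [random.randint(0, 1000) for _ in range(1000)]
ys = [random.randint(0, 1000) for _ in range(1000)]

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# First 10 pairs: a small sample of pure noise can look like a pattern.
r10 = pearson_r(xs[:10], ys[:10])

# Two-tailed permutation p-value: how often does shuffling y produce a
# correlation at least as strong as the one we "found"?
shuffled = ys[:10][:]
trials, exceed = 10_000, 0
for _ in range(trials):
    random.shuffle(shuffled)
    if abs(pearson_r(xs[:10], shuffled)) >= abs(r10):
        exceed += 1
p_value = exceed / trials

# All 1,000 pairs: the correlation collapses toward zero, as it should.
r1000 = pearson_r(xs, ys)
print(f"r(10) = {r10:+.3f}, permutation p = {p_value:.2f}, r(1000) = {r1000:+.3f}")
```

Run it a few times with different seeds and you will occasionally “discover” a subsample with an impressively small p-value — which is the whole point.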

Moreover, I could have found “more significant” relationships had I combed carefully through the 1,000 pairs of random numbers with my pattern-seeking brain.

But being an honest person with scientific integrity, I will show you the plot of all 1,000 pairs of random numbers:

I didn’t bother to find a correlation between the x and y values because there is none. And that’s the messy reality of human history. Yes, there have been many determined (i.e., sought-for) outcomes — such as America’s independence from Great Britain and Hitler’s rise to power. But they are not predetermined outcomes. Their realization depended on the surrounding circumstances of the moment, which were myriad, non-quantifiable, and largely random in relation to the event under examination (the revolution, the putsch, etc.). The outcomes only seem inevitable and predictable in hindsight.

Cliodynamics is a variant of the anthropic principle, which is that the laws of physics appear to be fine-tuned to support human life because we humans happen to be here to observe the laws of physics. In the case of cliodynamics, the past seems to consist of inevitable events because we are here in the present looking back (rather hazily) at the events that occurred in the past.

Cliodynametricians, meet Nostradamus. He “foresaw” the future long before you did.

Insidious Algorithms

Michael Anton inveighs against Big Tech and pseudo-libertarian collaborators in “Dear Avengers of the Free Market” (Law & Liberty, October 5, 2018):

Beyond the snarky attacks on me personally and insinuations of my “racism”—cut-and-paste obligatory for the “Right” these days—the responses by James Pethokoukis and (especially) John Tamny to my Liberty Forum essay on Silicon Valley are the usual sorts of press releases that are written to butter up the industry and its leaders in hopes of . . . what?…

… I am accused of having “a fundamental problem with capitalism itself.” Guilty, if by that is meant the reservations about mammon-worship first voiced by Plato and Aristotle and reinforced by the godfather of capitalism, Adam Smith, in his Theory of Moral Sentiments (the book that Smith himself indicates is the indispensable foundation for his praise of capitalism in the Wealth of Nations). Wealth is equipment, a means to higher ends. In the middle of the last century, the Right rightly focused on unjust impediments to the creation and acquisition of wealth. But conservatism, lacking a deeper understanding of the virtues and of human nature—of what wealth is for—eventually ossified into a defense of wealth as an end in itself. Many, including apparently Pethokoukis and Tamny, remain stuck in that rut to this day and mistake it for conservatism.

Both critics were especially appalled by my daring to criticize modern tech’s latest innovations. Who am I to judge what people want to sell or buy? From a libertarian standpoint, of course, no one may pass judgment. Under this view, commerce has no moral content…. To homo economicus any choice that does not inflict direct harm is ipso facto not subject to moral scrutiny, yet morality is defined as the efficient, non-coercive, undistorted operation of the market.

Naturally, then, Pethokoukis and Tamny scoff at my claim that Silicon Valley has not produced anything truly good or useful in a long time, but has instead turned to creating and selling things that are actively harmful to society and the soul. Not that they deny the claim, exactly. They simply rule it irrelevant. Capitalism has nothing to do with the soul (assuming the latter even exists). To which I again say: When you elevate a means into an end, that end—in not being the thing it ought to be—corrupts its intended beneficiaries.

There are morally neutral economic goods, like guns, which can be used for self-defense or murder. But there are economic goods that undermine morality (e.g., abortion, “entertainment” that glamorizes casual sex) and fray the bonds of mutual trust and respect that are necessary to civil society. (How does one trust a person who treats life and marriage as if they were unworthy of respect?)

There’s a particular aspect of Anton’s piece that I want to emphasize here: Big Tech’s alliance with the left in its skewing of information.

Continuing with Anton:

The modern tech information monopoly is a threat to self-government in at least three ways. First its … consolidation of monopoly power, which the techies are using to guarantee the outcome they want and to suppress dissent. It’s working….

Second, and related, is the way that social media digitizes pitchforked mobs. Aristocrats used to have to fear the masses; now they enable, weaponize, and deploy them…. The grandees of Professorville and Sand Hill Road and Outer Broadway can and routinely do use social justice warriors to their advantage. Come to that, hundreds of thousands of whom, like modern Red Guards, don’t have to be mobilized or even paid. They seek to stifle dissent and destroy lives and careers for the sheer joy of it.

Third and most important, tech-as-time-sucking-frivolity is infantilizing and enstupefying society—corroding the reason-based public discourse without which no republic can exist….

But all the dynamism and innovation Tamny and Pethokoukis praise only emerge from a bedrock of republican virtue. This is the core truth that libertarians seem unable to appreciate. Silicon Valley is undermining that virtue—with its products, with its tightening grip on power, and with its attempt to reengineer society, the economy, and human life.

I am especially concerned here with the practice of tinkering with AI algorithms to perpetuate bias in the name of eliminating it (e.g., here). The bias to be perpetuated, in this case, is blank-slate bias: the mistaken belief that there are no inborn differences between blacks and whites or men and women. It is that belief which underpins affirmative action in employment, which penalizes the innocent, reduces the quality of products and services, and incurs heavy enforcement costs; “head start” programs, which waste taxpayers’ money; and “diversity” programs at universities, which penalize the innocent and set blacks up for failure. Those programs, and many more of their ilk, are generally responsible for heightening social discord rather than reducing it.

In the upside-down world of “social justice” an algorithm is considered biased if it is unbiased; that is, if it reflects the real correlations between race, sex, and ability in certain kinds of endeavors. Charles Murray’s Human Diversity demolishes the blank-slate theory with reams and reams of facts. Social-justice warriors will hate it, just as they hated The Bell Curve, even though they won’t read the later book, just as they didn’t read the earlier one.

Evaluating an Atheistic Argument

I am plowing my way through Theism, Atheism, and Big Bang Cosmology by William Lane Craig and Quentin Smith, and I continue to doubt that it will inform my views about cosmology. I concluded my preliminary thoughts about the book with this:

Craig … sees the hand of God in the Big Bang. The presence of the singularity … had to have been created so that the Big Bang could follow. That’s all well and good, but what was God doing before the Big Bang, that is, in the infinite span of time before 15 billion years ago? (Is it presumptuous of me to ask?) And why should the Big Bang prove God’s existence any more than, say, a universe that came into being at an indeterminate time? The necessity of God (or some kind of creator) arises from the known character of the universe: material effects follow from material causes, which cannot cause themselves. In short, Craig pins too much on the Big Bang, and his argument would collapse if the Big Bang is found to be a figment of observational error.

Later, however, Craig adopts a more reasonable position:

The theist … has no vested interest in denominating the Big Bang as the moment of creation. He is convinced that God created all of space-time reality ex nihilo, and the Big Bang model provides a powerful suggestion as to when that was; on the other hand, if it can be demonstrated that our observable universe originated in a broader spacetime, so be it — in that case it was this wider reality that was the immediate object of God’s creation.

Just so.

Many pages later, after rattling on almost unintelligibly, Smith gets down to brass tacks by offering an actual atheistic argument. It goes like this (with “premise” substituted for “premiss”):

(1) If God exists and there is an earliest state E of the universe, then God created E.

(2) If God created E, then E is ensured either to contain animate creatures or to lead to a subsequent state of the universe that contains animate creatures.

Premise (2) is entailed by two more basic theological premises, namely,

(3) God is omniscient, omnipotent, and perfectly benevolent.

(4) An animate universe is better than an inanimate universe….

[Further]

(5) There is an earliest state of the universe and it is the Big Bang singularity….

The scientific ideas [1, 2, and 5] also give us this premise

(6) The earliest state of the universe is inanimate since the singularity involves the life-hostile conditions of infinite temperature, infinite curvature, and infinite density.

Another scientific idea … the principle of ignorance, give us the summary premise

(7) The Big Bang singularity is inherently unpredictable and lawless and consequently there is no guarantee that it will emit a maximal configuration of particles that will evolve into an animate state of the universe….

(5) and (7) entail

(8) The earliest state of the universe is not ensured to lead to an animate state of the universe.

We now come to the crux of our argument. Given (2), (6), and (8), we can now infer that God could not have created the earliest state of the universe. It then follows, by (1), that God does not exist.

This is a terrible argument, and one that I expect to be demolished by Craig’s response, which I haven’t yet read. Here is my demolition: Smith’s premises (3) and (4) are superfluous to his argument; if they are worth anything, it is to demonstrate the shallowness of his grasp of theistic arguments for the existence of God. Smith’s premise (2) is a non sequitur; the universe does contain animate creatures and might do so even if God didn’t exist. Smith’s (6), (7), and (8) are therefore irrelevant to Smith’s argument. And, by Smith’s own “logic”, God must exist because premise (2) is confirmed by reality.
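For what it’s worth, the bare skeleton of Smith’s derivation can be checked mechanically. In the toy propositional encoding below (my framing, not Smith’s or Craig’s), premises (1), (2), and (8) alone force the conclusion, which confirms that (3) and (4) do no inferential work. Validity, of course, is not soundness: the check says nothing about whether premise (2) is true, and (2) is where the argument founders.

```python
from itertools import product

implies = lambda p, q: (not p) or q

# Propositional variables:
#   g: God exists (and there is an earliest state E of the universe)
#   c: God created E                         -- premise (1): g -> c
#   a: E is ensured to lead to an animate
#      state of the universe                 -- premise (2): c -> a
#                                            -- premise (8): not a
satisfying = [
    (g, c, a)
    for g, c, a in product([False, True], repeat=3)
    if implies(g, c) and implies(c, a) and not a
]

# Every assignment satisfying (1), (2), and (8) makes g false: the
# conclusion "God does not exist" follows without invoking (3) or (4).
assert all(not g for g, c, a in satisfying)
print(f"{len(satisfying)} satisfying assignment(s), all with g = False")
```

Only one assignment survives — everything false — which is just a mechanical way of saying that Smith’s conclusion is smuggled in by his premises.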

Here is my counter-argument:

(I) If God exists:

(A) He must exist infinitely, that is, without beginning.

(B) He is necessarily apart from His creation; therefore, His essence is beyond human comprehension and cannot be characterized anthropomorphically nor judged by any of His creations, except to the extent that if God is a conscious essence, He may enable human beings to know His character and intentions by some means.

(II) The universe, as a material manifestation that is observable by human beings (in part, at least), must have been created by God because material things cannot create themselves. (The processes appealed to by atheists, such as quantum fluctuations and vacuum energy, operate on existing material.)

(III) The universe, as a creation of an infinite God, may have had an indeterminate beginning.

(IV) There is, therefore, no reason to suppose that the Big Bang and all that ensued in the observable universe is the whole of the universe created by God. Rather, the Big Bang, and all that ensued in the observable universe, may be a manifestation of a broader spacetime that is forever beyond the ken of human beings.

(V) Human knowledge of the observable universe is limited to the physical “laws” that can be inferred from what is known of the observable universe since its apparent origin as a material entity.

(VI) It is therefore impossible for human beings to know what processes preceded the origin of the observable universe.

(VII) Because animate, conscious organisms exist in the observable universe, the physical processes (if any) involved in the origination of the observable universe must have been conducive to the development of such organisms. But, by (VI), whether that was by “design” or accident is beyond the ken of human beings.

(VIII) But animate, conscious organisms may be the result of a deliberate act of God, who may enable human beings to know of His existence and his design.

This is an unscientific argument, in that it can’t be falsified by observation. By the same token, Smith’s argument is also unscientific, despite his use of scientific jargon and “scientific” speculations. But the weakness of Smith’s argument is proof (of a kind) that God exists and that He created the universe. That is to say, Smith and countless others like him seem determined to refute the logically necessary existence of God, but their refutations fail because they are illogical.


Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Free Will: A Proof by Example?
A Theory of Everything, Occam’s Razor, and Baseball
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
What Is Time?
Science’s Anti-Scientific Bent
The Tenth Dimension
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Not-So-Random Thoughts (II) (first item)
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Religion, Creation, and Morality
Atheistic Scientism Revisited
Through a Glass Darkly
Existence and Knowledge

Preliminary Thoughts about “Theism, Atheism, and Big Bang Cosmology”

I am in the early sections of Theism, Atheism, and Big Bang Cosmology by William Lane Craig and Quentin Smith, but I am beginning to doubt that it will inform my views about cosmology. (These are spelled out with increasing refinement here, here, here, and here.) The book consists of alternating essays by Craig and Smith, in which Craig defends the classical argument for a creation (the Kalām cosmological argument) against Smith’s counter-arguments.

For one thing, Smith — who takes the position that the universe wasn’t created — seems to pin a lot on the belief prevalent at the time of the book’s publication (1993) that the universe was expanding but at a decreasing rate. It is now believed generally among physicists that the universe is expanding at an accelerating rate. I must therefore assess Smith’s argument in light of the current belief.

For another thing, Craig and Smith (in the early going, at least) seem to be bogged down in an arcane argument about the meaning of infinity. Craig takes the position, understandably, that an actual infinity is impossible in the physical world. Smith, of course, takes the opposite position. The problem here is that Craig and Smith argue about what is an empirical (if empirically undecidable) matter by resorting to philosophical and mathematical concepts. The observed and observable facts are on Craig’s side: Nothing is known to have happened in the material universe without an antecedent material cause. Philosophical and mathematical arguments about the nature of infinity seem beside the point.

For a third thing, Craig seems to pin a lot on the Big Bang, while Smith is at pains to deny its significance. Smith seems to claim that the Big Bang wasn’t the beginning of the universe; rather, the universe was present in the singularity from which the Big Bang arose. The singularity might therefore have existed all along.

Craig, on the other hand, sees the hand of God in the Big Bang. The singularity (the original clump of material “stuff”) had to have been created so that the Big Bang could follow. That’s all well and good, but what was God doing before the Big Bang, that is, in the infinite span of time before 15 billion years ago? (Is it presumptuous of me to ask?) And why should the Big Bang prove God’s existence any more than, say, a universe that came into being at an indeterminate time? The necessity of God (or some kind of creator) arises from the known character of the universe: material effects follow from material causes, which cannot cause themselves. In short, Craig pins too much on the Big Bang, and his argument would collapse if the Big Bang were found to be a figment of observational error.

There’s much more to come, I hope.

Existence and Knowledge

Philosophical musings by a non-philosopher which are meant to be accessible to other non-philosophers.

Ontology is the branch of philosophy that deals with existence. Epistemology is the branch of philosophy that deals with knowledge.

I submit (with no claim to originality) that existence (what really is) is independent of knowledge (proposition A), but knowledge is impossible without existence (proposition B).

In proposition A, I include in existence those things that exist in the present, those things that have existed in the past, and the processes (happenings) by which past existences either end (e.g., death of an organism, collapse of a star) or become present existences (e.g., an older version of a living person, the formation of a new star). That which exists is real; existence is reality.

In proposition B, I mean knowledge as knowledge of that which exists, and not the kind of “knowledge” that arises from misperception, hallucination, erroneous deduction, lying, and so on. Much of what is called scientific knowledge is “knowledge” of the latter kind because, as scientists know (when they aren’t advocates), scientific knowledge is provisional. Proposition B implies that knowledge is something that human beings and other living organisms possess, to widely varying degrees of complexity. (A flower may “know” that the Sun is in a certain direction, but not in the same way that a human being knows it.) In what follows, I assume the perspective of human beings, including various compilations of knowledge resulting from human endeavors. (Aside: Knowledge is self-referential, in that it exists and is known to exist.)

An example of proposition A is the claim that there is a falling tree (it exists), even if no one sees, hears, or otherwise detects the tree falling. An example of proposition B is the converse of Cogito, ergo sum, I think, therefore I am; namely, I am, therefore I (a sentient being) am able to know that I am (exist).

Here’s a simple illustration of proposition A. You have a coin in your pocket, though I can’t see it. The coin is, and its existence in your pocket doesn’t depend on my act of observing it. You may not even know that there is a coin in your pocket. But it exists — it is — as you will discover later when you empty your pocket.

Here’s another one. Earth spins on its axis, even though the “average” person perceives it only indirectly in the daytime (by the apparent movement of the Sun) and has no easy way of perceiving it (without the aid of a Foucault pendulum) when it is dark or while he is asleep. Sunrise (or at least a diminution of darkness) is a simple bit of evidence for the reality of Earth spinning on its axis without our having perceived it.

Now for a somewhat more sophisticated illustration of proposition A. One interpretation of quantum mechanics is that a sub-atomic particle (really an electromagnetic phenomenon) exists in an indeterminate state until an observer measures it, at which time its state is determinate. There’s no question that the particle exists independently of observation (knowledge of the particle’s existence), but its specific characteristic (quantum state) is determined by the act of observation. Does this mean that existence of a specific kind depends on knowledge? No. It means that observation determines the state of the particle, which can then be known. Observation precedes knowledge, even if the gap is only infinitesimal. (A clear-cut case is the autopsy of a dead person to determine his cause of death. The autopsy didn’t cause the person’s death, but came after it as an act of observation.)

Regarding proposition B, there are known knowns, known unknowns, unknown unknowns, and unknown “knowns”. Examples:

Known knowns (real knowledge = true statements about existence) — The experiences of a conscious, sane, and honest person: I exist; am eating; I had a dream last night; etc. (Recollections of details and events, however, are often mistaken, especially with the passage of time.)

Known unknowns (provisional statements of fact; things that must be or have been but which are not in evidence) — Scientific theories, hypotheses, data upon which these are based, and conclusions drawn from them. The immediate causes of the deaths of most persons who have died since the advent of Homo sapiens. The material process by which the universe came to be (i.e., what happened to cause the Big Bang, if there was a Big Bang).

Unknown unknowns (things that exist but are unknown to anyone) — Almost everything about the universe.

Unknown “knowns” (delusions and outright falsehoods accepted by some persons as facts) — Frauds, scientific and other. The apparent reality of a dream.

Regarding unknown “knowns”, one might dream of conversing with a dead person, for example. The conversation isn’t real, only the dream is. And it is real only to the dreamer. But it is real, nevertheless. And the brain activity that causes a dream is real even if the person in whom the activity occurs has no perception or memory of a dream. A dream is analogous to a movie about fictional characters. The movie is real but the fictional characters exist only in the script of the movie and the movie itself. The actors who play the fictional characters are themselves, not the fictional characters.

There is a fine line between known unknowns (provisional statements of fact) and unknown “knowns” (delusions and outright falsehoods). The former are statements about existence that are made in good faith. The latter are self-delusions of some kind (e.g., the apparent reality of a dream as it occurs), falsehoods that acquire the status of “truth” (e.g., George Washington’s false teeth were made of wood), or statements of “fact” that are made in bad faith (e.g., adjusting the historic temperature record to make the recent past seem warmer relative to the more distant past).

The moral of the story is that a doubting Thomas is a wise person.

Not-So-Random Thoughts (XXV)

“Not-So-Random Thoughts” is an occasional series in which I highlight writings by other commentators on varied subjects that I have addressed in the past. Other entries in the series can be found at these links: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI, XXII, XXIII, and XXIV. For more in the same style, see “The Tenor of the Times” and “Roundup: Civil War, Solitude, Transgenderism, Academic Enemies, and Immigration“.

CONTENTS

The Real Unemployment Rate and Labor-Force Participation

Is Partition Possible?

Still More Evidence for Why I Don’t Believe in “Climate Change”

Transgenderism, Once More

Big, Bad Oligopoly?

Why I Am Bunkered in My Half-Acre of Austin

“Government Worker” Is (Usually) an Oxymoron


The Real Unemployment Rate and Labor-Force Participation

There was much celebration (on the right, at least) when it was announced that the official unemployment rate, as of November, is only 3.5 percent, and that 266,000 jobs were added to the employment rolls (see here, for example). The exultation is somewhat overdone. Yes, things would be much worse if Obama’s anti-business rhetoric and policies still prevailed, but Trump is pushing a big boulder of deregulation uphill.

In fact, the real unemployment rate is a lot higher than the official figure. I refer you to “Employment vs. Big Government and Disincentives to Work”. It begins with this:

The real unemployment rate is several percentage points above the nominal rate. Officially, the unemployment rate stood at 3.5 percent as of November 2019. Unofficially — but in reality — the unemployment rate was 9.4 percent.

The explanation is that the labor-force participation rate has declined drastically since peaking in January 2000. When the official unemployment rate is adjusted to account for that decline (and for a shift toward part-time employment), the result is a considerably higher real unemployment rate.
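The adjustment works like this: treat the workers who have dropped out of the labor force since the January 2000 participation peak as hidden unemployed. Here is a minimal back-of-the-envelope sketch; the population and participation figures are rough assumptions of mine, not the numbers from the linked post (which also adjusts for the shift toward part-time work).

```python
# Back-of-the-envelope "real" unemployment rate: count the decline in
# labor-force participation since its January 2000 peak as hidden
# unemployment. All inputs are rough, for illustration only.

civ_pop = 260.0e6        # civilian noninstitutional population (assumed)
lfpr_peak = 0.673        # participation rate at the Jan 2000 peak
lfpr_now = 0.632         # participation rate in late 2019 (assumed)
u_official = 0.035       # official (U-3) unemployment rate

labor_force = civ_pop * lfpr_now
unemployed = labor_force * u_official
# Workers "missing" from the labor force relative to the 2000 peak:
missing = civ_pop * (lfpr_peak - lfpr_now)

# Count the missing workers as unemployed members of a peak-sized labor force:
u_real = (unemployed + missing) / (civ_pop * lfpr_peak)
print(f"official: {u_official:.1%}, adjusted: {u_real:.1%}")
```

With these illustrative inputs the adjusted rate lands in the neighborhood of the 9.4 percent cited above, though the linked post’s full method also accounts for part-time employment.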

Arnold Kling recently discussed the labor-force participation rate:

[The] decline in male labor force participation among those without a college degree is a significant issue. Note that even though the unemployment rate has come down for those workers, their rate of labor force participation is still way down.

Economists on the left tend to assume that this is due to a drop in demand for workers at the low end of the skill distribution. Binder’s claim is that instead one factor in declining participation is an increase in the ability of women to participate in the labor market, which in turn lowers the advantage of marrying a man. The reduced interest in marriage on the part of women attenuates the incentive for men to work.

Could be. I await further analysis.


Is Partition Possible?

Angelo Codevilla peers into his crystal ball:

Since 2016, the ruling class has left no doubt that it is not merely enacting chosen policies: It is expressing its identity, an identity that has grown and solidified over more than a half century, and that it is not capable of changing.

That really does mean that restoring anything like the Founders’ United States of America is out of the question. Constitutional conservatism on behalf of a country a large part of which is absorbed in revolutionary identity; that rejects the dictionary definition of words; that rejects common citizenship, is impossible. Not even winning a bloody civil war against the ruling class could accomplish such a thing.

The logical recourse is to conserve what can be conserved, and for it to be done by, of, and for those who wish to conserve it. However much force of what kind may be required to accomplish that, the objective has to be conservation of the people and ways that wish to be conserved.

That means some kind of separation.

As I argued in “The Cold Civil War,” the natural, least stressful course of events is for all sides to tolerate the others going their own ways. The ruling class has not been shy about using the powers of the state and local governments it controls to do things at variance with national policy, effectively nullifying national laws. And they get away with it.

For example, the Trump Administration has not sent federal troops to enforce national marijuana laws in Colorado and California, nor has it punished persons and governments who have defied national laws on immigration. There is no reason why the conservative states, counties, and localities should not enforce their own view of the good.

Not even President Alexandria Ocasio-Cortez would order troops to shoot to re-open abortion clinics were Missouri or North Dakota, or any city, to shut them down. As Francis Buckley argues in American Secession: The Looming Breakup of the United States, some kind of separation is inevitable, and the options regarding it are many.

I would like to believe Mr. Codevilla, but I cannot. My money is on a national campaign of suppression, which will begin the instant that the left controls the White House and Congress. Shooting won’t be necessary, given the massive displays of force that will be ordered from the White House, ostensibly to enforce various laws, including but far from limited to “a woman’s right to an abortion”. Leftists must control everything because they cannot tolerate dissent.

As I say in “Leftism“,

Violence is a good thing if your heart is in the “left” place. And violence is in the hearts of leftists, along with hatred and the irresistible urge to suppress that which is hated because it challenges leftist orthodoxy — from climate skepticism and the negative effect of gun ownership on crime to the negative effect of the minimum wage and the causal relationship between Islam and terrorism.

There’s more in “The Subtle Authoritarianism of the ‘Liberal Order’“; for example:

[Quoting Sumantra Maitra] Domestically, liberalism divides a nation into good and bad people, and leads to a clash of cultures.

The clash of cultures was started and sustained by so-called liberals, the smug people described above. It is they who — firmly believing themselves to be smarter, on the side of science, and on the side of history — have chosen to be the aggressors in the culture war.

Hillary Clinton’s remark about Trump’s “deplorables” ripped the mask from the “liberal” pretension to tolerance and reason. Clinton’s remark was tantamount to a declaration of war against the self-appointed champion of the “deplorables”: Donald Trump. And war it has been, much of it waged by deep-state “liberals” who cannot entertain the possibility that they are on the wrong side of history, and who will do anything — anything — to make history conform to their smug expectations of it.


Still More Evidence for Why I Don’t Believe in “Climate Change”

This is a sequel to an item in the previous edition of this series: “More Evidence for Why I Don’t Believe in Climate Change“.

Dave Middleton debunks the claim that 50-year-old climate models correctly predicted the subsequent (but not steady) rise in the globe’s temperature (whatever that is). He then quotes a talk by Dr. John Christy of the University of Alabama-Huntsville Climate Research Center:

We have a change in temperature from the deep atmosphere over 37.5 years, we know how much forcing there was upon the atmosphere, so we can relate these two with this little ratio, and multiply it by the ratio of the 2x CO2 forcing. So the transient climate response is to say, what will the temperature be like if you double CO2– if you increase at 1% per year, which is roughly what the whole greenhouse effect is, and which is achieved in about 70 years. Our result is that the transient climate response in the troposphere is 1.1 °C. Not a very alarming number at all for a doubling of CO2. When we performed the same calculation using the climate models, the number was 2.31°C. Clearly, and significantly different. The models’ response to the forcing – their ∆t here, was over 2 times greater than what has happened in the real world….

There is one model that’s not too bad, it’s the Russian model. You don’t go to the White House today and say, “the Russian model works best”. You don’t say that at all! But the fact is they have a very low sensitivity to their climate model. When you look at the Russian model integrated out to 2100, you don’t see anything to get worried about. When you look at 120 years out from 1980, we already have 1/3 of the period done – if you’re looking out to 2100. These models are already falsified [emphasis added], you can’t trust them out to 2100, no way in the world would a legitimate scientist do that. If an engineer built an aeroplane and said it could fly 600 miles and the thing ran out of fuel at 200 and crashed, he might say: “I was only off by a factor of three”. No, we don’t do that in engineering and real science! A factor of three is huge in the energy balance system. Yet that’s what we see in the climate models….

Theoretical climate modelling is deficient for describing past variations. Climate models fail for past variations, where we already know the answer. They’ve failed hypothesis tests and that means they’re highly questionable for giving us accurate information about how the relatively tiny forcing … will affect the climate of the future.

For a lot more in this vein, see my pages “Climate Change” and “Modeling and Science“.
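Christy’s “little ratio” calculation is simple enough to sketch. The transient climate response is the observed warming per unit of forcing, scaled up to the forcing from a doubling of CO2 (canonically about 3.7 W/m²). The observed-warming and observed-forcing values below are placeholders of mine, not Christy’s data; they merely illustrate the arithmetic.

```python
# Transient climate response (TCR) as a simple ratio: observed warming
# per unit of observed forcing, scaled to the forcing from doubled CO2.
# The "observed" inputs are placeholders, not Christy's actual figures.

f_2xco2 = 3.7            # W/m^2, canonical forcing from a doubling of CO2
delta_t_obs = 0.5        # deg C, tropospheric warming over the period (assumed)
delta_f_obs = 1.7        # W/m^2, forcing over the same period (assumed)

tcr = (delta_t_obs / delta_f_obs) * f_2xco2
print(f"TCR = {tcr:.2f} deg C per doubling of CO2")
```

The point is the method, not the inputs: whatever the observed warming and forcing turn out to be, the transient response scales linearly from their ratio, which is why the models’ response being more than twice the observed ratio is so damning.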


Transgenderism, Once More

Theodore Dalrymple (Anthony Daniels, M.D.) is on the case:

The problem alluded to in [a paper in the Journal of Medical Ethics] is, of course, the consequence of a fiction, namely that a man who claims to have changed sex actually has changed sex, and is now what used to be called the opposite sex. But when a man who claims to have become a woman competes in women’s athletic competitions, he often retains an advantage derived from the sex of his birth. Women competitors complain that this is unfair, and it is difficult not to agree with them….

Man being both a problem-creating and solving creature, there is, of course, a very simple way to resolve this situation: namely that men who change to simulacra of women should compete, if they must, with others who have done the same. The demand that they should suffer no consequences that they neither like nor want from the choices they have made is an unreasonable one, as unreasonable as it would be for me to demand that people should listen to me playing the piano though I have no musical ability. Thomas Sowell has drawn attention to the intellectual absurdity and deleterious practical consequences of the modern search for what he calls “cosmic justice.”…

We increasingly think that we live in an existential supermarket in which we pick from the shelf of limitless possibilities whatever we want to be. We forget that limitation is not incompatible with infinity; for example, that our language has a grammar that excludes certain forms of words, without in any way limiting the infinite number of meanings that we can express. Indeed, such limitation is a precondition of our freedom, for otherwise nothing that we said would be comprehensible to anybody else.

That is a tour de force typical of the good doctor. In the span of three paragraphs, he addresses matters that I have treated at length in “The Transgender Fad and Its Consequences” (and later in the previous edition of this series), “Positive Rights and Cosmic Justice“, and “Writing: A Guide” (among other entries at this blog).


Big, Bad Oligopoly?

Big Tech is giving capitalism a bad name, as I discuss in “Why Is Capitalism Under Attack from the Right?“, but it’s still the best game in town. Even oligopoly and its big brother, monopoly, aren’t necessarily bad. See, for example, my posts, “Putting in Some Good Words for Monopoly” and “Monopoly: Private Is Better than Public“. Arnold Kling makes the essential point here:

Do indicators of consolidation show us that the economy is getting less competitive or more competitive? The answer depends on which explanation(s) you believe to be most important. For example, if network effects or weak resistance to mergers are the main factors, then the winners from consolidation are quasi-monopolists that may be overly insulated from competition. On the other hand, if the winners are firms that have figured out how to develop and deploy software more effectively than their rivals, then the growth of those firms at the expense of rivals just shows us that the force of competition is doing its work.


Why I Am Bunkered in My Half-Acre of Austin

Randal O’Toole takes aim at the planners of Austin, Texas, and hits the bullseye:

Austin is one of the fastest-growing cities in America, and the city of Austin and Austin’s transit agency, Capital Metro, have a plan for dealing with all of the traffic that will be generated by that growth: assume that a third of the people who now drive alone to work will switch to transit, bicycling, walking, or telecommuting by 2039. That’s right up there with planning for dinner by assuming that food will magically appear on the table the same way it does in Hogwarts….

[W]hile Austin planners are assuming they can reduce driving alone from 74 to 50 percent, it is actually moving in the other direction….

Planners also claim that 11 percent of Austin workers carpool to work, an amount they hope to maintain through 2039. They are going to have trouble doing that as carpooling, in fact, only accounted for 8.0 percent of Austin workers in 2018.

Planners hope to increase telecommuting from its current 8 percent (which is accurate) to 14 percent. That could be difficult as they have no policy tools that can influence telecommuting.

Planners also hope to increase walking and bicycling from their current 2 and 1 percent to 4 and 5 percent. Walking to work is almost always greater than cycling to work, so it’s difficult to see how they plan to magic cycling to be greater than walking. This is important because cycling trips are longer than walking trips and so have more of a potential impact on driving.

Finally, planners want to increase transit from 4 to 16 percent. In fact, transit carried just 3.24 percent of workers to their jobs in 2018, down from 3.62 percent in 2016. Changing from 4 to 16 percent is an almost impossible 300 percent increase; changing from 3.24 to 16 is an even more formidable 394 percent increase. Again, reality is moving in the opposite direction from planners’ goals….

Planners have developed two main approaches to transportation. One is to estimate how people will travel and then provide and maintain the infrastructure to allow them to do so as efficiently and safely as possible. The other is to imagine how you wish people would travel and then provide the infrastructure assuming that to happen. The latter method is likely to lead to misallocation of capital resources, increased congestion, and increased costs to travelers.

Austin’s plan is firmly based on this second approach. The city’s targets of reducing driving alone by a third, maintaining carpooling at an already too-high number, and increasing transit by 394 percent are completely unrealistic. No American city has achieved similar results in the past two decades and none are likely to come close in the next two decades.

Well, that’s the prevailing mentality of Austin’s political leaders and various bureaucracies: magical thinking. Failure is piled upon failure (e.g., more bike lanes crowding out traffic lanes, a hugely wasteful curbside composting plan) because to admit failure would be to admit that the emperor has no clothes.
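O’Toole’s percentage arithmetic checks out, as a trivial sketch confirms:

```python
# Percent-increase arithmetic from the O'Toole quote: a transit share
# going from 4 percent to 16 percent, and from the measured 3.24
# percent to 16 percent.

def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(pct_increase(4.0, 16.0))    # 300.0
print(pct_increase(3.24, 16.0))   # roughly 394
```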

You want to learn more about Austin? You’ve got it:

Driving and Politics (1)
Life in Austin (1)
Life in Austin (2)
Life in Austin (3)
Driving and Politics (2)
AGW in Austin?
Democracy in Austin
AGW in Austin? (II)
The Hypocrisy of “Local Control”
Amazon and Austin


“Government Worker” Is (Usually) an Oxymoron

In “Good News from the Federal Government” I sarcastically endorse the move to grant all federal workers 12 weeks of paid parental leave:

The good news is that there will be a lot fewer civilian federal workers on the job, which means that the federal bureaucracy will grind a bit more slowly when it does the things that it does to screw up the economy.

The next day, Audacious Epigone put some rhetorical and statistical meat on the bones of my informed prejudice in “Join the Crooks and Liars: Get a Government Job!“:

That [the title of the post] used to be a frequent refrain on Radio Derb. Though the gag has been made emeritus, the advice is even better today than it was when the Derb introduced it. As he explains:

The percentage breakdown is private-sector 76 percent, government 16 percent, self-employed 8 percent.

So one in six of us works for a government, federal, state, or local.

Which group does best on salary? Go on: see if you can guess. It’s government workers, of course. Median earnings 52½ thousand. That’s six percent higher than the self-employed and fourteen percent higher than the poor shlubs toiling away in the private sector.

If you break down government workers into two further categories, state and local workers in category one, federal workers in category two, which does better?

Again, which did you think? Federal workers are way out ahead, median earnings 66 thousand. Even state and local government workers are ahead of us private-sector and self-employed losers, though.

Moral of the story: Get a government job! — federal for strong preference.

….

Though it is well known that a government gig is a gravy train, opinions of the people with said gigs is embarrassingly low as the results from several additional survey questions show.

First, how frequently the government can be trusted “to do what’s right”? [“Just about always” and “most of the time” badly trail “some of the time”.]

….

Why can’t the government be trusted to do what’s right? Because the people who populate it are crooks and liars. Asked whether “hardly any”, “not many” or “quite a few” people in the federal government are crooked, the following percentages answered with “quite a few” (“not sure” responses, constituting 12% of the total, are excluded). [Responses of “quite a few” range from 59 percent to 77 percent across an array of demographic categories.]

….

Accompanying a strong sense of corruption is the perception of widespread incompetence. Presented with a binary choice between “the people running the government are smart” and “quite a few of them don’t seem to know what they are doing”, a solid majority chose the latter (“not sure”, at 21% of all responses, is again excluded). [The “don’t know what they’re doing” responses ranged from 55 percent to 78 percent across the same demographic categories.]

Are the skeptics right? Well, most citizens have had dealings with government employees of one kind and another. The “wisdom of crowds” certainly applies in this case.

“Human Nature” by David Berlinski: A Review

I became a fan of David Berlinski, who calls himself a secular Jew, after reading The Devil’s Delusion: Atheism and Its Scientific Pretensions, described on Berlinski’s personal website as

a biting defense of faith against its critics in the New Atheist movement. “The attack on traditional religious thought,” writes Berlinski, “marks the consolidation in our time of science as the single system of belief in which rational men and women might place their faith, and if not their faith, then certainly their devotion.”

Here is most of what I say in “Atheistic Scientism Revisited” about The Devil’s Delusion:

Berlinski, who knows far more about science than I do, writes with flair and scathing logic. I can’t do justice to his book, but I will try to convey its gist.

Before I do that, I must tell you that I enjoyed Berlinski’s book not only because of the author’s acumen and biting wit, but also because he agrees with me. (I suppose I should say, in modesty, that I agree with him.) I have argued against atheistic scientism in many blog posts (see below).

Here is my version of the argument against atheism in its briefest form (June 15, 2011):

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

There is no reasonable basis — and certainly no empirical one — on which to prefer atheism to deism or theism. Strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.

As for scientism, I call upon Friedrich Hayek:

[W]e shall, wherever we are concerned … with slavish imitation of the method and language of Science, speak of “scientism” or the “scientistic” prejudice…. It should be noted that, in the sense in which we shall use these terms, they describe, of course, an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed. The scientistic as distinguished from the scientific view is not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it. [The Counter Revolution Of Science]

As Berlinski amply illustrates and forcefully argues, atheistic scientism is rampant in the so-called sciences. I have reproduced below some key passages from Berlinski’s book. They are representative, but far from exhaustive (though I did nearly exhaust the publisher’s copy limit on the Kindle edition). I have forgone the block-quotation style for ease of reading, and have inserted triple asterisks to indicate (sometimes subtle) changes of topic. [Go to my post for the excerpts.]

On the strength of The Devil’s Delusion, I eagerly purchased Berlinski’s latest book, Human Nature. I have just finished it, and cannot summon great enthusiasm for it. Perhaps that is so because I expected a deep and extended examination of the title’s subject. What I got, instead, was a collection of 23 disjointed essays, gathered (more or less loosely) into seven parts.

Only the first two parts, “Violence” and “Reason”, seem to address human nature, but often tangentially. “Violence” deals specifically with violence as manifested (mainly) in war and murder. The first essay, titled “The First World War”, is a tour de force — a dazzling (and somewhat dizzying) reconstruction of the complex and multi-tiered layering of the historical precedent, institutional arrangements, and personalities that led to the outbreak of World War I.

Aha, I thought to myself, Berlinski is warming to his task, and will flesh out the relevant themes at which he hints in the first essay. And in the second and third essays, “The Best of Times” and “The Cause of War”, Berlinski flays the thesis of Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. But my post, “The Fallacy of Human Progress”, does a better job of it, thanks to the several critics and related sources quoted therein.

Berlinski ends the third essay with this observation:

Men go to war when they think that they can get away with murder.

Which is tantamount to an admission that Berlinski has no idea why men go to war, or would rather not venture an opinion on the subject. There is much of that kind of diffident agnosticism throughout the book, which is captured in his reply to an interviewer’s question in the book’s final essay:

Q. Would you share with us your hunches and suspicions about spiritual reality, the trend in your thinking, if not your firm beliefs?

A. No. Either I cannot or I will not. I do not know whether I am unable or unwilling. The question elicits in me a stubborn refusal. Please understand. It is not an issue of privacy. I have, after all, blabbed my life away: Why should I call a halt here? I suppose that I am by nature a counter-puncher. What I am able to discern of the religious experience often comes about reactively. V. S. Naipaul remarked recently that he found the religious life unthinkable.

He does? I was prompted to wonder. Why does he?

His attitude gives rise to mine. That is the way in which I wrote The Devil’s Delusion: Atheism and Its Scientific Pretensions.

Is there anything authentic in my religious nature?

Beats me.

That is a legitimate reply, but — I suspect — an evasive one.

Returning to the book’s ostensible subject, the second part, “Reason”, addresses human nature mainly in a negative way, that is, by pointing out (in various ways) flaws in the theory of evolution. There is no effort to knit the strands into a coherent theme. The following parts stray even further from the subject of the book’s title, and are even more loosely connected.

This isn’t to say that the book fails to entertain, for it often does that. For example, buried in a chapter on language, “The Recovery of Case”, is this remark:

Sentences used in the ordinary give-and-take of things are, of course, limited in their length. Henry James could not have constructed a thousand-word sentence without writing it down or suffering a stroke. Nor is recursion needed to convey the shock of the new. Four plain-spoken words are quite enough: Please welcome President Trump.

(I assume, given Berlinski’s track record for offending “liberal” sensibilities, that the italicized words refer to the shock of Trump’s being elected, and are not meant to disparage Trump.)

But the book also irritates, not only by its failure to deliver what the title seems to promise, but also by Berlinski’s proclivity for using the abstruse symbology of mathematical logic where words would do quite nicely and more clearly. In the same vein — showing off — is the penultimate essay, “A Conversation with Le Figaro“, which reproduces (after an introduction by Berlinski) a transcript of the interview — in French, with not a word of translation. Readers of the book will no doubt be more schooled in French than the typical viewer of prime-time TV fare, but many of them will be in my boat. My former fluency in spoken and written French has withered with time, and although I could still manage with effort to decipher the meaning of the transcript, it proved not to be worth the effort, so I gave up on it.

There comes a time when once-brilliant persons can summon flashes of their old, brilliant selves but can no longer emit a sustained ray of brilliance. Perhaps that is true of Berlinski. I hope not, and will give him another try if he gives writing another try.

The Shallowness of Secular Ethical Systems

This post is prompted by a recent offering from Irfan Khawaja, who styles himself an ex-libertarian and tries to explain his apostasy. Khawaja abandoned libertarianism (or his version of it) because it implies a stance toward government spending that isn’t consistent with the desideratum of another ethical system.

Rather than get bogged down in the details of Khawaja’s dilemma, I will merely point out what should be obvious to him (and to millions of other true believers in this or that ethical system): Any system that optimizes on a particular desideratum (e.g., minimal coercion, maximum “social” welfare by some standard) will clash with at least one other system that optimizes a different desideratum.

Further, the various desiderata usually are overly broad. And when the desiderata are defined narrowly, what emerges is not a single, refined desideratum but two or more. Which means that there are more ethical systems and more opportunities for clashes between systems. Those clashes sometimes occur between systems that claim to optimize on the same (broad) desideratum. (I will later take up an example.)

What are the broad and refined desiderata of various ethical systems? The following list is a start, though it is surely incomplete:

  • Liberty

Freedom from all restraint

Freedom from governmental restraint

Freedom to do as one chooses, consistent with traditional social norms (some of which may be enforced by government)

Freedom to do as one chooses, regardless of one’s endowment of intelligence, talent, effort, wealth, etc.

  • Equality

Equal treatment under the law

Economic equality, regardless of one’s intelligence, talent, effort, wealth, etc.

Economic and social equality, regardless of one’s intelligence, talent, effort, wealth, etc.

  • Democracy

Participation in governmental decisions through the election of officials whose powers are limited to those deemed necessary to provide for the defense of innocent citizens from force and fraud

Participation in governmental decisions through the election of officials who have the power to bring about economic and social equality

Governmental outcomes that enact the “will of the people” (i.e., the desiderata of each group that propounds this kind of democracy)

  • Human welfare

The maximization of the sum of all human happiness, perhaps with some lower limit on the amount of happiness enjoyed by those least able to provide for themselves

The maximization of the sum of all human happiness, as above, but only with respect to specific phenomena viewed as threats (e.g., “climate change”, “overpopulation”, resource depletion)

  • Animal welfare (including but far from limited to human welfare)

Special protections for animals to prevent their mistreatment

Legal recognition of animals (or some of them) as “persons” with the same legal rights as human beings

No use of animals to satisfy human wants (e.g., food, clothing, shelter)

It would be pedantic of me to explain the many irreconcilable clashes between the main headings, between the subsidiary interpretations under each main heading, and between the subsidiary interpretations under the various main headings. They should be obvious to you.

But I will show that even a subsidiary interpretation of a broad desideratum can be rife with internal inconsistencies. Bear with me while I entertain you with a few examples, based on Khawaja’s dilemma — the conflict between his versions of welfarism and libertarianism.

Welfarism, according to Khawaja, means that a government policy, or a change in government policy, should result in no net loss of lives. This implies that it is all right if X lives are lost, as long as Y lives are gained, where Y is greater than X. Which is utilitarianism on steroids — or, in the words of Jeremy Bentham (the godfather of utilitarianism), nonsense upon stilts (Bentham’s summary dismissal of the doctrine of natural rights). To see why, consider that Khawaja’s desideratum could be accomplished by a ruthless dictator who kills people by the millions, while requiring those spared to procreate at a rate much higher than normal. Nirvana (not!).

A broader approach to welfare, and one that is more commonly adopted, is an appeal to the (fictional) social-welfare function. I have written about it many times. All I need do here, by way of dismissal, is to summarize it metaphorically: Sam obtains great pleasure from harming other people. And if Sam punches Joe in the nose, humanity is better off (that is, social welfare is increased) if Sam’s pleasure exceeds Joe’s pain. It should take you a nanosecond to understand why that is nonsense upon stilts.

In case it took you longer than a nanosecond, here’s the nonsense: How does one measure the pleasure and pain of disparate persons? How does one then sum those (impossible) measurements?
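The arithmetic that the social-welfare function presupposes can be made explicit in a few lines. The sketch below, with invented "util" numbers, shows exactly the two steps the paragraph above calls impossible: assigning each person's pleasure or pain a value on a common scale, and then summing those values.

```python
# A deliberately naive sketch of the social-welfare calculus being
# criticized above. The utility numbers are invented; the point of
# the surrounding text is that no such numbers can actually be
# measured or compared across persons.

def social_welfare(utility_changes):
    """Sum per-person utility changes, as the fiction requires."""
    return sum(utility_changes.values())

# Sam punches Joe. Assign (fictitiously) +10 "utils" of pleasure
# to Sam and -8 "utils" of pain to Joe.
event = {"Sam": +10, "Joe": -8}

# On the summation view, the punch *increases* social welfare by 2.
print(social_welfare(event))  # 2
```

The code runs, of course; the objection in the text is that its inputs cannot exist.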

More prosaically: If you are Joe, and not a masochist, do you really believe that Sam’s pleasure somehow cancels your pain or compensates for it in the grand scheme of things? Do you really believe that there is a scoreboard in the sky that keeps track of such things? If your answer to both questions is “no”, you should ask yourself what gives anyone the wisdom to decree that Sam’s punch causes an increase in social welfare. The philosopher’s PhD? You were punched in the nose. You know that Sam’s pleasure doesn’t cancel or compensate for your pain. The philosopher (or politician or economist) who claims (or implies) that there is a social-welfare function is either a fool (the philosopher or economist) or a charlatan (the politician).

I turn now to libertarianism, which almost defies analysis because of its manifold variations and internal contradictions (some of which I will illustrate). But Khawaja’s account of it as a prohibition on the initiation of force (the non-aggression principle, a.k.a. the harm principle) is a good entry point. It is clear that Khawaja understands force to include government coercion of taxpayers to fund government programs. That’s an easy one for most libertarians, but Khawaja balks because the prohibition of government coercion might mean the curtailment of government programs that save lives. (Khawaja thus reveals himself to have been a consequentialist libertarian, that is, one who favors liberty because of its expected results, not necessarily because it represents a moral imperative. This is yet another fault line within libertarianism, but I won’t explore it here.)

Khawaja cites the example of a National Institutes of Health (NIH) program that might cure cystic fibrosis or alleviate its symptoms. But Khawaja neglects the crucial matter of opportunity cost (a strange omission for a consequentialist). Those whose taxes fund government programs usually aren’t those who benefit from them. Taxpayers have other uses for their money, including investments in scientific and technological advances that improve and lengthen life. The NIH (for one) has no monopoly on life-saving and life-enhancing research. To put it succinctly, Khawaja has fallen into the intellectual trap described by Frédéric Bastiat, which is to focus on that which is seen (the particular benefits of government programs) and to ignore the unseen (the things that could be done instead through private action, including — not trivially — the satisfaction of personal wants). When the problem is viewed in that way, most libertarians would scoff at Khawaja’s narrow view of libertarianism.

Here’s a tougher issue for libertarians (the extreme pacifists among them excluded): Does the prohibition on the initiation of force extend to preemptive self-defense against an armed thug who is clearly bent on doing harm? If it does, then libertarianism is unadulterated hogwash.

Let’s grant that libertarianism allows for preemptive self-defense, where the potential victim (or his agent) is at liberty to decide whether preemption is warranted by the threat. Let’s grant, further, that the right of preemptive self-defense includes the right to be prepared for self-defense, because there is always the possibility of a sudden attack by a thug, armed robber, or deranged person. Thus the right to bear arms, at all times and in all places, should be unrestricted (unabridged, in the language of the Second Amendment).

Along comes Nervous Nellie, who claims that the sight of all of those armed people around her makes her fear for her life. But instead of arming herself, Nellie petitions government for the confiscation of all firearms from private persons. The granting of Nellie’s petition would constrain the ability of others to defend themselves against (a) private persons who hide their firearms successfully; (b) private persons who resort to other lethal means of attacking other persons; and (c) armed government agents who abuse their power.

The resulting dilemma can’t be resolved by appeal to the non-aggression principle. The principle is violated if the right of self-defense is violated, and (some would argue) it is also violated if Nellie lives in fear for her life because the right of self-defense is upheld.

Moreover, the ability of government to decide whether persons may be armed — indeed, the very existence of government — violates the non-aggression principle. But without government the non-aggression principle may be violated more often.

Thus we see more conflicts, all of which take place wholly within the confines of libertarianism, broadly understood.

The examples could go on and on, but enough is enough. The point is that ethical systems that seek to optimize on a single desideratum, however refined and qualified it might be, inevitably clash with other ethical systems. Those clashes illustrate Kurt Gödel‘s incompleteness theorems:

Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic….

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.

There is the view that Gödel’s theorems aren’t applicable in fields outside of mathematical logic. But any quest for ethical certainties necessarily involves logic, however flawed it might be.

Persons who devise and purvey ethical systems, assuming their good intentions (often a bad assumption), are simply fixated on particular aspects of human behavior rather than taking it whole. (They cannot see the forest because they are crawling on the ground, inspecting tree roots.)

Given such myopia, you might wonder how humanity manages to coexist cooperatively and peacefully as much as it does. Yes, there are many places on the globe where conflict is occasioned by what could be called differences of opinion about ultimate desiderata (including religious ones). But most human beings (though a shrinking majority, I fear) don’t give a hoot about optimizing on a particular desideratum. That is to say, most human beings aren’t fanatical about a particular cause or belief. And even when they are, they mostly live among like persons or keep their views to themselves and do at least the minimum that is required to live in peace with those around them.

It is the same for persons who are less fixated (or not at all) on a particular cause or belief. Daily life, with its challenges and occasional pleasures, is enough for them. In the United States, at least, fanaticism seems to be confined mainly to capitalism’s spoiled children (of all ages), whether they be ultra-rich “socialists”, affluent never-Trumpers, faux-scientists and their acolytes who foresee a climatic apocalypse, subsidized students (e.g., this lot), or multitudes of other arrant knights (and dames) errant.

Atheists are fond of saying that religion is evil because it spawns hatred and violence. Such sentiments would be met with bitter laughter from the hundreds of millions of victims of atheistic communism, were not most of them dead or still captive to the ethical system known variously as socialism and communism, which promises social and economic equality but delivers social repression and economic want. Religion (in the West, at least) is a key facet of liberty.

Which brings me to the point of this essay. When I use “liberty” I don’t mean the sterile desideratum of so-called libertarians (who can’t agree among themselves about its meaning or prerequisites). What I mean is the mundane business of living among others, getting along with them (or ignoring them, if that proves best), treating them with respect or forbearance, and observing the norms of behavior that will cause them to treat you with respect or forbearance.

It is that — and not the fanatical (unto hysterical) rallying around the various desiderata of cramped ethical systems — which makes for social comity and economic progress. The problem with silver bullets (Dr. Ehrlich’s “magic” one being a notable exception) is that they ricochet, causing more harm than good — often nothing but harm, even to those whom they are meant to help.


Related pages and posts:

Climate Change
Economic Growth Since World War II
Leftism
Modeling and Science
Social Norms and Liberty

On Liberty
Greed, Cosmic Justice, and Social Welfare
Democracy and Liberty
Utilitarianism vs. Liberty
Fascism and the Future of America
The Indivisibility of Economic and Social Liberty
Tocqueville’s Prescience
Accountants of the Soul
Pseudo-Libertarian Sophistry vs. True Libertarianism
Bounded Liberty: A Thought Experiment
Evolution, Human Nature, and “Natural Rights”
More Pseudo-Libertarianism
The Meaning of Liberty
Positive Liberty vs. Liberty
Facets of Liberty
Burkean Libertarianism
What Is Libertarianism?
True Libertarianism, One More Time
Utilitarianism and Psychopathy
Why Conservatism Works
The Eclipse of “Old America”
Genetic Kinship and Society
Liberty as a Social Construct: Moral Relativism?
Defending Liberty against (Pseudo) Libertarians
Defining Liberty
The Pseudo-Libertarian Temperament
Modern Liberalism as Wishful Thinking
Getting Liberty Wrong
Romanticizing the State
Libertarianism and the State
My View of Libertarianism
The Principles of Actionable Harm
More About Social Norms and Liberty
Superiority
The War on Conservatism
Old America, New America, and Anarchy
The Authoritarianism of Modern Liberalism, and the Conservative Antidote
Society, Polarization, and Dissent
Social Justice vs. Liberty
The Left and “the People”
The Harm Principle Revisited: Mill Conflates Society and State
Liberty and Social Norms Re-examined
Natural Law, Natural Rights, and the Real World
Natural Law and Natural Rights Revisited
Libertarianism, Conservatism, and Political Correctness
My View of Mill, Endorsed
Rights, Liberty, the Golden Rule, and Leviathan
Suicide or Destiny?
O.J.’s Glove and the Enlightenment
James Burnham’s Misplaced Optimism
True Populism
Libertarianism’s Fatal Flaw
The Golden Rule and Social Norms
The Left-Libertarian Axis
Rooted in the Real World of Real People
Consequentialism
Conservatism, Society, and the End of America
Conservatism vs. Leftism and “Libertarianism” on the Moral Dimension
Free Markets and Democracy
“Libertarianism”, the Autism Spectrum, and Ayn Rand
Tragic Capitalism
A Paradox for Liberals
Rawls vs. Reality
The Subtle Authoritarianism of the “Liberal Order”
Liberty: Constitutional Obligations and the Role of Religion
Society, Culture, and America’s Future

Rawls vs. Reality

I have never understood the high esteem in which John Rawls‘s “original position” is held by many who profess political philosophy. Well, I understand that the original position supports redistribution of income and wealth — a concept beloved of the overpaid faux-socialist professoriate — but it is a logical and empirical absurdity that shouldn’t be esteemed by anyone who thinks about it rigorously. (Which tells me a lot about the intelligence, rigor, and honesty of those who pay homage to it.)

What is the original position? According to Wikipedia it is

a hypothetical situation developed by … Rawls as a thought experiment to replace the imagery of a savage state of nature of prior political philosophers like Thomas Hobbes.

In the original position, the parties select principles that will determine the basic structure of the society they will live in. This choice is made from behind a veil of ignorance, which would deprive participants of information about their particular characteristics: their ethnicity, social status, gender and, crucially, Conception of the Good (an individual’s idea of how to lead a good life). This forces participants to select principles impartially and rationally.

As a thought experiment, the original position is a hypothetical position designed to accurately reflect what principles of justice would be manifest in a society premised on free and fair cooperation between citizens, including respect for liberty, and an interest in reciprocity.

In the state of nature, it might be argued that certain persons (the strong and talented) would be able to coerce others (the weak and disabled) by virtue of the fact that the stronger and more talented would fare better in the state of nature. This coercion is sometimes thought to invalidate any contractual arrangement occurring in the state of nature. In the original position, however, representatives of citizens are placed behind a “veil of ignorance”, depriving the representatives of information about the individuating characteristics of the citizens they represent. Thus, the representative parties would be unaware of the talents and abilities, ethnicity and gender, religion or belief system of the citizens they represent. As a result, they lack the information with which to threaten their fellows and thus invalidate the social contract they are attempting to agree to….

Rawls specifies that the parties in the original position are concerned only with citizens’ share of what he calls primary social goods, which include basic rights as well as economic and social advantages. Rawls also argues that the representatives in the original position would adopt the maximin rule as their principle for evaluating the choices before them. Borrowed from game theory, maximin stands for maximizing the minimum, i.e., making the choice that produces the highest payoff for the least advantaged position. Thus, maximin in the original position represents a formulation of social equality.

In the social contract, citizens in a state of nature contract with each other to establish a state of civil society. For example, in the Lockean state of nature, the parties agree to establish a civil society in which the government has limited powers and the duty to protect the persons and property of citizens. In the original position, the representative parties select principles of justice that are to govern the basic structure of society. Rawls argues that the representative parties in the original position would select two principles of justice:

  1. Each citizen is guaranteed a fully adequate scheme of basic liberties, which is compatible with the same scheme of liberties for all others;
  2. Social and economic inequalities must satisfy two conditions:
    • to the greatest benefit of the least advantaged (the difference principle);
    • attached to positions and offices open to all.

The reason that the least well off member gets benefited is that it is assumed that under the veil of ignorance, under original position, people will be risk-averse. This implies that everyone is afraid of being part of the poor members of society, so the social contract is constructed to help the least well off members.
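The maximin rule described above is simple enough to sketch in a few lines. In the sketch below, the candidate "societies" and their payoff numbers are invented for illustration; what the code shows is only the decision rule itself: among alternatives, pick the one whose worst-off position fares best.

```python
# A minimal sketch of the maximin rule quoted above: among
# alternative distributions of payoffs by social position, choose
# the one whose minimum payoff is highest. The societies and
# numbers here are hypothetical.

def maximin_choice(alternatives):
    """Return the alternative whose worst-off payoff is highest."""
    return max(alternatives, key=lambda dist: min(dist))

# Three hypothetical societies, each listed as payoffs by position,
# from least- to most-advantaged.
societies = {
    "laissez-faire": [1, 5, 20],  # high top, low bottom
    "leveled":       [4, 4, 4],   # forced equality
    "difference":    [5, 8, 12],  # inequality that lifts the bottom
}

best = maximin_choice(societies.values())
# The "difference" distribution wins: its worst-off payoff (5)
# exceeds the others' (1 and 4).
print(best)  # [5, 8, 12]
```

Note that the rule, taken at face value, prefers an unequal distribution to a perfectly equal one so long as the bottom is better off — which is just the difference principle in miniature.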

There are objections aplenty to Rawls’s creaky construction, some of which are cited in the Wikipedia piece:

In Anarchy, State, and Utopia, Robert Nozick argues that, while the original position may be the just starting point, any inequalities derived from that distribution by means of free exchange are equally just, and that any re-distributive tax is an infringement on people’s liberty. He also argues that Rawls’s application of the maximin rule to the original position is risk aversion taken to its extreme, and is therefore unsuitable even to those behind the veil of ignorance.

In Solving the Riddle of Right and Wrong, Iain King argues that people in the original position should not be risk-averse, leading them to adopt the Help Principle (Help someone if your help is worth more to them than it is to you) rather than maximin.

In Liberalism and the Limits of Justice, Michael Sandel has criticized Rawls’s notion of veil of ignorance, pointing out that it is impossible, for an individual, to completely prescind from [his] beliefs and convictions … as … required by Rawls’s thought experiment.

There is some merit in those objections, but they don’t get to the root error of Rawls’s concoction. For that’s what it is, a concoction that has nothing to do with real people in the real world. The original position is an exercise in moral masturbation.

To begin at the beginning, the ostensible aim of Rawls’s formulation is to outline the “rules” by which a society can attain social justice — or, more accurately, social justice as Rawls defines it. (In what follows, when I refer to social justice in the context of Rawls’s formulation, the reader should mentally add the qualifier “as Rawls defines it”.)

Rawls presumably didn’t believe that there could be an original position, let alone a veil of ignorance. So his real aim must have been to construct a template for the attainment of social justice. The actual position of a society could then (somehow) be compared with the template to determine what government policies would move society toward the Rawlsian ideal.

Clearly, Rawls believed that his template could be justified only if he arrived at it through what he thought would be a set of unexceptionable assumptions. Otherwise, he could simply have promulgated the template (the maximin distribution of primary social goods), and left it at that. But to have done so would have been to take a merely political position, not one that pretends to rest on deep principles and solid logic.

What are those principles, and what is the logic that leads to Rawls’s template for a just society? Because there is no such thing as an original position or veil of ignorance, Rawls assumes (implicitly) that the members of a society should want social justice to prevail, and should behave accordingly, or authorize government to behave accordingly on their behalf. The idea is to make it all happen without coercion, as if the maximin rule were obviously the correct route to social justice.

To make it happen without coercion, Rawls must adopt unrealistic assumptions about the citizens of his imaginary society: pervasive ignorance of one’s own situation and extreme risk-aversion. Absent those constraints, some kind of coercion would be required for the members of the society to agree on the maximin rule. Effectively, then, Rawls assumes the conclusion toward which he was aiming all along, namely, that the maximin rule should govern society’s treatment of what he calls primary social goods — or, rather, government’s treatment of those goods, as it enforces the consensus of a society of identical members.

What is that treatment? This, as I understand it:

  • Guarantee each citizen a fully adequate scheme of basic liberties, which is compatible with the same scheme of liberties for all others.
  • Tolerate only those inequalities with respect to social and economic outcomes that yield the greatest benefit to the least-advantaged.
  • Tolerate only those inequalities that derive from positions and offices that are open to all citizens.

Rawls’s scheme is superficially attractive to anyone who understands that forced equality is inimical to economic progress (not to mention social comity and liberty), and that it harms the least-advantaged (because they “share” in a smaller “pie”) as well as those who would otherwise be among the more-advantaged. Similarly, the idea that all citizens have the same basic rights and social advantages seems unexceptionable.

But many hard questions lurk beneath the surface of Rawls’s plausible concoction.

What is an adequate scheme of basic liberties? The two weasel-words — “adequate” and “basic” — mean that the scheme can be whatever government officials would prefer it to be, unless the clone-like populace defines the scheme in advance. But the populace can’t be clone-like, except in Rawls’s imagination, so government can’t be constrained by a definition of basic liberties that is conceived in the original position. Thus government must (and certainly will) adopt a scheme that reflects the outcome of intra-governmental bargaining (satisficing various popular and bureaucratic interests) — not a scheme that is the consensus of a clone-like citizenry lusting after social justice.

Do basic liberties entail equal rights under law? Yes, and they have been enshrined in American law for a century-and-a-half. Or have they? It seems that rights are a constantly evolving and malleable body of entitlements, which presently (in the view of many) include (inter alia) the right to defecate on public property, the right to be given addictive drugs, the right not to be offended or “triggered” emotionally, and the right not to be shunned by persons whose preferences don’t run to sodomy and “gender fluidity”.

The failure to provide equal rights — whatever they may be at the moment — isn’t a failure that can be remedied by magically reverting to the original position, where actual human beings aren’t to be found. The rights of the moment must be enforced by government. But government enforcement necessarily involves coercion, and certainly involves arbitrariness of a kind that might even offend Rawls. For government, in the real world, is a blunt instrument wielded by politicians and bureaucrats who strike crude bargains on behalf of the sundry interest groups to which they are beholden.

Turning to economic inequality, how does one define the least-advantaged? Are the least-advantaged those whose incomes fall below a certain level? For how long? Who defines the level? If raising incomes to that level reduces the rewards of economically productive work (e.g., invention, innovation, investment, entrepreneurship) through taxation, and thereby reduces the opportunities available to the least-advantaged, by what complex computation will the “right” level of taxation be determined? Surely not by citizens in the original position, operating behind the veil of ignorance, nor — it must be admitted — by government, the true nature of which is summarized in the final sentence of the preceding paragraph.

And what about wealth? How much wealth? Wealth at what stage of one’s life? When a person is still new to the work force but, like most workers, will earn more and accrue wealth? What about wealth that may be passed from generation to generation? Or is such wealth something that isn’t open to all and therefore forbidden? And if it is forbidden, what does that do to the incentives of wealth-builders to do things that advance economic growth, which benefits all citizens including the least-advantaged?

In both cases — income and wealth — we are dealing in arbitrary distinctions that must fall to government to decide, and to enforce by coercion. There is no question of deciding such things in the original position, even behind a veil of ignorance, unless the citizenry consists entirely of Rawls’s omniscient clones.

I must ask, further, why the least-advantaged — if they could be defined objectively and consistently — should be denied incentives to earn more income and build wealth? (Redistribution schemes do just that.) Is that social justice? No, it’s a particular kind of social justice that sees only the present and condescends toward the least-advantaged (whoever they might be).

What about the least-advantaged socially? If social status is directly correlated with income or wealth, there is no need to delve deeper. But if it is something else, the question arises: What is it, how can it be measured, and how can it be adjusted so that the least-advantaged are raised to some minimal level of social standing? How is that level defined and who defines it? Surely not Rawls’s clones operating in complete ignorance of such things. The task therefore, and again, must fall to government, the failings and coerciveness of which I have already addressed adequately.

Why should the least-advantaged on any dimension, if they can be defined, have privileges (i.e., government interventions in their favor) that are denied to, and harmful to, the rest of the citizenry? Favoring the least-advantaged is, of course, “the right thing to do”. So all that Rawls accomplished by his convoluted, pristine “reasoning” was to make a plausible (but deeply flawed) case for something like the welfare state that already exists in the United States and most of the world. As for his conception of liberty and equal rights, Rawls cleverly justifies trampling on the liberty and equal rights of the more-advantaged by inventing like-minded clones who “authorize” the state to trample away.

Rawls put a lot of hard labor into his justification for welfare-statism in the service of “social justice”. The real thing, which was staring him in the face, amounts to this: Government intervenes in voluntarily cooperative social and economic arrangements only to protect citizens from force and fraud, where those terms are defined by long-standing social norms and applied by (not reworked or negated by) legislative, executive, and judicial acts. Which norms? The ones that prevailed in America before the 1960s would do just fine, as long as laws forbidding intimidation and violence were uniformly enforced across the land.

Perfection? Of course not, but attainable. The Framers of the original Constitution did a remarkable job of creating a template by which real human beings (not Rawls’s clones) could live in harmony and prosperity. Real human beings have a penchant for disharmony, waste, fraud, and abuse — but they’re all we have to work with.

The Unique “Me”

Children, at some age, will begin to understand that there is death, the end of a human life (in material form, at least). At about the same time, in my experience, they will begin to speculate about the possibility that they might have been someone else: a child born in China, for instance.

Death eventually loses its fascination, though it may come to mind from time to time as one grows old. (Will I wake up in the morning? Is this the day that my heart stops beating? Will I be able to break my fall when the heart attack happens, or will I just go down hard and die of a fractured skull?)

But after careful reflection, at some age, the question of having been born as someone else is answered in the negative.

For each person, there is only one “I”, the unique “me”. If I hadn’t been born, I wouldn’t be “I” — there wouldn’t be a “me”. I couldn’t have been born as someone else: a child born in China, for instance. A child born in China — or at any place and time other than where and when my mother gave birth to me — must be a different “I”, not the one I think of as “me”.

(Inspired by Sir Roger Scruton’s On Human Nature, for which I thank my son.)

An Antidote to Alienation

This much of Marx’s theory of alienation bears a resemblance to the truth:

The design of the product and how it is produced are determined, not by the producers who make it (the workers)….

[T]he generation of products (goods and services) is accomplished with an endless sequence of discrete, repetitive, motions that offer the worker little psychological satisfaction for “a job well done.”

These statements are true not only of assembly-line manufacturing. They’re also true of much “white collar” work — certainly routine office work and even a lot of research work that requires advanced degrees in scientific and semi-scientific disciplines (e.g., economics). They are certainly true of “blue collar” work that is rote, and in which the worker has no ownership stake.

There’s a relevant post at West Hunter which is short enough to quote in full:

Many have noted how difficult it is to persuade hunter-gatherers to adopt agriculture, or more generally, to get people to adopt a more intensive kind of agriculture.

It’s worth noting that, given the choice, few individuals pick the more intensive, more ‘civilized’ way of life, even when their ancestors have practiced it for thousands of years.

Benjamin Franklin talked about this. “When an Indian Child has been brought up among us, taught our language and habituated to our Customs, yet if he goes to see his relations and makes one Indian Ramble with them, there is no persuading him ever to return. [But] when white persons of either sex have been taken prisoners young by the Indians, and lived a while among them, tho’ ransomed by their Friends, and treated with all imaginable tenderness to prevail with them to stay among the English, yet in a Short time they become disgusted with our manner of life, and the care and pains that are necessary to support it, and take the first good Opportunity of escaping again into the Woods, from whence there is no reclaiming them.”

The life of the hunter-gatherer, however fraught, is less rationalized than the kind of life that’s represented by intensive agriculture, let alone modern manufacturing, transportation, wholesaling, retailing, and office work.

The hunter-gatherer isn’t a cog in a machine, he is the machine: the shareholder, the co-manager, the co-worker, and the consumer, all in one. His work with others is truly cooperative. It is like the execution of a game-winning touchdown by a football team, and unlike the passing of a product from stage to stage in an assembly line, or the passing of a virtual piece of paper from computer to computer.

The hunter-gatherer’s social milieu was truly societal:

Hunter-gatherer bands in the [Pleistocene] were in the range of 25 to 150 individuals: men, women, and children. These small bands would have sometimes formed larger agglomerations of up to a few thousand for the purpose of mate-seeking and defense, but this would have been unusual. The typically small size for bands meant that interactions within the group were face-to-face, with everyone knowing the name and something of the reputation and character of everyone else. Though group members would have engaged in some specialization of labor beyond the normal sex distinctions (men as hunters, women as gatherers), specialization would not have been strict: all men, for example, would haft adzes, make spears, find game, kill, and dress it, and hunt in bands of ten to twenty individuals. [From Denis Dutton’s review of Paul Rubin’s Darwinian Politics: The Evolutionary Origin of Freedom.]

Nor is the limit of 150 unique to hunter-gatherer bands:

[C]ommunal societies — like those our ancestors lived in, or in any human group for that matter — tend to break down at about 150. Such is perhaps due to our limited brain capacity to know any more people that intimately, but it’s also due to the breakdown of reciprocal relationships like those discussed above — after a certain number (again, around 150).

A great example of this is given by Richard Stroup and John Baden in an old article about communal Hutterite colonies. (Hutterites are sort of like the Amish — or more broadly like Mennonites — but settled in different areas of North America.) Stroup, an economist at Montana State University, shared with me his Spring 1972 edition of Public Choice, wherein he and political scientist John Baden write:

In a relatively small colony, the proportional contribution of each member is greater. Likewise, surveillance of him by each of the others is more complete and an informal accounting of contribution is feasible. In a colony, there are no elaborate systems of formal controls over a person’s contribution. Thus, in general, the incentive and surveillance structures of a small or medium-size colony are more effective than those of a large colony and shirking is lessened.

Interestingly, according to Stroup and Baden, once the Hutterites reach Magic Number 150, they have a tradition of breaking off and forming another colony. This idea is echoed in Gladwell’s The Tipping Point, wherein he discusses successful companies that use 150 in their organizational models.

Had anyone known about this circa 1848, someone might have told Karl Marx that his theory [communism] could work, but only up to the Magic Number. [From Max Borders’s “The Stone Age Trinity“.]

What all of this means, of course, is that for the vast majority of people there’s no going back. How many among us are willing — really willing — to trade our creature comforts for the “simple life”? Few would be willing when faced with the reality of what the “simple life” means; for example, catching or growing your own food, dawn-to-post-dusk drudgery, nothing resembling culture as we know it (high or low), and lives that are far closer to nasty, brutish, and short than today’s norms.

Given that, it is important (nay, crucial) to cultivate an inner life of intellectual or spiritual satisfaction. Only that inner life — and the love and friendship of a small circle of fellows — can hold alienation at bay. Only that inner life — and love and close friendships — can give us serenity as civilization crumbles around us.

(See also “Alienation” and “Another Angle on Alienation“.)

Consequentialism

According to consequentialism, an act (or a refusal to act in a particular way) should be judged by its consequences. But consequences can only follow from an act (or lack of action). So consequentialism is really founded on hope, with perhaps some justification in experience — if certain types of act (or inaction) are known to have certain consequences.

But even the simplest acts — those with a direct connection between their commission and the desired outcomes — can have unforeseen and unwanted consequences. Murder committed in the heat of the moment, but with the intention to kill, may land the murderer in prison or an execution chamber, neither of which outcome the murderer had in mind when he pulled the trigger or plunged the knife into his victim. Less dramatically, the outcome of a marriage proposal may — and usually does — lead not only to marital bliss (though perhaps not for as long as intended) but also to complications that hadn’t been contemplated (e.g., the raising of difficult children, financial stress, affairs, and other irritants large and small that strain the marriage).

Governance on the basis of consequentialism has proven time and time again to be foolish, when not treacherous. Social Security, for example, which was meant to be a boon to indigent old people, has become a vast, economically draining, disincentivizing middle-to-lower-class welfare program. Social Security led to other vast and wasteful schemes, including Medicare and Medicaid and their expansion through Obamacare, whose proponents made this fraudulent promise:

If you like the plan you have, you can keep it. If you like the doctor you have, you can keep your doctor, too. The only change you’ll see are falling costs as our reforms take hold.

Even successful wars — World War II, notably and uniquely in the American experience — have led to massive waste in lives and treasure. An annotated list of the ill-conceived and mis-conceived government ventures in the history of the United States would (and does) fill volumes.

I am not suggesting that persons refrain from taking action for fear that things won’t turn out as hoped for. (Government is entirely a different matter, and if most of it were abolished Americans would be far better off than they are.) What I am saying is that judging an action by its consequences can be done only after the fact, when all of the ramifications of the action have played out. Moreover, and crucially, similar actions can have wildly different consequences.

In sum: Consequentialism is an empty philosophical construct. “Good” consequences justify the actions that lead to them, but the actions have already been taken, and similar actions often have different consequences.

The Enlightenment’s Fatal Flaw

The fatal flaw is the reliance on reason. As Wikipedia puts it,

The Enlightenment included a range of ideas centered on reason as the primary source of knowledge….

Where reason is

the capacity of consciously making sense of things, establishing and verifying facts, applying logic, and adapting or justifying practices, institutions, and beliefs based on new or existing information.

So much of life is — of necessity — conducted in a realm beyond “reason”, where instincts and customs come into play in a universe that is but dimly understood.

By contrast, as the Wikipedia article admits, the Enlightenment — like its contemporary manifestations in pseudo-science (e.g., Malthusianism, Marxism, “climate change”), politics (e.g., “social justice”), and many other endeavors — relies on reductionism, which is

the practice of oversimplifying a complex idea or issue to the point of minimizing or distorting it.

Reason relies on verbalization (or its mathematical equivalent), but words (and numbers) fail us:

Love, to take a leading example, is a feeling that just is. The why and wherefore of it is beyond our ability to understand and explain. Some of the feelings attached to it can be expressed in prose, poetry, and song, but those are superficial expressions that don’t capture the depth of love and why it exists.

The world of science is of no real help. Even if feelings of love could be expressed in scientific terms — the action of hormone A on brain region X — that would be worse than useless. It would reduce love to chemistry, when we know that there’s more to it than that. Why, for example, is hormone A activated by the presence or thought of person M but not person N, even when they’re identical twins?

The world of science is of no real help about “getting to the bottom of things.” Science is an infinite regress. S is explained in terms of T, which is explained in terms of U, which is explained in terms of V, and on and on. For example, there was the “indivisible” atom, which turned out to consist of electrons, protons, and neutrons. But electrons have turned out to be more complicated than originally believed, and protons and neutrons have been found to be made of smaller particles with distinctive characteristics. So it’s reasonable to ask if all of the particles now considered elementary are really indivisible. Perhaps there are other more-elementary particles yet to be hypothesized and discovered. And even if all of the truly elementary particles are discovered, scientists will still be unable to explain what those particles really “are.”

Reason is valuable when it consists of the narrow application of logic to hard facts. But it has almost nothing to do with most of life — and especially not with politics.

Just as words fail us, so have the Enlightenment and much of what came in its wake.

As exemplified by this “child of the enlightenment”:

[Image: “Child of the enlightenment”]

(See also “In Praise of Prejudice” and “We, the Children of the Enlightenment“.)