Natural Law, Positive Law, and Rights

There is a ten-part exchange about natural law vs. positive law at Political Questions, the blog of Steven Hayward. (Links are at the bottom of this post.) The exchange pits Hayward and Linda Denno (a.k.a. Lucretia) against John Yoo. Hayward and Denno defend natural law; Yoo defends positive law. The exchange is entertaining but thus far disappointing — to this lay person, at least.

To overcome my disappointment, I am offering my own version of the controversy. It is more straightforward and conclusive than the rather meandering (albeit enjoyable) repartee offered up by Hayward, Denno, and Yoo.

A law, in the realm of human endeavor, is a rule that arises from one or more sources:

  • It may be “ingrained” in human nature, either divinely or through natural processes. Such a rule would be universal and self-enforcing if all human beings were “wired” identically. But they are not. (Think of two squabbling siblings with different conceptions of fairness.)
  • It may be a cultural norm, where culture includes religious belief. Here, it becomes obvious that universality is unattainable. (Think of Muslims and Jews.)
  • It may be established and enforced by a powerful entity (e.g., a parent, the state). In this instance, it may mimic a norm that is either “ingrained” or cultural in origin (e.g., the prohibition of murder).

The first two points address “natural” law, which has an uncertain provenance. The third point addresses positive law. It is evident that “natural” law cannot be universal or self-enforcing. Even if it were universal in principle (self-enforcement is patently impossible), most of mankind doesn’t agree about its tenets. Positive law is therefore essential to the functioning of most human groupings, from nations to families.

Having disposed of the main issue, I will venture into the question of rights, which are implicit in law. For example, if there is a law (“natural” or positive) against the murder of blameless persons, all blameless persons must have the right not to be murdered.

Does a right inhere in a person, or is a right an obligation on the part of all persons? In the case of positive law, the answer is obvious: The right not to be murdered is a legal construction that imposes an obligation on all persons.

Regarding “natural” law, I argue at length here that rights do not inhere in persons — at least not through the operation of reproductive processes.

What about through the operation of cultural processes? Culture is an artifact of the inculcation of beliefs and behavior, adherence to which signifies cultural kinship. The belief that something is a right is a cultural artifact grounded in human nature. Thus the commonality of certain rights (e.g., the last six of the Ten Commandments) across many or most cultures.

I put it to you that the (almost) universal desire to live a life in which one is not a victim of murder, theft, etc., is the basis of rights. But what such rights signify are only desires that no law — “natural” or positive — can guarantee.

That observation is consistent with the accumulation of rights (in the West) over the past century-and-a-half. Until the rise of “progressivism” — with its ever-expanding list of rights intended for the attainment of “racial justice”, “social justice”, “equity” and the like — rights were negative in the main. Negative rights require (and even demand) no action on the part of others (e.g., abstention from murder and theft). “Progressivism” gave birth to a panoply of positive rights, through which government usurps private charity (the receipt of which is not a right) and imposes costs on the general public for the benefit of selected recipients. The worthiness of those recipients is determined by arbitrary measures, including (but far from limited to) skin color, record of criminality, lack of intelligence or aptitude for a job, and illegal residence in the United States.

It is true that a person cannot live without food, clothing, shelter, or the effective functioning of myriad biological processes. That fact, like the inevitability of death (but not taxes), is a truly natural law. But that law belies the existence of “natural” rights. If there is no right to live forever, there is no right to anything that enables living at all.

In a post that is now eight years old, I quote the late Jazz Shaw, who penned this:

If we wish to define the “rights” of man in this world, they are – in only the most general sense – the rights which groups of us agree to and work constantly to enforce as a society. And even that is weak tea in terms of definitions because it is so easy for those “rights” to be thwarted by malefactors. To get to the true definition of rights, I drill down even further. Your rights are precisely what you can seize and hold for yourself by strength of arm or force of wit. Anything beyond that is a desirable goal, but most certainly not a right and it is obviously not permanent. [“On the Truth of Man’s Rights under Natural Law“, Hot Air, March 29, 2015]

Amen.


Here are the links, in chronological order:

https://stevehayward.substack.com/p/thomas-jefferson-versus-jeremy-bentham

https://stevehayward.substack.com/p/is-natural-law-jurisprudence-just

https://stevehayward.substack.com/p/the-natural-law-vs-positivism-debate

https://stevehayward.substack.com/p/the-natural-law-vs-positivism-debate-174

https://stevehayward.substack.com/p/the-natural-law-vs-positivism-debate-37f

https://stevehayward.substack.com/p/natural-law-vs-positivism-debate

https://stevehayward.substack.com/p/natural-law-vs-positivism-chapter

https://stevehayward.substack.com/p/natural-law-vs-positivism-chapter-cb7

https://stevehayward.substack.com/p/natural-law-vs-positivism-chapter-335

https://stevehayward.substack.com/p/natural-law-vs-positivism-chapter-3f5

The Black-White Achievement Gap and Its Parallel in the Middle East

THE ACHIEVEMENT GAP

My post, “Race and Reason: The Achievement Gap — Causes and Implications”, long predates the summer of Black Lives Matter riots and the ensuing effort to whitewash (dare I say that?) the deep-seated and insurmountable differences between blacks and other racial-ethnic groups.

A key element of that post, but by no means the only key element, is the persistent intelligence gap, which is perhaps best measured by SAT math scores. I introduce in evidence the misnamed and misguided “Race Gaps in SAT Scores Highlight Inequality and Hinder Upward Mobility” (Brookings, February 1, 2017).

The report is misnamed and misguided because inequality and upward mobility are the result of inherent differences in intelligence, not causes of those differences. (There is, of course, a feedback mechanism at work, but it rests on lower average intelligence among blacks.) Here is the meat of the report:

The mean score on the math section of the SAT for all test-takers is 511 out of 800, the average scores for blacks (428) and Latinos (457) are significantly below those of whites (534) and Asians (598). The scores of black and Latino students are clustered towards the bottom of the distribution, while white scores are relatively normally distributed, and Asians are clustered at the top:

Race gaps on the SATs are especially pronounced at the tails of the distribution. In a perfectly equal distribution, the racial breakdown of scores at every point in the distribution would mirror the composition of test-takers as whole i.e. 51 percent white, 21 percent Latino, 14 percent black, and 14 percent Asian. But in fact, among top scorers—those scoring between a 750 and 800—60 percent are Asian and 33 percent are white, compared to 5 percent Latino and 2 percent black. Meanwhile, among those scoring between 300 and 350, 37 percent are Latino, 35 percent are black, 21 percent are white, and 6 percent are Asian:

The College Board’s publicly available data provides data on racial composition at 50-point score intervals. We estimate that in the entire country last year at most 2,200 black and 4,900 Latino test-takers scored above a 700. In comparison, roughly 48,000 whites and 52,800 Asians scored that high. The same absolute disparity persists among the highest scorers: 16,000 whites and 29,570 Asians scored above a 750, compared to only at most 1,000 blacks and 2,400 Latinos. (These estimates—which rely on conservative assumptions that maximize the number of high-scoring black students, are consistent with an older estimate from a 2005 paper in the Journal of Blacks in Higher Education, which found that only 244 black students scored above a 750 on the math section of the SAT.) …

Disappointingly, the black-white achievement gap in SAT math scores has remained virtually unchanged over the last fifteen years. Between 1996 and 2015, the average gap between the mean black score and the mean white score has been .92 standard deviations. In 1996 it was .9 standard deviations and in 2015 it was .88 standard deviations. This means that over the last fifteen years, roughly 64 percent of all test-takers scored between the average black and average white score.

Note that the black-white gap shown in the third figure is inconsistent with the difference between the white and black means. The gap doesn’t shrink in the 2010s. Note also the following observations by the authors of the report:

The ceiling on the SAT score may … understate Asian achievement. If the exam was redesigned to increase score variance (add harder and easier questions than it currently has), the achievement gap across racial groups could be even more pronounced.

A standardized test with a wider range of scores, the LSAT, offers some evidence on this front. An analysis of the 2013-2014 LSAT finds an average black score of 142 compared to an average white score of 153. This amounts to a black-white achievement gap of 1.06 standard deviations, even higher than that on the SAT….

[T]here is a possibility that the SAT is racially biased, in which case the observed racial gap in test scores may overstate the underlying academic achievement gap. But most of the concerns about bias relate to the verbal section of the SAT, and our analysis focuses exclusively on the math section….

Finally, [these] data [are] limited in that [they] doesn’t allow us to disentangle race and class as drivers of achievement gaps. It is likely that at least some of these racial inequalities can be explained by different income levels across race….

However, a 2015 research paper from the Center for Studies in Higher Education at the University of California, Berkeley shows that between 1994 and 2011, race has grown more important than class in predicting SAT scores for UC applicants. While it is difficult to extrapolate from such findings to the broader population of SAT test-takers, it is unlikely that the racial achievement gap can be explained away by class differences across race.

In fact, my post includes hard evidence (from earlier data) that the race gaps persist across income levels. That is, blacks are less intelligent on average than whites (and others) in the same income bracket.
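For a concrete sense of what a gap of roughly 0.9 standard deviation means, here is a minimal sketch in Python. It is illustrative only: it assumes roughly normal score distributions with a common spread, takes the group means quoted above (534 and 428), and assumes a standard deviation of about 115 points so that the 106-point difference works out to roughly 0.9 SD, as the report states.

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Means quoted above; the common standard deviation of ~115 points is an
# assumption chosen so that the 106-point difference is roughly 0.9 SD.
white_mean, black_mean, sd = 534, 428, 115

gap_in_sd = (white_mean - black_mean) / sd
print(f"Standardized gap: {gap_in_sd:.2f} SD")

# Under these (strong) assumptions, the lower group's mean sits near the
# 18th percentile of the higher group's distribution.
print(f"Percentile of the lower mean in the higher distribution: {normal_cdf(-gap_in_sd):.0%}")
```

Nothing in that sketch depends on the data beyond the two means and an assumed spread; it is simply a way of translating “0.9 standard deviation” into a percentile.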

The evidence notwithstanding (because it is ignored and twisted), the current dogmas (critical race theory, or CRT; diversity, equity, and inclusion, or DEI) insist that white culture — including the tenet of racial equality under the law and the importance of dispassionate, scientific inquiry — must be rejected because it is all tainted with racism. Rejection means the suppression of whites and white culture so that blacks may reach their true potential.

The true potential of blacks is determined by their intelligence and their culture. Blacks, on average, are less intelligent than whites, and black culture (in America) fosters violence, disdain for education, and family dysfunction to a greater degree than is true for whites, on average. (But that, somehow, is whitey’s fault.)

Where will this lead? Right where Dov Fischer predicts it will lead:

[T]he same disadvantaged groups who today rely on blaming instead of self-help will then be at the same exact rung on the social order that they are today, just as 50 years of racism-free society and Great Society “entitlements” have not accomplished equality of results today, even as newcomers from Asia entered this country these past 50 and 60 years and leap-frogged those already here.

Blacks, on the whole, are not where they are because of whitey, but because of their genes and culture. But whites (and East Asians) will nevertheless be burdened and suppressed for the sake of “equity”.

THE PARALLEL IN THE MIDDLE EAST

A parallel to the racism of CRT and DEI is the claim that Israel is the aggressor and that Palestinians are its victims. They are “obviously” (to a leftist) victims of Israel because they live (mostly) in territories that were or are controlled by Israel: the Gaza Strip and the West Bank. There is also Israel’s controversial nation-state law, which, according to an anti-Zionist source,

declare[s] that only Jews have the right of self-determination in the country, something members of the Arab minority called racist and verging on apartheid.

The “nation-state” law, backed by the right-wing government, passed by a vote of 62-55 and two abstentions in the 120-member parliament after months of political argument. Some Arab lawmakers shouted and ripped up papers after the vote.

“This is a defining moment in the annals of Zionism and the history of the state of Israel,” Prime Minister Benjamin Netanyahu told the Knesset after the vote.

Largely symbolic [not observed in practice], the law was enacted just after the 70th anniversary of the birth of the state of Israel. It stipulates that “Israel is the historic homeland of the Jewish people and they have an exclusive right to national self-determination in it.”

The usual suspects have declared that the separation of Palestinians (and other Arabs) from Israel and from its governance amounts to apartheid, a crime against “international (leftist) law”. From this it follows (to a leftist) that acts of savagery — like those committed by Hamas against Israelis on October 7, 2023 — are justified because Palestinians are victims. It further follows (to a leftist) that Israel’s fully justified prophylactic retaliation is aggression.

Let’s examine those premises.

It’s true enough that the denizens of Gaza are poor and feckless. But what makes them victims? They are about on a par with most non-Jewish Semites, except for the relative handful who are members of a professional caste or oil-rich. The basic problem with most such members of humanity — whether in the Middle East or elsewhere — isn’t that they have been put down by anyone (e.g., Israelis) but that they lack the intellectual and cultural resources to raise themselves up.

A “funny” thing about Israel is that it is of a piece with the surrounding lands occupied by non-Jews, but it has become a relatively prosperous place because Jews have the intellectual and cultural resources that undergird prosperity.

For their success and for the fact that they are Jews, the Jews of Israel are the targets of envious failures (Palestinian and Muslim hot-heads), Jew-haters (in addition to the hot-heads), and leftists who can’t bear to think that Palestinians (like American blacks) mainly have their genes and culture to blame for their failings.

Israel was founded (re-founded, really) in the wake of the Holocaust. It wasn’t meant to be anything but a Jewish state, run by Jews for Jews — a safe haven for a historically oppressed and brutalized people. Israel has been under attack by Muslims since its modern founding. But Israel has a natural right to self-defense and to preserve its Jewish character, just as the American rebels of 1776 had a natural right to declare themselves independent of a despotic King and rapacious Parliament.

So, the Gaza Strip and the West Bank became de jure parts of Israel (though the Gaza Strip was foolishly handed over to anti-Israelis) as a matter of the natural right of self-defense. As for Palestinians living in Israel proper, why should they have a voice in the governance of a country that was re-founded for Jews? Well, despite the nation-state law cited above, they do have a voice in the governance of Israel.

So, all of the leftist propaganda to the contrary notwithstanding, Palestinians aren’t victims, except of their own genes and culture. But unlike the “liberals” of old, the left nowadays focuses on demonizing the meritorious instead of helping those who need help.

Finally, there is the accusation that Israelis are sometimes brutal in their own defense. Why shouldn’t they be? It’s impossible to defend a country against determined enemies without being brutal toward them — just as it’s impossible to protect law-abiding citizens by disarming them and depriving them of police protection. The left sees Israelis as brutal because the left despises Israeli Jews, not because Israelis are any more brutal than leftists like Hitler, Stalin, Mao, and their many successors unto the present day.

Coda: There was a time when most Americans recognized the need for superior force as a bulwark of the defense of America. It is a sad commentary on the state of America that a sizable and influential segment of the populace no longer believes that America is worth defending tooth and nail. (Just look at the response to the rise of China, our trading “partner”.)

Those pampered idiots believe that it is necessary for America to be taken down a peg or two, and that it is wrong for America (but not other nations) to excel. As beneficiaries of American exceptionalism, they are able to spout such nonsense and to tear down the very laws and traditions that shelter them from the wrath of the rightfully aggrieved victims of their beliefs and policies: hard-working, tax-paying, law-abiding victims of unfettered illegal immigration, unnecessary Covid lockdowns, prosperity-draining regulations and regulatory agencies, unnecessarily high energy costs because of the climate-change hoax, etc., etc., etc.

Second Thoughts

I read here that Angus Deaton, a Nobel laureate in economics and an eminent economist at Princeton University, has changed his mind about a few hot topics. One of them is globalization:

I am much more skeptical of the benefits of free trade to American workers and am even skeptical of the claim, which I and others have made in the past, that globalization was responsible for the vast reduction in global poverty over the past 30 years. I also no longer defend the idea that the harm done to working Americans by globalization was a reasonable price to pay for global poverty reduction because workers in America are so much better off than the global poor. I believe that the reduction in poverty in India had little to do with world trade. And poverty reduction in China could have happened with less damage to workers in rich countries if Chinese policies caused it to save less of its national income, allowing more of its manufacturing growth to be absorbed at home. I had also seriously underthought my ethical judgments about trade-offs between domestic and foreign workers. We certainly have a duty to aid those in distress, but we have additional obligations to our fellow citizens that we do not have to others.

Another is immigration:

I used to subscribe to the near consensus among economists that immigration to the US was a good thing, with great benefits to the migrants and little or no cost to domestic low-skilled workers. I no longer think so. Economists’ beliefs are not unanimous on this but are shaped by econometric designs that may be credible but often rest on short-term outcomes. Longer-term analysis over the past century and a half tells a different story. Inequality was high when America was open, was much lower when the borders were closed, and rose again post Hart-Celler (the Immigration and Nationality Act of 1965) as the fraction of foreign-born people rose back to its levels in the Gilded Age. It has also been plausibly argued that the Great Migration of millions of African Americans from the rural South to the factories in the North would not have happened if factory owners had been able to hire the European migrants they preferred.

I hope that Professor Deaton’s example will be emulated by many more academics. Not just with respect to the issues that he addresses in his essay but also with respect to the many other issues where academics — and so-called intellectuals — have abetted dangerous and/or costly errors (to which I will come).

There was a time when it was considered sound thinking to gather evidence — facts, not opinions or talking points — and to base judgments and policy recommendations on the evidence. That time is past, though not irretrievably. Professor Deaton’s epiphanies are proof of the possibility that science, in all its forms, might once again become evidence-based. That’s not to say that there is never room for disagreement. There always is. But science, properly done, advances because of disagreement. It stagnates and regresses when dogma replaces debate.

But that is what has happened in so many fields of inquiry. Science, in too many fields, has become captive to “scientists” who put their preconceptions ahead of the evidence and who howl for the heads of heretics. It is dispiriting to know how many so-called scientists have become willing and eager handmaidens of wokeness. (Prostitution takes many forms.)

Thus the dangerous and/or costly errors, of which these are leading examples:

  • The “war” on “climate change” is making Americans and Europeans generally poorer and less comfortable.
  • The unnecessarily draconian response to Covid-19 made billions of people poorer, less well educated, and uncomfortable in their daily lives. It also had a lot to do with the rampant inflation of recent years, which will never be rolled back.
  • The LGBTQ/non-binary craze is causing parents and young adults to do things to their children and themselves that will cause them much misery for years to come, if not forever.
  • The anti-racism craze has been endorsed by “scientists” and “scientific organizations” (as well as elites, pundits, and politicians). The main result is that “persons of color” get “bonus points” which enable them to commit crimes with impunity, acquire jobs for which they aren’t qualified, and gain entrance to colleges and graduate schools despite being less intelligent than the whites and Asians with whom they are competing. The cost in social comity and inferior products and services (e.g., surgery) may be subtle, but it is real and long-lasting. And it will get worse as long as wokeness prevails in high places.

Needless to say, the politicians and wealthy elites who favor such things are well insulated from the dire effects. Among the wealthy elites are tens of thousands of academics at top-tier and even second-rate universities who rake in money and dispense lunacy.

Zeno Revisited (Again)

Bill Vallicella (Maverick Philosopher) asserts that

No one has successfully answered Zeno’s Paradoxes of Motion.  (No, kiddies, Wesley Salmon did not successfully rebut them; the ‘calculus solution’  is not a definitive (philosophically dispositive) solution.)

The link in the quoted passage leads to a post from 2009 in which BV addresses Zeno’s Regressive Dichotomy:

The Regressive Dichotomy is one of Zeno’s paradoxes of motion. How can I get from point A, where I am, to point B, where I want to be? It seems I can’t get started.

A_______1/8_______1/4_______________1/2_________________________________ B

To get from A to B, I must go halfway. But to travel halfway, I must first traverse half of the halfway distance, and thus 1/4 of the total distance. But to do this I must move 1/8 of the total distance. And so on. The sequence of runs I must complete in order to reach my goal has the form of an infinite regress with no first term:

. . . 1/16, 1/8, 1/4, 1/2, 1.

Since there is no first term, I can’t get started.

Zeno’s paradox rests on the assumption that a first step is an infinitesimal fraction of the distance to be traversed, a fraction that can never be resolved mathematically. But that is an obviously false and arbitrary assumption.

Zeno, had he been less provocative (and more mundane), would have observed that the first step in going from A to B is an arbitrary distance that depends on the stride of the traveler; it has nothing to do with the total distance to be traversed.

Thus, the distance from point A to point B can be traversed in x strides, where

x = d/l

and

d = distance from A to B

l = average length of stride.
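To make the arithmetic concrete, here is a hypothetical example (the numbers are mine, chosen purely for illustration): a distance of 100 meters covered in strides averaging 0.8 meters requires

\[
x = \frac{d}{l} = \frac{100\ \text{meters}}{0.8\ \text{meters per stride}} = 125\ \text{strides},
\]

with no infinite regress anywhere in sight.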

That’s all f-f-folks.


See also “Achilles and the Tortoise Revisited”.

Conservatism and Happiness

The most-viewed post in the history of this blog is “Intelligence, Personality, Politics, and Happiness” (January 4, 2011). It ends with this:

If you are very intelligent — with an IQ that puts you in the top 2 percent of the population — you are most likely to be an INTJ, INTP, ENTJ, ENTP, or INFJ, in that order. Your politics will lean heavily toward libertarianism or small-government conservatism. You probably vote Republican most of the time because, even if you are not a card-carrying Republican, you are a staunch anti-Democrat. And you are a happy person because your expectations are not constantly defeated by reality.

If I were of a mind to rewrite the post, I would amend the final sentence to read:

And you are a happy person because your intelligence and personality make you self-confident and self-reliant. You are your own master (even though you pay taxes and obey laws out of prudence), not an emotional slave to the purveyors of political, economic, and scientific lies that are aimed at giving them power over your mind and your votes.

I remembered my old post and devised its new ending after reading Luke Conway’s “The Curious Case of Conservative Happiness” (The American Spectator, November 17, 2023). Social scientists (Conway is one) have found repeatedly that conservatives are happier than “liberals”. But why? Conway explains:

There are two extant theories. The first and more influential theory is that conservatives are comfortable with inequality and unconcerned with societal fairness. This tendency towards “system justifying” attitudes — attitudes that make conservatives insensitive to the needs of groups suffering in society — tends to serve as a buffer against bad stuff in their world. It makes them believe they live in a world where their group is on top of a system that is totally fair and justifies their own ethnic and political biases. As a result of this set of system justification blinders, conservatives believe they are at the top of a good system — and that is why they are happier than the lower-status, accurate, compassionate liberals….

The second theory is that conservatism tends to promote good psychological adjustment. As far back as 2012, social psychology researchers have suggested that conservatives were happier because conservative ideology is associated with personal agency, religiosity, optimism, emotional stability, and other variables that in turn are associated with positive psychological adjustment. In the words of Schlenker and colleagues: “Conservatives appear to have qualities that are traditionally associated with positive adjustment and mental health. When we examined established measures of personal agency, positive outlook, and transcendent moral beliefs (i.e., religiosity, moral commitment, tolerance of transgressions), we found ideological differences that accounted for the happiness gap.”…

… Part of the problem with past research is that it tends to conflate nasty-sounding “system justification” beliefs with perfectly healthy beliefs that would lead to good outcomes without any system-justifying component. For example, one of Napier and Jost’s primary measurements of system justifying beliefs was a single item anchored by “hard work doesn’t generally bring success, it’s more a matter of luck” on one end and “in the long run, hard work usually brings a better life” at the other.

Stop reading for one minute and think about that. In their view, believing that hard work usually is associated with success makes you a “system justifier” because that belief inherently blames people for the bad outcomes they get. But I’m not so sure about the immutability of that association. While it is possible that belief in hard work can be system-justifying, it need not be so. “If I work hard to prune this tree, it will be more likely to grow fruit” does not seem especially system justifying, as it doesn’t necessarily involve any social systems. Indeed, the two things are conceptually orthogonal. I might believe that hard work generally leads to good outcomes and yet believe that nonetheless this occurs in spite of admitted societal unfairness….

Thus, while it is certainly possible for someone to hold a belief in hard work to blame others’ failures on their lack of hard work, it need not be so. And there is no denying that believing in hard work also produces agency — the belief that one can make a difference — which is psychologically healthy….

So what happens when we try to separate the psychological adjustment and system justification models? Several years ago, our lab conducted a set of 5 studies to evaluate that question. We pitted the system justification theory against the psychological adjustment theory….

First, we found that direct measurements of a desire for social group inequality — the hallmark of the system justification explanation, a variable called “Social Dominance” — did not explain why conservatives were happy at all….

Second, three variable sets associated with psychological adjustment — religiosity, belief in hard work/achievement, and anti-entitlement attitudes — were good predictors of conservative happiness….

Third, Jost and Napier’s System Justification Scale, which is essentially a measurement of the degree that Americans believe American society is a good place, was in fact one of the better predictors of conservative happiness across our five studies. However, the system justification scale was also related to beliefs generally associated with psychological adjustment (hard work, religiosity). So even though the “system justification” scale explained part of conservative happiness, this is not overwhelmingly good evidence for the nastier implications of the system justification model. At worst for conservatives, it means that they are happier in part because living in a society they like makes them happy….

[R]esearchers often completely miss emphasizing the positive benefits of self-control, religion, hard work, and mental toughness in helping people deal with life’s challenges. In this omission, they do not largely fail conservatives — who are presumably doing those things anyway — they rather fail their liberal constituents by not equipping them with legitimate psychological tools for well-being….

Second, and more insidiously, this perspective simply mis-characterizes conservatives as uncaring people who, like rich autocrats stealing from the poor people they rule, gain their happiness at the expense of their lesser brethren….

For example, a recent four-study article illustrated that conservatives in both the United States and the United Kingdom actually show more empathy to their political enemies than liberals do. In the words of the authors: “conservatives consistently showed more empathy to liberals than liberals showed to conservatives.”

As I said: self-confidence and self-reliance (agency) make for happiness. Conservatives tend to have more of those things than “liberals”, which frees them (conservatives) from financial and regulatory dependence on the state and from psychological dependence on purveyors of lies — including lies about conservatism.


Related posts:

Obama’s Big Lie

That Which Dare Not Be Named

The Poison of Ideology

Ideology, which drives political and social discourse these days, is

a set of doctrines or beliefs shared by the members of a social group or that form the basis of a political, economic, or other system.

Get that? An ideology comprises doctrines or beliefs, not hard-won knowledge or social and economic norms that have been tested in the acid of use. An ideology leads its believers down the primrose path of a “system” — a way of viewing and organizing the world that flows from a priori reasoning.

An ideology, because of its basis in doctrines or beliefs, puts something at its center — a kind of golden calf that is the ideology’s raison d’être. The something may be the dominance of an Aryan Third Reich; the dictatorship of the proletariat; the destruction of infidels; big government as the solution to social and economic ills; free markets at all costs regardless of the immorality that they may spawn; “social democracy” in which all matters of social and economic importance are to be decided by a majority of the elected representatives of an electorate that has been enfranchised for the purpose of arriving at the “right” decisions; stateless societies that (contrary to human nature) would be livable because disputes would be settled through contractual arrangements and private defense agencies; etc., etc., etc.

You might expect that the bankruptcy of ideological thinking would be obvious, given that there are so many mutually contradictory ideologies (see above). But that isn’t the way of the world. Human beings seem to be wired to want to believe in something. And even to suffer and die for that something.

That can be a good thing if the something is personal and benign; for example, the satisfaction of raising a child to be mannerly and conscientious; sustaining one’s marriage through trials and tribulations for the love, companionship, and contentment it affords; getting through personal suffering and sorrow without resort to behavior that is destructive of self or relations with others; believing in God and the tenets of a religion for one’s own sake and not for their use as weapons of judgement or vengeance; taking pride in work that is “real” and of direct and obvious benefit to others, however humdrum it may seem and how little skill it may require. In other words, living life as if it has meaning and isn’t just an existential morass to be tolerated until one dies or an occasion for wreaking vengeance on the world because of one’s own anxieties and failings.

What I have just sketched are the yearnings and tensions that modern man has acquired, bit by bit, as old certainties and norms have been undermined. Is it any wonder that so many people since the dawn of the Enlightenment — where modernity really began — have wanted to quit the “rat race” for a meaningful life? Not creating an empire, leading a conquering army, or founding a dynasty. Just doing something self-satisfying, like farming, owning a small business in a small town, or teaching children to play the piano.

Ironically, Voltaire, an icon of the Enlightenment, sums it up:

“I know also,” said Candide, “that we must cultivate our garden.”

“You are right,” said Pangloss, “for when man was first placed in the Garden of Eden, he was put there ut operaretur eum, that he might cultivate it; which shows that man was not born to be idle.”

“Let us work,” said Martin, “without disputing; it is the only way to render life tolerable.”

The whole little society entered into this laudable design, according to their different abilities. Their little plot of land produced plentiful crops. Cunegonde was, indeed, very ugly, but she became an excellent pastry cook; Paquette worked at embroidery; the old woman looked after the linen. They were all, not excepting Friar Giroflée, of some service or other; for he made a good joiner, and became a very honest man.

Pangloss sometimes said to Candide:

“There is a concatenation of events in this best of all possible worlds: for if you had not been kicked out of a magnificent castle for love of Miss Cunegonde: if you had not been put into the Inquisition: if you had not walked over America: if you had not stabbed the Baron: if you had not lost all your sheep from the fine country of El Dorado: you would not be here eating preserved citrons and pistachio-nuts.”

“All that is very well,” answered Candide, “but let us cultivate our garden.”

That is to say,

the main virtue of Candide’s garden is that it forces the characters to do hard, simple labor. In the world outside the garden, people suffer and are rewarded for no discernible cause. In the garden, however, cause and effect are easy to determine—careful planting and cultivation yield good produce. Finally, the garden represents the cultivation and propagation of life, which, despite all their misery, the characters choose to embrace.

And so should we all.


Related posts:

Alienation
Another Angle on Alienation
An Antidote to Alienation

What Do Wokesters Want?

I am using “wokesters” as a convenient handle for persons who subscribe to a range of closely related movements, which include but are not limited to wokeness, racial justice, equity, gender equality, transgenderism, social justice, cancel culture, environmental justice, and climate-change activism. It is fair to say that the following views, which might be associated with one or another of the movements, are held widely by members of all the movements (despite the truths noted parenthetically):

Race is a social construct. (Despite strong scientific evidence to the contrary.)

Racism is a foundational and systemic aspect of American history. (Which is a convenient excuse for much of what follows.)

Racism explains every bad thing that has befallen people of color in America. (Ditto.)

America’s history must be repudiated by eradicating all vestiges of it that glorify straight white males of European descent. (Because wokesters are intolerant of brilliance and success if it comes from straight white males of European descent.)

The central government (when it is run by wokesters and their political pawns) should be the sole arbiter of human relations. (Replacing smaller units of government, voluntary contractual arrangements, families, churches, clubs, and other elements of civil society through which essential services are provided, economic wants are satisfied efficiently, and civilizing norms are inculcated and enforced; excepted, of course, are those institutions that are dominated by wokesters or their proteges.)

[You name it] is a human right. (Which — unlike true rights, which all can enjoy without cost to others — must be provided at cost to others.)

Economics is a zero-sum game; the rich get rich at the expense of the poor. (Though the economic history of the United States — and the Western world — says otherwise. The rich get rich — often rising from poverty and middling circumstances — by dint of effort and risk-taking, and in the process produce things of value for others while also enabling them to advance economically.)

Profit is a dirty word. (But I — the elite lefty who makes seven figures a year, thank you — deserve every penny of my hard-earned income.)

Sex (a.k.a. “gender”) is assigned arbitrarily at birth. (Ludicrous.)

Men can bear children. (Ditto.)

Women can have penises. (Ditto.)

Gender dysphoria in some children proves the preceding points. (Ditto.)

Children can have two mommies, two daddies, or any combination of parents in any number and any gender. And, no, they won’t grow up anti-social for lack of traditional father (male) and mother (female) parents. (Just ask blacks who are unemployed for lack of education and serving prison time after having been raised without bread-winning fathers.)

Blacks, on average, are at the bottom of income and wealth distributions and at the top of the incarceration distribution — despite affirmative action, subsidized housing, welfare payments, etc. — because of racism. (Not because blacks, on average, are at the bottom of the intelligence distribution and have in many black communities adopted and enforced a culture that promotes violence and denigrates education?)

Black lives matter. (More than other lives? Despite the facts adduced above?)

Police are racist Nazis and ought to be de-funded. (So that law-abiding blacks and other Americans can become easier targets for rape, murder, and theft.)

Grades, advanced placement courses, aptitude tests, and intelligence tests are racist devices. (Which happen to enable the best and brightest — regardless of race, sex, or socioeconomic class — to lead the country forward scientifically and economically, to the benefit of all.)

The warming of the planet by a couple of degrees in the past half-century (for reasons that aren’t well understood but which are attributed by latter-day Puritans to human activity) is a sign of things to come: Earth will warm to the point that it becomes almost uninhabitable. (Which is a case of undue extrapolation from demonstrably erroneous models and a failure to credit the ability of capitalism — gasp! — to adapt successfully to truly significant climatic changes.)

Science is real. (Though we don’t know what science is, and believe things that are labeled scientific if we agree with them. We don’t understand, or care, that science is a process that sometimes yields useful knowledge, or that the “knowledge” is always provisional, always in doubt, and sometimes wrong. We support the movement of recent decades to label some things as scientific that are really driven by a puritanical, anti-humanistic agenda, and which don’t hold up against rigorous, scientific examination, such as the debunked “science” of “climate change”; the essential equality of the races and sexes, despite their scientifically demonstrable differences; and the belief that a man can become a woman, and vice versa.)

Illegal immigrants (a.k.a. “migrants”) are just seeking a better life and should be allowed free entry into the United States. (Because borders are arbitrary — except when it comes to my property — and it doesn’t matter if the unfettered entry of illegal immigrants burdens tax-paying Americans and takes jobs from working-class Americans.)

The United States spends too much on national defense because (a) borders are arbitrary (except when they delineate my property), (b) there’s no real threat to this country (except for cyberattacks and terrorism sponsored by other states, and growing Chinese and Russian aggression that imperils the economic interests of Americans), (c) America is the aggressor (except in World War I, World War II, the Korean War, the Vietnam War, Gulf War I, the terrorist attacks on 9/11, and in the future if America significantly reduces its defense forces), and (d) peace is preferable to war (except that it is preparedness for war that ensures peace, either through deterrence or victory).

What wokesters want is to see that these views, and many others of their ilk, are enforced by the central government. To that end, steps will be taken to ensure that the Democrat Party is permanently in control of the central government and is able to control most State governments. Accordingly, voting laws will be “reformed” to enable everyone, regardless of citizenship status or other qualification (perhaps excepting age, or perhaps not) to receive a mail-in ballot that will be harvested and cast for Democrat candidates; the District of Columbia and Puerto Rico (with their iron-clad Democrat super-majorities) will be added to the Union; the filibuster will be abolished; the Supreme Court and lower courts will be expanded and new seats will be filled by Democrat nominees; and on, and on.

Why do wokesters want what they want? Here’s my take:

  • They reject personal responsibility.
  • They don’t like the sense of real community that is represented in the traditional institutions of civil society.
  • They don’t like the truth if it contradicts their view of what the world should be like.
  • They are devoid of true compassion.
  • They are — in sum — alienated, hate-filled nihilists, the product of decades of left-wing indoctrination by public schools, universities, and the media.

What will wokesters (and all of us) get?

At best, what they will get is a European Union on steroids, a Kafka-esque existence in a world run by bureaucratic whims from which entrepreneurial initiative and deeply rooted, socially binding cultures have been erased.

Somewhere between best and worst, they will get an impoverished, violent, drug-addled dystopia which is effectively a police state run for the benefit of cosseted political-media-corporate-academic elites.

At worst (as if it could get worse), what they will get is life under the hob-nailed boots of Russia and China; for example:

Russians are building a military focused on killing people and breaking things. We’re apparently building a military focused on being capable of explaining microaggressions and critical race theory to Afghan Tribesmen.

A country whose political leaders oppose the execution of murderers, support riots and looting by BLM, will not back Israel in its life-or-death struggle with Islamic terrorists, and use the military to advance “wokeism” isn’t a country that you can count on to face down Russia and China.

Wokesters are nothing but useful idiots to the Russians and Chinese. And if wokesters succeed in weakening the U.S. to the point that it becomes a Sino-Soviet vassal, they will be among the first to learn what life under an all-powerful central government is really like. Though, useful idiots that they are, they won’t survive long enough to savor the bitter fruits of their labors.

The Vatican Goes All-Out Democrat

Politico has the story:

The head of the Vatican’s doctrine office is warning U.S. bishops to deliberate carefully and minimize divisions before proceeding with a possible plan to rebuke Roman Catholic politicians such as President Joe Biden for receiving Communion even though they support abortion rights.

The strong words of caution came in a letter from Cardinal Luis Ladaria, prefect of the Vatican’s Congregation for the Doctrine of the Faith, addressed to Archbishop José Gomez of Los Angeles, president of the U.S. Conference of Catholic Bishops. The USCCB will convene for a national meeting June 16, with plans to vote on drafting a document on the Communion issue….

Ladaria, in his letter, said any new policy “requires that dialogue occurs in two stages: first among the bishops themselves, and then between bishops and Catholic pro-choice politicians within their jurisdictions.”

Even then, Ladaria advised, the bishops should seek unanimous support within their ranks for any national policy, lest it become “a source of discord rather than unity within the episcopate and the larger church in the United States.”

Ladaria made several other points that could complicate the plans of bishops pressing for tough action:

— He said any new statement should not be limited to Catholic political leaders but broadened to encompass all churchgoing Catholics in regard to their worthiness to receive Communion.

— He questioned the USCCB policy identifying abortion as “the preeminent” moral issue, saying it would be misleading if any new document “were to give the impression that abortion and euthanasia alone constitute the only grave matters of Catholic moral and social teaching that demand the fullest accountability on the part of Catholics.”

— He said that if the U.S. bishops pursue a new policy, they should confer with bishops’ conferences in other countries “both to learn from one another and to preserve unity in the universal church.”

— He said any new policy could not override the authority of individual bishops to make decisions on who can receive Communion in their dioceses. Cardinal Wilton Gregory, the archbishop of Washington, D.C., has made clear that Biden is welcome to receive Communion at churches in the archdiocese.

In other words, don’t embarrass Joe Biden (or Nancy Pelosi or any other nominally Catholic but pro-abortion politician). Doing so would upset the Pope, who has fully embraced their leftist views on abortion, “climate change”, income redistribution, etc., etc., etc. Cardinal Gregory obviously shares the Pope’s willingness to shape his religion to fit his political views.

And Cardinal Ladaria, a Jesuit, has given us a master class in Jesuitical casuistry.

Thinking about Thinking — and Other Things: Desiderata As Beliefs

This is the fifth post in a series. (The previous posts are here, here, here, and here.) This post, like its predecessors, will leave you hanging. But despair not: the series will come to a point — eventually. In the meantime, enjoy the ride.

How many things does a human being believe because he wants to believe them, and not because there is compelling evidence to support his beliefs? Here is a small sample of what must be an extremely long list:

There is a God. (1a)

There is no God. (1b)

There is a Heaven. (2a)

There is no Heaven. (2b)

Jesus Christ was the Son of God. (3a)

Jesus Christ, if he existed, was a mere mortal. (3b)

Marriage is the eternal union, blessed by God, of one man and one woman. (4a)

Marriage is a civil union, authorized by the state, of one or more consenting adults (or not) of any gender, as the participants in the marriage so define themselves to be. (4b)

All human beings should have equal rights under the law, and those rights should encompass not only negative rights (e.g., the right not to be murdered) but also positive rights (e.g., the right to a minimum wage). (5a)

Human beings are, at bottom, feral animals and cannot therefore be expected to abide always by artificial constructs, such as equal rights under the law. Accordingly, there will always be persons who use the law (or merely brute force) to set themselves above other persons. (5b)

The rise in global temperatures over the past 170 years has been caused primarily by a greater concentration of carbon dioxide in the atmosphere, which rise has been caused by human activity – and especially by the burning of fossil fuels. This rise, if it isn’t brought under control, will make human existence far less bearable and prosperous than it has been in recent human history. (6a)

The rise in global temperatures over the past 170 years has not been uniform across the globe, and has not been in lockstep with the rise in the concentration of atmospheric carbon dioxide. The temperatures of recent decades, and the rate at which they are supposed to have risen, are not unprecedented in the long view of Earth’s history, and may therefore be due to conditions that have not been given adequate consideration by believers in anthropogenic global warming (e.g., natural shifts in ocean currents that have different effects on various regions of Earth, the effects of cosmic radiation on cloud formation as influenced by solar activity and the position of the solar system and the galaxy with respect to other objects in the universe, the shifting of Earth’s magnetic field, and the movement of Earth’s tectonic plates and its molten core). In any event, the models of climate change have been falsified against measured temperatures (even when the temperature record has been adjusted to support the models). And predictions of catastrophe do not take into account the beneficial effects of warming (e.g., lower mortality rates, longer growing seasons), whatever causes it, or the ability of technology to compensate for undesirable effects at a much lower cost than the economic catastrophe that would result from preemptive reductions in the use of fossil fuels. (6b)

Not one of those assertions, even the ones that seem to be supported by facts, is true beyond a reasonable doubt. I happen to believe 1a (with some significant qualifications about the nature of God), 2b, 3b (given my qualified version of 1a), a modified version of 4a (monogamous, heterosexual marriage is socially and economically preferable, regardless of its divine blessing or lack thereof), 5a (but only with negative rights) and 5b, and 6b.  But I cannot “prove” that any of my beliefs is the correct one, nor should anyone believe that anyone can “prove” such things.

Take the belief that all persons are created equal. No one who has eyes, ears, and a minimally functioning brain believes that all persons are created equal. Abraham Lincoln, the Great Emancipator, didn’t believe it:

On September 18, 1858 at Charleston, Illinois, Lincoln told the assembled audience:

I am not, nor ever have been, in favor of bringing about in any way the social and political equality of the white and black races, that I am not, nor ever have been, in favor of making voters or jurors of negroes, nor of qualifying them to hold office, nor to intermarry with white people; and I will say in addition to this that there is a physical difference between the white and black races which I believe will forever forbid the two races living together on terms of social and political equality … I will add to this that I have never seen, to my knowledge, a man, woman, or child who was in favor of producing a perfect equality, social and political, between negroes and white men….

This was before Lincoln was elected president and before the outbreak of the Civil War, but Lincoln’s speeches, writings, and actions after these events continued to reflect this point of view about race and equality.

African American abolitionist Frederick Douglass, for his part, remained very skeptical about Lincoln’s intentions and program, even after the president issued a preliminary emancipation in September 1862.

Douglass had good reason to mistrust Lincoln. On December 1, 1862, one month before the scheduled issuing of an Emancipation Proclamation, the president offered the Confederacy another chance to return to the union and preserve slavery for the foreseeable future. In his annual message to congress, Lincoln recommended a constitutional amendment, which if it had passed, would have been the Thirteenth Amendment to the Constitution.

The amendment proposed gradual emancipation that would not be completed for another thirty-seven years, taking slavery in the United States into the twentieth century; compensation, not for the enslaved, but for the slaveholder; and the expulsion, supposedly voluntary but essentially a new Trail of Tears, of formerly enslaved Africans to the Caribbean, Central America, and Africa….

Douglass’ suspicions about Lincoln’s motives and actions once again proved to be legitimate. On December 8, 1863, less than a month after the Gettysburg Address, Abraham Lincoln offered full pardons to Confederates in a Proclamation of Amnesty and Reconstruction that has come to be known as the 10 Percent Plan.

Self-rule in the South would be restored when 10 percent of the “qualified” voters according to “the election law of the state existing immediately before the so-called act of secession” pledged loyalty to the union. Since blacks could not vote in these states in 1860, this was not to be government of the people, by the people, for the people, as promised in the Gettysburg Address, but a return to white rule.

It is unnecessary, though satisfying, to read Charles Murray’s account in Human Diversity of the broad range of inherent differences in intelligence and other traits that are associated with the sexes, various genetic groups of geographic origin (sub-Saharan Africans, East Asians, etc.), and various ethnic groups (e.g., Ashkenazi Jews).

But even if all persons are not created equal, either mentally or physically, aren’t they equal under the law? If you believe that, you might just as well believe in the tooth fairy. As it says in 5b,

Human beings are, at bottom, feral animals and cannot therefore be expected to abide always by artificial constructs, such as equal rights under the law. Accordingly, there will always be persons who use the law (or merely brute force) to set themselves above other persons.

Yes, it’s only a hypothesis, but one for which there is ample evidence in the history of mankind. It is confirmed by every instance of theft, murder, armed aggression, scorched-earth warfare, mob violence as catharsis, bribery, election fraud, gratuitous cruelty, and so on into the night.

And yet, human beings (Americans especially) persist in believing tooth-fairy stories about the inevitable triumph of good over evil, self-correcting science, and the emergence of truth from the marketplace of ideas. Balderdash, all of it.

But desiderata become beliefs. And beliefs are what bind people – or make enemies of them.

Michael Oakeshott, Rationalism, and America’s Present Condition

Michael Oakeshott (1901-1990), an English philosopher and political theorist, is remembered by Alberto Mingardi in a post at Econlib:

He was a self-confident thinker who did not search for others’ approval. He had an impressive career but somehow outside the mainstream, was remembered by friends (like Ken Minogue) as a splendid friend who cared about friendship deeply, eschewed honors, and was happy to retire in Dorset and lead a country life. In a beautiful article on Oakeshott, Gertrude Himmelfarb commented that he was “the political philosopher who has so modest a view of the task of political philosophy, the intellectual who is so reluctant a producer of intellectual goods, the master who does so little to acquire or cultivate disciples” and then all of these features perfectly fit in his character.

Oakeshott strongly influenced my view of conservatism, as you will see if you read any of the many posts in which I quote him or refer to his expositions of conservatism and critiques of rationalism. I drew heavily on Oakeshott’s analysis of rationalism in my prescient post of ten years ago about same-sex “marriage” and the destruction of civilizing social norms. Here is the post in its entirety, followed by my retrospective commentary (in italics):

Judge Vaughn Walker’s recent decision in Perry v. Schwarzenegger, which manufactures a constitutional right to same-sex marriage, smacks of Rationalism. Judge Walker distorts and sweeps aside millennia of history when he writes:

The right to marry has been historically and remains the right to choose a spouse and, with mutual consent, join together and form a household. Race and gender restrictions shaped marriage during eras of race and gender inequality, but such restrictions were never part of the historical core of the institution of marriage. Today, gender is not relevant to the state in determining spouses’ obligations to each other and to their dependents. Relative gender composition aside, same-sex couples are situated identically to opposite-sex couples in terms of their ability to perform the rights and obligations of marriage under California law. Gender no longer forms an essential part of marriage; marriage under law is a union of equals.

Judge Walker thereby secures his place in the Rationalist tradition. A Rationalist, as Michael Oakeshott explains,

stands … for independence of mind on all occasions, for thought free from obligations to any authority save the authority of ‘reason’. His circumstances in the modern world have made him contentious; he is the enemy of authority, of prejudice, of the merely traditional, customary or habitual. His mental attitude is at once sceptical and optimistic: sceptical, because there is no opinion, no habit, no belief, nothing so firmly rooted or so widely held that he hesitates to question it and to judge it by what he calls his ‘reason’; optimistic, because the Rationalist never doubts the power of his ‘reason … to determine the worth of a thing, the truth of an opinion or the propriety of an action. Moreover, he is fortified by a belief in a ‘reason’ common to all mankind, a common power of rational consideration…. But besides this, which gives the Rationalist a touch of intellectual equalitarianism, he is something also of an individualist, finding it difficult to believe that anyone who can think honestly and clearly will think differently from himself….

…And having cut himself off from the traditional knowledge of his society, and denied the value of any education more extensive than a training in a technique of analysis, he is apt to attribute to mankind a necessary inexperience in all the critical moments of life, and if he were more self-critical he might begin to wonder how the race had ever succeeded in surviving. (“Rationalism in Politics,” pp. 5-7, as republished in Rationalism in Politics and Other Essays)

At the heart of Rationalism is the view that “a problem” can be analyzed and “solved” as if it were separate and apart from the fabric of life.  On this point, I turn to John Kekes:

Traditions do not stand alone: they overlap, and the problems of one are often resolved in terms of another. Most traditions have legal, moral, political, aesthetic, stylistic, managerial, and a multitude of other aspects. Furthermore, people participating in a tradition bring with them beliefs, values, and practices from other traditions in which they also participate. Changes in one tradition, therefore, are likely to produce changes in others; they are like waves that reverberate throughout the other traditions of a society. (“The Idea of Conservatism“)

Edward Feser puts it this way:

Tradition, being nothing other than the distillation of centuries of human experience, itself provides the surest guide to determining the most rational course of action. Far from being opposed to reason, reason is inseparable from tradition, and blind without it. The so-called enlightened mind thrusts tradition aside, hoping to find something more solid on which to make its stand, but there is nothing else, no alternative to the hard earth of human experience, and the enlightened thinker soon finds himself in mid-air…. But then, was it ever truly a love of reason that was in the driver’s seat in the first place? Or was it, rather, a hatred of tradition? Might the latter have been the cause of the former, rather than, as the enlightened pose would have it, the other way around? (“Hayek and Tradition“)

Same-sex marriage will have consequences that most libertarians and “liberals” are unwilling to consider. Although it is true that traditional, heterosexual unions have their problems, those problems have been made worse, not better, by the intercession of the state. (The loosening of divorce laws, for example, signaled that marriage was to be taken less seriously, and so it has been.) Nevertheless, the state — pursuant to Judge Walker’s decision — may create new problems for society by legitimating same-sex marriage, thus signaling that traditional marriage is just another contractual arrangement in which any combination of persons may participate.

Heterosexual marriage — as Jennifer Roback Morse explains — is a primary and irreplicable civilizing force. The recognition of homosexual marriage by the state will undermine that civilizing force. The state will be saying, in effect, “Anything goes. Do your thing. The courts, the welfare system, and the taxpayer — above all — will pick up the pieces.” And so it will go.

In Morse’s words:

The new idea about marriage claims that no structure should be privileged over any other. The supposedly libertarian subtext of this idea is that people should be as free as possible to make their personal choices. But the very nonlibertarian consequence of this new idea is that it creates a culture that obliterates the informal methods of enforcement. Parents can’t raise their eyebrows and expect children to conform to the socially accepted norms of behavior, because there are no socially accepted norms of behavior. Raised eyebrows and dirty looks no longer operate as sanctions on behavior slightly or even grossly outside the norm. The modern culture of sexual and parental tolerance ruthlessly enforces a code of silence, banishing anything remotely critical of personal choice. A parent, or even a peer, who tries to tell a young person that he or she is about to do something incredibly stupid runs into the brick wall of the non-judgmental social norm. (“Marriage and the Limits of Contract“)

The state’s signals are drowning out the signals that used to be transmitted primarily by voluntary social institutions: family, friendship, community, church, and club. Accordingly, I do not find it a coincidence that loud, loutish, crude, inconsiderate, rude, and foul behaviors have become increasingly prominent features of “social” life in America. Such behaviors have risen in parallel with the retreat of most authority figures in the face of organized violence by “protestors” and looters; with the rise of political correctness; with the perpetuation of the New Deal and its successor, the Great Society; with the erosion of swift and sure justice in favor of “rehabilitation” and “respect for life” (but not for potential victims of crime); and with the legal enshrinement of infanticide and buggery as acceptable (and even desirable) practices.

Thomas Sowell puts it this way:

One of the things intellectuals [his Rationalists] have been doing for a long time is loosening the bonds that hold a society together. They have sought to replace the groups into which people have sorted themselves with groupings created and imposed by the intelligentsia. Ties of family, religion, and patriotism, for example, have long been treated as suspect or detrimental by the intelligentsia, and new ties that intellectuals have created, such as class — and more recently “gender” — have been projected as either more real or more important….

Under the influence of the intelligentsia, we have become a society that rewards people with admiration for violating its own norms and for fragmenting that society into jarring segments. In addition to explicit denigrations of their own society for its history or current shortcomings, intellectuals often set up standards for their society which no society has ever met or is likely to meet.

Calling those standards “social justice” enables intellectuals to engage in endless complaints about the particular ways in which society fails to meet their arbitrary criteria, along with a parade of groups entitled to a sense of grievance, exemplified in the “race, class and gender” formula…. (Intellectuals and Society, pp. 303, 305)

And so it will go —  barring a sharp, conclusive reversal of Judge Walker and the movement he champions.

There was no sharp or conclusive reversal — quite the contrary, in fact. And the forces of rationalism have only grown stronger in the past decade. Witness the broadly supported movement to renounce America’s history for the sake of virtue-signaling, to blame “racism” for the government-inflicted and self-inflicted failings of blacks, to suppress religion, to undermine the economy in the name of a pseudo-science (“climate science”), and to destroy the social and economic lives of tens of millions of Americans by responding hysterically to the ever-changing and often unfounded findings of “science”.

Thinking about Thinking — and Other Things: Evolution

This is the second post in a series. (The first post is here.) This post, like its predecessor, will leave you hanging. But despair not, the series will come to a point — eventually. In the meantime, enjoy the ride.

Evolution is simply change in organic (living) objects. Evolution, as a subject of scientific inquiry, is an attempt to explain how humans (and other animals) came to be what they are today.

Evolution (as a discipline) is as much scientism as it is science. Scientism, according to thefreedictionary.com, is “the uncritical application of scientific or quasi-scientific methods to inappropriate fields of study or investigation.” When scientists proclaim truths instead of propounding hypotheses, they are guilty of practicing scientism. Two notable scientistic scientists are Richard Dawkins and Peter Singer. It is unsurprising that Dawkins and Singer are practitioners of scientism: both are strident atheists, and strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side.

Dawkins, Singer, and many other scientistic atheists share an especially “religious” view of evolution. In brief, they seem to believe that evolution rules out God. Evolution rules out nothing. Evolution may be true in outline but it does not bear close inspection. On that point, I turn to David Gelernter’s “Giving Up Darwin” (Claremont Review of Books, Spring 2019):

Darwin himself had reservations about his theory, shared by some of the most important biologists of his time. And the problems that worried him have only grown more substantial over the decades. In the famous “Cambrian explosion” of around half a billion years ago, a striking variety of new organisms—including the first-ever animals—pop up suddenly in the fossil record over a mere 70-odd million years. This great outburst followed many hundreds of millions of years of slow growth and scanty fossils, mainly of single-celled organisms, dating back to the origins of life roughly three and a half billion years ago.

Darwin’s theory predicts that new life forms evolve gradually from old ones in a constantly branching, spreading tree of life. Those brave new Cambrian creatures must therefore have had Precambrian predecessors, similar but not quite as fancy and sophisticated. They could not have all blown out suddenly, like a bunch of geysers. Each must have had a closely related predecessor, which must have had its own predecessors: Darwinian evolution is gradual, step-by-step. All those predecessors must have come together, further back, into a series of branches leading down to the (long ago) trunk.

But those predecessors of the Cambrian creatures are missing. Darwin himself was disturbed by their absence from the fossil record. He believed they would turn up eventually. Some of his contemporaries (such as the eminent Harvard biologist Louis Agassiz) held that the fossil record was clear enough already, and showed that Darwin’s theory was wrong. Perhaps only a few sites had been searched for fossils, but they had been searched straight down. The Cambrian explosion had been unearthed, and beneath those Cambrian creatures their Precambrian predecessors should have been waiting—and weren’t. In fact, the fossil record as a whole lacked the upward-branching structure Darwin predicted.

The trunk was supposed to branch into many different species, each species giving rise to many genera, and towards the top of the tree you would find so much diversity that you could distinguish separate phyla—the large divisions (sponges, mosses, mollusks, chordates, and so on) that comprise the kingdoms of animals, plants, and several others—take your pick. But, as [David] Berlinski points out, the fossil record shows the opposite: “representatives of separate phyla appearing first followed by lower-level diversification on those basic themes.” In general, “most species enter the evolutionary order fully formed and then depart unchanged.” The incremental development of new species is largely not there. Those missing pre-Cambrian organisms have still not turned up. (Although fossils are subject to interpretation, and some biologists place pre-Cambrian life-forms closer than others to the new-fangled Cambrian creatures.)

Some researchers have guessed that those missing Precambrian precursors were too small or too soft-bodied to have made good fossils. Meyer notes that fossil traces of ancient bacteria and single-celled algae have been discovered: smallness per se doesn’t mean that an organism can’t leave fossil traces—although the existence of fossils depends on the surroundings in which the organism lived, and the history of the relevant rock during the ages since it died. The story is similar for soft-bodied organisms. Hard-bodied forms are more likely to be fossilized than soft-bodied ones, but many fossils of soft-bodied organisms and body parts do exist. Precambrian fossil deposits have been discovered in which tiny, soft-bodied embryo sponges are preserved—but no predecessors to the celebrity organisms of the Cambrian explosion.

This sort of negative evidence can’t ever be conclusive. But the ever-expanding fossil archives don’t look good for Darwin, who made clear and concrete predictions that have (so far) been falsified—according to many reputable paleontologists, anyway. When does the clock run out on those predictions? Never. But any thoughtful person must ask himself whether scientists today are looking for evidence that bears on Darwin, or looking to explain away evidence that contradicts him. There are some of each. Scientists are only human, and their thinking (like everyone else’s) is colored by emotion.

Yes, emotion, the thing that colors thought. Emotion is something that humans and other animals have. If Darwin and his successors are correct, emotion must be a faculty that improves the survival and reproductive fitness of a species.

But that can’t be true, because emotion is the spark that lights murder, genocide, and war. World War II alone is said to have occasioned the deaths of more than 70 million humans. Prominent among those killed were six million Ashkenazi Jews, members of a distinctive branch of humanity whose members (on average) are significantly more intelligent than other branches, and who have contributed beneficially to science, literature, and the arts (especially music).

The evil by-products of emotion – such as the near-extermination of peoples (Ashkenazi Jews among them) – should cause one to doubt that the persistence of a trait in the human population means that the trait is beneficial to survival and reproduction.

David Berlinski, in The Devil’s Delusion: Atheism and Its Scientific Pretensions, addresses the lack of evidence for evolution before striking down the notion that persistent traits are necessarily beneficial:

At the very beginning of his treatise Vertebrate Paleontology and Evolution, Robert Carroll observes quite correctly that “most of the fossil record does not support a strictly gradualistic account” of evolution. A “strictly gradualistic” account is precisely what Darwin’s theory demands: It is the heart and soul of the theory….

In a research survey published in 2001, and widely ignored thereafter, the evolutionary biologist Joel Kingsolver reported that in sample sizes of more than one thousand individuals, there was virtually no correlation between specific biological traits and either reproductive success or survival. “Important issues about selection,” he remarked with some understatement, “remain unresolved.”

Of those important issues, I would mention prominently the question whether natural selection exists at all.

Computer simulations of Darwinian evolution fail when they are honest and succeed only when they are not. Thomas Ray has for years been conducting computer experiments in an artificial environment that he has designated Tierra. Within this world, a shifting population of computer organisms meet, mate, mutate, and reproduce.

Sandra Blakeslee, writing for The New York Times, reported the results under the headline “Computer ‘Life Form’ Mutates in an Evolution Experiment: Natural Selection Is Found at Work in a Digital World.”

Natural selection found at work? I suppose so, for as Blakeslee observes with solemn incomprehension, “the creatures mutated but showed only modest increases in complexity.” Which is to say, they showed nothing of interest at all. This is natural selection at work, but it is hardly work that has worked to intended effect.

What these computer experiments do reveal is a principle far more penetrating than any that Darwin ever offered: There is a sucker born every minute….

“Contemporary biology,” [Daniel Dennett] writes, “has demonstrated beyond all reasonable doubt that natural selection— the process in which reproducing entities must compete for finite resources and thereby engage in a tournament of blind trial and error from which improvements automatically emerge— has the power to generate breathtakingly ingenious designs” (italics added).

These remarks are typical in their self-enchanted self-confidence. Nothing in the physical sciences, it goes without saying— right?— has been demonstrated beyond all reasonable doubt. The phrase belongs to a court of law. The thesis that improvements in life appear automatically represents nothing more than Dennett’s conviction that living systems are like elevators: If their buttons are pushed, they go up. Or down, as the case may be. Although Darwin’s theory is very often compared favorably to the great theories of mathematical physics on the grounds that evolution is as well established as gravity, very few physicists have been heard observing that gravity is as well established as evolution. They know better and they are not stupid….

The greater part of the debate over Darwin’s theory is not in service to the facts. Nor to the theory. The facts are what they have always been: They are unforthcoming. And the theory is what it always was: It is unpersuasive. Among evolutionary biologists, these matters are well known. In the privacy of the Susan B. Anthony faculty lounge, they often tell one another with relief that it is a very good thing the public has no idea what the research literature really suggests.

“Darwin?” a Nobel laureate in biology once remarked to me over his bifocals. “That’s just the party line.”

In the summer of 2007, Eugene Koonin, of the National Center for Biotechnology Information at the National Institutes of Health, published a paper entitled “The Biological Big Bang Model for the Major Transitions in Evolution.”

The paper is refreshing in its candor; it is alarming in its consequences. “Major transitions in biological evolution,” Koonin writes, “show the same pattern of sudden emergence of diverse forms at a new level of complexity” (italics added). Major transitions in biological evolution? These are precisely the transitions that Darwin’s theory was intended to explain. If those “major transitions” represent a “sudden emergence of new forms,” the obvious conclusion to draw is not that nature is perverse but that Darwin was wrong….

Koonin is hardly finished. He has just started to warm up. “In each of these pivotal nexuses in life’s history,” he goes on to say, “the principal ‘types’ seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate ‘grades’ or intermediate forms between different types are detectable.”…

[H]is views are simply part of a much more serious pattern of intellectual discontent with Darwinian doctrine. Writing in the 1960s and 1970s, the Japanese mathematical biologist Motoo Kimura argued that on the genetic level— the place where mutations take place— most changes are selectively neutral. They do nothing to help an organism survive; they may even be deleterious…. Kimura was perfectly aware that he was advancing a powerful argument against Darwin’s theory of natural selection. “The neutral theory asserts,” he wrote in the introduction to his masterpiece, The Neutral Theory of Molecular Evolution, “that the great majority of evolutionary changes at the molecular level, as revealed by comparative studies of protein and DNA sequences, are caused not by Darwinian selection but by random drift of selectively neutral or nearly neutral mutations” (italics added)….

Writing in the Proceedings of the National Academy of Sciences, the evolutionary biologist Michael Lynch observed that “Dawkins’s agenda has been to spread the word on the awesome power of natural selection.” The view that results, Lynch remarks, is incomplete and therefore “profoundly misleading.” Lest there be any question about Lynch’s critique, he makes the point explicitly: “What is in question is whether natural selection is a necessary or sufficient force to explain the emergence of the genomic and cellular features central to the building of complex organisms.”

Survival and reproduction depend on many traits. A particular trait, considered in isolation, may seem to be helpful to the survival and reproduction of a group. But that trait may not be among the particular collection of traits that is most conducive to the group’s survival and reproduction. If that is the case, the trait will become less prevalent.

Alternatively, if the trait is an essential member of the collection that is conducive to survival and reproduction, it will survive. But its survival depends on the other traits. The fact that X is a “good trait” does not, in itself, ensure the proliferation of X. And X will become less prevalent if other traits become more important to survival and reproduction.

In any event, it is my view that genetic fitness for survival has become almost irrelevant in places like the United States. The rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.

In fact, there is a supportable hypothesis that humans in cosseted realms (i.e., the West) are, on average, becoming less intelligent. But, first, it is necessary to explain why it seemed for a while that humans were becoming more intelligent.

David Robson is on the case:

When the researcher James Flynn looked at [IQ] scores over the past century, he discovered a steady increase – the equivalent of around three points a decade. Today, that has amounted to 30 points in some countries.

Although the cause of the Flynn effect is still a matter of debate, it must be due to multiple environmental factors rather than a genetic shift.

Perhaps the best comparison is our change in height: we are 11cm (around 5 inches) taller today than in the 19th Century, for instance – but that doesn’t mean our genes have changed; it just means our overall health has changed.

Indeed, some of the same factors may underlie both shifts. Improved medicine, reducing the prevalence of childhood infections, and more nutritious diets, should have helped our bodies to grow taller and our brains to grow smarter, for instance. Some have posited that the increase in IQ might also be due to a reduction of the lead in petrol, which may have stunted cognitive development in the past. The cleaner our fuels, the smarter we became.

This is unlikely to be the complete picture, however, since our societies have also seen enormous shifts in our intellectual environment, which may now train abstract thinking and reasoning from a young age. In education, for instance, most children are taught to think in terms of abstract categories (whether animals are mammals or reptiles, for instance). We also lean on increasingly abstract thinking to cope with modern technology. Just think about a computer and all the symbols you have to recognise and manipulate to do even the simplest task. Growing up immersed in this kind of thinking should allow everyone [hyperbole alert] to cultivate the skills needed to perform well in an IQ test….

[Psychologist Robert Sternberg] is not alone in questioning whether the Flynn effect really represented a profound improvement in our intellectual capacity, however. James Flynn himself has argued that it is probably confined to some specific reasoning skills. In the same way that different physical exercises may build different muscles – without increasing overall “fitness” – we have been exercising certain kinds of abstract thinking, but that hasn’t necessarily improved all cognitive skills equally. And some of those other, less well-cultivated, abilities could be essential for improving the world in the future.

Here comes the best part:

You might assume that the more intelligent you are, the more rational you are, but it’s not quite this simple. While a higher IQ correlates with skills such as numeracy, which is essential to understanding probabilities and weighing up risks, there are still many elements of rational decision making that cannot be accounted for by a lack of intelligence.

Consider the abundant literature on our cognitive biases. Something that is presented as “95% fat-free” sounds healthier than “5% fat”, for instance – a phenomenon known as the framing bias. It is now clear that a high IQ does little to help you avoid this kind of flaw, meaning that even the smartest people can be swayed by misleading messages.

People with high IQs are also just as susceptible to the confirmation bias – our tendency to only consider the information that supports our pre-existing opinions, while ignoring facts that might contradict our views. That’s a serious issue when we start talking about things like politics.

Nor can a high IQ protect you from the sunk cost bias – the tendency to throw more resources into a failing project, even if it would be better to cut your losses – a serious issue in any business. (This was, famously, the bias that led the British and French governments to continue funding Concorde planes, despite increasing evidence that it would be a commercial disaster.)

Highly intelligent people are also not much better at tests of “temporal discounting”, which require you to forgo short-term gains for greater long-term benefits. That’s essential, if you want to ensure your comfort for the future.

Besides a resistance to these kinds of biases, there are also more general critical thinking skills – such as the capacity to challenge your assumptions, identify missing information, and look for alternative explanations for events before drawing conclusions. These are crucial to good thinking, but they do not correlate very strongly with IQ, and do not necessarily come with higher education. One study in the USA found almost no improvement in critical thinking throughout many people’s degrees.

Given these looser correlations, it would make sense that the rise in IQs has not been accompanied by a similarly miraculous improvement in all kinds of decision making.

So much for the bright people who promote and pledge allegiance to socialism and its various manifestations (e.g., the Green New Deal, and Medicare for All). So much for the bright people who suppress speech with which they disagree because it threatens the groupthink that binds them.

Robson also discusses evidence of dysgenic effects in IQ:

Whatever the cause of the Flynn effect, there is evidence that we may have already reached the end of this era – with the rise in IQs stalling and even reversing. If you look at Finland, Norway and Denmark, for instance, the turning point appears to have occurred in the mid-90s, after which average IQs dropped by around 0.2 points a year. That would amount to a seven-point difference between generations.
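The arithmetic behind those figures is easy to check. Here is a minimal sketch (my own back-of-the-envelope check, not Robson’s calculation); note that the roughly 35-year generation length is implied by his numbers rather than stated by him:

```python
# Quick check of the rates quoted above.

flynn_gain_per_decade = 3.0          # "around three points a decade"
print(flynn_gain_per_decade * 10)    # 30.0 -- the cumulative gain "in some countries"

decline_per_year = 0.2               # post-mid-1990s drop in Finland, Norway, Denmark
implied_generation = 7 / decline_per_year
print(implied_generation)            # 35.0 -- the generation length (in years) implied
                                     # by a "seven-point difference between generations"
```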

Psychologist (and intelligence specialist) James Thompson has addressed dysgenic effects at his blog on the website of The Unz Review. In particular, he had a lot to say about the work of an intelligence researcher named Michael Woodley. Here’s a sample from a post by Thompson:

We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.

Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with say, your grandparents.

Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.

It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”.  This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.

So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong?

From a later post by Thompson:

The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

Woodley’s claim is based on a set of papers written since 2013, which have been recently reviewed by [Matthew] Sarraf.

The review is unusual, to say the least. It is rare to read so positive a judgment on a young researcher’s work, and it is extraordinary that one researcher has changed the debate about ability levels across generations, and all this in a few years since starting publishing in psychology.

The table in that review which summarizes the main findings is shown below. As you can see, the range of effects is very variable, so my rough estimate of 1 point per decade is a stab at calculating a median. It is certainly less than the Flynn Effect in the 20th Century, though it may now be part of the reason for the falling of that effect, now often referred to as a “negative Flynn effect”….

Here are the findings which I have arranged by generational decline (taken as 25 years).

  • Colour acuity, over 20 years (0.8 generation) 3.5 drop/decade.
  • 3D rotation ability, over 37 years (1.5 generations) 4.8 drop/decade.
  • Reaction times, females only, over 40 years (1.6 generations) 1.8 drop/decade.
  • Working memory, over 85 years (3.4 generations) 0.16 drop/decade.
  • Reaction times, over 120 years (4.8 generations) 0.57-1.21 drop/decade.
  • Fluctuating asymmetry, over 160 years (6.4 generations) 0.16 drop/decade.

Either the measures are considerably different, and do not tap the same underlying loss of mental ability, or the drop is unlikely to be caused by dysgenic decrements from one generation to another. Bar massive dying out of populations, changes do not come about so fast from one generation to the next. The drops in ability are real, but the reason for the falls are less clear. Gathering more data sets would probably clarify the picture, and there is certainly cause to argue that on various real measures there have been drops in ability. Whether this is dysgenics or some other insidious cause is not yet clear to me.…

My view is that whereas formerly the debate was only about the apparent rise in ability, discussions are now about the co-occurrence of two trends: the slowing down of the environmental gains and the apparent loss of genetic quality. In the way that James Flynn identified an environmental/cultural effect, Michael Woodley has identified a possible genetic effect, and certainly shown that on some measures we are doing less well than our ancestors.

How will they be reconciled? Time will tell, but here is a prediction. I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.

Here’s my hypothesis: The less-intelligent portions of the populace are breeding faster than the more-intelligent portions. As I said earlier, the rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.
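Returning to Thompson’s list of findings: for concreteness, here is a small sketch (mine, not his) that redoes the arithmetic behind it, using the 25-year generation length he states. Each drop/decade figure is on its own measure’s scale, not an IQ scale.

```python
# Convert the quoted drop-per-decade figures into spans of generations and
# drops per generation, at the stated 25 years per generation. (For the
# reaction-time study spanning 120 years, I use the lower end, 0.57, of the
# quoted 0.57-1.21 range.)

YEARS_PER_GENERATION = 25

findings = [
    ("Colour acuity",            20, 3.5),
    ("3D rotation ability",      37, 4.8),
    ("Reaction times (females)", 40, 1.8),
    ("Working memory",           85, 0.16),
    ("Reaction times",          120, 0.57),
    ("Fluctuating asymmetry",   160, 0.16),
]

for measure, years, drop_per_decade in findings:
    generations = years / YEARS_PER_GENERATION
    drop_per_generation = drop_per_decade * YEARS_PER_GENERATION / 10
    print(f"{measure}: {generations:.1f} generations, "
          f"{drop_per_generation:.2f} drop per generation")
```

The generation counts reproduce the ones in Thompson’s list; the per-generation drops are simply the per-decade figures scaled up by 2.5.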

Thinking about Thinking … and Other Things: Time, Existence, and Science

This is the first post in a series. It will leave you hanging. But despair not, the series will come to a point — eventually. In the meantime, enjoy the ride.

Before we can consider time and existence, we must consider whether they are illusions.

Regarding time, there’s a reasonable view that nothing exists but the present — the now — or, rather, an infinite number of nows. In the conventional view, one now succeeds another, which creates the illusion of the passage of time. In the view of some physicists, however, all nows exist at once, and we merely perceive a sequential slice of all the nows. Inasmuch as there seems to be general agreement as to the contents of the slice, the only evidence that many nows exist in parallel is claims about such phenomena as clairvoyance, visions, and co-location. I won’t wander into that thicket.

A problem with the conventional view of time is that not everyone perceives the same now at the same time. Well, not according to Einstein’s special theory of relativity, at least. A problem with the view that all nows exist at once (sometimes called the “block universe” view) is that it’s purely a mathematical concoction. Unless you’re a clairvoyant, visionary, or the like.

Oh, wait, the special theory of relativity is also a mathematical concoction. Further, it doesn’t really show that not everyone perceives the same now at the same time. The key to special relativity – the Lorentz transformation – enables one to reconcile the various nows; that is, to be a kind of omniscient observer. So, in effect, there really is a now.
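For reference, the transformation in question, in its standard form for two observers in uniform relative motion at speed v along a shared x-axis (c is the speed of light), is:

\[
x' = \gamma\,(x - v t), \qquad t' = \gamma\!\left(t - \frac{v x}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
\]

Given the coordinates (x, t) that one observer assigns to an event, these formulas yield the coordinates (x′, t′) that the other observer assigns to the same event, which is the sense in which the various nows can be translated into one another.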

This leads to the question of what distinguishes one now from another now. The answer is change. If things didn’t change, there would be only a now, not an infinite series of them. More precisely, if things didn’t seem to change, time would seem to stand still. This is another way of saying that a succession of nows creates the illusion of the passage of time.

What happens between one now and the next now? Change, not the passage of time. What we think of as the passage of time is really an artifact of change.

Time is really nothing more than the counting of events that supposedly occur at set intervals — the “ticking” of an atomic clock, for example. I say supposedly because there’s no absolute measure of time against which one can calibrate the “ticking” of an atomic clock, or any other kind of clock.

In summary: Clocks don’t measure time. Clocks merely change (“tick”) at supposedly regular intervals, and those intervals are used in the representation of other things, such as the speed of an automobile or the duration of a 100-yard dash.

Time is an illusion. Or, if that conclusion bothers you, let’s just say that time is an ephemeral quality that depends on change.

Change is real. But change in what — of what does reality consist?

There are two basic views of reality. One of them, posited by Bishop Berkeley and his followers, is that the only reality is that which goes on in one’s own mind. But that’s just another way of saying that humans don’t perceive the external world directly. Rather, it is perceived second-hand, through the senses that detect external phenomena and transmit signals to the brain, which is where “reality” is formed.

There is an extreme version of the Berkeleyan view: Everything perceived is only a kind of dream or illusion. But even a dream or illusion is something, not nothing, so there is some kind of existence.

The sensible view, held by most humans (even most scientists), is that there is an objective reality out there, beyond the confines of one’s mind. How can so many people agree about the existence of certain things (e.g., Cleveland) unless there’s something out there?

Over the ages, scientists have been able to describe objective reality in ever more minute detail. But what is it? What is the stuff of which it consists? No one knows or is likely ever to know. All we know is that stuff changes, and those changes give rise to what we call time.

The big question is how things came to exist. This has been debated for millennia. There are two schools of thought:

Things just exist and have always existed.

Things can’t come into existence on their own, so some non-thing must have caused things to exist.

The second option leaves open the question of how the non-thing came into existence, and can be interpreted as a variant of the first option; that is, some non-thing just exists and has always existed.

How can the big question be resolved? It can’t be resolved by facts or logic. If it could be, there would be wide agreement about the answer. (Not perfect agreement because a lot of human beings are impervious to facts and logic.) But there isn’t and never will be wide agreement.

Why is that? Can’t scientists someday trace the existence of things – call it the universe – back to a source? Isn’t that what the Big Bang Theory is all about? No and no. If the universe has always existed, there’s no source to be tracked down. And if the universe was created by a non-thing, how can scientists detect the non-thing if they’re only equipped to deal with things?

The Big Bang Theory posits a definite beginning, at a more or less definite point in time. But even if the theory is correct, it doesn’t tell us how that beginning began. Did things start from scratch, and if they did, what caused them to do so? And maybe they didn’t; maybe the Big Bang was just the result of the collapse of a previous universe, which was the result of a previous one, etc., etc., etc., ad infinitum.

Some scientists who think about such things (most of them, I suspect) don’t believe that the universe was created by a non-thing. But they don’t believe it because they don’t want to believe it. The much smaller number of similar scientists who believe that the universe was created by a non-thing hold that belief because they want to hold it.

That’s life in the world of science, just as it is in the world of non-science, where believers, non-believers, and those who can’t make up their minds find all kinds of ways in which to rationalize what they believe (or don’t believe), even though they know less than scientists do about the universe.

Let’s just accept that and move on to another big question: What is it that exists?  It’s not “stuff” as we usually think of it – like mud or sand or water droplets. It’s not even atoms and their constituent particles. Those are just convenient abstractions for what seem to be various manifestations of electromagnetic forces, or emanations thereof, such as light.

But what are electromagnetic forces? And what does their behavior (to be anthropomorphic about it) have to do with the way that the things like planets, stars, and galaxies move in relation to one another? There are lots of theories, but none of them has as yet gained wide acceptance by scientists. And even if one theory does gain wide acceptance, there’s no telling how long before it’s supplanted by a new theory.

That’s the thing about science: It’s a process, not a particular result. Human understanding of the universe offers a good example. Here’s a short list of beliefs about the universe that were considered true by scientists, and then rejected:

Thales (c. 620 – c. 530 BC): The Earth rests on water.

Anaximenes (c. 540 – c. 475 BC): Everything is made of air.

Heraclitus (c. 540 – c. 450 BC): All is fire.

Empedocles (c. 493 – c. 435 BC): There are four elements: earth, air, fire, and water.

Democritus (c. 460 – c. 370 BC): Atoms (basic elements of nature) come in an infinite variety of shapes and sizes.

Aristotle (384 – 322 BC): Heavy objects must fall faster than light ones. The universe is a series of crystalline spheres that carry the sun, moon, planets, and stars around Earth.

Ptolemy (90 – 168 AD): Ditto the Earth-centric universe, with a mathematical description.

Copernicus (1473 – 1543): The planets revolve around the sun in perfectly circular orbits.

Brahe (1546 – 1601): The planets revolve around the sun, but the sun and moon revolve around Earth.

Kepler (1571 – 1630): The planets revolve around the sun in elliptical orbits, and their trajectory is governed by magnetism.

Newton (1642 – 1727): The course of the planets around the sun is determined by gravity, which is a force that acts at a distance. Light consists of corpuscles; ordinary matter is made of larger corpuscles. Space and time are absolute and uniform.

Rutherford (1871 – 1937), Bohr (1885 – 1962), and others: The atom has a center (nucleus), which consists of two elemental particles, the neutron and proton.

Einstein (1879 – 1955): The universe is neither expanding nor shrinking.

That’s just a small fraction of the mistaken and incomplete theories that have held sway in the field of physics. There are many more such mistakes and lacunae in the other natural sciences: biology, chemistry, and earth science — each of which, like physics, has many branches. And in all of the branches there are many unresolved questions. For example, the Standard Model of particle physics, despite its complexity, is known to be incomplete. And it is thought (by some) to be unduly complex; that is, there may be a simpler underlying structure waiting to be discovered.

Given all of this, it is grossly presumptuous to claim that climate science – to take a salient example — is “settled” when the phenomena that it encompasses are so varied, complex, often poorly understood, and often given short shrift (e.g., the effects of solar radiation on the intensity of cosmic radiation reaching Earth, which affects low-level cloud formation, which affects atmospheric temperature and precipitation).

Anyone who says that any aspect of science is “settled” is either ignorant, stupid, or freighted with a political agenda. Anyone who says that “science is real” is merely parroting an empty slogan.

Matt Ridley (quoted by Judith Curry) explains:

In a lecture at Cornell University in 1964, the physicist Richard Feynman defined the scientific method. First, you guess, he said, to a ripple of laughter. Then you compute the consequences of your guess. Then you compare those consequences with the evidence from observations or experiments. “If [your guess] disagrees with experiment, it’s wrong. In that simple statement is the key to science. It does not make a difference how beautiful the guess is, how smart you are, who made the guess or what his name is…it’s wrong….

In general, science is much better at telling you about the past and the present than the future. As Philip Tetlock of the University of Pennsylvania and others have shown, forecasting economic, meteorological or epidemiological events more than a short time ahead continues to prove frustratingly hard, and experts are sometimes worse at it than amateurs, because they overemphasize their pet causal theories….

Peer review is supposed to be the device that guides us away from unreliable heretics. Investigations show that peer review is often perfunctory rather than thorough; often exploited by chums to help each other; and frequently used by gatekeepers to exclude and extinguish legitimate minority scientific opinions in a field.

Herbert Ayres, an expert in operations research, summarized the problem well several decades ago: “As a referee of a paper that threatens to disrupt his life, [a professor] is in a conflict-of-interest position, pure and simple. Unless we’re convinced that he, we, and all our friends who referee have integrity in the upper fifth percentile of those who have so far qualified for sainthood, it is beyond naive to believe that censorship does not occur.” Rosalyn Yalow, winner of the Nobel Prize in medicine, was fond of displaying the letter she received in 1955 from the Journal of Clinical Investigation noting that the reviewers were “particularly emphatic in rejecting” her paper.

The health of science depends on tolerating, even encouraging, at least some disagreement. In practice, science is prevented from turning into religion not by asking scientists to challenge their own theories but by getting them to challenge each other, sometimes with gusto.

As I said, there is no such thing as “settled science”. Real science is a vast realm of unsettled uncertainty. Newton put it thus:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

Certainty is the last refuge of a person whose mind is closed to new facts and new ways of looking at old facts.

How uncertain is the real world, especially the world of events yet to come? Consider a simple, three-parameter model in which event C depends on the occurrence of event B, which depends on the occurrence of event A; in which the value of the outcome is the sum of the values of the events that occur; and in which the value of each event is binary – 1 if it happens, 0 if it doesn’t happen. Even in a simple model like that, there is a wide range of possible outcomes; thus:

A doesn’t occur (B and C therefore don’t occur) = 0.

A occurs but B fails to occur (and C therefore doesn’t occur) = 1.

A occurs, B occurs, but C fails to occur = 2.

A occurs, B occurs, and C occurs = 3.

Even when A occurs, subsequent events (or non-events) will yield final outcomes ranging in value from 1 to 3; the best case is worth three times the worst. A factor of 3 is a big deal. It’s why .300 hitters make millions of dollars a year and .100 hitters sell used cars.
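A minimal sketch of that toy model, in Python (the enumeration is mine; the model itself is just the one described above):

```python
# Enumerate the outcomes of the three-event model: each event is worth 1 if
# it occurs and 0 if it doesn't, C can occur only if B did, and B only if A did.

from itertools import product

def outcome_value(a: int, b: int, c: int) -> int:
    """Sum the values of the events that actually occur, honoring A -> B -> C."""
    b = b if a else 0   # B cannot occur unless A occurred
    c = c if b else 0   # C cannot occur unless B occurred
    return a + b + c

possible_values = sorted({outcome_value(*events) for events in product((0, 1), repeat=3)})
print(possible_values)   # [0, 1, 2, 3] -- even with A in hand, outcomes still span 1 to 3
```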

Let’s leave it at that and move on.

The Iraq War in Retrospect

The Iraq War has been called many things, “immoral” being among the leading adjectives for it. Was it altogether immoral? Was it immoral to remain in favor of the war after it was (purportedly) discovered that Saddam Hussein didn’t have an active program for the production of weapons of mass destruction? Or was the war simply misdirected from its proper — and moral — purpose: the service of Americans’ interests by stabilizing the Middle East? I address those and other questions about the war in what follows.

THE WAR-MAKING POWER AND ITS PURPOSE

The sole justification for the United States government is the protection of Americans’ interests. Those interests are spelled out broadly in the Preamble to the Constitution: justice, domestic tranquility, the common defense, the general welfare, and the blessings of liberty.

Contrary to leftist rhetoric, the term “general welfare” in the Preamble (and in Article I, Section 8) doesn’t grant broad power to the national government to do whatever it deems to be “good”. “General welfare” — general well-being, not the well-being of particular regions or classes — is merely one of the intended effects of the enumerated and limited powers granted to the national government by conventions of the States.

One of the national government’s specified powers is the making of war. In the historical context of the adoption of the Constitution, it is clear that the purpose of the war-making power is to defend Americans and their legitimate interests: liberty generally and, among other things, the free flow of trade between American and foreign entities. The war-making power carries with it the implied power to do harm to foreigners in the course of waging war. I say that because the Framers, many of whom fought for independence from Britain, knew from experience that war, of necessity, must sometimes cause damage to the persons and property of non-combatants.

In some cases, the only way to serve the interests of Americans is to inflict deliberate damage on non-combatants. That was the case, for example, when U.S. air forces dropped atomic bombs on Hiroshima and Nagasaki to force Japan’s surrender and avoid the deaths and injuries of perhaps a million Americans. Couldn’t Japan have been “quarantined” instead, once its forces had been driven back to the homeland? Perhaps, but at great cost to Americans. Luckily, in those days American leaders understood that the best way to ensure that an enemy didn’t resurrect its military power was to defeat it unconditionally and to occupy its homeland. You will have noticed that as a result, Germany and Japan are no longer military threats to the U.S., whereas Iraq remained one after the Gulf War of 1990-1991 because Saddam wasn’t deposed. Russia, which the U.S. didn’t defeat militarily — only symbolically — is resurgent militarily. China, which wasn’t even defeated symbolically in the Cold War, is similarly resurgent, and bent on regional if not global hegemony, necessarily to the detriment of Americans’ interests. To paraphrase: There is no substitute for unconditional military victory.

That is a hard and unfortunate truth, but it eludes many persons, especially those of the left. They suffer under dual illusions, namely, that the Constitution is an outmoded document and that “world opinion” trumps the Constitution and the national sovereignty created by it. Neither illusion is shared by Americans who want to live in something resembling liberty and to enjoy the advantages pertaining thereto, including prosperity.

CASUS BELLI

The invasion of Iraq in 2003 by the armed forces of the U.S. government (and those of other nations) had explicit and implicit justifications. The explicit justifications for the U.S. government’s actions are spelled out in the Authorization for Use of Military Force Against Iraq of 2002 (AUMF). It passed the House by a vote of 296 – 133 and the Senate by a vote of 77 – 23, and was signed into law by President George W. Bush on October 16, 2002.

There are some who focus on the “weapons of mass destruction” (WMD) justification, which figures prominently in the “whereas” clauses of the AUMF. But the war, as it came to pass when Saddam failed to respond to legitimate demands spelled out in the AUMF, had a broader justification than whatever Saddam was (or wasn’t) doing with WMD. The final “whereas” puts it succinctly: it is in the national security interests of the United States to restore international peace and security to the Persian Gulf region.

An unstated but clearly understood implication of “peace and security in the Persian Gulf region” was the security of the region’s oil supply against Saddam’s capriciousness. The mantra “no blood for oil” to the contrary notwithstanding, it is just as important to defend the livelihoods of Americans as it is to defend their lives — and in many instances it comes to the same thing.

In sum, I disregard the WMD rationale for the Iraq War. The real issue is whether the war secured the stability of the Persian Gulf region (and the Middle East in general). And if it didn’t, why did it fail to do so?

ROADS TAKEN AND NOT TAKEN

One can only speculate about what might have happened in the absence of the Iraq War. For instance, how many more Iraqis might have been killed and tortured by Saddam’s agents? How many more terrorists might have been harbored and financed by Saddam? How long might it have taken him to re-establish his WMD program or build a nuclear weapons program? Saddam, who started it all with the invasion of Kuwait, wasn’t a friend of the U.S. or the West in general. The U.S. isn’t the world’s policeman, but the U.S. government has a moral obligation to defend the interests of Americans, preemptively if necessary.

By the same token, one can only speculate about what might have happened if the U.S. government had prosecuted the war differently than it did, which was “on the cheap”. There weren’t enough boots on the ground to maintain order in the way that it was maintained by the military occupations in Germany and Japan after World War II. Had there been, there wouldn’t have been a kind of “civil war” or general chaos in Iraq after Saddam was deposed. (It was those things, as much as the supposed absence of a WMD program, that turned many Americans against the war.)

Speculation aside, I supported the invasion of Iraq, the removal of Saddam, and the rout of Iraq’s armed forces with the following results in mind:

  • A firm military occupation of Iraq, for some years to come.
  • The presence in Iraq and adjacent waters and airspace of U.S. forces in enough strength to control Iraq and deter misadventures by other nations in the region (e.g., Iran and Syria) and prospective interlopers (e.g., Russia).
  • Israel’s continued survival and prosperity under the large shadow cast by U.S. forces in the region.
  • Secure production and shipment of oil from Iraq and other oil-producing nations in the region.

All of that would have happened but for (a) too few boots on the ground (later remedied in part by the “surge”); (b) premature “nation-building”, which helped to stir up various factions in Iraq; (c) Obama’s premature surrender, which he was shamed into reversing; and (d) Obama’s deal with Iran, with its bundles of cash and blind-eye enforcement that supported Iran’s rearmament and growing boldness in the region. (The idea that Iraq, under Saddam, had somehow contained Iran is baloney; Iran was contained only until its threat to go nuclear found a sucker in Obama.)

In sum, the war was only a partial success because (once again) U.S. leaders failed to wage it fully and resolutely. This was due in no small part to incessant criticism of the war, stirred up and sustained by Democrats and the media.

WHO HAD THE MORAL HIGH GROUND?

In view of the foregoing, the correct answer is: the U.S. government, or those of its leaders who approved, funded, planned, and executed the war with the aim of bringing peace and security to the Persian Gulf region for the sake of Americans’ interests.

The moral high ground was shared by those Americans who, understanding the war’s justification on grounds broader than WMD, remained steadfast in support of the war despite the tumult and shouting that arose from its opponents.

There were Americans whose support of the war was based on the claim that Saddam had or was developing WMD, and whose support ended or became less ardent when WMD seemed not to be in evidence. I wouldn’t presume to judge them harshly for withdrawing their support, but I would judge them myopic for basing it solely on the WMD predicate. And I would judge them harshly if they joined the outspoken opponents of the war, whose opposition I address below.

What about those Americans who supported the war simply because they believed that President Bush and his advisers “knew what they were doing” or out of a sense of patriotism? That is to say, they had no particular reason for supporting the war other than a general belief that its successful execution would be a “good thing”. None of those Americans deserves moral approbation or moral blame. They simply had better things to do with their lives than to parse the reasons for going to war and for continuing it. And it is no one’s place to judge them for not having wasted their time in thinking about something that was beyond their ability to influence. (See the discussion of “public opinion” below.)

What about those Americans who publicly opposed the war, either from the beginning or later? I cannot fault all of them for their opposition — and certainly not those who considered the costs (human and monetary) and deemed them not worth the possible gains.

But there were (and are) others whose opposition to the war was and is problematic:

  • Critics of the apparent absence of an active WMD program in Iraq, who seized on the WMD justification and ignored (or failed to grasp) the war’s broader justification.
  • Political opportunists who simply wanted to discredit President Bush and his party; they included most Democrats (eventually), effete elites generally, and particularly most members of the academic-media-information-technology complex.
  • An increasingly large share of the impressionable electorate who could not (and cannot) resist a bandwagon.
  • Reflexive pro-peace/anti-war posturing by the young, who are prone to oppose “the establishment” and to do so loudly and often violently.

The moral high ground isn’t gained by misguided criticism, posturing, joining a bandwagon, or hormonal emotionalism.

WHAT ABOUT “PUBLIC OPINION”?

Suppose you had concluded that the Iraq War was wrong because the WMD justification seemed to have been proven false as the war went on. Perhaps worse than false: a fraud perpetrated by officials of the Bush administration, if not by the president himself, to push Congress and “public opinion” toward support for an invasion of Iraq.

If your main worry about Iraq, under Saddam, was the possibility that WMD would be used against Americans, the apparent falsity of the WMD claim — perhaps fraudulent falsity — might well have turned you against the war. Suppose that there were many millions of Americans like you, whose initial support of the war turned to disillusionment as evidence of an active WMD program failed to materialize. Would voicing your opinion on the matter have helped to end the war? Did you have a moral obligation to voice your opinion? And, in any event, should wars be ended because of “public opinion”? I will try to answer those questions in what follows.

The strongest case to be made for the persuasive value of voicing one’s opinion might be found in the median-voter theorem. According to Wikipedia, the median-voter theorem

“states that ‘a majority rule voting system will select the outcome most preferred by the median voter’”….

The median voter theorem rests on two main assumptions, with several others detailed below. The theorem is assuming [sic] that voters can place all alternatives along a one-dimensional political spectrum. It seems plausible that voters could do this if they can clearly place political candidates on a left-to-right continuum, but this is often not the case as each party will have its own policy on each of many different issues. Similarly, in the case of a referendum, the alternatives on offer may cover more than one issue. Second, the theorem assumes that voters’ preferences are single-peaked, which means that voters have one alternative that they favor more than any other. It also assumes that voters always vote, regardless of how far the alternatives are from their own views. The median voter theorem implies that voters have an incentive to vote for their true preferences. Finally, the median voter theorem applies best to a majoritarian election system.

The article later specifies seven assumptions underlying the theorem. None of the assumptions is satisfied in the real world of American politics. Complexity never favors the truth of a proposition; when all of the assumptions must hold, as they must here, each additional assumption is just one more way for the proposition to fail.

There is a weak form of the theorem, which says that

the median voter always casts his or her vote for the policy that is adopted. If there is a median voter, his or her preferred policy will beat any other alternative in a pairwise vote.

That still leaves the crucial assumption that voters are choosing between two options. This is superficially true in the case of a two-person race for office or a yes-no referendum. But, even then, a binary option usually masks non-binary ramifications that voters take into account.

In any case, it is trivially true to say that the preference of the median voter foretells the outcome of a binary election, if the outcome is decided by majority vote and there isn’t a complicating factor like the electoral college. One could say, with equal banality, that the stronger man wins the weight-lifting contest, the outcome of which determines who is the stronger man.
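To see how mechanical even the weak form is, here is a minimal simulation sketch in Python (my own illustration, not drawn from the theorem’s literature). It assumes exactly what the theorem assumes (a one-dimensional policy spectrum, single-peaked preferences, and full turnout), and, unsurprisingly, the median voter’s ideal point beats every rival alternative in a pairwise majority vote.

```python
# A minimal sketch of the weak form of the median-voter theorem (my own
# illustration). With single-peaked preferences on a one-dimensional policy
# spectrum and full turnout, the median voter's ideal point wins every
# pairwise majority vote.
import numpy as np

rng = np.random.default_rng(42)
ideals = np.sort(rng.uniform(0.0, 100.0, size=101))  # 101 voters' ideal points on a 0-100 spectrum
median_ideal = float(np.median(ideals))               # with an odd number of voters, an actual voter's ideal

def pairwise_winner(a, b, ideals):
    """Each voter votes for whichever alternative lies closer to his ideal point."""
    votes_a = np.sum(np.abs(ideals - a) < np.abs(ideals - b))
    votes_b = np.sum(np.abs(ideals - b) < np.abs(ideals - a))
    return a if votes_a > votes_b else b

for rival in np.linspace(0.0, 100.0, num=21):
    assert pairwise_winner(median_ideal, rival, ideals) == median_ideal
print(f"The median ideal point ({median_ideal:.1f}) defeats every rival alternative tested.")
```

The point of the exercise is not that the result is surprising; it is that the result is baked into the assumptions, which is exactly the banality noted above.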

Why am I giving so much attention to the median-voter theorem? Because, according to a blogger whose intellectual prowess I respect, if enough Americans believe a policy of the U.S. government to be wrong, the policy might well be rescinded if the responsible elected officials (or, presumably, their prospective successors) believe that the median voter wants the policy rescinded. How would that work?

The following summary of the blogger’s case is what I gleaned from his original post on the subject and several comments and replies. I have inserted parenthetical commentary throughout.

  • The pursuit of the Iraq War after the WMD predicate for it was (seemingly) falsified — hereinafter policy X — was immoral because X led unnecessarily to casualties, devastation, and other costs. (As discussed above, there were other predicates for X and other consequences of X, some of them good, but they don’t seem to matter to the blogger.)
  • Because X was immoral (in the blogger’s reckoning), X should have been rescinded.
  • Rescission would have (might have?/should have?) occurred through the operation of the median-voter theorem if enough persons had made known their opposition to X. (How might the median-voter theorem have applied when X wasn’t on a ballot? See below.)
  • Any person who had taken the time to consider X (taking into account only the WMD predicate and unequivocally bad consequences) could only have deemed it immoral. (The blogger originally excused persons who deemed X proper, but later made a statement equivalent to the preceding sentence. This is a variant of “heads, I win; tails, you lose”.)
  • Having deemed X immoral, a person (i.e., a competent, adult American) would have been morally obliged to make known his opposition to X. Even if the person didn’t know of the spurious median-voter theorem, his opposition to X (which wasn’t on a ballot) would somehow have become known and counted (perhaps in a biased opinion poll conducted by an entity opposed to X) and would therefore have helped to move the median stance of the (selectively) polled fragment of the populace toward opposition to X, whereupon X would be rescinded, according to the median-voter theorem. (Or perhaps vociferous opposition, expressed in public protests, would be reported by the media — especially by those already opposed to X — as indicative of public opinion, whether or not it represented a median view of X.)
  • Further, any competent, adult American who didn’t bother to take the time to evaluate X would have been morally complicit in the continuation of X. (This must be the case because the blogger says so, without knowing each person’s assessment of the slim chance that his view of the matter would affect X, or the opportunity costs of evaluating X and expressing his view of it.)
  • So the only moral course of action, according to the blogger, was for every competent, adult American to have taken the time to evaluate X (in terms of the WMD predicate), to have deemed it immoral (there being no other choice given the constraint just mentioned), and to have made known his opposition to the policy. (This despite the fact that most competent, adult Americans know viscerally or from experience that the median-voter theorem is hooey — more about that below — and that it would therefore have been a waste of their time to get worked up about a policy that wasn’t unambiguously immoral. Further, they were and are rightly reluctant to align themselves with howling mobs and biased media — even by implication, as in a letter to the editor — in protest of a policy that wasn’t unambiguously immoral.)
  • Then, X (which wasn’t on a ballot) would have been rescinded, pursuant to the median-voter theorem (or, properly, the outraged/vociferous-pollee/protester-biased pollster/media theorem). (Except that X wasn’t, in fact, rescinded despite massive outpourings of outrage by small fractions of the populace, which were gleefully reflected in biased polls and reported by biased media. Nor was it rescinded by implication when President Bush was up for re-election — he won. It might have been rescinded by implication when Bush was succeeded by Obama — an opponent of X — but there were many reasons other than X for Obama’s victory: mainly the financial crisis, McCain’s lame candidacy, and a desire by many voters to signal — to themselves, at least — their non-racism by voting for Obama. And X wasn’t doing all that badly at the time of Obama’s election because of the troop “surge” authorized by Bush. Further, Obama’s later attempt to rescind X had consequences that caused him to reverse his attempted rescission, regardless of any lingering opposition to X.)

What about other salient, non-ballot issues? Does “public opinion” make a difference? Sometimes yes, sometimes no. Obamacare, for example, was widely opposed until it was enacted by Congress and signed into law by Obama. It suddenly became popular because much of the populace wants to be on the “winning side” of an issue. (So much for the moral value of public opinion.) Similarly, abortion was widely deemed to be immoral until the Supreme Court legalized it. Suddenly, it began to become acceptable according to “public opinion”. I could go on and on, but you get the idea: Public opinion often follows policy rather than leading it, and its moral value is dubious in any event.

But what about cases where government policy shifted in the aftermath of widespread demonstrations and protests? Did demonstrations and protests lead to the enactment of the Civil Rights Acts of the 1960s? Did they cause the U.S. government to surrender, in effect, to North Vietnam? No and no. From where I sat — and I was a politically aware, voting-age, adult American of the “liberal” persuasion at the time of those events — public opinion had little effect on the officials who were responsible for the Civil Rights Acts or the bug-out from Vietnam.

The civil-rights movement of the 1950s and 1960s and the anti-war movement of the 1960s and 1970s didn’t yield results until years after their inception. And those results didn’t (at the time, at least) represent the views of most Americans who (I submit) were either indifferent or hostile to the advancement of blacks and to the anti-patriotic undertones of the anti-war movement. In both cases, mass protests were used by the media (and incited by the promise of media attention) to shame responsible officials into acting as media elites wanted them to.

Further, it is a mistake to assume that the resulting changes in law (writ broadly to include policy) were necessarily good changes. The stampede to enact civil-rights laws in the 1960s, which hinged not so much on mass protests as on LBJ’s “white guilt” and powers of persuasion, resulted in the political suppression of an entire region, the loss of property rights, and the denial of freedom of association. (See, for example, Christopher Caldwell’s “The Roots of Our Partisan Divide”, Imprimis, February 2020.)

The bug-out from Vietnam foretold the U.S. government’s fecklessness in the Iran hostage crisis; the withdrawal of U.S. forces from Lebanon after the bombing of Marine barracks there; the failure of G.H.W. Bush to depose Saddam when it would have been easy to do so; the legalistic response to the World Trade Center bombing; the humiliating affair in Somalia; Clinton’s failure to take out Osama bin Laden; Clinton’s tepid response to Saddam’s provocations; nation-building (vice military occupation) in Iraq; and Obama’s attempt to pry defeat from the jaws of something resembling victory in Iraq.

All of that, and more, is symptomatic of the influence that “liberal” elites came to exert on American foreign and defense policy after World War II. Public opinion has been a side show, and protestors have been useful idiots to the cause of “liberal internationalism”, that is, the surrender of Americans’ economic and security interests for the sake of various rapprochements toward “allies” who scorn America when it veers ever so slightly from the road to serfdom, and enemies — Russia and China — who have never changed their spots, despite “liberal” wishful thinking. Handing America’s manufacturing base to China in the name of free trade is of a piece with all the rest.

IN CONCLUSION . . .

It is irresponsible to call a policy immoral without evaluating all of its predicates and consequences. One might as well call the Allied leaders of World War II immoral because they chose war — with all of its predictably terrible consequences — rather than abject surrender.

It is fatuous to ascribe immorality to anyone who was supportive of or indifferent to the war. One might as well ascribe immorality to the economic and political ignoramuses who failed to see that FDR’s policies would prolong the Great Depression, that Social Security and its progeny (Medicare and Medicaid) would become entitlements that paved the way for the central government’s commandeering of vast portions of the economy, or that the so-called social safety net would discourage work and permanently depress economic growth in America.

If I were in the business of issuing moral judgments about the Iraq War, I would condemn the strident anti-war faction for its perfidy.

Bleeding Heart Libertarians (the Blog): Good Riddance

It is kaput. Why is it good riddance? See this post and follow the links, most of which lead to posts critical of Bleeding Heart Libertarians.

Bleeding Heart Libertarians (the Blog): A Bibliography of Related Posts

A recent post at Policy of Truth by its proprietor, Irfan Khawaja, prompted me to compile a list of all of the posts that I have written about some of the blog posts and bloggers at Bleeding Heart Libertarians. Though Khawaja and I disagree about a lot, I believe that we agree about the fatuousness of bleeding-heart libertarianism. (BTW, Khawaja’s flaming valedictory, on a different subject, is worth a read.)

Here’s the bibliography, arranged chronologically from March 9, 2011, to September 11, 2014:

The Meaning of Liberty
Peter Presumes to Preach
Positive Liberty vs. Liberty
More Social Justice
On Self-Ownership and Desert
The Killing of bin Laden and His Ilk
In Defense of Subjectivism
The Folly of Pacifism, Again
What Is Libertarianism?
Why Stop at the Death Penalty?
What Is Bleeding-Heart Libertarianism?
The Morality of Occupying Public Property
The Equal-Protection Scam and Same-Sex Marriage
Liberty, Negative Rights, and Bleeding Hearts
Bleeding-Heart Libertarians = Left-Statists
Enough with the Bleeding Hearts Already
Not Guilty of Libertarian Purism
Obama’s Big Lie
Bleeding-Heart Libertarians = Left-Statists (Redux)
Egoism and Altruism
A Case for Redistribution Not Made

“It’s Tough to Make Predictions, Especially about the Future”

A lot of people have said it, or something like it, though probably not Yogi Berra, to whom it’s often attributed.

Here’s another saying, which is also apt here: History does not repeat itself. The historians repeat one another.

I am accordingly amused by something called cliodynamics, which is discussed at length by Amanda Rees in “Are There Laws of History?” (Aeon, May 2020). The Wikipedia article about cliodynamics describes it as

a transdisciplinary area of research integrating cultural evolution, economic history/cliometrics, macrosociology, the mathematical modeling of historical processes during the longue durée [the long term], and the construction and analysis of historical databases. Cliodynamics treats history as science. Its practitioners develop theories that explain such dynamical processes as the rise and fall of empires, population booms and busts, spread and disappearance of religions. These theories are translated into mathematical models. Finally, model predictions are tested against data. Thus, building and analyzing massive databases of historical and archaeological information is one of the most important goals of cliodynamics.

I won’t dwell on the methods of cliodynamics, which involve making up numbers about various kinds of phenomena and then making up models which purport to describe, mathematically, the interactions among the phenomena. Underlying it all is the practitioner’s broad knowledge of historical events, which he converts (with the proper selection of numerical values and mathematical relationships) into such things as the Kondratiev wave, a post-hoc explanation of a series of arbitrarily denominated and subjectively measured economic eras.

In sum, if you seek patterns you will find them, but pattern-making (modeling) is not science. (There’s a lot more here.)

Here’s a simple demonstration of what’s going on with cliodynamics. Using the RANDBETWEEN function of Excel, I generated two columns of random numbers ranging in value from 0 to 1,000, with 1,000 numbers in each column. I designated the values in the left column as x variables and the numbers in the right column as y variables. I then arbitrarily chose the first 10 pairs of numbers and plotted them:

As it turns out, the relationship, even though it seems rather loose, yields a two-tailed p-value of 0.21. Read in the conventional (and much-abused) way, that means there is only about a one-in-five chance of seeing a correlation at least that strong if the two variables are in fact unrelated.

Of course, the relationship is due entirely to chance because it’s the relationship between two sets of random numbers. So much for statistical tests of “significance”.
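For anyone who wants to reproduce the exercise without Excel, here is a minimal sketch in Python (my own translation of the procedure; the particular correlation and p-value will differ from the Excel run reported above, and from run to run if the seed is changed):

```python
# A minimal sketch of the same exercise (my own translation from Excel to
# Python). Generate 1,000 pairs of uniform random integers, then test the
# first 10 pairs for a "significant" correlation. Small samples of pure
# noise often look significant to a pattern-seeking eye; the full sample
# does not.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
x = rng.integers(0, 1001, size=1000)   # analogue of Excel's RANDBETWEEN(0, 1000)
y = rng.integers(0, 1001, size=1000)

r10, p10 = pearsonr(x[:10], y[:10])    # correlation of the first 10 pairs
print(f"first 10 pairs:  r = {r10:+.2f}, two-tailed p = {p10:.2f}")

r_all, p_all = pearsonr(x, y)          # correlation of all 1,000 pairs
print(f"all 1,000 pairs: r = {r_all:+.3f}, two-tailed p = {p_all:.2f}")
```

Run it a few times with different seeds and the 10-pair “relationship” will wander from strongly positive to strongly negative, which is the point.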

Moreover, I could have found “more significant” relationships had I combed carefully through the 1,000 pairs of random numbers with my pattern-seeking brain.

But being an honest person with scientific integrity, I will show you the plot of all 1,000 pairs of random numbers:

I didn’t bother to find a correlation between the x and y values because there is none. And that’s the messy reality of human history. Yes, there have been many determined (i.e., sought-for) outcomes — such as America’s independence from Great Britain and Hitler’s rise to power. But they are not predetermined outcomes. Their realization depended on the surrounding circumstances of the moment, which were myriad, non-quantifiable, and largely random in relation to the event under examination (the revolution, the putsch, etc.). The outcomes only seem inevitable and predictable in hindsight.

Cliodynamics is a variant of the anthropic principle, which holds that the laws of physics appear to be fine-tuned to support human life because we humans happen to be here to observe the laws of physics. In the case of cliodynamics, the past seems to consist of inevitable events because we are here in the present looking back (rather hazily) at the events that occurred in the past.

Cliodynametricians, meet Nostradamus. He “foresaw” the future long before you did.

Insidious Algorithms

Michael Anton inveighs against Big Tech and pseudo-libertarian collaborators in “Dear Avengers of the Free Market” (Law & Liberty, October 5, 2018):

Beyond the snarky attacks on me personally and insinuations of my “racism”—cut-and-paste obligatory for the “Right” these days—the responses by James Pethokoukis and (especially) John Tamny to my Liberty Forum essay on Silicon Valley are the usual sorts of press releases that are written to butter up the industry and its leaders in hopes of . . . what?…

… I am accused of having “a fundamental problem with capitalism itself.” Guilty, if by that is meant the reservations about mammon-worship first voiced by Plato and Aristotle and reinforced by the godfather of capitalism, Adam Smith, in his Theory of Moral Sentiments (the book that Smith himself indicates is the indispensable foundation for his praise of capitalism in the Wealth of Nations). Wealth is equipment, a means to higher ends. In the middle of the last century, the Right rightly focused on unjust impediments to the creation and acquisition of wealth. But conservatism, lacking a deeper understanding of the virtues and of human nature—of what wealth is for—eventually ossified into a defense of wealth as an end in itself. Many, including apparently Pethokoukis and Tamny, remain stuck in that rut to this day and mistake it for conservatism.

Both critics were especially appalled by my daring to criticize modern tech’s latest innovations. Who am I to judge what people want to sell or buy? From a libertarian standpoint, of course, no one may pass judgment. Under this view, commerce has no moral content…. To homo economicus any choice that does not inflict direct harm is ipso facto not subject to moral scrutiny, yet morality is defined as the efficient, non-coercive, undistorted operation of the market.

Naturally, then, Pethokoukis and Tamny scoff at my claim that Silicon Valley has not produced anything truly good or useful in a long time, but has instead turned to creating and selling things that are actively harmful to society and the soul. Not that they deny the claim, exactly. They simply rule it irrelevant. Capitalism has nothing to do with the soul (assuming the latter even exists). To which I again say: When you elevate a means into an end, that end—in not being the thing it ought to be—corrupts its intended beneficiaries.

There are morally neutral economic goods, like guns, which can be used for self-defense or murder. But there are economic goods that undermine morality (e.g., abortion, “entertainment” that glamorizes casual sex) and fray the bonds of mutual trust and respect that are necessary to civil society. (How does one trust a person who treats life and marriage as if they were unworthy of respect?)

There’s a particular aspect of Anton’s piece that I want to emphasize here: Big Tech’s alliance with the left in its skewing of information.

Continuing with Anton:

The modern tech information monopoly is a threat to self-government in at least three ways. First its … consolidation of monopoly power, which the techies are using to guarantee the outcome they want and to suppress dissent. It’s working….

Second, and related, is the way that social media digitizes pitchforked mobs. Aristocrats used to have to fear the masses; now they enable, weaponize, and deploy them…. The grandees of Professorville and Sand Hill Road and Outer Broadway can and routinely do use social justice warriors to their advantage. Come to that, hundreds of thousands of whom, like modern Red Guards, don’t have to be mobilized or even paid. They seek to stifle dissent and destroy lives and careers for the sheer joy of it.

Third and most important, tech-as-time-sucking-frivolity is infantilizing and enstupefying society—corroding the reason-based public discourse without which no republic can exist….

But all the dynamism and innovation Tamny and Pethokoukis praise only emerge from a bedrock of republican virtue. This is the core truth that libertarians seem unable to appreciate. Silicon Valley is undermining that virtue—with its products, with its tightening grip on power, and with its attempt to reengineer society, the economy, and human life.

I am especially concerned here with the practice of tinkering with AI algorithms to perpetuate bias in the name of eliminating it (e.g., here). The bias to be perpetuated, in this case, is blank-slate bias: the mistaken belief that there are no inborn differences between blacks and whites or men and women. It is that belief which underpins affirmative action in employment, which penalizes the innocent, reduces the quality of products and services, and incurs heavy enforcement costs; “head start” programs, which waste taxpayers’ money; and “diversity” programs at universities, which penalize the innocent and set blacks up for failure. Those programs and many more of their ilk are generally responsible for heightening social discord rather than reducing it.

In the upside-down world of “social justice” an algorithm is considered biased if it is unbiased; that is, if it reflects the real correlations between race, sex, and ability in certain kinds of endeavors. Charles Murray’s Human Diversity demolishes the blank-slate theory with reams and reams of facts. Social-justice warriors will hate it, just as they hated The Bell Curve, even though they won’t read the later book, just as they didn’t read the earlier one.

Evaluating an Atheistic Argument

I am plowing my way through Theism, Atheism, and Big Bang Cosmology by William Lane Craig and Quentin Smith, and I continue to doubt that it will inform my views about cosmology. I concluded my preliminary thoughts about the book with this:

Craig … sees the hand of God in the Big Bang. The presence of the singularity … had to have been created so that the Big Bang could follow. That’s all well and good, but what was God doing before the Big Bang, that is, in the infinite span of time before 15 billion years ago? (Is it presumptuous of me to ask?) And why should the Big Bang prove God’s existence any more than, say, a universe that came into being at an indeterminate time? The necessity of God (or some kind of creator) arises from the known character of the universe: material effects follow from material causes, which cannot cause themselves. In short, Craig pins too much on the Big Bang, and his argument would collapse if the Big Bang is found to be a figment of observational error.

Later, however, Craig adopts a more reasonable position:

The theist … has no vested interest in denominating the Big Bang as the moment of creation. He is convinced that God created all of space-time reality ex nihilo, and the Big Bang model provides a powerful suggestion as to when that was; on the other hand, if it can be demonstrated that our observable universe originated in a broader spacetime, so be it — in that case it was this wider reality that was the immediate object of God’s creation.

Just so.

Many pages later, after rattling on almost unintelligibly, Smith gets down to brass tacks by offering an actual atheistic argument. It goes like this (with “premise” substituted for “premiss”):

(1) If God exists and there is an earliest state E of the universe, then God created E.

(2) If God created E, then E is ensured either to contain animate creatures or to lead to a subsequent state of the universe that contains animate creatures.

Premise (2) is entailed by two more basic theological premises, namely,

(3) God is omniscient, omnipotent, and perfectly benevolent.

(4) An animate universe is better than an inanimate universe….

[Further]

(5) There is an earliest state of the universe and it is the Big Bang singularity….

The scientific ideas [1, 2, and 5] also give us this premise

(6) The earliest state of the universe is inanimate since the singularity involves the life-hostile conditions of infinite temperature, infinite curvature, and infinite density.

Another scientific idea … the principle of ignorance, give us the summary premise

(7) The Big Bang singularity is inherently unpredictable and lawless and consequently there is no guarantee that it will emit a maximal configuration of particles that will evolve into an animate state of the universe….

(5) and (7) entail

(8) The earliest state of the universe is not ensured to lead to an animate state of the universe.

We now come to the crux of our argument. Given (2), (6), and (8), we can now infer that God could not have created the earliest state of the universe. It then follows, by (1), that God does not exist.
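Stripped to its propositional skeleton (my own rendering, not Smith’s notation), the argument runs as follows, with G for “God exists”, E for “there is an earliest state of the universe”, C for “God created that earliest state”, and A for “the earliest state is ensured to contain, or lead to, animate creatures”:

(1) (G ∧ E) → C
(2) C → A
(5) E
(8) ¬A [derived from (5) and (7)]

From (2) and (8), ¬C; from (1) and ¬C, ¬(G ∧ E); and given (5), ¬G, that is, God does not exist. Premises (3) and (4) do no deductive work; they enter only as motivation for (2), and (6) and (7) serve only to establish (8).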

This is a terrible argument, and one that I expect to be demolished by Craig’s response, which I haven’t yet read. Here is my demolition: Smith’s premises (3) and (4) are superfluous to his argument; if they are worth anything, it is to demonstrate the shallowness of his grasp of theistic arguments for the existence of God. Smith’s premise (2) is a non sequitur; the universe does contain animate creatures and might do so even if God didn’t exist. Smith’s (6), (7), and (8) are therefore irrelevant to Smith’s argument. And, by Smith’s own “logic”, God must exist because premise (2) is confirmed by reality.

Here is my counter-argument:

(I) If God exists:

(A) He must exist infinitely, that is, without beginning.

(B) He is necessarily apart from His creation; therefore, His essence is beyond human comprehension and cannot be characterized anthropomorphically nor judged by any of His creations, except to the extent that if God is a conscious essence, He may enable human beings to know His character and intentions by some means.

(II) The universe, as a material manifestation that is observable by human beings (in part, at least) must have been created by God because material things cannot create themselves. (The processes appealed to by atheists, such as quantum fluctuations and vacuum energy, operate on existing material.)

(III) The universe, as a creation of an infinite God, may have had an indeterminate beginning.

(IV) There is, therefore, no reason to suppose that the Big Bang and all that ensued in the observable universe is the whole of the universe created by God. Rather, the Big Bang, and all that ensued in the observable universe may be a manifestation of a broader spacetime that is forever beyond the ken of human beings.

(V) Human knowledge of the observable universe is limited to the physical “laws” that can be inferred from what is known of the observable universe since its apparent origin as a material entity.

(VI) It is therefore impossible for human beings to know what processes preceded the origin of the observable universe.

(VII) Because animate, conscious organisms exist in the observable universe, the physical processes (if any) involved in the origination of the observable universe must have been conducive to the development of such organisms. But, by (VI), whether that was by “design” or accident is beyond the ken of human beings.

(VIII) But animate, conscious organisms may be the result of a deliberate act of God, who may enable human beings to know of His existence and his design.

This is an unscientific argument, in that it can’t be falsified by observation. By the same token, so too is Smith’s argument unscientific, despite his use of scientific jargon and “scientific” speculations. But the weakness of Smith’s argument is proof (of a kind) that God exists and that He created the universe. That is to say, Smith (and countless others like him) seem determined to refute the logically necessary existence of God, but their refutations fail because they are illogical.


Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Free Will: A Proof by Example?
A Theory of Everything, Occam’s Razor, and Baseball
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
What Is Time?
Science’s Anti-Scientific Bent
The Tenth Dimension
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Not-So-Random Thoughts (II) (first item)
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Religion, Creation, and Morality
Atheistic Scientism Revisited
Through a Glass Darkly
Existence and Knowledge

Preliminary Thoughts about “Theism, Atheism, and Big Bang Cosmology”

I am in the early sections of Theism, Atheism, and Big Bang Cosmology by William Lane Craig and Quentin Smith, but I am beginning to doubt that it will inform my views about cosmology. (These are spelled out with increasing refinement here, here, here, and here.) The book consists of alternating essays by Craig and Smith, in which Craig defends the classical argument for a creation (the Kalām cosmological argument) against Smith’s counter-arguments.

For one thing, Smith — who takes the position that the universe wasn’t created — seems to pin a lot on the belief prevalent at the time of the book’s publication (1993) that the universe was expanding but at a decreasing rate. It is now believed generally among physicists that the universe is expanding at an accelerating rate. I must therefore assess Smith’s argument in light of the current belief.

For another thing, Craig and Smith (in the early going, at least) seem to be bogged down in an arcane argument about the meaning of infinity. Craig takes the position, understandably, that an actual infinity is impossible in the physical world. Smith, of course, takes the opposite position. The problem here is that Craig and Smith argue about what is an empirical (if empirically undecidable) matter by resorting to philosophical and mathematical concepts. The observed and observable facts are on Craig’s side: Nothing is known to have happened in the material universe without an antecedent material cause. Philosophical and mathematical arguments about the nature of infinity seem beside the point.

For a third thing, Craig seems to pin a lot on the Big Bang, while Smith is at pains to deny its significance. Smith seems to claim that the Big Bang wasn’t the beginning of the universe; rather, the universe was present in the singularity from which the Big Bang arose. The singularity might therefore have existed all along.

Craig, on the other hand, sees the hand of God in the Big Bang. The presence of the singularity (the original clump of material “stuff”) had to have been created so that the Big Bang could follow. That’s all well and good, but what was God doing before the Big Bang, that is, in the infinite span of time before 15 billion years ago? (Is it presumptuous of me to ask?) And why should the Big Bang prove God’s existence any more than, say, a universe that came into being at an indeterminate time? The necessity of God (or some kind of creator) arises from the known character of the universe: material effects follow from material causes, which cannot cause themselves. In short, Craig pins too much on the Big Bang, and his argument would collapse if the Big Bang is found to be a figment of observational error.

There’s much more to come, I hope.

Existence and Knowledge

Philosophical musings by a non-philosopher which are meant to be accessible to other non-philosophers.

Ontology is the branch of philosophy that deals with existence. Epistemology is the branch of philosophy that deals with knowledge.

I submit (with no claim to originality) that existence (what really is) is independent of knowledge (proposition A), but knowledge is impossible without existence (proposition B).

In proposition A, I include in existence those things that exist in the present, those things that have existed in the past, and the processes (happenings) by which past existences either end (e.g., death of an organism, collapse of a star) or become present existences (e.g., an older version of a living person, the formation of a new star). That which exists is real; existence is reality.

In proposition B, I mean knowledge as knowledge of that which exists, and not the kind of “knowledge” that arises from misperception, hallucination, erroneous deduction, lying, and so on. Much of what is called scientific knowledge is “knowledge” of the latter kind because, as scientists know (when they aren’t advocates), scientific knowledge is provisional. Proposition B implies that knowledge is something that human beings and other living organisms possess, to widely varying degrees of complexity. (A flower may “know” that the Sun is in a certain direction, but not in the same way that a human being knows it.) In what follows, I assume the perspective of human beings, including various compilations of knowledge resulting from human endeavors. (Aside: Knowledge is self-referential, in that it exists and is known to exist.)

An example of proposition A is the claim that there is a falling tree (it exists), even if no one sees, hears, or otherwise detects the tree falling. An example of proposition B is the converse of Cogito, ergo sum, I think, therefore I am; namely, I am, therefore I (a sentient being) am able to know that I am (exist).

Here’s a simple illustration of proposition A. You have a coin in your pocket, though I can’t see it. The coin is, and its existence in your pocket doesn’t depend on my act of observing it. You may not even know that there is a coin in your pocket. But it exists — it is — as you will discover later when you empty your pocket.

Here’s another one. Earth spins on its axis, even though the “average” person perceives it only indirectly in the daytime (by the apparent movement of the Sun) and has no easy way of perceiving it (without the aid of a Foucault pendulum) in the dark or while asleep. Sunrise (or at least a diminution of darkness) is a simple bit of evidence for the reality of Earth spinning on its axis without our having perceived it.

Now for a somewhat more sophisticated illustration of proposition A. One interpretation of quantum mechanics is that a sub-atomic particle (really an electromagnetic phenomenon) exists in an indeterminate state until an observer measures it, at which time its state is determinate. There’s no question that the particle exists independently of observation (knowledge of the particle’s existence), but its specific characteristic (quantum state) is determined by the act of observation. Does this mean that existence of a specific kind depends on knowledge? No. It means that observation determines the state of the particle, which can then be known. Observation precedes knowledge, even if the gap is only infinitesimal. (A clear-cut case is the autopsy of a dead person to determine his cause of death. The autopsy didn’t cause the person’s death, but came after it as an act of observation.)

Regarding proposition B, there are known knowns, known unknowns, unknown unknowns, and unknown “knowns”. Examples:

Known knowns (real knowledge = true statements about existence) — The experiences of a conscious, sane, and honest person: I exist; I am eating; I had a dream last night; etc. (Recollections of details and events, however, are often mistaken, especially with the passage of time.)

Known unknowns (provisional statements of fact; things that must be or have been but which are not in evidence) — Scientific theories, hypotheses, data upon which these are based, and conclusions drawn from them. The immediate causes of the deaths of most persons who have died since the advent of homo sapiens. The material process by which the universe came to be (i.e., what happened to cause the Big Bang, if there was a Big Bang).

Unknown unknowns (things that exist but are unknown to anyone) — Almost everything about the universe.

Unknown “knowns” (delusions and outright falsehoods accepted by some persons as facts) — Frauds, scientific and other. The apparent reality of a dream.

Regarding unknown “knowns”, one might dream of conversing with a dead person, for example. The conversation isn’t real, only the dream is. And it is real only to the dreamer. But it is real, nevertheless. And the brain activity that causes a dream is real even if the person in whom the activity occurs has no perception or memory of a dream. A dream is analogous to a movie about fictional characters. The movie is real but the fictional characters exist only in the script of the movie and the movie itself. The actors who play the fictional characters are themselves, not the fictional characters.

There is a fine line between known unknowns (provisional statements of fact) and unknown “knowns” (delusions and outright falsehoods). The former are statements about existence that are made in good faith. The latter are self-delusions of some kind (e.g., the apparent reality of a dream as it occurs), falsehoods that acquire the status of “truth” (e.g., George Washington’s false teeth were made of wood), or statements of “fact” that are made in bad faith (e.g., adjusting the historic temperature record to make the recent past seem warmer relative to the more distant past).

The moral of the story is that a doubting Thomas is a wise person.