How many things does a human being believe because he wants to believe them, and not because there is compelling evidence to support his beliefs? Here is a small sample of what must be an extremely long list:
There is a God. (1a)
There is no God. (1b)
There is a Heaven. (2a)
There is no Heaven. (2b)
Jesus Christ was the Son of God. (3a)
Jesus Christ, if he existed, was a mere mortal. (3b)
Marriage is the eternal union, blessed by God, of one man and one woman. (4a)
Marriage is a civil union, authorized by the state, of one or more consenting adults (or not) of any gender, as the participants in the marriage so define themselves to be. (4b)
All human beings should have equal rights under the law, and those rights should encompass both negative rights (e.g., not to be murdered or defrauded) and positive rights (e.g., to benefit from government-granted promotions, college admissions, and other people’s money). (5a)
Human beings are, at bottom, feral animals and cannot therefore be expected to abide always by artificial constructs, such as equal rights under the law. Accordingly, there will always be persons who use the law (or merely brute force) to set themselves above other persons. (5b)
The rise in global temperatures over the past 170 years has been caused primarily by a greater concentration of carbon dioxide in the atmosphere, an increase that has been caused by human activity – and especially by the burning of fossil fuels. This rise, if it isn’t brought under control, will make human existence far less bearable and prosperous than it has been in recent human history. (6a)
The rise in global temperatures over the past 170 years has not been uniform across the globe, and has not been in lockstep with the rise in the concentration of atmospheric carbon dioxide. The temperatures of recent decades, and the rate at which they are supposed to have risen, are not unprecedented in the long view of Earth’s history, and may therefore be due to conditions that have not been given adequate consideration by believers in anthropogenic global warming (e.g., natural shifts in ocean currents that have different effects on various regions of Earth, the effects of cosmic radiation on cloud formation as influenced by solar activity and the position of the solar system and the galaxy with respect to other objects in the universe, the shifting of Earth’s magnetic field, and the movement of Earth’s tectonic plates and its molten core). In any event, the models of climate change have been falsified by measured temperatures (even when the temperature record has been adjusted to support the models). And predictions of catastrophe do not take into account the beneficial effects of warming (e.g., lower mortality rates, longer growing seasons), whatever causes it, or the ability of technology to compensate for undesirable effects at a much lower cost than the economic catastrophe that would result from preemptive reductions in the use of fossil fuels. (6b)
Not one of those assertions, even the ones that seem to be supported by facts, is true beyond a reasonable doubt. I happen to believe 1a (with some significant qualifications about the nature of God), 2b, 3b (given my qualified version of 1a), a modified version of 4a (monogamous, heterosexual marriage is socially and economically preferable, regardless of its divine blessing or lack thereof), 5a (but only with negative rights) and 5b, and 6b. But I cannot “prove” that any of my beliefs is the correct one, nor should anyone believe that anyone can “prove” such things.
Take the belief that all persons are created equal. No one who has eyes, ears, and a minimally functioning brain believes that all persons are created equal, though they may (if they are law-abiding) deserve equal treatment under the law (restricted to the enforcement of their negative rights).
On September 18, 1858, at Charleston, Illinois, Lincoln told the assembled audience:
I am not, nor ever have been, in favor of bringing about in any way the social and political equality of the white and black races, that I am not, nor ever have been, in favor of making voters or jurors of negroes, nor of qualifying them to hold office, nor to intermarry with white people; and I will say in addition to this that there is a physical difference between the white and black races which I believe will forever forbid the two races living together on terms of social and political equality … I will add to this that I have never seen, to my knowledge, a man, woman, or child who was in favor of producing a perfect equality, social and political, between negroes and white men….
This was before Lincoln was elected president and before the outbreak of the Civil War, but Lincoln’s speeches, writings, and actions after these events continued to reflect this point of view about race and equality.
African American abolitionist Frederick Douglass, for his part, remained very skeptical about Lincoln’s intentions and program, even after the president issued a preliminary emancipation proclamation in September 1862.
Douglass had good reason to mistrust Lincoln. On December 1, 1862, one month before the scheduled issuing of an Emancipation Proclamation, the president offered the Confederacy another chance to return to the union and preserve slavery for the foreseeable future. In his annual message to Congress, Lincoln recommended a constitutional amendment which, if it had passed, would have been the Thirteenth Amendment to the Constitution.
The amendment proposed gradual emancipation that would not be completed for another thirty-seven years, taking slavery in the United States into the twentieth century; compensation, not for the enslaved, but for the slaveholder; and the expulsion, supposedly voluntary but essentially a new Trail of Tears, of formerly enslaved Africans to the Caribbean, Central America, and Africa….
Douglass’ suspicions about Lincoln’s motives and actions once again proved to be legitimate. On December 8, 1863, less than a month after the Gettysburg Address, Abraham Lincoln offered full pardons to Confederates in a Proclamation of Amnesty and Reconstruction that has come to be known as the 10 Percent Plan.
Self-rule in the South would be restored when 10 percent of the “qualified” voters according to “the election law of the state existing immediately before the so-called act of secession” pledged loyalty to the union. Since blacks could not vote in these states in 1860, this was not to be government of the people, by the people, for the people, as promised in the Gettysburg Address, but a return to white rule.
It is unnecessary, though satisfying, to read Charles Murray’s account in Human Diversity of the broad range of inherent differences in intelligence and other traits that are associated with the sexes, various genetic groups of geographic origin (sub-Saharan Africans, East Asians, etc.), and various ethnic groups (e.g., Ashkenazi Jews).
But even if all persons are not created equal, either mentally or physically, aren’t they equal under the law? If you believe that, you might just as well believe in the tooth fairy. As it says in 5b,
Human beings are, at bottom, feral animals and cannot therefore be expected to abide always by artificial constructs, such as equal rights under the law. Accordingly, there will always be persons who use the law (or merely brute force) to set themselves above other persons.
Yes, it’s only a hypothesis, but one for which there is ample evidence in the history of mankind. It is confirmed by every instance of theft, murder, armed aggression, scorched-earth warfare, mob violence as catharsis, bribery, election fraud, gratuitous cruelty, and so on into the night.
And yet, human beings (Americans especially, it seems) persist in believing tooth-fairy stories about the inevitable triumph of good over evil, self-correcting science, and the emergence of truth from the marketplace of ideas. Balderdash, all of it.
But desiderata become beliefs. And beliefs are what bind people – or make enemies of them.
Fasten your seat belts and get ready for a hard landing.
Like many other conservatives, I expected much more from the mid-term election than (perhaps) a slim majority in the House of Representatives. I said this (optimistically) in “The Bitter Fruits of America’s Disintegration”:
Is there hope for an American renaissance? The upcoming mid-term election will be pivotal but not conclusive. It will be a very good thing if the GOP regains control of Congress. But it will take more than that to restore sanity to the land.
A Republican (of the right kind) must win in 2024. The GOP majority in Congress must be enlarged. A purge of the deep state must follow, and it must scour every nook and cranny of the central government to remove every bureaucrat who has a leftist agenda and the ability to thwart the administration’s initiatives.
Beyond that, the American people should be rewarded for their (aggregate) return to sanity by the elimination of several burdensome (and unconstitutional) departments of the executive branch, by the appointment of dozens of pro-constitutional judges, and by the appointment of a string of pro-constitutional justices of the Supreme Court.
After that, the rest will take care of itself: Renewed economic vitality, a military whose might deters our enemies, and something like the restoration of sanity in cultural matters. (Bandwagon effects are powerful, and they can go uphill as well as downhill.)
But all of that is hope. The restoration of America’s greatness will not be easy or without acrimony and setbacks.
If America’s greatness isn’t restored, America will become a vassal state. And the leftists who made it possible will be the first victims of their new masters.
The election was pivotal, but not in the way that I expected it to be. It marked the end of hope for a restoration of sanity. All that happened is that some Red States got Redder, some Blue States got Bluer, and the loony left in the House gained several new members. The latter development is probably the best indication of what will happen in the coming elections.
If there was no “Red wave” this year, it’s unlikely that there’ll be one in the future. What’s more likely is that the election of 2024 will return a Democrat (not Biden) to the White House and that Democrat control of Congress will be restored. From there, expect the following:
“Wokeness” will be in the saddle.
Influential institutions (Big Tech, the media, the academy, public “education”, and most government bureaucracies) will be more than ever dominated by the left.
Violent crime and the coddling of criminals will continue apace, or get worse.
There will be more suppression of conservative views through electronic censorship, financial blackmail, and selective enforcement of laws.
The insanity of replacing reliable and cheap fossil fuels with unreliable and therefore expensive “renewables” will continue with a vengeance.
The regulatory-welfare state will control more and more of the economy.
Inflation will continue to wreak economic havoc until price controls make things worse by further disincentivizing productive capital investments and entrepreneurship.
Defense spending will continue to be well below what is required to deter America’s enemies and protect Americans’ overseas interests.
In short, the decline of America — social, economic, and military — will continue. Our enemies will be able to dictate the terms on which America may survive — economically and politically. There will be no Chamberlain-esque moment of surrender; it will just happen gradually and with official approval.
The only hope for (some) Americans is a national divorce. But with the left in the saddle, that is no more likely to happen than was the emancipation of slaves by peaceful means.
I hope I’m wrong again, but I fear that I am right.
I discussed Type 1 and Type 2 thinking in the previous entry. Type 2 thinking — deliberate reasoning — has two main branches: scientific and scientistic.
The scientific branch leads (often in roundabout ways) to improvements in the lot of mankind: better and more abundant food, better clothing, better shelter, faster and more comfortable means of transportation, better sanitation, a better understanding of diseases and more effective means of combating them, and on and on.
You might protest that not all of those things, and perhaps only a minority of them, emanated from formal scientific endeavors conducted by holders of Ph.D. and M.D. degrees working out of pristine laboratories or with delicate equipment. But science is much more than that. Science includes learning by doing, which encompasses everything from the concoction of effective home remedies to the hybridization of crops to the invention and refinement of planes, trains, and automobiles – and, needless to say, to the creation and development of much of the electronic technology and related software with which we are “blessed” today.
The scientific branch yields its fruits because it is based on facts about the so-called material universe. The essence of the universe may be unknown and unknowable, as discussed in an earlier entry, but it manifests itself in observable and often predictable ways.
The scientific branch, in sum, is inductive at its core. Observations of specific phenomena lead to guesses about the causes of those phenomena or the relationships between them. The guesses are codified as hypotheses, often in mathematical form. The hypotheses are tested against new observations of the same kinds of phenomena. If the hypotheses are found wanting, they are either rejected outright or modified to take into account the new observations. Revised hypotheses are then tested against newer observations, and so on. (There is nothing scientific about testing a new hypothesis against the observations that led to it; that is a scientistic trick used by, among others, climate “scientists” who wish to align their models with historical climate data.)
If new observations are found to comport with a new hypothesis, the hypothesis is said to be confirmed. Confirmed doesn’t mean proven; it just means not disproved. Lay persons — and a lot of scientists, apparently — mistake confirmation, in the scientific sense, for proof. There is no such thing in science.
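To make the inductive loop concrete, here is a minimal sketch in Python. It is illustrative only: the data are invented, and the “hypothesis” is just a fitted slope. The honest test is the score on observations that played no part in forming the hypothesis.

import random

random.seed(42)

def observe(n):
    """Draw n (x, y) observations from a process whose law we pretend not to know."""
    return [(x, 2.0 * x + random.gauss(0, 1))
            for x in (random.uniform(0, 10) for _ in range(n))]

def fit_line(data):
    """Codify a hypothesis: a least-squares slope through the origin."""
    return sum(x * y for x, y in data) / sum(x * x for x, _ in data)

def mean_sq_error(slope, data):
    return sum((y - slope * x) ** 2 for x, y in data) / len(data)

past = observe(50)        # the observations that suggested the hypothesis
slope = fit_line(past)    # the hypothesis, codified in mathematical form

new = observe(50)         # fresh observations, gathered afterward
print("error on the data that produced the hypothesis:", round(mean_sq_error(slope, past), 3))
print("error on new observations (the honest test):   ", round(mean_sq_error(slope, new), 3))
# Only the second number bears on confirmation -- i.e., on whether the
# hypothesis has (so far) escaped disproof.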
The scientistic branch of Type 2 thinking is deductive. It assumes truths and then generalizes from those assumptions; for example:
All Cretans are liars, according to Epimenides (a Cretan who lived ca. 600 BC).
Epimenides was a Cretan.
Therefore, Epimenides was a liar.
But if Epimenides was lying, then not all Cretans are liars, and the first premise is false. The syllogism is valid in form but unsound: the flaw in the argument is the unprovable (indeed self-refuting) generalization that all Cretans are liars.
The syllogism illustrates the fatuousness of deductive reasoning, that is, reasoning which proceeds from general statements that cannot be disproven (falsified).
Though deductive reasoning can be useful in contriving hypotheses, it cannot be used to “prove” anything. But there are persons who claim to be scientists, or who claim to “believe” science, who do reason deductively. It starts when a hypothesis that has been advanced by a scientist becomes an article of faith to that scientist, to a group of scientists, or to non-scientists who use their belief to justify political positions – which they purport to be “scientific” or “science-based”.
There is no more “science” in such positions than there is in the belief that the Sun revolves around the Earth or that all persons are created equal. The Sun may seem to revolve around the Earth if one’s perspective is limited to the relative motions of Sun and Earth and anchored in the implicit assumption that Earth’s position is fixed. All persons may be deemed equal in a narrow and arbitrary way — as in the legal doctrine of equal treatment under the law — but that hardly makes all persons equal in every respect; for example, in intelligence, physical strength, athletic ability, attractiveness to the opposite sex, work ethic, conditions of birth, or proneness to various ailments. (I will say more about equality as a non-scientific desideratum in the next entry.)
This isn’t to say that some scientific hypotheses — and their implications — can’t be relied upon. If they couldn’t be, humans wouldn’t have benefited from the many things mentioned earlier in this post — and much more. But confirmed hypotheses can be relied upon because they are based on observed phenomena, tested in the acid of use, and — most important — employed with ample safeguards, which still may be inadequate to real-world conditions. Despite the best efforts of physicists, chemists, and engineers, airplanes crash, bridges collapse, and so on, because there is never enough knowledge to foresee all of the conditions that might arise in the real world.
This isn’t to say that human beings would be better off without science. Far from it. Science and its practical applications have made us far better off than we would be without them. But neither scientists nor those who apply the (tentative) findings of science are infallible.
It’s the time of year when economists like to remind the unwashed that voting is a waste of time. And right on schedule there’s “Sorry, But Your Vote Doesn’t Count” by Pierre Lemieux, writing at EconLog. A classic of the genre appeared 17 years ago, in the form of “Why Vote?” by Stephen J. Dubner and Steven D. Levitt (of Freakonomics fame). Here are some relevant passages:
The odds that your vote will actually affect the outcome of a given election are very, very, very slim. This was documented by the economists Casey Mulligan and Charles Hunter, who analyzed more than 56,000 Congressional and state-legislative elections since 1898. For all the attention paid in the media to close elections, it turns out that they are exceedingly rare. The median margin of victory in the Congressional elections was 22 percent; in the state-legislature elections, it was 25 percent. Even in the closest elections, it is almost never the case that a single vote is pivotal. Of the more than 40,000 elections for state legislator that Mulligan and Hunter analyzed, comprising nearly 1 billion votes, only 7 elections were decided by a single vote, with 2 others tied. Of the more than 16,000 Congressional elections, in which many more people vote, only one election in the past 100 years – a 1910 race in Buffalo – was decided by a single vote….
Still, people do continue to vote, in the millions. Why? Here are three possibilities:
1. Perhaps we are just not very bright and therefore wrongly believe that our votes will affect the outcome.
2. Perhaps we vote in the same spirit in which we buy lottery tickets. After all, your chances of winning a lottery and of affecting an election are pretty similar. From a financial perspective, playing the lottery is a bad investment. But it’s fun and relatively cheap: for the price of a ticket, you buy the right to fantasize how you’d spend the winnings – much as you get to fantasize that your vote will have some impact on policy.
3. Perhaps we have been socialized into the voting-as-civic-duty idea, believing that it’s a good thing for society if people vote, even if it’s not particularly good for the individual. And thus we feel guilty for not voting. [The New York Times Magazine, November 6, 2005]
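For concreteness, here is the arithmetic implicit in the Mulligan-Hunter figures quoted above, as a back-of-the-envelope Python sketch (the 40,000 and the 7 come from the excerpt; everything else is illustrative):

elections = 40_000   # state-legislative elections analyzed
pivotal = 7          # decided by a single vote

share = pivotal / elections
print(f"Share of elections decided by one vote: {share:.5f} (about 1 in {elections // pivotal:,})")
# ~0.0175 percent. And even in those rare races, the chance that any one
# particular ballot was the pivotal one is smaller still.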
In true economistic fashion, Dubner and Levitt omit a key reason for voting: It makes a person feel good. Even if one’s vote will not change the outcome of an election, one attains a degree of satisfaction from taking an official (even if secret) stand in favor of or in opposition to a certain candidate, bond issue, or other issue on a ballot.
Dubner and Levitt (and their ilk) seem to inhabit a world in which a thing is not worth doing unless the payoff can be measured with some precision and compared with other, similarly quantifiable, uses of one’s time and money. I doubt that they govern their own lives accordingly. If they do, they must be missing out on a lot of life’s pleasures: sex and ice cream, to name only two.
Their article continues on a different tack:
But wait a minute, you say. If everyone thought about voting the way economists do, we might have no elections at all. No voter goes to the polls actually believing that her single vote will affect the outcome, does she? And isn’t it cruel to even suggest that her vote is not worth casting?
This is indeed a slippery slope – the seemingly meaningless behavior of an individual, which, in aggregate, becomes quite meaningful. Here’s a similar example in reverse. Imagine that you and your 8-year-old daughter are taking a walk through a botanical garden when she suddenly pulls a bright blossom off a tree.
“You shouldn’t do that,” you find yourself saying.
“Why not?” she asks.
“Well,” you reason, “because if everyone picked one, there wouldn’t be any flowers left at all.”
“Yeah, but everybody isn’t picking them,” she says with a look. “Only me.”
Clever, what? Too clever by half. This argument overlooks the powerful effect of exemplary behavior — where “exemplary”, as used here, does not imply “laudable”. By Dubner and Levitt’s account, allowing a vandal to deface a public building would not encourage other vandals to do the same thing, and would not lead to the widespread defacement of buildings and other anti-social acts. (I refer, of course, to James Q. Wilson’s Broken Windows Theory, on which Levitt and Dubner tried to cast doubt in Freakonomics. They wrongly suggested that the onset of legalized abortion was instrumental in the reduction of crime rates.)
Dubner and Levitt’s argument also overlooks the key fact that when economists preach against voting, they are not just preaching to themselves. Dubner and Levitt’s sermon appeared in the pages of one of the country’s most widely read and influential publications. It was not addressed to an individual person, but to thousands upon thousands of persons. And I doubt that they would have objected if the article had appeared in every newspaper and magazine in the country. In effect, the Dubner-Levitt argument is not just an argument that the marginal vote makes little difference — it is advice to millions of Americans that they should abstain from voting.
That’s paradoxical advice. Abstention by millions of Americans could very well make a difference in the outcome of an election. The tendency to abstain might, in a particular election, be disproportionate to party affiliation. That’s why political campaigns try to counter apathy by whipping up enthusiasm. For example, Democrats might be able to pare their losses in the coming election if they can convince enough pro-Democrat voters that defeat isn’t inevitable, and that their votes will make a difference.
In any event, Levitt, Dubner, and their ilk are guilty of paternalism as well as economism.
It’s true that instinctive (or impulsive) actions can be foolish, dangerous, and deadly. But they can also be beneficial. If, in your peripheral vision, you see an object hurtling toward you at high speed, you don’t deliberately compute its trajectory and decide whether to move out of its path. No, your brain does that for you without your having to “think” about it. And if your brain works quickly enough, you will have moved out of the object’s path before you would have finished “thinking” about what to do.
In sum, you (your brain) engaged in Type 1 thinking about the problem at hand and resolved it quickly. If you had engaged in deliberate Type 2 thinking, you might have been killed by the impact of the object that was hurtling toward you.
The distinction that I’m making here is one that Daniel Kahneman labors over in Thinking, Fast and Slow. But I won’t bore you with the details of that boring book. Life is too short, and certainly shorter for me than for most of you. Let’s just say that there’s nothing especially meritorious about Type 2 thinking, and that it can lead to actions that are as foolish, dangerous, and deadly as those that result from “instinct”.
I will go further and say that Type 2 thinking has brought Americans to the brink of bankruptcy, serfdom, and civil war. But to understand why I say that, you will have to follow this series to its bitter-sweet ending.
* * *
If the need to survive ever had anything to do with the advancement of human intelligence and knowledge, that day is long past for most human beings in “developed” nations.
Type 1 thinking is restricted mainly to combat, competitive sports, operating motorized equipment, playing video games, and reacting to photos of Donald Trump or Joe Biden. It is the key to survival in a narrow range of activities aside from combat, such as driving on a busy highway, ducking when a lethal projectile is headed your way, and instinctively avoiding persons whose actions or appearance seem menacing. The erosion of the avoidance instinct is due in part to the cosseted lives that most Westerners (and Japanese) lead, and in part to the barrage of propaganda that denies differences in the behavior of various classes, races, and ethnic groups. (Thus, for example, disruptive black children aren’t to be ejected from classrooms unless an equal proportion of white children, disruptive or not, is likewise ejected.)
Type 2 thinking of the kind that might advance useful knowledge and its beneficial application is a specialty of the educated, intermarrying elite – a class that dominates academia and the applied sciences (e.g., medicine, medical research, and the various fields of engineering). The same class also dominates the media (including so-called entertainment), “technology” companies (most of which don’t really produce technology), the upper echelons of major corporations, and the upper echelons of government.
But, aside from academicians and professionals whose work advances practical knowledge (how to build a better mousetrap, a more earthquake-resistant building, a less collapsible bridge, or an effective vaccine), the members of the aforementioned class have nothing on the yeomen who become skilled in sundry trades (construction, plumbing, electrical work) by the heuristic method — learning and improving by doing. That, too, is Type 2 thinking (though it often incorporates the sudden insights yielded by Type 1 thinking). But practical knowledge accumulates over years and is tested in the acid of use, unlike the kind of Type 2 thinking that produces intricate but wildly inaccurate climate models, which their designers believe in and defend because they are emotional human beings, like all of us.
Type 2 thinking, despite the stereotype that it is deliberate and dispassionate, is riddled with emotion. Emotion isn’t just rage, lust, and the like. Those are superficial manifestations of the thing that drives us all: egoism.
No matter how you slice it, everything that a person does deliberately — including Type 2 thinking — is done to bolster his own sense of well-being. Altruism is merely the act of doing good for others so that one may feel better about oneself. You cannot be another person, and actually feel what another person is experiencing. You can only be a person whose sense of self is invested in loving another person or being thought of as loving mankind — whatever that means.
Type 2 thinking — the Enlightenment’s exalted “reason” — is both an aid to survival and a hindrance to it. It is an aid in ways such as those mentioned above, that is, in the advancement of practical knowledge to defeat disease, move people faster and more safely, build dwellings that will stand up against the elements, and so on.
It is a hindrance when, as Shakespeare’s Hamlet says, “the native hue of resolution Is sicklied o’er with the pale cast of thought”. Type 1 thinking causes us to smite an enemy. Type 2 thinking causes us to believe, quite wrongly, that by sparing an enemy we somehow become a law-abiding exemplar whose forbearance diminishes the level of violence in the world and the likelihood that violence will be visited upon us in the future.
Neville Chamberlain exemplified Type 2 thinking when he settled for Hitler’s empty promise of peace instead of gearing up to fight an inevitable war. Lyndon Johnson exemplified Type 2 thinking in his vacillating prosecution of the war in Vietnam, where he was more concerned with “world opinion” (whatever that is) and “public opinion” (i.e., the bleating of pundits and protestors) than he was with the real job of the commander-in-chief, which is to fight and win or don’t fight at all. George H.W. Bush exemplified Type 2 thinking when he declined to depose Saddam Hussein in 1991. Barack Obama exemplified Type 2 thinking when he made a costly deal with Iran’s ayatollahs that profited them greatly for an easily betrayed promise to refrain from the development of nuclear weapons. Type 2 thinking of the kind exemplified by Chamberlain, Johnson, Bush, and Obama is egoistic and delusional: It reflects and justifies the thinker’s inner view of the world as he wants it to be, not the world as it is.
Type 2 thinking is valuable to the survival of humanity when it passes the acid test of use. It is a danger to the survival of humanity when it arises from a worldview that excludes the facts of life. One of those facts of life is that predators exist and must be killed or somehow (and usually at greater expense) neutralized.
Evolution is simply change in organic (living) objects. Evolution, as a subject of scientific inquiry, is an attempt to explain how humans (and other animals) came to be what they are today.
Evolution (as a discipline) is as much scientism as it is science. Scientism, according to thefreedictionary.com, is “the uncritical application of scientific or quasi-scientific methods to inappropriate fields of study or investigation.” When scientists proclaim truths instead of propounding hypotheses, they are guilty of practicing scientism. Two notable scientistic scientists are Richard Dawkins and Peter Singer. It is unsurprising that Dawkins and Singer are practitioners of scientism. Both are strident atheists, and strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side.
Dawkins, Singer, and many other scientistic atheists share an especially “religious” view of evolution. In brief, they seem to believe that evolution rules out God. Evolution rules out nothing. Evolution may be true in outline, but it does not bear close inspection. On that point, I turn to David Gelernter’s “Giving Up Darwin” (Claremont Review of Books, Spring 2019):
Darwin himself had reservations about his theory, shared by some of the most important biologists of his time. And the problems that worried him have only grown more substantial over the decades. In the famous “Cambrian explosion” of around half a billion years ago, a striking variety of new organisms—including the first-ever animals—pop up suddenly in the fossil record over a mere 70-odd million years. This great outburst followed many hundreds of millions of years of slow growth and scanty fossils, mainly of single-celled organisms, dating back to the origins of life roughly three and a half billion years ago.
Darwin’s theory predicts that new life forms evolve gradually from old ones in a constantly branching, spreading tree of life. Those brave new Cambrian creatures must therefore have had Precambrian predecessors, similar but not quite as fancy and sophisticated. They could not have all blown out suddenly, like a bunch of geysers. Each must have had a closely related predecessor, which must have had its own predecessors: Darwinian evolution is gradual, step-by-step. All those predecessors must have come together, further back, into a series of branches leading down to the (long ago) trunk.
But those predecessors of the Cambrian creatures are missing. Darwin himself was disturbed by their absence from the fossil record. He believed they would turn up eventually. Some of his contemporaries (such as the eminent Harvard biologist Louis Agassiz) held that the fossil record was clear enough already, and showed that Darwin’s theory was wrong. Perhaps only a few sites had been searched for fossils, but they had been searched straight down. The Cambrian explosion had been unearthed, and beneath those Cambrian creatures their Precambrian predecessors should have been waiting—and weren’t. In fact, the fossil record as a whole lacked the upward-branching structure Darwin predicted.
The trunk was supposed to branch into many different species, each species giving rise to many genera, and towards the top of the tree you would find so much diversity that you could distinguish separate phyla—the large divisions (sponges, mosses, mollusks, chordates, and so on) that comprise the kingdoms of animals, plants, and several others—take your pick. But, as [David] Berlinski points out, the fossil record shows the opposite: “representatives of separate phyla appearing first followed by lower-level diversification on those basic themes.” In general, “most species enter the evolutionary order fully formed and then depart unchanged.” The incremental development of new species is largely not there. Those missing pre-Cambrian organisms have still not turned up. (Although fossils are subject to interpretation, and some biologists place pre-Cambrian life-forms closer than others to the new-fangled Cambrian creatures.)
Some researchers have guessed that those missing Precambrian precursors were too small or too soft-bodied to have made good fossils. Meyer notes that fossil traces of ancient bacteria and single-celled algae have been discovered: smallness per se doesn’t mean that an organism can’t leave fossil traces—although the existence of fossils depends on the surroundings in which the organism lived, and the history of the relevant rock during the ages since it died. The story is similar for soft-bodied organisms. Hard-bodied forms are more likely to be fossilized than soft-bodied ones, but many fossils of soft-bodied organisms and body parts do exist. Precambrian fossil deposits have been discovered in which tiny, soft-bodied embryo sponges are preserved—but no predecessors to the celebrity organisms of the Cambrian explosion.
This sort of negative evidence can’t ever be conclusive. But the ever-expanding fossil archives don’t look good for Darwin, who made clear and concrete predictions that have (so far) been falsified—according to many reputable paleontologists, anyway. When does the clock run out on those predictions? Never. But any thoughtful person must ask himself whether scientists today are looking for evidence that bears on Darwin, or looking to explain away evidence that contradicts him. There are some of each. Scientists are only human, and their thinking (like everyone else’s) is colored by emotion.
Yes, emotion, the thing that colors thought. Emotion is something that humans and other animals have. If Darwin and his successors are correct, emotion must be a faculty that improves the survival and reproductive fitness of a species.
But that can’t be true, because emotion is the spark that lights murder, genocide, and war. World War II, alone, is said to have occasioned the deaths of more than seventy million humans. Prominent among those killed were six million Ashkenazi Jews, members of a distinctive branch of humanity whose members (on average) are significantly more intelligent than other branches, and who have contributed beneficially to science, literature, and the arts (especially music).
The evil by-products of emotion – such as the near-extermination of peoples (Ashkenazi Jews among them) – should cause one to doubt that the persistence of a trait in the human population means that the trait is beneficial to survival and reproduction.
At the very beginning of his treatise Vertebrate Paleontology and Evolution, Robert Carroll observes quite correctly that “most of the fossil record does not support a strictly gradualistic account” of evolution. A “strictly gradualistic” account is precisely what Darwin’s theory demands: It is the heart and soul of the theory….
In a research survey published in 2001, and widely ignored thereafter, the evolutionary biologist Joel Kingsolver reported that in sample sizes of more than one thousand individuals, there was virtually no correlation between specific biological traits and either reproductive success or survival. “Important issues about selection,” he remarked with some understatement, “remain unresolved.”
Of those important issues, I would mention prominently the question whether natural selection exists at all.
Computer simulations of Darwinian evolution fail when they are honest and succeed only when they are not. Thomas Ray has for years been conducting computer experiments in an artificial environment that he has designated Tierra. Within this world, a shifting population of computer organisms meet, mate, mutate, and reproduce.
Sandra Blakeslee, writing for The New York Times, reported the results under the headline “Computer ‘Life Form’ Mutates in an Evolution Experiment: Natural Selection Is Found at Work in a Digital World.”
Natural selection found at work? I suppose so, for as Blakeslee observes with solemn incomprehension, “the creatures mutated but showed only modest increases in complexity.” Which is to say, they showed nothing of interest at all. This is natural selection at work, but it is hardly work that has worked to intended effect.
What these computer experiments do reveal is a principle far more penetrating than any that Darwin ever offered: There is a sucker born every minute….
“Contemporary biology,” [Daniel Dennett] writes, “has demonstrated beyond all reasonable doubt that natural selection — the process in which reproducing entities must compete for finite resources and thereby engage in a tournament of blind trial and error from which improvements automatically emerge — has the power to generate breathtakingly ingenious designs” (italics added).
These remarks are typical in their self-enchanted self-confidence. Nothing in the physical sciences, it goes without saying — right? — has been demonstrated beyond all reasonable doubt. The phrase belongs to a court of law. The thesis that improvements in life appear automatically represents nothing more than Dennett’s conviction that living systems are like elevators: If their buttons are pushed, they go up. Or down, as the case may be. Although Darwin’s theory is very often compared favorably to the great theories of mathematical physics on the grounds that evolution is as well established as gravity, very few physicists have been heard observing that gravity is as well established as evolution. They know better and they are not stupid….
The greater part of the debate over Darwin’s theory is not in service to the facts. Nor to the theory. The facts are what they have always been: They are unforthcoming. And the theory is what it always was: It is unpersuasive. Among evolutionary biologists, these matters are well known. In the privacy of the Susan B. Anthony faculty lounge, they often tell one another with relief that it is a very good thing the public has no idea what the research literature really suggests.
“Darwin?” a Nobel laureate in biology once remarked to me over his bifocals. “That’s just the party line.”
In the summer of 2007, Eugene Koonin, of the National Center for Biotechnology Information at the National Institutes of Health, published a paper entitled “The Biological Big Bang Model for the Major Transitions in Evolution.”
The paper is refreshing in its candor; it is alarming in its consequences. “Major transitions in biological evolution,” Koonin writes, “show the same pattern of sudden emergence of diverse forms at a new level of complexity” (italics added). Major transitions in biological evolution? These are precisely the transitions that Darwin’s theory was intended to explain. If those “major transitions” represent a “sudden emergence of new forms,” the obvious conclusion to draw is not that nature is perverse but that Darwin was wrong….
Koonin is hardly finished. He has just started to warm up. “In each of these pivotal nexuses in life’s history,” he goes on to say, “the principal ‘types’ seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate ‘grades’ or intermediate forms between different types are detectable.”…
[H]is views are simply part of a much more serious pattern of intellectual discontent with Darwinian doctrine. Writing in the 1960s and 1970s, the Japanese mathematical biologist Motoo Kimura argued that on the genetic level — the place where mutations take place — most changes are selectively neutral. They do nothing to help an organism survive; they may even be deleterious…. Kimura was perfectly aware that he was advancing a powerful argument against Darwin’s theory of natural selection. “The neutral theory asserts,” he wrote in the introduction to his masterpiece, The Neutral Theory of Molecular Evolution, “that the great majority of evolutionary changes at the molecular level, as revealed by comparative studies of protein and DNA sequences, are caused not by Darwinian selection but by random drift of selectively neutral or nearly neutral mutations” (italics added)….
Writing in the Proceedings of the National Academy of Sciences, the evolutionary biologist Michael Lynch observed that “Dawkins’s agenda has been to spread the word on the awesome power of natural selection.” The view that results, Lynch remarks, is incomplete and therefore “profoundly misleading.” Lest there be any question about Lynch’s critique, he makes the point explicitly: “What is in question is whether natural selection is a necessary or sufficient force to explain the emergence of the genomic and cellular features central to the building of complex organisms.”
Survival and reproduction depend on many traits. A particular trait, considered in isolation, may seem to be helpful to the survival and reproduction of a group. But that trait may not be among the particular collection of traits that is most conducive to the group’s survival and reproduction. If that is the case, the trait will become less prevalent.
Alternatively, if the trait is an essential member of the collection that is conducive to survival and reproduction, it will survive. But its survival depends on the other traits. The fact that X is a “good trait” does not, in itself, ensure the proliferation of X. And X will become less prevalent if other traits become more important to survival and reproduction.
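A toy simulation in Python illustrates the point, with invented fitness numbers: trait X helps on its own, but because it travels with a costlier trait Y, selection on the whole bundle drives X down anyway.

import random

random.seed(1)

POP = 10_000
GENERATIONS = 30

def fitness(has_x, has_y):
    w = 1.0
    if has_x:
        w *= 1.05   # X, considered in isolation, is a "good trait"
    if has_y:
        w *= 0.85   # Y costs more than X gains
    return w

# X is tightly linked to Y: 30% of the population carries both, the rest neither.
pop = [(True, True) if random.random() < 0.3 else (False, False) for _ in range(POP)]

for _ in range(GENERATIONS):
    weights = [fitness(x, y) for x, y in pop]
    pop = random.choices(pop, weights=weights, k=POP)   # reproduction in proportion to fitness

share_x = sum(1 for x, _ in pop if x) / POP
print(f"Share carrying 'good' trait X after {GENERATIONS} generations: {share_x:.1%}")
# X dwindles despite being beneficial in isolation, because selection acts
# on the collection of traits, not on X by itself.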
In any event, it is my view that genetic fitness for survival has become almost irrelevant in places like North America, Europe, and Japan. The rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.
In fact, there is a supportable hypothesis that humans in cosseted realms (i.e., the West) are, on average, becoming less intelligent. But, first, it is necessary to explain why it seemed for a while that humans were becoming more intelligent.
When the researcher James Flynn looked at [IQ] scores over the past century, he discovered a steady increase – the equivalent of around three points a decade. Today, that has amounted to 30 points in some countries.
Although the cause of the Flynn effect is still a matter of debate, it must be due to multiple environmental factors rather than a genetic shift.
Perhaps the best comparison is our change in height: we are 11cm (around 4 inches) taller today than in the 19th Century, for instance – but that doesn’t mean our genes have changed; it just means our overall health has changed.
Indeed, some of the same factors may underlie both shifts. Improved medicine, reducing the prevalence of childhood infections, and more nutritious diets, should have helped our bodies to grow taller and our brains to grow smarter, for instance. Some have posited that the increase in IQ might also be due to a reduction of the lead in petrol, which may have stunted cognitive development in the past. The cleaner our fuels, the smarter we became.
This is unlikely to be the complete picture, however, since our societies have also seen enormous shifts in our intellectual environment, which may now train abstract thinking and reasoning from a young age. In education, for instance, most children are taught to think in terms of abstract categories (whether animals are mammals or reptiles, for instance). We also lean on increasingly abstract thinking to cope with modern technology. Just think about a computer and all the symbols you have to recognise and manipulate to do even the simplest task. Growing up immersed in this kind of thinking should allow everyone [hyperbole alert] to cultivate the skills needed to perform well in an IQ test….
[Psychologist Robert Sternberg] is not alone in questioning whether the Flynn effect really represented a profound improvement in our intellectual capacity, however. James Flynn himself has argued that it is probably confined to some specific reasoning skills. In the same way that different physical exercises may build different muscles – without increasing overall “fitness” – we have been exercising certain kinds of abstract thinking, but that hasn’t necessarily improved all cognitive skills equally. And some of those other, less well-cultivated, abilities could be essential for improving the world in the future.
Here comes the best part:
You might assume that the more intelligent you are, the more rational you are, but it’s not quite this simple. While a higher IQ correlates with skills such as numeracy, which is essential to understanding probabilities and weighing up risks, there are still many elements of rational decision making that cannot be accounted for by a lack of intelligence.
Consider the abundant literature on our cognitive biases. Something that is presented as “95% fat-free” sounds healthier than “5% fat”, for instance – a phenomenon known as the framing bias. It is now clear that a high IQ does little to help you avoid this kind of flaw, meaning that even the smartest people can be swayed by misleading messages.
People with high IQs are also just as susceptible to the confirmation bias – our tendency to only consider the information that supports our pre-existing opinions, while ignoring facts that might contradict our views. That’s a serious issue when we start talking about things like politics.
Nor can a high IQ protect you from the sunk cost bias – the tendency to throw more resources into a failing project, even if it would be better to cut your losses – a serious issue in any business. (This was, famously, the bias that led the British and French governments to continue funding Concorde planes, despite increasing evidence that it would be a commercial disaster.)
Highly intelligent people are also not much better at tests of “temporal discounting”, which require you to forgo short-term gains for greater long-term benefits. That’s essential, if you want to ensure your comfort for the future.
Besides a resistance to these kinds of biases, there are also more general critical thinking skills – such as the capacity to challenge your assumptions, identify missing information, and look for alternative explanations for events before drawing conclusions. These are crucial to good thinking, but they do not correlate very strongly with IQ, and do not necessarily come with higher education. One study in the USA found almost no improvement in critical thinking throughout many people’s degrees.
Given these looser correlations, it would make sense that the rise in IQs has not been accompanied by a similarly miraculous improvement in all kinds of decision making.
So much for the bright people who promote and pledge allegiance to socialism and its various manifestations (e.g., the Green New Deal, and Medicare for All). So much for the bright people who suppress speech with which they disagree because it threatens the groupthink that binds them.
Robson also discusses evidence of dysgenic effects in IQ:
Whatever the cause of the Flynn effect, there is evidence that we may have already reached the end of this era – with the rise in IQs stalling and even reversing. If you look at Finland, Norway and Denmark, for instance, the turning point appears to have occurred in the mid-90s, after which average IQs dropped by around 0.2 points a year. That would amount to a seven-point difference between generations.
Psychologist (and intelligence specialist) James Thompson has addressed dysgenic effects at his blog on the website of The Unz Review. In particular, he had a lot to say about the work of an intelligence researcher named Michael Woodley. Here’s a sample from a post by Thompson:
We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.
Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with say, your grandparents.
Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence, Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.
It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”. This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.
So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong?
The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.
Woodley’s claim is based on a set of papers written since 2013, which have been recently reviewed by [Matthew] Sarraf.
The review is unusual, to say the least. It is rare to read so positive a judgment on a young researcher’s work, and it is extraordinary that one researcher has changed the debate about ability levels across generations, and all this in a few years since starting publishing in psychology.
The table in that review which summarizes the main findings is shown below. As you can see, the range of effects is very variable, so my rough estimate of 1 point per decade is a stab at calculating a median. It is certainly less than the Flynn Effect in the 20th Century, though it may now be part of the reason for the falling of that effect, now often referred to as a “negative Flynn effect”….
Here are the findings which I have arranged by generational decline (taken as 25 years).
Colour acuity, over 20 years (0.8 generation): 3.5 drop/decade.
3D rotation ability, over 37 years (1.5 generations): 4.8 drop/decade.
Reaction times, females only, over 40 years (1.6 generations): 1.8 drop/decade.
Working memory, over 85 years (3.4 generations): 0.16 drop/decade.
Reaction times, over 120 years (4.8 generations): 0.57-1.21 drop/decade.
Fluctuating asymmetry, over 160 years (6.4 generations): 0.16 drop/decade.
Either the measures are considerably different, and do not tap the same underlying loss of mental ability, or the drop is unlikely to be caused by dysgenic decrements from one generation to another. Bar massive dying out of populations, changes do not come about so fast from one generation to the next. The drops in ability are real, but the reasons for the falls are less clear. Gathering more data sets would probably clarify the picture, and there is certainly cause to argue that on various real measures there have been drops in ability. Whether this is dysgenics or some other insidious cause is not yet clear to me….
My view is that whereas formerly the debate was only about the apparent rise in ability, discussions are now about the co-occurrence of two trends: the slowing down of the environmental gains and the apparent loss of genetic quality. In the way that James Flynn identified an environmental/cultural effect, Michael Woodley has identified a possible genetic effect, and certainly shown that on some measures we are doing less well than our ancestors.
How will they be reconciled? Time will tell, but here is a prediction. I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.
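To see how the two trends can co-exist, here is a toy projection in Python using the rough figures quoted above (a Flynn gain of 3 points per decade and a Woodley loss of about 1 point per decade); the starting score and the horizon are arbitrary.

FLYNN_PER_DECADE = 3.0     # environmental ("fertilizer") gain
WOODLEY_PER_DECADE = -1.0  # genetic ("plant") loss

measured = 100.0    # what IQ tests report
underlying = 100.0  # the hypothesized genetic component

for decade in range(1, 6):
    measured += FLYNN_PER_DECADE + WOODLEY_PER_DECADE
    underlying += WOODLEY_PER_DECADE
    print(f"decade {decade}: measured ~{measured:.0f}, underlying ~{underlying:.0f}")
# Measured scores rise while the underlying component falls. If the Flynn
# (fertilizer) effect fades, as Thompson predicts, the measured trend turns
# negative too -- the "negative Flynn effect".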
Here’s my hypothesis: The less-intelligent portions of the populace are breeding faster than the more-intelligent portions. As I said earlier, the rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.
Evolution — in the absence of challenges that ensure survival of the fittest — seems to result in devolution.
In this post I use the academic “we”, as opposed to the royal “we” and the politician’s presumptuous “we”.
Before we can consider time and existence, we must consider whether they are illusions.
Regarding time, there’s a reasonable view that nothing exists but the present — the now — or, rather, an infinite number of nows. In the conventional view, one now succeeds another, which creates the illusion of the passage of time. A problem with the conventional view of time is that not everyone perceives the same now. The compilation of a comprehensive now is a practical impossibility, though it could be done in theory. (Einstein’s theories of relativity notwithstanding, the Lorentz transformation restores simultaneity.)
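For reference, the transformation the parenthetical appeals to is the standard Lorentz time transformation; whether it “restores” simultaneity is a matter of interpretation, but it does relate the time coordinate of one inertial frame to that of another moving at relative velocity v:

\[ t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \]

Two events simultaneous in one frame (\(\Delta t = 0\)) are generally not simultaneous in another (\(\Delta t' = -\gamma v \Delta x / c^{2}\)); what the transformation does allow is the translation of one frame’s time coordinates into any other’s, which is presumably the sense in which a comprehensive now could be compiled “in theory”.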
In the view of some physicists, however, all nows exist at once, and we merely perceive sequential slices of all nows. A problem with the view that all nows exist at once (sometimes called the “block universe” view) is that it’s purely a mathematical concoction. Inasmuch as there seems to be general agreement as to the contents of the slice, the only (very weak) evidence that many nows exist in parallel is provided by claims about such phenomena as clairvoyance, visions, and co-location. I won’t wander into that thicket.
What distinguishes one now from another now? The answer is change. If things didn’t change, there would be only a now, not an infinite series of them. More precisely, if things didn’t seem to change, time would seem to stand still. This is another way of saying that a succession of nows creates the illusion of the passage of time.
What happens between one now and the next now? Change, not the passage of time. What we think of as the passage of time is really an artifact of change.
Time is really nothing more than the awareness of events that supposedly occur at set intervals — the “ticking” of an atomic clock, for example. I say supposedly because there’s no absolute measure of time against which one can calibrate the “ticking” of an atomic clock, or any other kind of clock.
In summary: Clocks don’t measure time, which is an illusion caused by change. Clocks merely change (“tick”) at supposedly regular intervals, and those intervals are used in the representation of other things, such as the speed of an automobile or the duration of a 100-yard dash.
Change is real. But change in what — of what does reality consist?
There are two basic views of reality. One of them, posited by Bishop Berkeley and his followers, is that the only reality is that which goes on in one’s own mind. But that’s just another way of saying that humans don’t perceive the external world directly. Rather, it is perceived second-hand, through the senses that detect external phenomena and transmit signals to the brain, which is where a person’s “reality” is formed.
The sensible view, held by most humans (even most scientists), is that there is an objective reality out there, beyond the confines of one’s mind. How can so many people agree about the existence of certain things (e.g., Cleveland) if there’s not something out there? Mass psychosis, perhaps? No, because that arises from a desire to believe that a thing exists. Cleveland is real because it is and has been actually experienced by myriad persons. The widespread belief in catastrophic “climate change” is a kind of mass psychosis, triggered by a combination of scientific malfeasance; greed (for research grants and notoriety); politicians’ and bureaucrats’ naïveté and power-lust; and laypersons’ naïveté, virtue-signaling, and conformity to peers’ beliefs.
The big question is how reality came into being. This has been debated for millennia. There are two main schools of thought:
Things just exist and have always existed.
Things can’t come into existence on their own, so some non-thing must have caused things to exist. The non-thing must necessarily have always existed apart from things; that is, it is timeless and immaterial.
How can the issue be resolved? It can’t be resolved by logic alone, though logic is on the side of the second option. It can’t be resolved by facts because facts are about perceptible things. If it could be resolved by facts, there would be wide agreement about the answer. (Not perfect agreement because many human beings are impervious to facts.)
In sum, existence is a profound mystery.
Why is that? Can’t scientists someday trace the existence of things – call it the universe – back to a source? Isn’t that what the Big Bang Theory is all about? No and no. If the universe has always existed, there’s no source to be tracked down. And if the universe was created by a non-thing, how can scientists detect the non-thing if they’re only equipped to deal with things?
The Big Bang Theory posits a definite beginning, at a more or less definite point in time. But even if the theory is correct, it doesn’t tell us how that beginning began. Did things start from scratch, and if they did, what caused them to do so? And maybe they didn’t; maybe the Big Bang was just the result of the collapse of a previous universe, which was the result of a previous one, etc., etc., etc., ad infinitum. But that gets back to the question of what started it all.
Some scientists who think about such things don’t believe that the universe was created by a non-thing. But they don’t believe it because they don’t want to believe it. The much smaller number of similar scientists who believe that the universe was created by a non-thing hold that belief because they want to hold it, and because logic is on their side.
That’s life in the world of science, just as it is in the world of non-science, where believers, non-believers, and those who can’t make up their minds find all kinds of ways in which to rationalize what they believe (or don’t believe), even though they know less than scientists do about the universe.
Let’s just accept that and move on to another big question: What is it that exists? It’s not “stuff” as we usually think of it – like mud or sand or water droplets. It’s not even atoms and their constituent particles. Those are just convenient abstractions for what seem to be various manifestations of electromagnetic forces, or emanations thereof, such as light.
But what are electromagnetic forces? And what does their behavior (to be anthropomorphic about it) have to do with the way that things like planets, stars, and galaxies move in relation to one another? Those are more big questions that probably won’t be answered, or at least not answered definitively.
That’s the thing about science: It’s a process, not a particular result. Human understanding of the universe offers a good example. Here’s a short list of beliefs about the universe that were considered true and then rejected:
Thales (c. 620 – c. 530 BC): The Earth rests on water.
Anaximenes (c. 540 – c. 475 BC): Everything is made of air.
Heraclitus (c. 540 – c. 450 BC): All is fire.
Empedocles (c. 493 – c. 435 BC): There are four elements: earth, air, fire, and water.
Democritus (c. 460 – c. 370 BC): Atoms (basic elements of nature) come in an infinite variety of shapes and sizes.
Aristotle (384 – 322 BC): Heavy objects must fall faster than light ones. The universe is a series of crystalline spheres that carry the sun, moon, planets, and stars around Earth.
Ptolemy (90 – 168 AD): Ditto the Earth-centric universe, with a mathematical description.
Copernicus (1473 – 1543): The planets revolve around the sun in perfectly circular orbits.
Brahe (1546 – 1601): The planets revolve around the sun, but the sun and moon revolve around Earth.
Kepler (1571 – 1630): The planets revolve around the sun in elliptical orbits, and their trajectories are governed by magnetism.
Newton (1642 – 1727): The course of the planets around the sun is determined by gravity, which is a force that acts at a distance. Light consists of corpuscles; ordinary matter is made of larger corpuscles. Space and time are absolute and uniform.
Rutherford (1871 – 1937), Bohr (1885 – 1962), and others: The atom has a center (nucleus), which consists of two elemental particles, the neutron and proton.
Einstein (1879 – 1955): The universe is neither expanding nor shrinking.
That’s just a small fraction of the mistaken and incomplete theories that have held sway in the field of physics. There are many more such mistakes and lacunae in the other natural sciences: biology, chemistry, and earth science — each of which, like physics, has many branches. And in all of the branches there are many unresolved questions. For example, the Standard Model of particle physics, despite its complexity, is known to be incomplete. And it is thought (by some) to be unduly complex; that is, there may be a simpler underlying structure waiting to be discovered.
Given all of this, it is grossly presumptuous to claim that climate science – to take a salient example — is “settled” when the phenomena that it encompasses are so varied, complex, often poorly understood, and often given short shrift (e.g., the effects of solar radiation on the intensity of cosmic radiation reaching Earth, which affects low-level cloud formation, which affects atmospheric temperature and precipitation).
Anyone who says that any aspect of science is “settled” is either ignorant, stupid, or freighted with a political agenda. Anyone who says that “science is real” is merely parroting an empty slogan.
In a lecture at Cornell University in 1964, the physicist Richard Feynman defined the scientific method. First, you guess, he said, to a ripple of laughter. Then you compute the consequences of your guess. Then you compare those consequences with the evidence from observations or experiments. “If [your guess] disagrees with experiment, it’s wrong. In that simple statement is the key to science. It does not make a difference how beautiful the guess is, how smart you are, who made the guess or what his name is…it’s wrong….
In general, science is much better at telling you about the past and the present than the future. As Philip Tetlock of the University of Pennsylvania and others have shown, forecasting economic, meteorological or epidemiological events more than a short time ahead continues to prove frustratingly hard, and experts are sometimes worse at it than amateurs, because they overemphasize their pet causal theories….
Peer review is supposed to be the device that guides us away from unreliable heretics. Investigations show that peer review is often perfunctory rather than thorough; often exploited by chums to help each other; and frequently used by gatekeepers to exclude and extinguish legitimate minority scientific opinions in a field.
Herbert Ayres, an expert in operations research, summarized the problem well several decades ago: “As a referee of a paper that threatens to disrupt his life, [a professor] is in a conflict-of-interest position, pure and simple. Unless we’re convinced that he, we, and all our friends who referee have integrity in the upper fifth percentile of those who have so far qualified for sainthood, it is beyond naive to believe that censorship does not occur.” Rosalyn Yalow, winner of the Nobel Prize in medicine, was fond of displaying the letter she received in 1955 from the Journal of Clinical Investigation noting that the reviewers were “particularly emphatic in rejecting” her paper.
The health of science depends on tolerating, even encouraging, at least some disagreement. In practice, science is prevented from turning into religion not by asking scientists to challenge their own theories but by getting them to challenge each other, sometimes with gusto.
As I said, there is no such thing as “settled science”. Real science is a vast realm of unsettled uncertainty. Newton put it thus:
I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.
Certainty is the last refuge of a person whose mind is closed to new facts and new ways of looking at old facts.
How uncertain is the real world, especially the world of events yet to come? Consider a simple, three-parameter model in which event C depends on the occurrence of event B, which depends on the occurrence of event A; in which the value of the outcome is the summation of the values of the events that occur; and in which the value of each event is binary — a value of 1 if it happens, 0 if it doesn’t happen. Even in a simple model like that, there is a wide range of possible outcomes; thus:
A doesn’t occur (B and C therefore don’t occur) = 0.
A occurs but B fails to occur (and C therefore doesn’t occur) = 1.
A occurs, B occurs, but C fails to occur = 2.
A occurs, B occurs, and C occurs = 3.
Even when A occurs, subsequent events (or non-events) will yield final outcomes ranging in value from 1 to 3. A factor of 3 is a big deal. It’s why .300 hitters make millions of dollars a year and .100 hitters sell used cars.
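For the curious, the full enumeration is easy to verify with a few lines of code. Here is a minimal sketch in Python (the language and the names are my choices); the only assumptions carried over from the model are the dependency chain — B cannot occur without A, and C cannot occur without B — and the binary values.

    # A minimal sketch of the three-event model described above.
    # Assumptions from the text: B depends on A, C depends on B, values are 0 or 1.
    from itertools import product

    def outcome_value(a: bool, b: bool, c: bool) -> int:
        """Sum the binary event values, honoring the chain A -> B -> C."""
        a_val = 1 if a else 0
        b_val = 1 if (a and b) else 0        # B cannot occur without A
        c_val = 1 if (a and b and c) else 0  # C cannot occur without B
        return a_val + b_val + c_val

    # Enumerate every combination of events and report the outcome values.
    for a, b, c in product([False, True], repeat=3):
        print(f"A={a}, B={b}, C={c} -> outcome value {outcome_value(a, b, c)}")

Running the sketch confirms that the attainable outcome values are 0, 1, 2, and 3 — and, once A has occurred, 1 through 3: the factor-of-3 spread noted above.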
Roy Spencer posted this at Roy Spencer, Ph.D. on January 8, 2021:
White House Brochures on Climate (There is no climate crisis)
January 8th, 2021 by Roy W. Spencer, Ph. D.
Late last year, several of us were asked by David Legates (White House Office of Science and Technology Policy) to write short, easily understandable brochures that supported the general view that there is no climate crisis or climate emergency, and pointing out the widespread misinformation being promoted by alarmists through the media.
Below are the resulting 9 brochures, and an introduction by David. Mine is entitled, “The Faith-Based Nature of Human Caused Global Warming”.
David hopes to be able to get these posted on the White House website by January 20 (I presume so they will become a part of the outgoing Administration’s record) but there is no guarantee given recent events.
He said we are free to disseminate them widely. I list them in no particular order. We all thank David for taking on a difficult job in more hostile territory than you might imagine.
Spencer followed up with this post on January 12, 2021:
At the White House, the Purge of Skeptics Has Started
January 12th, 2021 by Roy W. Spencer, Ph. D.
Dr. David Legates has been Fired by White House OSTP Director and Trump Science Advisor, Kelvin Droegemeier
[Image of the seal of the Executive Office of the President]
President Donald Trump has been sympathetic with the climate skeptics’ position, which is that there is no climate crisis, and that all currently proposed solutions to the “crisis” are economically harmful to the U.S. specifically, and to humanity in general.
Today I have learned that Dr. David Legates, who had been brought to the Office of Science and Technology Policy to represent the skeptical position in the Trump Administration, has been fired by OSTP Director and Trump Science Advisor, Dr. Kelvin Droegemeier.
The event that likely precipitated this is the invitation by Dr. Legates for about a dozen of us to write brochures that we all had hoped would become part of the official records of the Trump White House. We produced those brochures (no funding was involved), and they were formatted and published by OSTP, but not placed on the WH website. My understanding is that David Legates followed protocols during this process.
So What Happened?
What follows is my opinion. I believe that Droegemeier (like many in the administration with hopes of maintaining a bureaucratic career in the new Biden Administration) has turned against the President for political purposes and professional gain. If Kelvin Droegemeier wishes to dispute this, let him… and let’s see who the new Science Advisor/OSTP Director is in the new (Biden) Administration.
I would also like to know if President Trump approved of his decision to fire Legates.
In the meantime, we have been told to remove links to the brochures, which is the prerogative of the OSTP Director since they have the White House seal on them.
But their content will live on elsewhere, as will Dr. Droegemeier’s decision.
I have saved the ten brochures in their original (.pdf) format. The following links to the files are listed in the order in which Dr. Spencer listed them in his post of January 8, 2021:
Though there are now only nine seats on the U.S. Supreme Court, the tables below list eleven lines of succession. There is one for the chief justiceship and ten for the associate justiceships that Congress has created at one time and another as it has changed the size of the Court. In other words, two associate justiceships have “died out” in the course of the Court’s history. The present members of the Court, in addition to the chief justice, hold the first, second, third, fourth, sixth, eighth, ninth, and tenth associate justiceships created by Congress.
Reading across, there is a column for each president, a column for each chief justice, and columns for the ten associate justiceships. Justices who have held each seat are listed in chronological order, beginning with the justices nominated by the heroic George Washington and ending with the justices nominated by the lying, fear-mongering, chameleon-like Joe Whatshisname.
There are two horizontal divisions. The first, indicated by double red lines, delineates presidencies. The beginning of every justice’s term is associated with the president who nominated that person to a seat on the Court. The end of each justice’s term is associated with the president who was in office when the justice’s term ended by resignation or death.
The second horizontal division, indicated by alternating bands of gray and white, delineates chief justiceships. Thus the reader can see which justices served with a particular chief justice. The “Roberts Court”, for example, has thus far included Roberts and — in order of ascension to the Court — Stevens, O’Connor, Scalia, Kennedy, Souter, Thomas, Ginsburg, Breyer, Alito, Sotomayor, Kagan, Gorsuch, Kavanaugh, Barrett, and Brown-Jackson.
Because there is a separate line of succession for the chief justiceship, persons who were already on the Court and then elevated to the chief justiceship are listed in two different places. Also, the names of a few other justices appear in more than one place because they served non-consecutive terms on the Court.
The table is divided into three parts for ease of reading. (Zoom in if the type is too small for you.) Part I covers the chief justiceship (currently Roberts) and associate justice positions 1-3 (currently Sotomayor, Brown-Jackson, and Kavanaugh). Part II covers associate justice positions 4-7 (currently Kagan, 4; Barrett, 6). Part III covers associate justice positions 8-10 (currently Alito, Gorsuch, and Thomas).
Aging is of interest to me because I am among the oldest five percent of Americans. Not that I feel old — I don’t — but objectively I am old.
I am probably also among the more solitary of Americans. But I am not lonely in my solitude, for it is and long has been of my own choosing.
This is so because of my strong introversion. I suppose that the seeds of my introversion are genetic, but the symptoms didn’t appear in earnest until I was in my early thirties. After that I became steadily more focused on a few friendships (which eventually dwindled to none) and decidedly uninterested in the aspects of work that required more than brief meetings (one-on-one preferred). Finally, enough became more than enough and I quit full-time work at the age of fifty-six. There followed, a few years later, a stint of part-time work that also became more than enough. And so, at the age of fifty-nine, I banked my final paycheck. Happily.
What does my introversion have to do with my aging? I suspected that my continued withdrawal from social intercourse (more about that, below) might be a symptom of aging. And I found this, in the Wikipedia article “Disengagement Theory”:
The disengagement theory of aging states that “aging is an inevitable, mutual withdrawal or disengagement, resulting in decreased interaction between the aging person and others in the social system he belongs to”. The theory claims that it is natural and acceptable for older adults to withdraw from society….
Disengagement theory was formulated by [Elaine] Cumming and [William Earl] Henry in 1961 in the book Growing Old, and it was the first theory of aging that social scientists developed….
The disengagement theory is one of three major psychosocial theories which describe how people develop in old age. The other two major psychosocial theories are the activity theory and the continuity theory, and the disengagement theory [is at] odds with both.
The continuity theory
states that older adults will usually maintain the same activities, behaviors, relationships as they did in their earlier years of life. According to this theory, older adults try to maintain this continuity of lifestyle by adapting strategies that are connected to their past experiences [whatever that means].
I don’t see any conflict between the continuity theory and the disengagement theory. A strong introvert like me, for example, finds it easy to maintain the same activities, behaviors, and relationships as I did before I retired. Which is to say that I had begun minimizing my social interactions before retiring, and continued to do so after retiring.
What about the activity theory? Well, it’s a normative theory, unlike the other two (which are descriptive), and it goes like this:
The activity theory … proposes that successful aging occurs when older adults stay active and maintain social interactions. It takes the view that the aging process is delayed and the quality of life is enhanced when old people remain socially active.
That’s just a social worker’s view of “appropriate” behavior for older persons. Take my word for it, introverts don’t need social activity, which is stressful for them, and they resent those who try to push them into it. The life of the mind is far more rewarding than chit-chat and bingo with geezers.
The life of the mind is certainly more rewarding than “social media”. My use of that peculiar institution was limited to Facebook. And my use of it dwindled from occasional to never a few years ago. And there it will stay.
Anyway, I mentioned my continued withdrawal from social intercourse. A particular, recent instance of withdrawal sparked this post. For about fifteen years I corresponded regularly with a former colleague. He has a malady that I have dubbed email-arrhea: several messages a day (links and jokes, nothing original) to a large mailing list, with many insipid replies from recipients who choose “reply all”. Enough of that finally became too much, and I declared to him my intention to refrain from correspondence until … whenever. (“Don’t call me, I’ll call you.”) So all of his messages and those of his other correspondents were dumped automatically into my email trash folder. He finally got the message, so to speak, and quit transmitting.
My withdrawal from that particular mode of social intercourse was eased by the fact that the correspondent is a “collaborator” with a deep-state mindset. So it was satisfying to terminate our relationship — and to devote more time to things that I enjoy, like blogging.
It’s almost time to “fall back”, which reminds me of the perennial controversy about daylight saving time (that’s “saving” not “savings”). The main complaint seems to be the stress that results from moving clocks ahead in March:
Springing forward may be hazardous to your health. The Monday following the start of daylight saving time (DST) is a particularly bad one for heart attacks, traffic accidents, workplace injuries and accidental deaths. Now that most Americans have switched their clocks an hour ahead, studies show many will suffer for it.
Most Americans slept about 40 minutes less than normal on Sunday night, according to a 2009 study published in the Journal of Applied Psychology…. Since sleep is important for maintaining the body’s daily performance levels, much of society is broadly feeling the impact of less rest, which can include forgetfulness, impaired memory and a lower sex drive, according to WebMD.
One of the most striking effects of this annual shift: Last year, Colorado researchers reported finding a 25 percent increase in the number of heart attacks that occur on the Monday after DST starts, as compared with a normal Monday…. A cardiologist in Croatia recorded about twice as many heart attacks as expected during that same day, and researchers in Sweden have also witnessed a spike in heart attacks in the week following the time adjustment, particularly among those who were already at risk.
Workplace injuries are more likely to occur on that Monday, too, possibly because workers are more susceptible to a loss of focus due to too little sleep. Researchers at Michigan State University used over 20 years of data from the Mine Safety and Health Administration to determine that three to four more miners than average sustain a work-related injury on the Monday following the start of DST. Those injuries resulted in 2,649 lost days of work, which is a 68 percent increase over the hours lost from injuries on an average day. The team found no effects following the nation’s one-hour shift back to standard time in the fall….
There’s even more bad news: Drivers are more likely to be in a fatal traffic accident on DST’s first Monday, according to a 2001 study in Sleep Medicine. The authors analyzed 21 years of data on fatal traffic accidents in the U.S. and found that, following the start of DST, drivers are in 83.5 accidents as compared with 78.2 on the average Monday. This phenomenon has also been recorded in Canadian drivers and British motorists.
If all that wasn’t enough, a researcher from the University of British Columbia who analyzed three years of data on U.S. fatalities reported that accidental deaths of any kind are more likely in the days following a spring forward. Their 1996 analysis showed a 6.5 percent increase, which meant that about 200 more accidental deaths occurred immediately after the start of DST than would typically occur in a given period of the same length.
I’m convinced. But the solution to the problem isn’t to get rid of DST. No, the solution is to get rid of standard time and use DST year around.
I’m not arguing for year-around DST from an economic standpoint. The evidence about the economic advantages of DST is inconclusive.
I’m arguing for year-around DST as a way to eliminate “spring forward” stress and enjoy an extra hour of daylight in the winter.
Don’t you enjoy those late summer sunsets? I sure do, and a lot of other people seem to enjoy them, too. That’s why daylight saving time won’t be abolished.
But if you love those late summer sunsets, you should also enjoy an extra hour of daylight at the end of a drab winter day. I know that I would. And it’s not as if you’d miss anything if sunrise occurs an hour later in the winter, as it would with DST. Even with standard time, most working people and students have to be up and about before winter sunrise.
How would year-around DST affect you? The following table gives the times of sunrise and sunset on the longest and shortest days for nine major cities, north to south and west to east:
I report, you decide. If it were up to me, the decision would be year-around DST. I hate “spring forward”.
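If your city isn’t in the table, the arithmetic is easy to run for yourself: year-around DST would leave summer clock times as they are and move every winter clock reading one hour later. Here is a minimal sketch in Python; the sample times are hypothetical placeholders, not values taken from the table.

    # A minimal sketch of the table's arithmetic: year-around DST shifts winter
    # sunrise and sunset one hour later on the clock. Sample times are hypothetical.
    from datetime import datetime, timedelta

    def to_year_around_dst(standard_time: str) -> str:
        """Convert a standard-time clock reading (HH:MM) to its DST equivalent."""
        t = datetime.strptime(standard_time, "%H:%M") + timedelta(hours=1)
        return t.strftime("%H:%M")

    winter_sunrise, winter_sunset = "07:15", "17:00"  # hypothetical city
    print("Winter sunrise under year-around DST:", to_year_around_dst(winter_sunrise))  # 08:15
    print("Winter sunset under year-around DST:", to_year_around_dst(winter_sunset))    # 18:00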
The lunatics are in charge of the asylum, and have set it on fire.
Almost 250 years ago, a relatively small but determined band of revolutionaries overthrew British rule of the colonies that became known as the United States of America. The act of defying the Crown and establishing a new government was an open conspiracy, but it was nevertheless a conspiracy because it arose from “an agreement to perform together … [a] subversive act.” That conspiracy, of course, was the American Revolution.
Now, twelve score and six years since that conspiracy was announced to the world in the Declaration of Independence, the resulting polity — the United States of America — is approaching a crisis that is the result of another conspiracy, which I have described here.
What the conspirators seek is a secular theocracy, in which they are the high priests and theologians. If that reminds you of Mussolini’s Italy, Hitler’s Germany, the USSR, Communist China, and similar regimes, that’s because it’s of the same ilk: leftism.
Leftists have a common trait: wishful thinking. Thomas Sowell calls it the unconstrained vision; I call it the unrealistic vision. It’s also known as magical thinking, in which “ought” becomes “is” and the forces of nature and human nature can be held in abeyance by edict; for example:
California wildfires caused by misguided environmentalism.
The killing of small businesses, especially restaurants, by minimum wage laws.
The killing of jobs for people who need them the most, by ditto.
Bloated pension schemes for Blue-State (and city) employees, which are bankrupting those States (and cities) and penalizing their citizens who aren’t government employees.
The idea that men can become women and should be allowed to compete with women in athletic competitions because the men in question have endured some surgery and taken some drugs.
The idea that it doesn’t and shouldn’t matter to anyone that a self-identified “woman” uses women’s rest-rooms where real women and girls become prey for prying eyes and worse.
Mass murder on a Hitlerian-Stalinist scale in the name of a “woman’s right to choose”, when she made that choice (in almost every case) by engaging in consensual sex.
Disrespect for and attacks on the police and military personnel who keep the spoiled children of capitalism safe in their cosseted existences.
The under-representation of women and blacks in certain fields is due to rank discrimination, not genetic differences (but it’s all right if blacks dominate certain sports and women now far outnumber men on college campuses).
Peace can be had without preparedness for war.
Regulation doesn’t reduce the rate of economic growth and foster “crony capitalism”.
The cost of health care will go down while the number of mandates is increased.
Every “right” under the sun can be granted without cost (e.g., affirmative action racial-hiring quotas, which penalize blameless whites; the Social Security Ponzi scheme, which burdens today’s workers and cuts into growth-inducing saving).
Closely related to magical thinking are the nirvana fallacy (hypothetical perfection always seems better than feasible reality), large doses of neurotic hysteria (e.g., the overpopulation fears of Paul Ehrlich, the AGW hoax of Al Gore et al.), and rampant adolescent rebelliousness (e.g., instant protests about everything, the post-election tantrum-riots of 2016).
But to say any of the foregoing about the left’s agenda, the assumptions and attitudes underlying it, the left’s strategic and tactical methods, or the psychological underpinnings of leftism, is to be “hateful”. (In my observation, nothing is more full of hate than a leftist who has been contradicted or thwarted.) So, through the magic of psychological projection, those who dare speak the truth about leftism are called “haters”, “racists”, “fascists”, “Nazis”, and other things that apply to leftists themselves.
Labeling anti-leftists as evil “justifies” the left’s violent enforcement of its agenda. The violence takes many forms, from riots (as in the George Floyd “protests”), to suppression by force (e.g., Stalin’s war on the Cossacks), to genocide (e.g., the Holocaust), to overtly peaceful but coercive state action (e.g., forced unionization of American industry, the J6 committee’s Stalinesque “show trial”, suppression of religious liberty and freedom of association in the name of same-sex “marriage”, and the vast accumulation of economic regulations).
In a word: disintegration.
THE “GREATEST GENERATION” AND THE WASP ESTABLISHMENT SET THE STAGE FOR DISINTEGRATION
Every line of human endeavor reaches a peak, from which decline is sure to follow if the things that caused it to peak are mindlessly rejected for the sake of novelty (i.e., rejection of old norms just because they are old). This is nowhere more obvious than in the arts.
I have written elsewhere that 1963 (or thereabouts) was a “year zero” in American history. It was then that the post-World War II promise of social and economic progress, built on a foundation of unity (or as much of it as a heterogeneous nation is likely to muster), began to crumble.
At first, the “adults in the room” forgot their main duty: to be exemplars for the next generation.
the world in which we live … seems more and more to resemble the kind of world in which parents have failed in their duty to inculcate in their children the values of honesty, respect, and hard work….
I subscribe to the view that the rot set in after World War II….
The sudden emergence [in the 1960s] of “campus rebels” was due to the failure of too many members of the so-called Greatest Generation to inculcate in their children the values of honesty, respect, and hard work. How does one do that? By being clear about expectations and by setting limits on behavior — limits that are enforced swiftly, unequivocally, and sometimes with the palm of a hand. When children learn that they can “get away” with dishonesty, disrespect, and sloth, guess what? They become dishonest, disrespectful, and slothful. They give vent to their disrespect through whining, tantrum-like behavior, and even violence.
[H]eir to an immense railroad fortune, a polo champion, and a Groton graduate, Harriman had been the first man in his class tapped for Yale’s Skull and Bones society…. Harriman … founded Brown Brothers Harriman, implemented Roosevelt’s National Industrial Recovery Act, and served as the U.S. ambassador to the Soviet Union. Later, he was elected governor of New York and advised Presidents Kennedy, Johnson, and Carter.
… E. Digby Baltzell, the preeminent exponent of the “Protestant Establishment” and the coiner of the term “wasp,” lamented in the early 1980s that the United States no longer boasted patricians with the moral authority to keep down the McCarthyite rabble, then newly ascendant, in Baltzell’s depressingly conventional opinion, in the figure of Ronald Reagan.
Harriman at the same time deplored Reagan as much as Baltzell did. In 1983, at the age of ninety-one, he traveled to Moscow just to reassure Communist Party General Secretary Yuri Andropov that not all Americans saw the Soviet Union as an “Evil Empire.” Before Reagan emerged, Harriman’s bête noire was Richard Nixon, whose success in exposing communists in the U.S. government was unforgiveable. Mercifully for their poor, quivering souls, neither Baltzell nor Harriman lived long enough to witness the rise of Donald Trump….
Exactly what Harriman and his brethren stood for politically is not easy to discern. Harriman’s politics shifted throughout his life. He voted for Harding in 1920. It was only at the urging of his fashionably engagée sister that he even entered the Roosevelt administration. Harriman went from Cold War hawk in 1946 and champion of George Kennan’s long telegram to dove in the aftermath of Vietnam. Unable to identify what principles wasp leaders stood for, their defenders frequently praise their dedication to what they call “service.”… [Harriman] sniffed at men who “didn’t do a damn thing.” “Service,” “giving back,” and “doing things”: these of course are polite euphemisms for exercising power. If power had to be wielded at all, America’s mid-twentieth-century Wise Men reasoned, it naturally ought to be in their hands.
Their upbringing made that an easy assumption. By the early twentieth century, the American upper class had constructed a cursus honorum as rigid as that faced by any Roman senator’s son….
The system was designed to produce custodians rather than leaders….
Having thrived in a system that rewarded conformity, the final generation of wasp leaders lacked what George H.W. Bush, the acknowledged last of their breed, called “the vision thing.”… When in the mid-1960s a guest expressed support for Vietnam protestors, Harriman denounced her as a traitor. Just a few years later, he was joining them….
From time to time, outsiders would plead with the Protestant Establishment to recover some moral fortitude. In God and Man at Yale, William F. Buckley Jr., a Catholic, warned that Yale was not only failing to uphold what Buckley called “individualism” (i.e., the free-enterprise system) and Christianity, but was also actively undermining them. For his trouble, Yale’s leaders denounced him as a reactionary bigot. McGeorge Bundy—yet another Bonesperson—called the book “dishonest in its use of facts, false in its theory, and a discredit to its author.” As a national security advisor in the Vietnam era, Bundy would go on to become literally the textbook example of policy failure. Later in life, he defended the spread of affirmative action….
[T]he wasps seemed to have erected institutions that uniquely selected for men who, as baseball scouts used to say, looked good in a uniform. Harriman may have earned middling grades and may not have been able to speak a single foreign language, but from adolescence on he looked the part of an ambassador. Harriman thus rose to the top of his Yale class. Thirty years later, he was negotiating with Stalin on behalf of the United States.
The 1960s generation is often blamed for contemporary woes. But it was the last generation of wasps that set in motion the forces that, as Buckley predicted, would lead the United States to ruin. Affirmative action, the tolerance of vagrancy (redubbed “homelessness” in the Lindsay era), the dishonoring of Christianity in public life, living constitutionalism in law, the ever-spreading blight of modern architecture, and the sacking of our cities by criminals: all of these features of the American regime were instituted by wasp patricians. America may have won the Cold War against communism, but within a generation it has fallen to a woke Marxian regime of its own making.
The wasp’s ancestors created the freest, most prosperous nation in history. By the time of the Protestant Establishment’s fading, its luminaries had left a nation ugly, depraved, and enthralled. They received a goodly heritage and squandered it.
THE NEW “ESTABLISHMENT” BECOMES THE ENEMY
The “establishment” has diversified over the years. As the “old boys” of Harriman’s generation died off, they were replaced by new men — and women. What we have is a new “elite” that has found a new way to distinguish itself from the “masses”.
The American culture war is part of a global trend. The German far right marches against covid restrictions and immigration. In France, Le Pen wins the countryside and gets crushed in urban centers. Throughout the developed world you see the same cleavages opening up, with an educated urban elite that is more likely to support left-wing parties, and an exurban and rural populist backlash that looks strikingly similar across different societies….
Increasing wealth causes class differentiation and segregation. One thing people with money buy is separation from poor people or others not like them, while assortative mating moves these trends along.
With modern communications technology and women playing a larger role in intellectual life, genetic (i.e., true) explanations of class differentiation are disfavored, as is anything that would blame the poor or otherwise unfortunate for their own problems [i.e., leftist condescension].
Despite social desirability bias leading to the triumph of egalitarian ideologies, the natural tendency towards a kind of class consciousness does not go away. The higher class therefore becomes more strenuous in defining itself as aesthetically and morally superior to the lower classes….
The more egalitarian the official ideology, the harder the upper class has to work to find some other grounds on which to differentiate itself from the masses, leading to an exaggeration of the moral differences between the two tribes….
Thence the use of governmental power — directly and indirectly — to impose the left’s ideology on the “masses”. There is a government-corporate-technology-media-academic complex that moves together not just in matters of military spending or foreign policy, but in matters fundamental to the daily lives and livelihoods of Americans — “climate change”, energy policy, gender identity, the definition of marriage, immigration policy, the treatment of criminals, and much more. The approved positions on such matters are leftist, of course, and so the new establishment consists almost entirely of persons, corporations, foundations, and think-tanks that are effectively organs of the Democrat Party.
Thus did the establishment — old and new — allow, encourage, and abet the disintegration of America that is now in full spate.
In the remaining sections of this post I will trace a few of the symptoms and consequences of disintegration: military failure, economic rot, and the rise of pseudo-science in the service of leftist causes. There’s no need for me to say any more about social disintegration, the evidence of which is everywhere to be seen.
MILITARY FAILURE AS A SYMPTOM OF NATIONAL ROT
A critical element of America’s disintegration has been the unalloyed record of military futility and defeat since the end of World War II. No amount of belligerent talk can compensate for the fact that the enemies of America see that — with the exception of the Reagan and Trump years — America’s defense policy is to balk at doing what must be done to win, to disarm at the first hint of “peace”, and then fail to rearm quickly enough to prevent the next war.
The record of futility and fecklessness actually began at the end of World War II when an enfeebled FDR, guided by the Communists in his administration, gave away Eastern Europe to Stalin. The giveaway was unnecessary. The U.S. had been relatively unscathed by the war; the Soviet Union’s losses in life, property, and industrial capacity had been devastating. The U.S. (with Britain) was in a position to dictate to Stalin.
The Korean War was unnecessary, in that it was invited by the Truman administration’s policies: exclusion of Korea from the Asian defense perimeter (announced by another “old boy”) and massive cuts in the U.S. defense budget. But it was essential to defend South Korea so that the powers behind North Korea (Communist China and, by extension, the USSR) would grasp the willingness of the U.S. to maintain a forward defensive posture against aggression. That signal was blunted by Truman’s decision to sack MacArthur when the general persisted in his advocacy of attacking Chinese bases following the entry of China into the war. The end result was a stalemate, where a decisive victory might have broken the back of communistic adventurism around the globe. The Korean War, as it was fought by the U.S., became “a war to foment war”.
Anti-war propaganda disguised as journalism helped to snatch defeat from the jaws of victory in Vietnam. What was shaping up as a successful military campaign collapsed under the weight of the media’s overwrought and erroneous depiction of the Tet offensive as a Vietcong victory, the bombing of North Vietnam as “barbaric” (where the Tet offensive was given a “heroic cast”), and the deaths of American soldiers as somehow “in vain”, though many more deaths a generation earlier had not been in vain. (What a difference there was between Edward R. Murrow and Walter Cronkite and his sycophants.) Unlike in Korea, U.S. forces were withdrawn from Vietnam, and it took little time for North Vietnam to swallow South Vietnam.
The Gulf War of 1990-91 began with Saddam Hussein’s invasion of oil-rich Kuwait. U.S. action to repel the invasion was fully justified by the potential economic effects of Saddam’s capture of Kuwait’s petroleum reserves and oil production. The proper response to Saddam’s aggression would have been not only to defeat the Iraqi army but also to depose Saddam. The failure to do so further reinforced the pattern of compromise and retreat that had begun at the end of World War II, and necessitated the long, contentious Iraq War of the 2000s.
The quick victory in Iraq, coupled with the coincidental end of the Cold War, helped to foster a belief that the peace had been won. (That belief was given an academic imprimatur in Francis Fukuyama’s The End of History and the Last Man.) The stage was set for Clinton’s much-ballyhooed fiscal restraint, which was achieved by cutting the defense budget. Clinton’s lack of resolve in the face of terrorism underscored the evident unwillingness of American “leaders” to defend Americans’ interests, thus inviting 9/11. (For more about Clinton’s foreign and defense policy, go here and scroll down to the section on Clinton.)
What can be said about the wars in Iraq and Afghanistan of 2001-2021 but that they were conducted in the same spirit as the wars in Korea and Vietnam and the earlier war in Iraq? Rather than reproduce a long post that I wrote at the mid-point of the futile, post-9/11 wars, I will point you to it: “The War on Terror as It Should Have Been Fought”. Subsequent events — and especially Biden’s disgraceful bugout from Afghanistan — only underscore the main point of that post: Going to war and failing to win only encourages America’s enemies.
The war in Ukraine is a costly sideshow that detracts from the ability of the U.S. to prepare for a real showdown with Russia, China, Iran, and North Korea — a showdown that has been made more likely by the rush to arrange an unnecessary confrontation with Putin. There are, in fact, good reasons to believe that (a) he is actually trying to protect Russia and Russians and (b) he has the facts of history on his side.
The axis of China, Russia, Iran, and North Korea can play the “long game”, which the U.S. and the West demonstrably cannot do because of their political systems and thrall to “public (elite) opinion”. By the time the axis is ready to bring the West to its knees, an outright attack of some kind probably won’t be necessary, as Putin has shown by cutting off vital fuel supplies to western Europe.
The only way to ensure that the U.S. isn’t cowed by the axis is to arm to the teeth, have a leader with moral courage, and dare the axis to harm vital U.S. interests. What is more likely to happen, given America’s present course, is a de facto surrender by the U.S. (and the West) — marked by significant concessions on trade and the scope of military operations and influence.
America — once an impregnable fortress — is on a path to becoming an isolated, subjugated, and exploited colony of the axis.
ECONOMIC ROT
The wisepersons who wrought America’s military decline are of the same breed as those who wrought its economic decline. In the first instance they rushed into wars that they were not willing to see through to victory. In the second instance they rushed into policy-making whose economic consequences they could have foreseen if they hadn’t been preoccupied with “social justice” and similar hogwash.
America’s economic rot can be traced to the early 1900s, when the toll of Progressivism (the original brand) began to be felt. It is no coincidence that a leading Progressive of the time was Teddy Roosevelt, a card-carrying member of the old establishment.
Consider the following graph, which is derived from estimates of constant-dollar GDP per capita that are available here:
There are four eras, as shown by the legend (1942-1946 omitted because of the vast economic distortions caused by World War II):
1866-1907 — annual growth of 2.0 percent — A robust economy, fueled by (mostly) laissez-faire policies and the concomitant rise of industry, mass production, technological innovation, and entrepreneurship.
1908-1941 — annual growth of 1.4 percent — A dispirited economy, shackled by the fruits of “progressivism”; for example, trust-busting; the onset of governance through regulation; the establishment of the income tax; the creation of the destabilizing Federal Reserve; and the New Deal, which prolonged the Great Depression.
1947-2007 — annual growth of 2.2 percent — A rejuvenated economy, buoyed by the end of the New Deal and the fruits of advances in technology and business management. The rebound in the rate of growth meant that the earlier decline wasn’t the result of an “aging” economy, which is an inapt metaphor for a living thing that is constantly replenished with new people, new capital, and new ideas.
2008-2021 — annual growth of 1.0 percent — An economy sagging under the cumulative weight of the fruits of “progressivism” (old and new); for example, the never-ending expansion of Medicare, Medicaid, and Social Security; and an ever-growing mountain of regulatory restrictions on business. (In a similar post, which I published in 2009, I wrote presciently that “[u]nless Obama’s megalomaniacal plans are aborted by a reversal of the Republican Party’s fortunes, the U.S. will enter a new phase of economic growth — something close to stagnation”.)
Had the economy of the U.S. not been deflected from the course that it was on from 1866 to 1907, per capita GDP would now be about 1.4 times its present level. Compare the position of the dashed green line in 2021 — $83,000 — with per capita GDP in that year — $58,000.
If that seems unbelievable to you, it shouldn’t. A growing economy is a kind of compound-interest machine; some of its output is invested in intellectual and physical capital that enables the same number of workers to produce more, better, and more varied products and services. (More workers, of course, will produce even more products and services.) As the experience of 1947-2007 attests, nothing other than government interventions (or a war far more devastating to the U.S. than World War II) could have kept the economy from growing along the path of 1866-1907. (I should add that economic growth in 1947-2007 would have been even greater than it was but for the ever-rising tide of government interventions.)
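The compounding is easy to illustrate. Here is a minimal sketch in Python; the 1907 base value is a stylized assumption for illustration only — the actual series comes from the source linked above — while the $58,000 and $83,000 figures are the ones cited earlier.

    # A minimal sketch of the compound-growth arithmetic. The 1907 base value
    # is a stylized assumption, not a figure from the post's data source.
    def project(base: float, rate: float, years: int) -> float:
        """Project a per-capita GDP value forward at a constant annual rate."""
        return base * (1 + rate) ** years

    base_1907 = 8_000  # hypothetical per-capita GDP in 1907, 2012 dollars
    print(round(project(base_1907, 0.020, 114)))  # the 2.0-percent path to 2021
    print(round(project(base_1907, 0.014, 114)))  # the 1.4-percent path, for contrast

    # The post's own comparison: the dashed green line vs. the actual figure.
    print(round(83_000 / 58_000, 2))  # a factor of about 1.4

Even a six-tenths-of-a-point difference in the annual growth rate, compounded over more than a century, roughly halves the end result — which is the whole point of the dashed green line.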
The sum of the annual gaps between what could have been (the dashed green line) and the reality after 1907 (omitting 1942-1946) is almost $700,000 — that’s per person in 2012 dollars. It’s $800,000 per person in 2021 dollars, and even more in 2022 dollars.
That cumulative gap represents our mega-depression.
I have identified the specific causes of the mega-depression elsewhere. They are — unsurprisingly — government spending as a fraction of GDP, government regulatory activity, reductions in private business investment (resulting from the first two items), and the rate of inflation. Based on recent values of those variables, the rate of real GDP growth for the next 10 years will be about -6 percent. Yes, that’s minus 6 percent!
Is such a thing possible in the United States? Yes! The estimates of inflation-adjusted GDP available at the website of the Bureau of Economic Analysis (an official arm of the U.S. government) yield these frightening statistics: Constant-dollar GDP dropped at an annualized rate of -9.3 percent from 1929 to 1932, and at an annualized rate of -7.4 percent from 1929 to 1933.
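For reference, the annualized rate in statements like these is the compound (geometric-mean) growth rate over the period — a standard formula, written here in generic notation rather than BEA’s:

    r = \left( \frac{Y_{\text{end}}}{Y_{\text{start}}} \right)^{1/n} - 1

where n is the number of years. Plugging in the first episode: an annualized rate of -9.3 percent sustained over the three years from 1929 to 1932 implies a cumulative decline of 1 - (1 - 0.093)^3, or about 25 percent.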
In any event, the outlook is gloomy.
PSEUDO-SCIENCE IN THE SADDLE: SOME EXAMPLES
The Keynesian Multiplier
It is fitting to begin this section with a summary of “The Keynesian Multiplier: Fiction vs. Fact”. When push comes to shove, the advocates of big government (which undermines economic growth) love to spend like drunken sailors (with other people’s money), claiming that such spending will stimulate the economy. And, by extension, they claim (against common sense and statistical evidence) that government spending is economically beneficial, as well as necessary (as long as it’s not for defense).
The Keynesian multiplier is a pseudo-scientific product of the pseudo-science of macroeconomics. It is nothing more than a descriptive equation without operational significance. What it is supposed to mean is that if spending rises by X, the rise in spending will cause GDP to rise by a multiple (k) of X. What it really means is that if the relationship between GDP and spending remains constant, when GDP rises by some amount spending will have necessarily risen by a fraction of that amount. This relationship holds true regardless of the kind of spending under discussion — private investment, private consumption, or government. But proponents of government spending prefer to put “government” in front of “spending”, and then pretend (or uncritically believe) that the causation runs from government spending to GDP and not the other way around.
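For readers who want the algebra, here is the textbook derivation in standard notation (not necessarily the notation of the post just summarized), followed by the same identity read in reverse:

    Y = C + I + G, \qquad C = a + bY
    \;\Longrightarrow\; Y = \frac{a + I + G}{1 - b},
    \qquad \Delta Y = \frac{1}{1 - b}\,\Delta G \equiv k\,\Delta G

The algebra is symmetric: the same identity rearranges to \Delta G = (1 - b)\,\Delta Y, which “shows” that spending moves as a fixed fraction of any change in GDP. Nothing in the equations says which way the causation runs — which is precisely the point made above.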
“Climate Change”
Here is a case of scientists becoming invested in an invalid hypothesis. The hypothesis in question is that atmospheric CO2 is largely responsible for the rise in measured temperatures (averaged “globally”) by about 1.5 degrees Celsius since the middle of the 19th century. The hypothesis has been falsified (i.e., disproved) in so many ways that I have lost count (though one will do). You can read dozens of scientific rebuttals here, and some of my own contributions here, here, and here.
the “science” behind the claim that human carbon emissions are heading us toward some kind of planetary catastrophe is not only not “settled,” but actually non-existent.
None of that matters — so far — because the “climatistas” have brainwashed Western political “leaders”, Western bureaucracies, and the media-information industry, which is doing its damnedest to suppress and discredit “climate deniers” (i.e., people who actually follow the science). The cost of having the “climatistas” in charge has been revealed: soaring fuel prices and freezing Europeans. There’s worse to come if the “climatistas” aren’t ejected from their positions of influence — vast economic destruction and the social disruption that goes with it.
While most countries imposed draconian restrictions, there was an exception: Sweden. Early in the pandemic, Swedish schools and offices closed briefly but then reopened. Restaurants never closed. Businesses stayed open. Kids under 16 went to school.
That stood in contrast to the U.S. By April 2020, the CDC and the National Institutes of Health recommended far-reaching lockdowns that threw millions of Americans out of work. A kind of groupthink set in. In print and on social media, colleagues attacked experts who advocated a less draconian approach. Some received obscene emails and death threats. Within the scientific community, opposition to the dominant narrative was castigated and censored, cutting off what should have been vigorous debate and analysis.
In this intolerant atmosphere, Sweden’s “light touch,” as it is often referred to by scientists and policy makers, was deemed a disaster. “Sweden Has Become the World’s Cautionary Tale,” carped The New York Times. Reuters reported, “Sweden’s COVID Infections Among Highest in Europe, With ‘No Sign Of Decrease.’” Medical journals published equally damning reports of Sweden’s folly.
But Sweden seems to have been right. Countries that took the severe route to stem the virus might want to look at the evidence found in a little-known 2021 report by the Kaiser Family Foundation. The researchers found that among 11 wealthy peer nations, Sweden was the only one with no excess mortality among individuals under 75. None, zero, zip.
That’s not to say that Sweden had no deaths from COVID. It did. But it appears to have avoided the collateral damage that lockdowns wreaked in other countries. The Kaiser study wisely looked at excess mortality, rather than the more commonly used metric of COVID deaths. This means that researchers examined mortality rates from all causes of death in the 11 countries before the pandemic and compared those rates to mortality from all causes during the pandemic. If a country averaged 1 million deaths per year before the pandemic but had 1.3 million deaths in 2020, excess mortality would be 30 percent….
The Kaiser results might seem surprising, but other data have confirmed them. As of February, Our World in Data, a database maintained by the University of Oxford, shows that Sweden continues to have low excess mortality, now slightly lower than Germany, which had strict lockdowns. Another study found no increased mortality in Sweden in those under 70. Most recently, a Swedish commission evaluating the country’s pandemic response determined that although it was slow to protect the elderly and others at heightened risk from COVID in the initial stages, its laissez-faire approach was broadly correct….
One of the most pernicious effects of lockdowns was the loss of social support, which contributed to a dramatic rise in deaths related to alcohol and drug abuse. According to a recent report in the medical journal JAMA, even before the pandemic such “deaths of despair” were already high and rising rapidly in the U.S., but not in other industrialized countries. Lockdowns sent those numbers soaring.
The U.S. response to COVID was the worst of both worlds. Shutting down businesses and closing everything from gyms to nightclubs shielded younger Americans at low risk of COVID but did little to protect the vulnerable. School closures meant chaos for kids and stymied their learning and social development. These effects are widely considered so devastating that they will linger for years to come. While the U.S. was shutting down schools to protect kids, Swedish children were safe even with school doors wide open. According to a 2021 research letter, there wasn’t a single COVID death among Swedish children, despite schools remaining open for children under 16….
Of the potential years of life lost in the U.S., 30 percent were among Blacks and another 31 percent were among Hispanics; both rates are far higher than the demographics’ share of the population. Lockdowns were especially hard on young workers and their families. According to the Kaiser report, among those who died in 2020, people lost an average of 14 years of life in the U.S. versus eight years lost in peer countries. In other words, the young were more likely to die in the U.S. than in other countries, and many of those deaths were likely due to lockdowns rather than COVID.
And that isn’t all. There’s also this working paper from the National Bureau of Economic Research, which concludes:
The first estimates of the effects of COVID-19 on the number of business owners from nationally representative April 2020 CPS data indicate dramatic early-stage reductions in small business activity. The number of active business owners in the United States plunged from 15.0 million to 11.7 million over the crucial two-month window from February to April 2020. No other one-, two- or even 12-month window of time has ever shown such a large change in business activity. For comparison, from the start to end of the Great Recession the number of business owners decreased by 730,000 representing only a 5 percent reduction. In general, business ownership is relatively steady over the business cycle (Fairlie 2013; Parker 2018). The loss of 3.3 million business owners (or 22 percent) was comprised of large drops in important subgroups such as owners working roughly two days per week (28 percent), owners working four days a week (31 percent), and incorporated businesses (20 percent).
And that was more than two years ago, before the political panic had spawned a destructive tsunami of draconian measures. Such measures made the pandemic worse by creating the conditions for the evolution of more contagious strains of the coronavirus.
The correct (i.e., scientific) approach would have been to quarantine and care for the most vulnerable members of the populace: the old, those with compromised immune systems, and those with diseases that left them especially vulnerable (heart disease, COPD, morbid obesity, etc.). As for the rest of us, widespread exposure to the coronavirus would have meant the natural immunization of the populace through the development of antibodies.
In the end, millions of people have been made poorer and deprived of education and beneficial human interactions, and have suffered and died needlessly, because politicians and bureaucrats couldn’t (and can’t) resist the urge to do something — especially when “doing something” means trying to conquer nature and suppress human nature.
(For much more on this subject, see David Stockman’s “The Macroeconomic Consequences Of Lockdowns & The Aftermath”, reproduced at ZeroHedge.)
The Wages of Pseudo-Science
The worst thing about fallacies such as the three that I have just discussed isn’t the fact that they are widely accepted, even by scientists (if you can call economics a science). The worst thing is that they have been embraced by politicians and bureaucrats eager to “solve” a “problem” whether or not it is within their power to solve it. The result is the concoction and enforcement of economically and socially destructive policies. But that matters little to cosseted elites who — like their counterparts in the USSR — can live high on the hog while the masses are starving and freezing.
CODA
Is there hope for an American renaissance? The upcoming mid-term election will be pivotal but not conclusive. It will be a very good thing if the GOP regains control of Congress. But it will take more than that to restore sanity to the land.
A Republican (of the right kind) must win in 2024. The GOP majority in Congress must be enlarged. A purge of the deep state must follow, and it must scour every nook and cranny of the central government to remove every bureaucrat who has a leftist agenda and the ability to thwart the administration’s initiatives.
Beyond that, the American people should be rewarded for their (aggregate) return to sanity by the elimination of several burdensome (and unconstitutional) departments of the executive branch, by the appointment of dozens of pro-constitutional judges, and by the appointment of a string of pro-constitutional justices of the Supreme Court.
After that, the rest will take care of itself: renewed economic vitality, a military whose might deters our enemies, and something like the restoration of sanity in cultural matters. (Bandwagon effects are powerful, and they can go uphill as well as downhill.)
But all of that is hope. The restoration of America’s greatness will not be easy or without acrimony and setbacks.
If America’s greatness isn’t restored, America will become a vassal state. And the leftists who made it possible will be the first victims of their new masters.
These posts include and link to an abundance of supporting material. The additional material reproduced below consists of quotations from the cited sources. The quotations (and sources) are consistent with and confirm several points made in my earlier posts:
Intelligence has a strong genetic component; it is heritable.
Race is a real manifestation of genetic differences among subgroups of human beings. Those subgroups are not only racial but also ethnic in character.
Intelligence therefore varies by race and ethnicity, though it is influenced by environment.
Specifically, intelligence varies in the following way: There are highly intelligent persons of all races and ethnicities, but the proportion of highly intelligent persons is highest among Ashkenazi Jews, followed in order by East Asians, Northern Europeans, Hispanics (of European/Amerindian descent), and sub-Saharan Africans — and the American descendants of each group.
Males are disproportionately represented among highly intelligent persons, relative to females. Males have greater quantitative skills (including spatio-temporal aptitude) than females, whereas females have greater verbal skills than males.
Intelligence is positively correlated with attractiveness, health, and longevity.
The Flynn effect (rising IQ) is a transitory effect brought about by environment (e.g., better nutrition) and practice (e.g., the learning and application of technical skills). The Woodley effect is (probably) a long-term dysgenic effect among people whose survival and reproduction depend more on technology (devised by a relatively small portion of the populace) than on the ability to cope with environmental threats (i.e., intelligence).
Researchers of group differences have pointed out until they are blue in the face that believing in equal rights is not contingent on believing all people are born with the same abilities and that merely by discussing the causes of group differences in mean IQ they are not intending to question the moral basis for sexual or racial equality. You can believe that there are between-group IQ differences – you can even believe that these differences are 80% heritable – and still remain committed to equal rights….
But anti-hereditarians seem to have extraordinary difficulty grasping this point – it is as if they want their opponents to be making this false inference even though, by imagining this sin, they are unconsciously committing it themselves. If you argue that any research into group differences is ‘dangerous’ because it threatens to undermine the basis for equal rights, you are implicitly accepting the twisted logic of the racist’s argument, namely, that if people aren’t equal in their capabilities, then we would be justified in denying some groups their civil rights. It is this inference that is racist, not any claim about group differences, whether true or not, and it is not one that most intelligence researchers are guilty of. No doubt some hereditarians are racists, but then the beliefs of some cultural determinists are pretty toxic too, such as Joseph Stalin, Chairman Mao and Pol Pot.
Where are we now, in the continuing story of the genetics of intelligence? Usually, one goes to a meta-analysis to discern the pattern of results.
W. D. Hill, R. E. Marioni, O. Maghzian, S. J. Ritchie, S. P. Hagenaars, A. M. McIntosh, C. R. Gale, G. Davies & I. J. Deary, “A combined analysis of genetically correlated traits identifies 187 loci and a role for neurogenesis and myelination in intelligence”
Seven novel biological systems associated with intelligence differences were found.
1. Neurogenesis, the process by which neurons are generated from neural stem cells.
2. Genes expressed in the synapse, consistent with previous studies showing a role for synaptic plasticity.
3. Regulation of nervous system development.
4. Neuron projection.
5. Neuron differentiation.
6. Central nervous system neuron differentiation.
7. Oligodendrocyte differentiation.
In addition to these novel results, the finding that regulation of cell development (gene-set size = 808 genes, P-value 9.71 × 10⁻⁷) is enriched for intelligence was replicated.
In summary, if further proof were needed that these bits of the genetic code were associated with brainpower, the list homes in on everything likely to be required for a fast-thinking powerful biological system.…
They canter to a conclusion:
We found 187 independent associations for intelligence in our GWAS, and highlighted the role of 538 genes being involved in intelligence, a substantial advance on the 18 loci previously reported.…
For the first time, scientists have discovered that smart people have bigger brain cells than their peers.
As well as being bulkier, the cells are better connected to their neighbours, allowing them to process more information at a faster rate….
The study is the first to ever show that the physical size and structure of brain cells is related to a person’s intelligence levels.
Christof Koch at the Allen Institute for Brain Science in Seattle told New Scientist: “We’ve known there is some link between brain size and intelligence. The team confirm this and take it down to individual neurons.”
I’ve accumulated recent data on the average scores by race for five exams: the GRE for grad school, the LSAT for law school, the MCAT for medical school, the GMAT for business school, and the DAT for dental school.
To make all the numbers comprehensible, I’ve converted them to show where the mean for each race would fall in percentile terms relative to the distribution of scores among non-Hispanic white Americans….
Thus, for example, on the Graduate Management Admission Test (GMAT), the gatekeeper for the M.B.A. degree, the mean score for whites falls, by definition, at the 50th percentile of the white distribution of scores. The mean score for black test-takers would rank at the 13th percentile among whites. Asians average a little better than the typical white, scoring at the 55th percentile….
If we look at how many people of each group take the test, we can understand the variations in average score a little better.
Thus, for example, whites, who in 2007 made up 61.5 percent of the 20-24-year-old cohort, took 68.7 percent of the GMATs. Blacks took the GMAT at a per capita rate just under half (49 percent) of the white rate. Asians are more than twice (205 percent) as likely as whites to sit the GMAT. Mexicans are only a fifth (18 percent) as likely.
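Those per-capita rates are straightforward to derive: each group’s share of tests taken is divided by its share of the cohort, and the result is expressed relative to the same ratio for whites. A minimal sketch of the arithmetic, using the white shares quoted above; the black shares are hypothetical placeholders, since the source reports only the resulting relative rates:

```python
# Per-capita test-taking rate relative to whites:
#   rate(group) = (group's share of tests taken) / (group's share of the cohort)
#   relative rate = rate(group) / rate(whites)

WHITE_TEST_SHARE = 68.7  # percent of GMATs taken (from the quoted passage)
WHITE_POP_SHARE = 61.5   # percent of the 20-24-year-old cohort (ditto)

def relative_rate(test_share: float, pop_share: float) -> float:
    """A group's per-capita test-taking rate as a fraction of the white rate."""
    return (test_share / pop_share) / (WHITE_TEST_SHARE / WHITE_POP_SHARE)

print(f"Whites: {relative_rate(68.7, 61.5):.2f}")  # 1.00 by construction
# Hypothetical black shares, chosen only to illustrate the arithmetic;
# they reproduce the quoted 49 percent figure but are not from the source.
print(f"Blacks: {relative_rate(7.0, 12.8):.2f}")   # ~0.49
```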
[R]ace IS a social construct. But race does exist. Saying something is a “social construct” can be true and still yet not be really meaningful.
Think of it, the periodic table of chemical elements is a social construct. Do chemical elements then not exist? Or, much more relevant – in fact, exactly like race – Linnaean taxonomy is a social construct. Do kingdoms, classes, species not exist? Race is merely an extension of this.
In reality, genetic analysis can separate human populations into distinct groups. This works at the level of continental groups or even ethnic groups within a continent (or even groups within an ethnicity). At times the progression is smooth, with each group gradually giving way to the next, and at other times, the transition is abrupt….
[F]or those that accept that genetic analysis can indeed separate humanity into distinct populations, they then claim that “race” doesn’t exist because human variation is “clinal”, that is, continuous. Across continents, neighboring groups don’t separate into sharply distinct races but slowly give way from one group to the next, so they claim. Because of this, the claim is that different racial groups don’t exist….
[T]o say that a “smooth” clinal progression of human differences renders the individual groups non-existent is equivalent to looking at [the visible color spectrum] … and concluding that each individual color does not exist because they smoothly blend into one another. That’s clearly patently ridiculous. Even if the distribution of human groups is continuous (and it often is), that wouldn’t render each group along the distribution non-existent – nor would it render the differences between each group insignificant. That would be tantamount to saying yellow is equivalent to orange.
[Further] the claim that the distribution of human populations is always clinal is not even true. Razib Khan once addressed this….
[Regarding the claim that intelligence and behavioral traits can’t be in any way inherited, because no one has found a “gene for intelligence” or for any behavioral trait]: This is one of those things that’s not even wrong. It is a red herring, and reflects a fundamental misunderstanding of genetics and what the genes do. Firstly, the genome is not like a shopping list, where there is a 1-to-1 correspondence between each “gene” and some physiological feature. Rather, the genes are like a recipe, and it is only through the complex interaction of all the genes that physical (and hence behavioral) traits emerge….
[I]t is not necessary to know which genetic variants lead to variation in a trait to know that trait variation is affected by genetic variation. That’s like saying that you need to know the names of all the people who work in a factory to know that the people there produce widgets.
As we’ve seen, behavioral genetic methods confirm the very high heritability of intelligence and behavioral traits. “Classic” behavioral genetic methods, such as twin and adoption studies, were enough to establish this by themselves….
[Regarding the claim that non-whites score low on IQ tests because the tests are culturally biased]: No. Indeed, not all non-Whites score below “Whites” (as we’ve seen above, hardly a monolithic category itself). East Asians, specifically those from China, Korea, and Japan, tend to outscore Northern Europeans on IQ tests, scoring in the 105 or so range, on average. Ashkenazi Jews also are found to outscore non-Jewish Whites, the former possessing an average IQ around 112. In the case of Blacks (that is, specifically, those of West African descent), they tend to do best on culture-“loaded” IQ tests, and do significantly worse on more “culture-free” tests like the Raven’s Progressive Matrices (which use test questions like the one seen here). “Fresh off the boat” East Asian immigrants to the West don’t seem to have a problem with either IQ tests or eventual real-world performance….
[Regarding the claim that poverty and/or discrimination are the causes of racial gaps in intelligence]: Partly true, mostly false. An adverse environment, especially when we’re talking severe poverty – of the type you find in sub-Saharan Africa today – likely does have a deleterious effect on IQ. Hence, average IQ in sub-Saharan Africa is likely quite a bit lower than it would be under optimum conditions. However, we can’t reduce all racial IQ differences to environmental deprivation.
For one, racial gaps in IQ and achievement persist even in developed countries. Interventions, like Head Start, meant to ameliorate any educational deficits do nothing for the gap, as a comprehensive study by the U.S. government showed. As well, while income is correlated with IQ and educational attainment for all races, the relationship between childhood SES and IQ is different for different racial groups….
[O]n the SAT (which is simply another IQ test), the poorest Whites collectively outscored the wealthiest Blacks. As well, as we see, Blacks whose parents have graduate degrees are matched by Whites whose parents are only high school grads.
Even more interestingly, the group IQ and achievement hierarchy visible in the U.S. is found all over the world. All across the world, Blacks, for example – as a group – generally do poorly versus Europeans. East Asians and Ashkenazi Jews collectively do well all around the world, better than Northern Europeans do. Across the globe and across very different societies and different economic systems, you see roughly the same pattern you do in the United States. One could attempt to piece together some “cultural” explanation for any particular society, but how to explain this global consistency, then? This is true with populations who have been in these respective countries for many generations, as is the case in Brazil, for instance.
Source: JayMan, “JayMan’s Race, Inheritance, and IQ FAQ (F.R.B.)”, JayMan’s Blog, May 4, 2015. (This writer’s style is crude and sometimes ungrammatical, but I have included this excerpt because the writer has a good command of the relevant research and has summarized it well.)
Davide Piffer is a 34-year-old Italian anthropologist with a Master’s degree from England’s prestigious Durham University. He has an IQ of over 132. Piffer is currently studying for his PhD at Israel’s Ben Gurion University.
Piffer has written an analysis of a Genome Wide Association Study (GWAS). Putting it in lay terms, his “forbidden paper” explores the correlation between the percentage of people in a country who carry several dozen genetic variants that are significantly associated with very high educational attainment — based on this GWAS — and average national IQ.
National IQs are robust because they correlate very strongly, at about 0.8, with other national measures of cognitive ability, such as international assessment tests. (Intelligence: A Unifying Construct for the Social Sciences, by Richard Lynn and Tatu Vanhanen, 2012) Very high educational attainment is overwhelmingly a function of high IQ.
Piffer found that the correlation between the prevalence of the polygenic score (the average frequency of several genetic variants) in nations and national IQ was 0.9. This, of course, essentially proves that race differences in intelligence are overwhelmingly genetic.
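Mechanically, the figure Piffer reports is just a Pearson correlation between two country-level series: the average frequency of the IQ-associated variants and measured national IQ. Here is a minimal sketch of that computation; the numbers are invented for illustration and are not Piffer’s data:

```python
import statistics

# Hypothetical country-level data: average frequency of the
# education/IQ-associated variants (the polygenic score) and measured
# national IQ. Invented for illustration; not Piffer's figures.
polygenic_score = [0.35, 0.39, 0.42, 0.48, 0.50, 0.55]
national_iq     = [  82,   89,   96,  101,  103,  106]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Piffer reports r = 0.9 on his real data; this toy set gives a
# similarly strong value.
print(f"r = {pearson_r(polygenic_score, national_iq):.2f}")
```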
Now, obviously, Piffer needs to get this in a high impact journal: because he deserves to, for his own career advancement, and also so that it can’t be fallaciously dismissed via an appeal to snobbery—not an insignificant factor in academic life.
And this is where the problems have arisen.
In late 2014, Piffer submitted his paper on this subject to the leading journal Intelligence. One would have assumed there’d be no problem, considering that the journal has published numerous articles on race differences in IQ and has even been condemned by SJWs for doing so [Racism is creeping back into mainstream science – we have to stop it, by Angela Saini, The Guardian, January 24, 2018]. But the editor, Doug Detterman, rejected the paper citing the reviews he received. In fact, only one of two reviewers recommended rejection; the other was extremely positive. Nevertheless, the decision letter read as if both reviews were negative.
In 2015, Piffer re-submitted the paper to Intelligence. He had successfully dealt with all the criticisms, and the paper should have been accepted for publication.
However, in 2016 Detterman stepped down as head of ISIR and was replaced by Richard Haier. With new reviewers and a new editor, it was rejected out of hand.
Piffer doesn’t give up easily, that’s for sure. Tiring of Intelligence, he improved the paper once more, in light of the critical reviews, and sent it to Frontiers in Psychology, another highly-respected journal. It passed the review process after three rounds, with reviewers recommending publication. However, Piffer tells me, “the editor, after sitting on the reviews for three weeks, decided to reject it, overturning the reviewers’ recommendation.”
Piffer adds: “This decision was kind of unprecedented and especially weird for a journal like Frontiers, whose philosophy is based on transparent review and less editorial power.”
More recently, Piffer self-published another paper, this time on Rpubs, using data from the latest GWAS carried out on 1.1 million people [Correlation between PGS and environmental variables]. It confirms his earlier findings, extending them to 52 populations from all over the globe and showing what he calls “fascinating correlations with latitude and polygenic scores of other traits.”
The top place is occupied by East Asians, followed by Europeans and equatorial people further down. “Geographic or genetic distances don’t explain these findings,” stresses Piffer, “as Austronesians (e.g. Papuans and Melanesians) have scores comparable to Africans, despite being genetically more different from Africans than are Europeans.”
Similarly, Piffer observes that Native Americans score lower than Europeans, despite being genetically closer to East Asians. This suggests that, after the East Asian-Amerindian split, there were later selective pressures for cognitive abilities among Eurasians.
Nobody can fault the sample size. The latest GWAS boasts an army of 1.1 million people and 2400 genetic variants. Piffer has created a plot with scores for the populations from the Human Genome Diversity Project [plot not reproduced here].
Piffer is now working on getting this into a good journal. He says: “It’s to be hoped that the next editor will have enough intellectual honesty to let my findings see the light of mainstream science.”
Let’s summarize: it has now been effectively proven that racial differences in intelligence are fundamentally genetic. The only counter-argument from our SJW friends is an appeal to authority: “Why hasn’t it been published in a top peer-reviewed journal, then?”
But that won’t suppress results like it used to. Brave academics can simply self-publish their results until an equally brave journal editor can be found.
Postscript: Absurdly, recent developments suggest it is acceptable to note that there is a genetic explanation of the higher incidence of prostate cancer among some populations (e.g., West Africans) than in others.
If one accepts the theory that modern humans first evolved in Africa and began colonizing the rest of the world 50,000 to 60,000 years ago, it is obvious that there has been enormous evolutionary change since that time. Zulus and Danes presumably had a common ancestor about the time humans left Africa, but are now so different from each other that standard taxonomies might well classify them as separate species….
People consciously direct the evolution of plants and animals, but [Gregory Cochran and Henry Harpending, writing in The 10,000 Year Explosion,] point out that the process is no different from the rigors of natural selection — just quicker. Much as the race deniers hate to admit it, humans in different environments evolved in sharply different directions. As the authors conclude, “We expect that differences between human ethnic groups are qualitatively similar to those between dog breeds.”
What, however, caused human evolution suddenly to speed up ten to twelve thousand years ago? For Professors Cochran and Harpending, the short answer is “agriculture.” It did so in two ways: by sharply increasing the number of people and by radically changing the environment in which they lived.
More humans meant more children, and therefore more mutations. Most babies are born with about 100 mutations, all but one or two of which are in DNA that does not seem to do anything and therefore have no effect. Those that make a difference are usually harmful or neutral but it is the occasional helpful mutation that drives evolution. Sixty thousand years ago, before the expansion out of Africa, there were perhaps only about 250,000 humans. By the Bronze Age, 3,000 years ago, there were 60 million, so a mutation that would have taken 100,000 years to occur could appear in just 400 years. Evolution was painfully slow among Paleolithic proto-humans because beneficial mutations show up so rarely in tiny populations.
Large populations are therefore a reservoir of new mutations and their size hardly slows down the propagation of good genes. According to the authors, a genetic leg up is like the flu, and can sweep through a population of 100 million in only twice the time it takes to go through a population of just 10,000.
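The arithmetic behind the authors’ “100,000 years to 400 years” claim is worth spelling out: if beneficial mutations arrive in proportion to the number of births, the expected waiting time for a given mutation scales inversely with population size. A sketch of that back-of-the-envelope calculation, using the population figures quoted above:

```python
# If beneficial mutations arrive in proportion to births, the expected
# waiting time for a particular mutation scales as 1 / population size.
# Population figures are those quoted from the book.

POP_PALEOLITHIC = 250_000      # humans ~60,000 years ago
POP_BRONZE_AGE = 60_000_000    # humans ~3,000 years ago
WAIT_PALEOLITHIC_YEARS = 100_000

scaling = POP_BRONZE_AGE / POP_PALEOLITHIC   # 240x more people
wait_bronze_age = WAIT_PALEOLITHIC_YEARS / scaling

print(f"Population grew {scaling:.0f}-fold")                   # 240-fold
print(f"Expected wait falls to ~{wait_bronze_age:.0f} years")  # ~417, i.e. "just 400"
```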
Agriculture also brought perhaps the most dramatic change in the biological and social environment our species has ever experienced. Farming meant that for the first time in their existence Homo sapiens stayed in one place, and could therefore own more things than they could carry with them. They could become wealthier than their neighbors, and had to guard possessions against theft. Farmers could produce more food than their families needed, and this gave rise to commerce, division of labor, artisans, and non-productive elites. This social environment was completely new.
Of particular significance from an evolutionary point of view were the change of diet, domestication of animals, and population densities….
Professors Cochran and Harpending point out that some groups took up farming long before others, and that this explains a lot. Australian aborigines never farmed, and the American Indians of Illinois and Ohio started farming only 1,000 years ago. Neither group drank alcohol before the white man showed up, and both are highly susceptible to alcoholism. Fetal alcohol syndrome is about 30 times more common in these groups than in whites.
Aborigines and American Indians suffer in other ways from only recently having adopted a farming diet. Type 2 diabetes is related to a sensitivity to carbohydrates and a metabolic tendency to obesity. It is four times more prevalent among Aborigines and 2.5 times more prevalent among Navajos than among whites.
Sub-Saharan Africa was also late to take up agriculture — 7,500 years after it arose in the Middle East — and this helps explain why intelligence differences alone do not explain differences in black and white behavior. When the two groups are matched for IQ, blacks are still more likely to be criminal, shiftless, or have illegitimate children. This is probably due in part to the persistence of the smash-and-grab mentality that suits hunters but is gradually bred out of farmers….
The brain has evolved differently among different groups just as have skin color, body type, and facial features. The authors write that there are recent variants of genes that affect synapse formation, axon growth, formation of the layers of the cerebral cortex, and brain growth. “Again, most of these new variants are regional,” they add. “Human evolution is madly galloping off in all directions.”
Sometimes, even what appear to be racial similarities are actually differences that merely resemble each other. The authors point out, for example, that although both Asians and Caucasians have much lighter skins than ancestral Africans, the genetic mechanisms that shut down melanin production are different in the two races. In both Asia and Europe it was useful to let in more sunlight for vitamin D synthesis, but evolution found different ways to do it….
The 10,000 Year Explosion has a long chapter that proposes an explanation for how Ashkenazi Jews became the smartest people in the world. Trading and money-lending were high-IQ jobs, and in 1,000 years, or about 40 generations, European Jews appear to have increased their average IQs by about 12 points.
Jewish intelligence seems to be genetically associated with such diseases as Tay-Sachs, Gaucher’s, and familial dysautonomia, which are up to 100 times more common among Jews than European gentiles. People with one copy of these genes appear to have an IQ advantage whereas two copies cause the disease. Professors Cochran and Harpending write that over time, advantageous mutations with such dangerous side effects are usually replaced by more benign mutations. The persistence of these odd mutations in Jews suggests they are recent.
One highly speculative but stimulating chapter considers the possibility that Neanderthals might have made crucial genetic contributions to Homo sapiens. There is no doubt that something important happened 30 to 40 thousand years ago. New tools, improved weapons, art, sculpture, and more efficient use of fire made big changes in what was still a Stone Age existence. These changes took place only in Eurasia — nowhere else — and Professors Cochran and Harpending are convinced they would not have come about without some important genetic change.
As it happens, this Stone Age flowering took place during the 10,000 years or so during which modern man and Neanderthals competed against each other in the same territory. Neanderthals are gone and we are not, so it is safe to assume Homo sapiens were superior — perhaps in intelligence, language, or resistance to disease. However, the authors believe there must have been genetic mixing with Neanderthals, and explain that even if just a few Neanderthal genes were useful to modern man, they would have spread through populations while the useless ones were eliminated. “It is highly likely that out of some 20,000 genes, at least a few of theirs [Neanderthals’] were worth having,” they write. The authors concede that the genetic evidence is inconclusive — Neanderthal DNA is hard to come by — but they cite cases of “introgression,” in which wild species have acquired useful mutations from other populations.
[W]hat influence does intelligence measured at age 11 have on longevity? The good news is that a standard-deviation increase in IQ score is associated with a 24% decrease in mortality risk. So, at IQ 115 lifespan is 24% longer than average*. This is good news, together with the 60% increase in wages above the average level from an OECD study.
On the wages front, the effect of intelligence had already been shown by Charles Murray in his well-known 1998 “Income inequality and IQ” in which he compared the earnings of one child in a family with that of another sibling, showing that the effect of intelligence was powerful at creating later life differences even between siblings brought up within the same family environment.…
Iveson and colleagues have extended the within-family method by linking children in their 1947 national sample to younger siblings in a “Six day” sample, such that they had families tested at the same age with the same Moray House intelligence tests, and longevity measured up to November 2015….
Here are the results of the contrast between the living and the dead [graphic not reproduced here].
These are scary figures, and worth showing to friends who doubt that IQ has any practical meaning.
Digit Span must be one of the simplest tests ever devised. The examiner says a short string of digits at the rate of one digit a second in a monotone voice, and then the examinee repeats them. The examiner then tries a string which is one digit longer, and continues in this fashion with longer and longer strings of digits until the examinee fails both trials at that particular length. That determines the number of digits forwards.
Then the examiner explains that he will say a string of digits and the examinee has to repeat them backwards, that is, in reverse order. For example, 3–7 is to be said back to the examiner as 7–3. This continues until the examinee fails two trials at a particular length, which determines the number of digits backwards….
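The administration rule just described (lengthen the string until the examinee misses both trials at a given length) is simple enough to capture in a few lines. A minimal sketch, with the examinee modeled as a callable so the test can run non-interactively; the function names are mine, not from any published test kit:

```python
import random

def digit_span(respond, start_len=2, max_len=12, reverse=False):
    """Administer a digit-span test; return the longest length passed.

    `respond` stands in for the examinee: it takes a list of digits and
    returns the list the examinee "repeats back". The test stops when
    both trials at a given length are failed.
    """
    last_passed = 0
    for length in range(start_len, max_len + 1):
        failures = 0
        for _trial in range(2):
            digits = [random.randint(1, 9) for _ in range(length)]
            target = list(reversed(digits)) if reverse else digits
            if respond(digits) != target:
                failures += 1
        if failures == 2:        # failed both trials: span is determined
            return last_passed
        last_passed = length
    return last_passed

# Perfect "examinees" for illustration: echo the digits, reversing on demand.
print(digit_span(lambda d: d))                                # forwards
print(digit_span(lambda d: list(reversed(d)), reverse=True))  # backwards
```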
The test is not only bereft of intellectual content, but is also low on cultural content. Once you have learnt digit names you are ready to do the test. I assume that forwards and backwards are concepts understood by all cultures worthy of the name.…
If any group defined in genetic or cultural terms has a particular difficulty with digits backwards this is a strong indicator that they have difficulty with tasks as they get more intellectually demanding. The higher the g loading the more they should differ from brighter groups.
Hence the great interest in the most recent scores, to see if they conform to the usual pattern described by Jensen in The g Factor (p. 405, referring to work he did in 1975 with Figueroa; ref. on p. 614). Over at Human Varieties, Dalliard has tried to replicate those results using data from the CNLSY (these are the children of the female participants in NLSY79). Incidentally, this is a great follow-up survey: “My Mummy did your tests before I was born”. Gradually we are getting to understand the transmission of intelligence through the generations….
Dalliard says: “That the black-white gap on forward digits is substantially smaller than on backwards digits is a robust finding confirmed in this new analysis. This poses a challenge to the argument that racial differences in exposure to the kinds of information that are needed in cognitive tests cause the black-white test score gap. The informational demands of the digit span tests are minimal, as only the knowledge of numbers from 1 to 9 is required. Forward digits is a simple memory test assessing the ability to store information and immediately recall it. The informational demands of backwards digits are the same as those of forward digits, but the requirement that the digits be repeated in the reverse order means that it is not simply a memory test but one that also requires mental transformation or manipulation of the information presented.”
You may recall that I wrote with great enthusiasm about the wonders of digit span, describing it as a modest bombshell. It is a true measure: every additional digit you can remember is of equal unit size to the next, a scaled score with a true zero. Few psychometric measures have that property (a ratio scale level of measurement in S. S. Stevens’s terms), so the results are particularly informative about real abilities, not just abilities in relation to the rubber ruler of normative samples. If there are any differences, including group differences, on such a basic test, then it is likely they are real.
Then last November Gilles Gignac dropped another bombshell. He found that if you looked at total digit span scores since 1923 there was not a glimmer of improvement in this very basic ability. This cast enormous doubt on the Flynn effect being a real effect, rather than an artefact of re-standardisation procedures. Gignac noted that digits forwards and backwards were in opposite directions, but not significantly so.…
[Michael A.] Woodley tells me that Gignac’s “substantial and impressive body of normative data on historical means of various measures of digit span covering the period from 1923 to 2008” reveals a hidden finding: the not-very-g-loaded digits forwards scores have gone up and the g-loaded digits backwards scores have gone down. This suggests that the fluffy forwards repetition task has benefitted from secular environmental gains, while the harder reversal task reveals the inner rotting core of a dysgenic society….
Woodley has proposed a co-occurrence model: losses in national IQ brought about by the higher fertility of the less intelligent exist at the same time as IQ gains brought about by more favourable social environments.
Woodley argues that general ability is falling because of dysgenic effects, but also becoming more specialized, which he calls the Co-Occurrence model. Should it be called “Duller but specialized”?
How do these theories fare in the light of a massive new meta-analysis?…
The paper [Peera Wongupparaj et al., “The Flynn Effect for Verbal and Visuospatial Short-Term and Working Memory: A Cross-Temporal Meta-Analysis“, Intelligence, September 2017] is massive in scope, has more study samples than previous publications on this topic, is extremely large with circa 140,000 subjects and is also a massive confirmation of Woodley’s reworking of Gignac’s data, on a far larger scale. It seems that over the last 43 years we have become able to repeat a bit more but manipulate a bit less. We can echo more, and analyze less….
The authors add that the increase on the less g loaded forwards conditions suggests environmental causes including practice effects, while the decrease on the more g loaded backwards conditions suggests dysgenic effects, probably the reduced fertility of brighter persons, but perhaps also an effect of ageing populations….
So, Woodley’s “co-occurrence” model gets a strong confirmation.
In a previous post, I show, using an American sample from the National Longitudinal Study of Adolescent Health, that physically more attractive people are more intelligent. As I explain in a subsequent post, the association between physical attractiveness and intelligence may be due to one of two reasons. Genetic quality may be a common cause for both (such that genetically healthier people are simultaneously more beautiful and more intelligent). Alternatively, the association may result from a cross-trait assortative mating, where more intelligent and higher status men of greater resources marry more beautiful women. Because both intelligence and physical attractiveness are highly heritable, their children will be simultaneously more beautiful and more intelligent. Regardless of the reason for the association, the new evidence suggests that the association between physical attractiveness and general intelligence may be much stronger than we previously thought….
The halo-effect explanation for the association between physical attractiveness and intelligence, however, runs into three different problems. First, it presumes that the judgment of physical attractiveness is arbitrary and subjective. As I explain in an earlier post, however, beauty is not in the eye of the beholder; it is an objective, quantifiable trait of a person, like height or weight. Second, as I note in the previous post, the association between beauty and intelligence has been found in the American Add Health sample, where physical attractiveness of the respondents is assessed by the interviewer, who is unaware of their intelligence.
Most importantly, however, the halo-effect explanation simply leads to another question: Where does the teachers’ belief that more intelligent students are more attractive come from? The notion that more intelligent individuals are physically more attractive is a stereotype, and, just like all other stereotypes, it is empirically true, as both the American and British data show. Teachers (and everyone else in society) believe that more intelligent individuals are physically more attractive because they are.
In this first whole population birth cohort study linking childhood intelligence test scores to cause of death, in a follow-up spanning age 11-79, we found inverse associations for all major causes of death, including coronary heart disease, stroke, cancer, respiratory disease, digestive disease, external causes of death, and dementia. For specific cancer types the results were heterogeneous, with only smoking related cancers showing an association with childhood ability. In general, the effect sizes were similar for women and men (albeit marginally greater for women), with the exception of death by suicide, which had an inverse association with childhood ability in men but not women. In a representative subsample with additional background data, there was evidence that childhood socioeconomic status and physical status indicators had no more than a modest confounding impact on the observed associations.
Sex differences are in the news. A male Google employee reviewed some of the literature on the topic in the context of his workplace practices, and got sacked. A book questioning the role of testosterone in sex differences, and more generally the veracity of innate biological sex differences, got the Royal Society Science Book prize, though it was not reviewed by Royal Society Fellows expert in that area of knowledge. More generally, there are frequent news items about the lack of women in STEM subjects, in technology jobs and in corporate boardrooms, and these discussions often blame a glass ceiling of misogyny impeding women’s progress. Meanwhile, with rather less publicity, Prof Richard Lynn has revisited his 1994 paper in the light of recent research, and invited critics to take his finding apart….
Prof Lynn begins with the following observation:
It is a paradox that males have a larger average brain size than females, that brain size is positively associated with intelligence, and yet numerous experts have asserted that there is no sex difference in intelligence. This paper presents the developmental theory of sex differences in intelligence as a solution to this problem. This states that boys and girls have about the same IQ up to the age of 15 years but from the age of 16 the average IQ of males becomes higher than that of females with an advantage increasing to approximately 4 IQ points in adulthood.
Which way does the fair sex incline: to matters verbal or mathematical? Verbal, it would seem, and all the more so as you go up the ability spectrum….
The authors [of the study summarized in the post] highlight the following findings:
Sex differences in math-verbal ability tilt in the right tail were examined across 35 years. Sample included >2 million gifted adolescents across multiple measures in the U.S. and India. Ability tilt favored males for math > verbal and females for verbal > math. Sex differences in ability tilt remained fairly stable over time and replicated across measures….
[S]kipping a thousand words, here is the pictorial summary, which shows that sex differences increase as ability tilt increases [violin plots not reproduced here].
To my eye, starting from the bottom for all students, these violin plots show the following: women are almost perfectly balanced between verbal and mathematical ability, but men incline towards being better at maths than at verbal tasks. Men are more likely to calculate….
At the higher intellectual level of the top 1 in a 100 of the population [middle part of the graphic] both men and women incline more to mathematical thinking, but men predominate more.
At the eminent level of the top 1 in 10,000 of the population [top part of the graphic], men outnumber women by about 2.5 to 1.
The evidence for genes heavily influencing personality, intelligence and almost everything about human behaviour got stronger and stronger as more and more studies of twins and adoption came through. However, the evidence implicating any particular gene in any of these traits stubbornly refused to emerge, and when it did, it failed to replicate.
Ten years ago I recall talking to Robert Plomin about this crisis in the science of which he was and is the doyen. He was as baffled as anybody. The more genes seemed to matter, the more they refused to be identified. Were we missing something about heredity? He came close to giving up research and retiring to a sailing boat.
Fortunately, he did not. With the help of the latest genetic techniques, Plomin has now solved the mystery and this is his book setting out the answer. It is a hugely important book — and the story is very well told. Plomin’s writing combines passion with reason (and passion for reason) so fluently that it is hard to believe this is his first book for popular consumption, after more than 800 scientific publications….
[M]ost measures of the “environment” show substantial genetic influence. That is, people adapt their environment to better suit their natures. For example, Plomin discovered that the amount of television adopted children watch correlates twice as well with the amount their biological parents watch as with the amount watched by their adoptive parents….
Our personalities are also influenced by the environment, but Plomin’s second key insight is that we are more influenced by accidental events of short duration than by family. Incredibly, children growing up in the same family are no more similar than children growing up in different families, if you correct for their genetic similarities. Parents matter, but they do not make a difference.
Plomin says these chance events can be big and traumatic things such as war or bereavement, but are mostly small but random things, like Charles Darwin being selected for HMS Beagle because Captain Robert Fitzroy believed in “phrenology” and thought he could read Darwin’s character from the shape of his nose. Environmental influences turn out to be “unsystematic, idiosyncratic, serendipitous events without lasting effects”, says Plomin….
… [H]eritability increases as we get older. The longer we live, the more we come to express our own natures, rather than the influences of others on us. We “grow into our genes”, as Plomin puts it. An obvious example is male-pattern baldness, which shows low heritability at 20 and very high heritability at 60.
Two other findings are that normal and abnormal behaviour are influenced by the same genes, and that genetic effects are general across traits; there are not specific genes for intelligence, schizophrenia or personality — they all share sets of genes.
[Nassim Nicholas] Taleb has made sweeping assertions [about intelligence testing] with great confidence, surrounded by insulting language. Those assertions may well influence people who feel unsure about intelligence, and who assume that someone who is sure of themselves must know what they are talking about. That is understandable: an unsure person is aware they need to do more reading and thinking before feeling confident, and charitably assumes that only a knowledgeable person who had done the necessary reading would dare speak with confidence.
Yet, far from giving scientific references at the end of his essay, Taleb confidently asserts that he does not need to do so, because the field is broken because…. Convexity. This is presented as if it were an essential ingredient of statistical analysis, rather than one of his interesting ideas about research strategies. This is amusing, because even in the area which Taleb calls his own, as a financial instruments trader, it is easy to find a careful, long term, large sample study that shows the beneficial effects of intelligence on investment behaviour. On his own home ground he is down 1-0.
The other lapse is to ignore the decades of debate carried out by intelligence researchers, notably Jensen, to improve measures of intelligence so that they conform to the requirements set out by S. S. Stevens. Digit Span is such a measure. So is Digit Symbol and, if measured extensively, Vocabulary. Simple and complex reaction times are other examples. Overall, Taleb is not providing new or original insights that advance the field. But his aim does not appear to be constructive or even informative.
I don’t know why an able man is so ill-disposed to measures of ability, but can only assume he is well aware of his abilities, and regards himself as above such mundanities. He does not give references, but mentions a book he is about to publish. Better to stick to the facts.
Does Taleb’s boastful dismissal of a field he palpably does not master mean that we should dismiss his contributions to other fields? Probably not. Public figures sometimes stray out of their field of competence. It is an occupational hazard brought on by public adulation, known since Roman times. However, if he can be so bombastic when out of his depth, then it would be prudent to go back to his other writings with a slightly more critical eye. When I read his thoughts on probability I made positive assumptions about some of his pronouncements on risk on the very prudent grounds that I could not contest his mathematical excursions. Perhaps I was Fooled by Algebra. Perhaps I was not the only one.
Taleb describes himself as a flaneur, which is a stroller, the sort of person who swans about. No problem with that. Swans are beguiling, but beautiful shapes can lead us astray.
Thank you to all those who commented on the “Swanning About: Fooled by Algebra” blog and associated tweets. A number of themes came up, so here are individual responses I made to some comments, and also some general points.
Since Taleb thought he could dismiss a century of psychometry, there are rather a lot of references I needed to give in reply [listed below without surrounding text or ellipses]….
Why was it that [Stephen Jay] Gould had such an impact when he argued [in The Mismeasure of Man] that the [intelligence] tests were biased against working class and minority racial groups? Moreover, how did his views ever take hold when the issue of bias in intelligence testing had just been comprehensively evaluated in Arthur Jensen’s … Bias in Mental Testing? Jensen showed that, far from under-predicting African-American achievements, the tests perhaps slightly over-predicted them. I presume that Jensen’s volume was less often read, though it was written by an expert, not a polemicist. Perhaps precisely because it was written by an expert, in a restrained and far from folksy style, it had less impact on popular culture, which is what tends to determine public debates….
Gould’s book made a number of assertions. Two that stuck in people’s minds were: that measures of brain size derived from the study of skulls of different races had been biased, and that many items on the Army tests of intelligence were culturally biased.
The debate about the ancient skulls has raged to and fro for a long time, but it seems highly probable that the measures were taken correctly.
Now the redoubtable Russell Warne has taken a detailed look at what Gould said about the Army Beta test, and finds that on that topic he has been unreliable and incorrect….
Warne says:
Given these results from our replication, it seems that Gould’s criticism of time limits and his argument that the Army Beta did not measure intelligence are without basis. Despite the short time limits for each Army Beta subtest, the results of this replication support the World War I psychologists’ belief that the Army Beta measured intelligence.
… Bluntly, Gould misrepresented the test, and misled his readers. Gould probably achieved his objective, which was to trash intelligence testing in the eyes of a generation of academics.
Warne has shown that the Beta test still works. It is a good predictor of intelligence, which correlates with current measures of scholastic attainment, shows a positive manifold and resolves into a common factor. In a standard which Gould never attempted, Warne pre-registered his prior assumptions so that the results of his experiment could be plainly seen by the reader, and so that the facts could prove him wrong.
Warne’s achievement is to have shown that Gould got it wrong. [See also Warne’s article, “The Mismeasurement of Stephen Jay Gould“, Quillette, March 19, 2019.]
Source: James Thompson, “Gould Got It Wrong“, The Unz Review, February 25, 2019
Teachers loom large in most children’s lives, and are long remembered. Class reunions often talk of the most charismatic teacher, the one whose words and helpfulness made a difference. Who could doubt that they can have an influence on children’s learning and future achievements?
Doug Detterman is one such doubter:
Education and Intelligence: Pity the Poor Teacher because Student Characteristics are more Significant than Teachers or Schools.
Douglas K. Detterman, Case Western Reserve University (USA) The Spanish Journal of Psychology (2016), 19, e93, 1–11. doi:10.1017/sjp.2016.88
At least in the United States and probably much of the rest of the world, teachers are blamed or praised for the academic achievement of the students they teach. Reading some educational research it is easy to get the idea that teachers are entirely responsible for the success of educational outcomes. I argue that this idea is badly mistaken. Teachers are responsible for a relatively small portion of the total variance in students’ educational outcomes. This has been known for at least 50 years. There is substantial research showing this but it has been largely ignored by educators. I further argue that the majority of the variance in educational outcomes is associated with students, probably as much as 90% in developed economies. A substantial portion of this 90%, somewhere between 50% and 80% is due to differences in general cognitive ability or intelligence. Most importantly, as long as educational research fails to focus on students’ characteristics we will never understand education or be able to improve it.
Doug Detterman is a noble toiler in the field of intelligence, and has very probably read more papers on intelligence than anyone else in the world. He notes that the importance of student ability was known by Chinese administrators in 200 BC, and by Europeans in 1698.
The main reason people seem to ignore the research is that they concentrate on the things they think they can change easily and ignore the things they think are unchangeable.
Despite some experiments, the basics of teaching have not changed very much: the teacher presents stuff on a blackboard/projector screen, which the students have to learn by looking at the pages of a book/screen and then writing answers on a page/screen. By now you might have expected all lessons to be taught by computer-driven correspondence tutorials, cheaply delivered remotely. There is some of that, but not as much as was dreamed of decades ago.
Detterman reviews Coleman et al. (1966) and Jencks et al. (1972), which first brought to attention that 10% to 20% of the variance in student achievement was due to schools and 80% to 90% due to students. He then looks at more recent reviews of the same issue.
Gamoran and Long (2006) reviewed the 40 years of research following the Coleman report but also included data from developing countries. They found that for countries with an average per capita income above $16,000 the general findings of the Coleman report held up well. Schools accounted for a small portion of the variance. But for countries with lower per capita incomes the proportion of variance accounted for by schools is larger. Heyneman and Loxley (1983) had earlier found that the proportion of variance accounted for by schools in poorer countries was related to the countries’ per capita income. This became known as the Heyneman-Loxley effect. A recent study by Baker, Goesling, and LeTendre (2002) suggests that the increased availability of schooling in poorer countries has decreased the Heyneman-Loxley effect, so that these countries are showing school effects consistent with or smaller than those in the Coleman report.
The largest effect of schooling in the developing world is 40% of variance, and that includes “schooling” where children attend school inconsistently, and staff likewise.
After being destroyed during the Second World War, Warsaw came under the control of a Communist government, which allocated residents randomly to the reconstructed city in order to eliminate cognitive differences by avoiding social segregation. Because the redistribution was close to random, the authorities expected that Raven’s Matrices scores would not correlate with parental class and education, since the old class neighbourhoods had been broken up and everyone attended the schools to which they had randomly been assigned. They assumed that the correlation between student intelligence and the social class index of the home would be 0.0, but in fact it was R² = 0.97, almost perfect. The difference due to different schools was 2.1%. In summary, in this Communist heaven student variance accounted for 98% of the outcome.
Angoff and Johnson (1990) showed that the type of college or university attended by undergraduates accounted for 7% of the variance in GRE Math scores. Fascinatingly, a full 35% of students did not take up the offer from the most selective college they were admitted to, instead choosing to go to a less selective college. Their subsequent achievements were better predicted by the average SAT score of the college they turned down than the average SAT scores of the college they actually attended, the place where they received their teaching. Remember the Alma Mater you could have attended.
Twins attending the same classroom are about 8% more concordant than those with different teachers, which is roughly in line with the usual school effect of 10%.…
Given all that, why bother to choose a good school? Finding somewhere safe, friendly, and close to home could be important. Even if the particular school is not going to make a big scholastic difference, it can make a difference to satisfaction, belonging, and happiness. That is worth searching for.
Do bright people earn more than others? If not, it would strengthen the view that intelligence tests are no more than meaningless scores on paper and pencil tests composed of arbitrary items which have no relevance to real life.…
Dalliard argues that many of the low estimates for the correlation between intelligence and income are based on single year earning figures, and it is better to look at rolling averages over several years, and to note that very early in career and very late in career figures may be a poor reflection of overall career earnings. Better to calculate “permanent” earnings of the sort achieved between ages 25 and 55. He looks at NLSY79 data, wisely taking only earnings and wages (no welfare payments). Wages above the cutoff are set to the average of all wages above the cutoff. Using a log transform he shows that one additional IQ point predicts a 2.5% boost in income. The standardized effect size, or correlation, is 0.36 and the R squared is 13%.
Men’s income is more strongly related to intelligence:
For example, the expected permanent annual income of a man with an IQ of 100 is e^(8.004 + 0.027 * 100), or $44,530. The expected permanent annual income of a woman with the same IQ is e^(8.004 + 0.021 * 100), or $24,440.
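The model in the quoted passage is a standard log-linear regression, ln(income) = intercept + slope × IQ, so each extra IQ point multiplies expected income by e^slope (e^0.027 ≈ 1.027 for men, hence the roughly 2.5 to 3 percent boost). A quick sketch that reproduces the two figures quoted above from the stated coefficients:

```python
import math

def expected_income(iq: float, intercept: float, slope: float) -> float:
    """Permanent annual income under the quoted log-linear model."""
    return math.exp(intercept + slope * iq)

# Intercept and slopes as given in the quotation (men 0.027, women 0.021).
print(f"Man, IQ 100:   ${expected_income(100, 8.004, 0.027):,.0f}")  # ~ $44,534
print(f"Woman, IQ 100: ${expected_income(100, 8.004, 0.021):,.0f}")  # ~ $24,441
print(f"Boost per IQ point (men): {math.exp(0.027) - 1:.1%}")        # ~ 2.7%
```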
Below, income by racial group [table not reproduced here], which should be compared with group differences in intelligence, with which there are close parallels.
Looking at how to predict the effect of intelligence on each racial group, an interesting finding emerges:
Black men have a significantly lower intercept and a significantly higher slope coefficient: each additional IQ point predicts 3.6% (95% CI: 2.6%-4.5%) more income for black men.
This suggests that employers value intelligence and pay higher wages to all brighter employees, an effect which is bigger in a group with a lower average ability level.
…
There are other studies which could be added, and more detail which could be drawn out of each of these sources, but I have picked a selection of studies to make a general point: I think it is pretty clear that intelligence has real-world implications.
Generational changes in IQ (the Flynn Effect) have been extensively researched and debated. Within the US, gains of 3 points per decade have been accepted as consistent across age and ability level, suggesting that tests with outdated norms yield spuriously high IQs. However, findings are generally based on small samples, have not been validated across ability levels, and conflict with reverse effects recently identified in Scandinavia and other countries. Using a well-validated measure of fluid intelligence, we investigated the Flynn Effect by comparing scores normed in 1989 and 2003, among a representative sample of American adolescents ages 13–18 (n = 10,073). Additionally, we examined Flynn Effect variation by age, sex, ability level, parental age, and SES. Adjusted mean IQ differences per decade were calculated using generalized linear models. Overall the Flynn Effect was not significant; however, effects varied substantially by age and ability level. IQs increased 2.3 points at age 13 (95% CI = 2.0, 2.7), but decreased 1.6 points at age 18 (95% CI = −2.1, −1.2). IQs decreased 4.9 points for those with IQ ≤ 70 (95% CI = −4.9, −4.8), but increased 3.5 points among those with IQ ≥ 130 (95% CI = 3.4, 3.6). The Flynn Effect was not meaningfully related to other background variables. Using the largest sample of US adolescent IQs to date, we demonstrate significant heterogeneity in fluid IQ changes over time. Reverse Flynn Effects at age 18 are consistent with previous data, and those with lower ability levels are exhibiting worsening IQ over time. Findings by age and ability level challenge generalizing IQ trends throughout the general population.
The authors … investigate the genetic contribution of g to variation in each of the cognitive tests. Genetic correlation is simply the correlation between the genetic contributors to each of the measured abilities. It is correlation at the level of genes, not test scores. If the brain is made up of modules, then one would expect such genetic correlations to be low. On the other hand, a brain largely based on general ability would have strong correlations. In fact, the genetic correlations range from .14 to .87, with a mean of .53 and the first principal component accounted for a total of 62.17% of the genetic variance. The genetics of intelligence is largely g based, it would seem….
So, have the authors “got away with” their combative title? The best way to answer would be to pose the question: “What else do you want?” The claim is that intelligence is real, and is a real aspect of the brain. To show that that is the case you can show a link between intelligence test scores and real life (this has been done many, many times, and some examples are shown above), a link between intelligence test scores and implied measures of genetic heritability via twin and family studies (also done many times), and now, finally, a link between intelligence test scores broken up into general and specific factors and measures of heritability via actual genomic studies identifying locations for general and specific factors of intelligence.
In my view this is a very important advance. It shows an underlying reality, at a genetic level, between general and specific aspects of cognitive ability. It allows investigations to proceed at two levels: the test score level and the genomic code level. Further studies will drill down into yet more detail.
It is fair to say that this is an objective approach, and ought to answer any reasonable critic of the reality of cognitive ability being based on brains which are under substantial genetic control.
You know the story, but here we go again. The standard account of sex differences in intelligence is that there aren’t any. Or not significant ones, or perhaps some slight ones, but they counter-balance each other. The standard account usually goes on to concede that males are more variable than females, that is to say, they are more widely dispersed around the mean. Although this is an oft-repeated finding, in some circles it is still referred to as merely a hypothesis. There is a standardisation sample in Romania which did not show this difference, and other epidemiological samples where the differences are slight, but the usual finding is that men show a wider standard deviation of ability.
Against this orthodoxy, Irwing and Lynn (2006) have argued that boys and girls mature at different speeds, with girls ahead till about age 16 and with boys moving ahead thereafter, such that men are 2-4 IQ points ahead of women throughout adult life.
Lynn further argues that if men are 4 points ahead, and have a standard deviation of 15 as opposed to women’s standard deviation of 14, those two findings almost fully explain the higher number of men in intellectually demanding occupations. There is no glass ceiling. Fewer women are capable of the higher levels required for the glittering prizes. Furthermore, this explains why men know more things. At the very highest levels of ability there are more men, and they have more knowledge, which is why they win general knowledge competitions.
This, the seditious faction suggest, is just a fact of sexual dimorphism. Male brains are very much bigger than women’s, and each of the component regions of the male brain is bigger than the same region in women, and also more variable in size.
Standardization samples ought to be good, and often are so, but they are not as good as birth cohorts or major epidemiological samples, so the latter are to be favoured when looking for reliable sex differences.
However, here is another paper on standardization samples confirming the same pattern of male advantage, though not greater male variability in one of the samples.
Hsin-Yi Chen and Richard Lynn, “Sex Differences on the WAIS-III in Taiwan and the United States”, pages 324-328.
Sex differences are reported in the standardization samples of the WAIS-III in Taiwan and the United States. In Taiwan, men obtained a significantly higher Full Scale IQ than women of 4.35 IQ points and in the United States men obtained a significantly higher Full Scale IQ than women of 2.78 IQ points. The sex differences on the 14 subtests are generally similar with a correlation between the two of .65. In the Taiwan sample there were no consistent sex differences in variability.
The authors say:
There are three points of interest in the results. First, in the Taiwan sample males obtained a higher Full Scale IQ of .29d, the equivalent of 4.35 IQ points. This confirms the thesis advanced by Lynn (1994, 1998, 1999) that in adults, males have a higher average IQ than females of around 4-5 IQ points. Males obtained a higher Full Scale IQ in the American standardization sample of the WAIS-III of .185d (2.78 IQ points). These two results disconfirm the assertions of Haier et al. (2004) and Halpern (2012) that “Comparisons of general intelligence assessed with standard measures like the WAIS show essentially no differences between men and women” (Halpern, 2012, p. 115).
Second, the sex differences in the Taiwan and American WAIS-III are generally similar. On the 14 subtests the correlation between the two is .65 (p <.001). Thus, in both samples men obtained their greatest advantage on Information and their lowest advantage on Digit Symbol – Coding. Third, there was no consistent sex difference in variability. On the Taiwan Full Scale IQ the VR of 1.02 is negligible, and males had greater variability in 9 of the 14 subtests while females had greater variability in 5 of the subtests. These results do not confirm the greater variability of males reported in numerous previous studies e.g., Arden and Plomin (2006) and Dykiert, Gale and Deary (2009).
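(For readers unused to effect sizes: the conversions in the passage above are just multiplication of d by the 15-point standard deviation of IQ: .29 × 15 = 4.35 points, and .185 × 15 ≈ 2.78 points.)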
This study, on the gold standard Wechsler test, seems to confirm a male advantage in general intelligence. As discussed, standardisation samples are designed to be an excellent representation of the population on which the test will be used (with changes to make it culturally accurate), and there is no reason to believe that this balanced selection would favour males. Birth samples would be even better, but this is a good test of the male advantage proposal.
The Information subtest is a measure of very broad general knowledge, not requiring any specialist interests, but asking about things which would generally be known in the general population. A .44 sd advantage on this subtest is enormous. The greater male representation in high-level general knowledge competitions seems well founded. In the US sample there is almost as big a male advantage for Maths, and a large male deficit on the Digit Symbol – Coding task, which measures simple processing speed.
The lack of a greater male standard deviation in the Taiwanese sample goes against the general finding, as did the standardisation sample for Romania. Standardisation samples are not as representative as larger epidemiological surveys, but the result is interesting nonetheless, in that it suggests some sampling restriction.
We know from twin studies that intelligence is heritable, and genome-wide association studies are trying to identify the genes responsible for this result. (We know that genetics is powerful in real life; now we need to show it in theory.) Part of the problem is that larger studies have put together results from disparate tests, so [Robert Plomin’s] team has designed a 40-item intelligence game which produces a reliable (internal consistency = .78, two-week retest reliability = .88) measure of g, which they have given to 4,751 young adults from their twin study.
This very big sample of 4,751 25-year-olds shows significant sex differences in favour of men. The authors don’t comment on this, but it fits the emerging pattern of a male intellectual advantage in adulthood.…
The summary is that the team have created a good new 15-minute IQ test which correlates well with the many longer assessments used over the years on their very large sample of twins. It also has good predictive power. If more widely adopted, and if the few bits of explanatory English were translated into other languages, it could be a very useful contribution to large GWAS investigations of the genetic basis of intelligence.
Source: James Thompson, “Game On: Pathfinder“, The Unz Review, August 14, 2021
Heritability studies cannot show definitively that race differences in intelligence have a genetic cause. It is always possible that there is some hidden environmental factor(s) – a so-called “X factor,” analogous to nitrates in the corn example – that explains differences between – but not within – races (an X factor that explains the Black–White IQ gap would have to affect all Blacks equally in order to preserve high heritability and shift the population mean downwards without changing the variance). Nevertheless, the high within-group heritability of IQ can be part of a package of evidence for between-group heritability. This point was made by Jensen (1969), who was widely misunderstood as naively inferring between- from within-group heritability (see Sesardic, 2005, pp. 128–138). Expounding on Jensen’s argument, Flynn (1980) writes that “the probability of a genetic hypothesis will be much enhanced if, in addition to evidencing high [heritability] estimates, we find we can falsify literally every plausible environmental hypothesis [i.e., X factor] one by one” (p. 40; quoted in Sesardic, 2005, p. 136). The obvious candidate X factor that could explain race differences is, of course, racism. If racism lowers IQ, this could explain why the mean is shifted downwards for victimized groups. However, as Flynn argues, attributing differences to racism
is simply an escape from hard thinking and hard research. Racism is not some magic force that operates without a chain of causality. Racism harms people because of its effects and when we list those effects, lack of confidence, low self-image, emasculation of the male, the welfare mother home, poverty, it seems absurd to claim that any one of them does not vary significantly within both black and white America. (Flynn, 1980, p. 60; quoted in Sesardic, 2005, pp. 141–142)
Now, after decades of intensive searching, the X factor remains elusive. The adult Black–White IQ gap has remained stubbornly constant at approximately one standard deviation (15 IQ points) among cohorts born since around 1970 (Murray, 2007). Dickens and Flynn (2006) report that “Blacks gained 4 to 7 IQ points on non-Hispanic Whites between 1972 and 2002” (p. 913), but these gains appear to be among Blacks born before the early seventies. Dickens and Flynn (2006, Figure 3) indicate that, in 2002, the Black–White IQ gap among 20-year-olds was approximately one standard deviation, or 15 points. Nisbett (2017) writes that “Dickens and Flynn found [the Black–White gap in IQ to be] around 9.5 points,” but this is only the gap if we include children (as R. Nisbett confirmed in a personal communication, December 24, 2018). More recent evidence indicates that the gap has persisted or even widened. Frisby and Beaujean (2015, Table 8) find a Black–White IQ gap of 1.16 standard deviations among a population-representative sample of adults used to norm the Wechsler Adult Intelligence Scale IV in 2007. Intensive interventions can raise IQ substantially during childhood when the heritability of IQ is low. But despite some misleading claims about the success of early intervention programs, gains tend to dissolve by late adolescence or early adulthood (Baumeister & Bacharach, 2000; Lipsey, Farran, & Durkin, 2018; Protzko, 2015). Adoption by white families – one of the most extreme interventions possible – has virtually no effect on the IQ of black adoptees by adulthood. Black children adopted by middle- and upper-middle-class white families in Minnesota obtained IQ scores at age 17 that were roughly identical to the African American average. Adoptees with one black biological parent obtained IQ scores that were intermediate between the black and white means (Loehlin, 2000, Table 9.3).2
To reiterate, the high within-group heritability of IQ combined with the failure to find an environmental X factor to explain the IQ gap does not show decisively that race differences are genetic, because it is possible that an X factor will be discovered in the future. However, the environmentalist theory of race differences has not, by normal scientific standards, been an especially progressive research program (in the sense of Lakatos, 1970). Environmentalists never predicted that the Black–White IQ gap would, after reaching one standard deviation, remain impervious to early education, adoption, massive improvements in the socioeconomic status of Blacks, and the (apparent) waning of overt racism and discrimination. Commenting 45 years ago on environmentalist theories that appeal to an X factor, Urbach (1974) noted that “any data in the world can be made consistent with any theory by invoking nameless and untested factors” (p. 134). Nevertheless, we cannot technically say that the environmentalist explanation for the IQ gap has been falsified. The fact that the gap did narrow since the early twentieth century gives some credibility to the idea that environment is playing a role.
Let us turn to the second method for investigating the role of genes in development: genome-wide association studies (GWAS). Unlike heritability studies, GWAS can uncover specific genetic variants – or single-nucleotide polymorphisms (SNPs) – associated with IQ. In just the last couple years, GWAS has identified hundreds of such SNPs (Davies et al., 2018; Savage et al., 2018; Sniekers et al., 2017), which together explain around 11% of the variance in IQ (Allegrini et al., 2019).
If we find that the SNPs implicated in IQ are differentially distributed across racial groups, this would not necessarily imply that race differences in intelligence are genetic. SNPs might have different effects across races and environments due to gene–gene and gene–environment interactions. SNPs with no causal relation to intelligence can be genetically linked to SNPs that do have a causal relation in some populations but not others, so SNP–intelligence correlations may not always hold across races (Rosenberg, Edge, Pritchard, & Feldman, 2019). But if we find that many of the same SNPs predict intelligence in different racial groups, a risky prediction made by the hereditarian hypothesis will have passed a crucial test. Even then, however, GWAS will only establish a correlation between SNPs and IQ without revealing the causal chain linking SNP to phenotype. It would still be theoretically possible that these SNPs lead to differences in intelligence as a consequence of environmental factors (e.g., parenting effects) that can be manipulated so as to eliminate race differences. But if work on the genetics and neuroscience of intelligence becomes sufficiently advanced, it may soon become possible to give a convincing causal account of how specific SNPs affect brain structures that underlie intelligence (Haier, 2017). If we can give a biological account of how genes with different distributions lead to race differences, this would essentially constitute proof of hereditarianism. As of now, there is nothing that would indicate that it is particularly unlikely that race differences will turn out to have a substantial genetic component. If this possibility cannot be ruled out scientifically, we must face the ethical question of whether we ought to pursue the truth, whatever it may be.
As you may have noticed, it is not popular to suggest that genetics is a possible cause of individual differences, and distinctly unpopular to even hint that it might be a cause of genetic group differences….
Well, if facts exist, someone will notice them and, if brave, find a way of letting people know what they have noticed.
We used data from the Adolescent Brain Cognitive Development Study to create a multimodal MRI-based predictor of intelligence. We applied the elastic net algorithm to over 50,000 neurological variables. We find that race can confound models when a multiracial training sample is used, because models learn to predict race and use race to predict intelligence. When the model is trained on non-Hispanic Whites only, the MRI-based predictor has an out-of-sample model accuracy of r = .51, which is 3 to 4 times greater than the validity of whole brain volume in this dataset. This validity generalized across the major socially-defined racial/ethnic groupings (White, Black, and Hispanic). There are race gaps on the predicted scores, even though the model is trained on White subjects only.
This predictor explains about 37% of the relation between both the Black and Hispanic classification and intelligence.
So, by looking at all the MRI measures together, they can predict IQ better than by using brain size alone. On that showing, we should stop using the brain-size measure (a correlation of about 0.28) and move to this better measure (0.51), and eventually to the even better ones that may be found later.
First of all, some summary data:
These are adolescents, so things may change a bit with age, but these are good sample sizes. Black adolescents have a somewhat lower-than-expected score and a high standard deviation; the latter is surprising, since many previous black samples have a standard deviation of 13. I don’t know how to interpret this, but it might be due to the subtests used.
The intelligence tests used were not the best sample of skills (where were Maths, or Wechsler Vocabulary or Block Design?), and they over-represented working-memory tests, which I think are weak measures, though fashionable. That may account for the large standard deviation. I predict that a more representative range of tests would lead to even higher predictive accuracy overall, and perhaps lower standard deviations.
The learning algorithm they employed is suited to the tricky setting in which there are far more variables than subjects. When the algorithm is applied to the whole sample, leaving race aside, the correlation with IQ is 0.60, which is very high. Using this technique, a brain image gives a good guide to the power of the brain as a problem-solving organ.
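The paper’s actual pipeline is not reproduced here, but for readers unfamiliar with the technique, the sketch below illustrates elastic-net prediction in the many-more-variables-than-subjects setting, using scikit-learn and purely synthetic data. Every name and number in it is my own illustrative assumption, not the study’s.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: far more variables than subjects
# (the ABCD analysis drew on ~50,000 MRI-derived variables).
n_subjects, n_features = 500, 5000
X = rng.standard_normal((n_subjects, n_features))
true_beta = np.zeros(n_features)
true_beta[:50] = rng.standard_normal(50) * 0.5  # only a few features matter
iq = 100 + X @ true_beta + rng.standard_normal(n_subjects) * 5.0

X_train, X_test, y_train, y_test = train_test_split(X, iq, random_state=0)

# The elastic net mixes L1 and L2 penalties, which keeps the fit stable
# when features vastly outnumber subjects; cross-validation picks the
# penalty strength.
model = ElasticNetCV(l1_ratio=0.5, cv=5, n_alphas=20).fit(X_train, y_train)

# Out-of-sample validity, analogous to the r reported in the abstract.
r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"out-of-sample r = {r:.2f}")
```

The point of the cross-validated, penalized fit is precisely to avoid the overfitting that would otherwise be guaranteed with tens of thousands of predictors and a few thousand subjects.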
Using an MRI-based predictive equation the authors did a better job of predicting a person’s IQ than was possible from knowing their parent-described race, despite the racial differences in intelligence being large. These “social race” labels were redundant for the purposes of predicting intelligence. The correlations of MRI prediction with actual intelligence test results within each social race were pretty similar: white 0.51, black 0.53, Hispanic 0.54 and other 0.58.
They tried to see whether their algorithm could use MRI data to predict the social race of the adolescents, and found that it could do so with 73% accuracy. The distinction between blacks and whites could be drawn with almost complete accuracy: only a 2% error rate either way.
They then trained a model to predict genetic ancestry. You might want to call this race.
They say:
We find that MRI-based predictions of genetic ancestry are very accurate. The three correlations of interest are .91, .89, and .61, for European, African, and Amerindian ancestry, respectively. As expected, the correlations were higher for European and African ancestry than for White and Black social race, respectively.
So, you can study a brain image and predict the person’s race with very high accuracy.
A possible counter-argument is that the model has learned to spot race, and then sharpens its intelligence predictors. The authors decided to test a predictive equation for whites only, so as to cut out this possible confounder. In fact, it only goes down from 0.51 on the full dataset to 0.48 for whites only.
As usual, I have left out some of the additional tests carried out to make sure the findings were robust.
the model learned to predict subjects’ social race/genetic ancestry based on the MRI data, and then used this information to predict intelligence. This finding is consistent with those of Gichoya et al. (2022), who report that machine learning can recognize self-reported race/ethnicity from a wide variety of medical imaging data. This is unsurprising because in the USA, socially identified race closely tracks continental genetic ancestry (Kirkegaard et al., 2021; Tang et al., 2005), which certainly is not “only skin deep”.
We further analyzed which aspects of MRI data were most useful in the prediction of intelligence. We found that functional MRI (fMRI) task data, which measures blood flow while performing tasks, had the highest validity for predicting intelligence. Additionally, MRI datasets which had more variables, which showed larger race differences, and which had higher correlations with polygenic scores, had higher validities.
So, what can we conclude here? Using a statistical technique to study the many scores produced by magnetic resonance images, it is possible to predict a subject’s intelligence and race with high accuracy. The predictive equations work for all races, so they have not been damaged by some presumed test bias. Some of the MRI measures are more predictive than others, notably the fMRI task measures (tasks which are not used to calculate IQ scores); they capture something about blood flow in the brain as tasks are carried out.
All this is brain deep, not skin deep.
Is there anywhere left for blank slate-ists to hide? They had trouble accepting that intelligence was heritable within genetic groups. After many decades of research some (but by no means all) made a grudging admission that heredity accounted for something. Then they argued that heredity was weaker in lower socio-economic-status groups. That was eventually shown to be unlikely, though the debate has been long and hard, so not all researchers accept that finding. Throughout, blank slate-ists denied that genetics could explain genetic group differences. They argued that each race showed the effects of heredity within race, but none between races (or not more than a bare 5%). They claimed that observed differences must (95% of them) be due to environmental differences, including those which are hard to detect, but have powerful effects. Will this paper change their minds? I think not. Not for 50 years, anyway.
This paper makes a simple point: brains differ between races, and these differences relate to differences in intelligence.
The conspiracy within the left’s grand conspiracy to seize control of America.
This post began life on August 31, 2018, as a page at my old blog. My initial purpose was to explain and document the anti-Trump conspiracy that began before the presidential election of 2016 and which continues to this day. The anti-Trump conspiracy is part of the broader, left-wing conspiracy to gain permanent control of the central government and to suppress traditional American values. The larger conspiracy involves not only politicians and bureaucrats but also the “elites” in the academy, Big Tech, major corporations, and the media. A key part of the conspiracy was the successful theft of the election of 2020.
The discussions of the anti-Trump conspiracy (Obamagate) and the broader conspiracy in the next two sections are followed by a long and ever-expanding list of related readings. The entries focus on Obamagate until the fall of 2022. The scope of the entries then widens to include items related to the broader conspiracy. I will continue to add relevant items to the list until the Obamagate conspiracy is exposed in all media outlets, key conspirators are imprisoned, and the left is cast into the political wilderness.
Obamagate (a.k.a. Russiagate and Spygate)
Neither Donald Trump nor anyone acting on his behalf colluded with Russia to influence the outcome of the 2016 presidential election.
The Obamagate conspiracy was designed to discredit Trump and deny him the presidency. It continued while he was in office, with a coordinated assault by congressional Democrats and media leftists to eject him from the presidency. It has continued since he left office, with the specific aim of preventing him from running or being elected in 2024, and the broader aim of discrediting the GOP and undermining electoral support for GOP candidates.
The conspiracy was instigated by the White House and the Clinton campaign, and executed by the CIA and FBI. The entire Obamagate operation is reminiscent of Obama’s role in the IRS’s persecution of conservative non-profit groups. Obama spoke out against “hate groups” and Lois Lerner et al. got the message. Lerner’s loyalty to Obama was rewarded with a whitewash by Obama’s Department of Justice and FBI.
In the case of Obamagate, Obama expressed his “concern” about Russia’s attempt to influence the election. Obama’s “concern” was eagerly seized upon by hyper-partisan members of his administration, including (but not limited to):
Deputy Attorney General Sally Yates, who became Acting Attorney General in the first weeks of the Trump administration, and who was fired for refusing to defend Trump’s “travel ban” (which the Supreme Court ultimately upheld). (Yates didn’t become involved in the conspiracy until after the election, as indicated by Susan Rice’s memo of January 20, 2017, in which she notes that Obama asked Yates and Comey to stay behind after the end of a meeting of January 5, 2017, presumably so that he could fill them in on the effort to frame General Flynn and discuss how they were to deal with the incoming administration. Again, see Menton’s piece dated May 11, 2020 in “related reading”.)
Deputy Associate Attorney General Bruce Ohr, a subordinate of Sally Yates and Christopher Steele’s contact in the Department of Justice
Nellie Ohr, wife of Bruce Ohr, who was hired by Fusion GPS to do opposition research for the Clinton campaign
FBI Director James Comey. (He was initially an outsider, a nominal Republican in a Democrat administration, and possibly a willing dupe at first; see the pieces by VDH dated August 7, 2018, and Margot Cleveland dated December 20, 2019. But if he was initially a willing dupe with his own agenda, it seems that he had become a full-fledged conspirator by the time of Trump’s inauguration; see the piece by Andrew McCarthy dated May 2, 2020.)
Peter Strzok, chief of the FBI’s counterintelligence section
Lisa Page, the FBI attorney (and Strzok’s paramour), who (with Strzok) was assigned to the Mueller investigation.
The ball got rolling with the production of the “Steele Dossier” – “proof” of Trump’s collusion with Russia – by the Clinton campaign. With the help of the CIA, the “dossier” was then brought to the attention of eager accomplices at the FBI.
The “dossier” was used to launch a three-pronged attack on Trump and the Trump campaign. The first prong was to infiltrate and spy on the campaign, seeking (a) to compromise campaign officials and (b) to learn what “dirt” the campaign had on Clinton. The second prong was to boost Clinton’s candidacy by casting Trump as a dupe of Putin. The third prong was to discredit Trump, should he somehow win the election, in furtherance of the (previously planned) effort to sabotage Trump’s administration should he be elected.
The investigation led by Robert Mueller of the Trump–Russia collusion story was a continuation and expansion of the FBI investigation (Crossfire Hurricane) that had been aimed at “proving” a conspiracy between the Trump campaign and Russia. Mueller’s remit was enlarged to include the possibility that Trump obstructed justice by attempting to interfere with the FBI investigations. The Mueller “investigation” was meant to provide ammunition for Trump’s impeachment and removal from office. That would leave a Republican in the White House, but — as with the forced resignation of Nixon — it would weaken the GOP, cause a “Blue wave” election in 2018, and result in the election of a Democrat president in 2020.
(Aside: The effort to brand Trump as a dupe of Russia is ironic, given the anti-anti-communist history of the Democrat party, Barack Obama’s fecklessness in his dealings with Russia, and his stated willingness to advance Russia’s interests while abandoning traditional European allies. Then there was FDR, who was surrounded and guided by Soviet agents.)
Why was it important to defeat Trump if possible, and to discredit or remove him if — by some quirk of fate — he won the election?
First, Obama wanted to protect his “legacy”, which included the fraudulent trifecta of Obamacare, the Iran nuclear deal, and the Paris climate accord. The massive increase in the number of federal regulations under Obama was also at risk, along with his tax increase, embrace of Islam, and encouragement of illegal immigration (and millions of potential Democrat voters).
Second, members of the Obama administration, including Obama himself, were anxious to thwart efforts by the Trump campaign to obtain derogatory information about Hillary Clinton. Such information included, but was not limited to, incriminating e-mails that Russians had retrieved from the illegal private server set up for Clinton’s use. That Obama knew about the private server implicated him in the illegality.
Third, there was and still is an urgent need to cover up Joe Biden’s central role and financial interest in the influence-peddling scheme fronted by his son Hunter, which couldn’t have escaped Obama’s attention.
The rest is history. The highlights of that history are:
Two sham impeachments of Trump.
Relentlessly negative coverage of him by the media outlets that serve as extensions of the Democrat Party.
A legally dubious indictment of Trump for falsifying business records in connection with “hush money” payments to Stormy Daniels.
The furor about the so-called insurrection on January 6, 2021, which was a protest that got out of hand due to the combination of Nancy Pelosi’s failure to protect the Capitol and the use of agents provocateurs to stir up the crowd. This trumped-up riot set the stage for an indictment of Trump for attempting to overturn the election of 2020.
The FBI raid on Mar-a-Lago and the subsequent indictment of Trump for deliberately mishandling classified documents.
A spurious indictment for Trump’s alleged efforts to overturn the 2020 election in Georgia.
To those legal proceedings, add Trump’s appeal of the verdict in E. Jean Carroll’s convenient suit for battery, sexual harassment, and defamation, 28 years after Trump’s alleged actions. A New York jury unsurprisingly awarded damages to Carroll.
The Broader Conspiracy
Targeting specific individuals – like Trump — is just a sideshow. The left is after everyone who opposes its agenda, especially (but not exclusively) Hillary Clinton’s “deplorables” and Joe Biden’s “semi-fascists” — that is, Americans who hold traditional values about the role of government, marriage, sex (there are two), the rule of law, and much more. It would be disastrous (for the left) if its opponents could muster enough electoral support to overwhelm the combination of hard-left voters, squishy centrists, and stuffed ballot boxes (and their electronic equivalents).
The left thrives on control. The left therefore seeks every opportunity to transfer power from civil society and the States to the central government; disproportionately engages in electoral fraud; and seeks to undermine social norms and shape them to its own view of how the world should be. Anything that delays or thwarts the left’s march toward totalitarianism is called a threat to “democracy”. What leftists want is “democratic”; anything else is profoundly wrong or plain evil.
The likes of Clinton and Biden are merely the most visible face of the broader conspiracy to take America away from Americans. The conspiracy includes not only most Democrat politicians but also vast portions of government bureaucracies throughout the land; most of the public “education” (indoctrination) industry; most institutions of “higher learning” (advanced indoctrination); most media outlets; most “entertainers”; far too many corporate executives and administrators; and, of course, Big Tech (owners, managers, and employees alike).
As these various institutions slid to the left, they formed an informal but tight alliance that moves in lockstep to attain left-wing objectives. Their combined power enables them to advance the left’s agenda by shaping (distorting) perceptions of issues, and by controlling the making and enforcement of laws and regulations. The whole thing is a classic Stalinist operation: scapegoat, shame, suppress, and prosecute the opposition, and, above all, keep an iron grip on power. (The left loves to project its own feelings and methods onto its opponents.)
The broader conspiracy is an open one, but no less dangerous to liberty than subversion by agents of a foreign power. With the aid of naive and insouciant conservatives, it has been going on since the Progressive Era of the late 1890s and early 1900s, when leftism emerged as a political force in America. Here’s a sample of the left’s causes and “successes”, from the Progressive Era to the present:
Direct election of U.S. senators, which had been the prerogative of State legislatures.
Establishment of a national income tax.
Creation of the Federal Reserve, with its dismal record of causing bubbles and crashes.
Creation (with the aid of the Supreme Court) of the massive “social insurance” schemes that are bankrupting the nation — Social Security, Medicare, Medicaid and the expansion of the latter two by “Obamacare”.
Other Supreme Court rulings that allowed the central government to regulate almost all economic activity in the country, allowed national-defense secrets to be published freely by the media, and made America safer for criminals.
The Civil Rights Act of 1964, which instigated an epoch of costly and socially divisive racial discrimination in favor of blacks, gender discrimination in favor of women, and (recently) favoritism toward gender-confused persons and predators who abuse that privilege.
The butchery of tens of millions of unborn children in the name of “privacy” and “women’s rights”, for what (in almost all cases) is the unwillingness to take responsibility for the predictable consequences of a consensual act.
The hollowing out of America’s industrial base in favor of corporate globalism, and to the advantage of America’s enemies.
The “climate change” hoax, the price of which is higher energy costs, the widespread use of expensive and inefficient vehicles, subsidized investments in inefficient “renewable” energy sources, and (in Europe, if not America) more cold-weather privations and deaths.
Unfettered illegal immigration, which breeds crime, burdens public schools and other public facilities, and causes higher taxes for communities subject to large influxes of “migrants”.
Various measures to make Americans more vulnerable to criminals, ranging from the all-but-complete abolition of the death penalty to “prison reform” (i.e., shorter sentences or none) to “bail reform” (i.e., the elimination of bail so that violent criminals can be let loose on the populace).
“Globalism”, which hasn’t died (despite some opinions to the contrary), but which lives on in the accommodation of Chinese economic predations and the rapid growth of China’s military might and influence.
The squandering of tens of billions of dollars and valuable military resources in support of Ukraine because its leader has the goods on Joe Biden.
The following list of readings ends in August 2023, at the point at which new items about Biden Family corruption began to appear too often to allow me to keep the list up to date. At the same time, some notice began to be paid by the legacy media — initially “spun” to exonerate the president, but increasingly critical of him. Where all of this will lead is anyone’s guess. Mine is here.
u/lonestarbeliever, “Connecting Some Dots”, Reddit, August 21, 2018 (This illustrates the ease with which conspiracy theories can be constructed, which isn’t to say that it’s wrong.)
Scott Johnson, “Coup’s Next“, Power Line, February 16, 2019 (A roundup of links to commentary about Andrew McCabe’s admission of the FBI’s attempt to remove Trump from office.)
Paul Sperry, “Justice Dept. Watchdog Has Evidence Comey Probed Trump, on the Sly“, RealClearInvestigations, July 22, 2019 (This supports my view that Comey was acting on his own, for his own reasons, and was at most a “useful idiot” for the concerted, Brennan-led effort to frame Trump.)
Jeff Carlson and Hans Mahncke, “New Email Reveals Answer to Establishment’s Efforts to Oust Trump”, The Epoch Times, October 25, 2022 (available at ZeroHedge)
Erik Wemple, “James Bennet Was Right”, The Washington Post, October 27, 2022 (Bennet was the op-ed editor of The New York Times who was fired for publishing Senator Tom Cotton’s piece about the BLM riots.)
Fred Lucas, “House Oversight: Biden Family Got Over $10 Million From Foreign Entities While He Was VP”, The Daily Signal, May 10, 2023 (As of the morning of publication, similar stories were abundant across the media, except for the left-wing outlets. There’s no mention of the story in The New York Times online, and it isn’t to be found on the main page of The Washington Post online — finding it on WaPo’s site requires a search.)
Many links about the prepared statement and testimony of “Whistleblower X”, IRS agent Joseph Ziegler, July 19, 2023: here, here, here, here, here, here, and here.
Many links about the incriminating contents of FBI witness statement, the FBI’s authentication of Hunter’s laptop and subsequent coverup, and other news about the Bidens’ influence-peddling operation, July 20, 2023: here, here, here, here, here, here, here, and here.
Further revelations about Joe Biden’s direct involvement in the influence-peddling business and the FBI’s coverup, July 24, 2023: here, here, here, here, and here.
Techno Fog, “A Tale of Two Plea Deals”, The Reactionary, July 28, 2023 [In which the writer shows the extraordinarily favorable treatment afforded Hunter Biden by the so-called prosecutors.]
Devon Archer’s testimony (posted in full here on August 3, 2023), following reactions to partial revelations on July 31 and August 1, 2023, here, here, here, here, here, here, here, here, here, and here.
You go to hell slowly at first, then all of a sudden.
The “marginal revolution” in economics, which occurred in the latter part of the 19th century, introduced marginalism,
a theory of economics that attempts to explain the discrepancy in the value of goods and services by reference to their secondary, or marginal, utility. The reason why the price of diamonds is higher than that of water, for example, owes to the greater additional satisfaction of the diamonds over the water. Thus, while the water has greater total utility, the diamond has greater marginal utility.
Although the central concept of marginalism is that of marginal utility, marginalists, following the lead of Alfred Marshall, drew upon the idea of marginal physical productivity in explanation of cost. The neoclassical tradition that emerged from British marginalism abandoned the concept of utility and gave marginal rates of substitution a more fundamental role in analysis. Marginalism is an integral part of mainstream economic theory.
But pure marginalism can be the road to ruin for a business if the average cost of a unit of output is greater than average revenue, that is, the price for which a unit is sold.
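A hypothetical illustration: suppose a unit costs $5 at the margin and sells for $6, so each additional sale looks like a $1 gain; but if fixed overhead brings the average cost per unit to $8, total revenue never covers total cost, and the “profitable” marginal sales carry the firm steadily toward insolvency.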
Marginalism is the road to ruin in law and politics. If a governmental act can be shown to have a positive effect “at the margin”, its broader consequences are usually ignored. This kind of marginalism is responsible for the slippery-slope, ratchet-effect enactment and perpetuation of one economically and socially destructive government program after another. Obamacare, same-sex “marriage”, and rampant transgenderism are some notorious examples of recent years. Among the many examples of earlier years are the Pure Food and Drug Act, the Supreme Court’s holding in Wickard v. Filburn, the Social Security Act and its judicial vindication, the Civil Rights Act of 1964, and the various enactments related to “equal employment opportunity”, including the Americans with Disabilities Act.
[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.
The unseen effects — the theft of Americans’ liberty and prosperity — had been foreseen by some (e.g., Tocqueville and Hayek). But their wise words have been overwhelmed by power-lust, ignorance, and greed. Greed manifests itself in the interest-group paradox:
The interest-group paradox is a paradox of mass action….
… Pork-barrel legislation exemplifies the interest-group paradox in action, though the paradox encompasses much more than pork-barrel legislation. There are myriad government programs that — like pork-barrel projects — are intended to favor particular classes of individuals. Here is a minute sample:
Social Security, Medicare, and Medicaid, for the benefit of the elderly (including the indigent elderly)
Tax credits and deductions, for the benefit of low-income families, charitable and other non-profit institutions, and home buyers (with mortgages)
Progressive income-tax rates, for the benefit of persons in the mid-to-low income brackets
Subsidies for various kinds of “essential” or “distressed” industries, such as agriculture and automobile manufacturing
Import quotas, tariffs, and other restrictions on trade, for the benefit of particular industries and/or labor unions
Pro-union laws (in many States), for the benefit of unions and unionized workers
Non-smoking ordinances, for the benefit of bar and restaurant employees and non-smoking patrons.
What do each of these examples have in common? Answer: Each comes with costs. There are direct costs (e.g., higher taxes for some persons, higher prices for imported goods), which the intended beneficiaries and their proponents hope to impose on non-beneficiaries. Just as importantly, there are indirect costs of various kinds (e.g., disincentives to work and save, disincentives to make investments that spur economic growth)….
You may believe that a particular program is worth what it costs — given that you probably have little idea of its direct costs and no idea of its indirect costs. The problem is that millions of your fellow Americans believe the same thing about each of their favorite programs. Because there are thousands of government programs (federal, State, and local), each intended to help a particular class of citizens at the expense of others, the net result is that almost no one in this fair land enjoys a “free lunch.” Even the relatively few persons who might seem to have obtained a “free lunch” — homeless persons taking advantage of a government-provided shelter — often are victims of the “free lunch” syndrome. Some homeless persons may be homeless because they have lost their jobs and can’t afford to own or rent housing. But they may have lost their jobs because of pro-union laws, minimum-wage laws, or progressive tax rates (which caused “the rich” to create fewer jobs through business start-ups and expansions).
The paradox that arises from the “free lunch” syndrome is…. like the paradox of panic, in that there is a crowd of interest groups rushing toward a goal — a “pot of gold” — and (figuratively) crushing each other in the attempt to snatch the pot of gold before another group is able to grasp it. The gold that any group happens to snatch is a kind of fool’s gold: It passes from one fool to another in a game of beggar-thy-neighbor, and as it passes much of it falls into the maw of bureaucracy.
As far as I know, only one agency of the federal government has been abolished in my lifetime, while dozens have been created and expanded willy-nilly at the behest of politicians, bureaucrats, and cronies. The one that was abolished — the Interstate Commerce Commission — still had “residual functions” that were transferred elsewhere. That’s the way it works in Washington, and in State capitals.
So one obvious danger of marginal thinking is that the nose of the camel under the edge of the tent is invariably followed by its neck, its humps, its tail, another camel’s nose, etc., etc., etc.
There’s a less obvious danger, which is typified by the penchant of faux-libertarians for dismissing objections to this and that “harmless” act. Economist Mark Perry, for example, regurgitated Milton Friedman’s 30-year-old plea for the decriminalization of drugs. Just because some behavior is “private” doesn’t mean that it’s harmless to others. Murder behind a closed door is still murder. (Abortion is one such case.)
In the case of drugs, I turn to Theodore Dalrymple:
[I]t is not true that problems with drugs arise only when or because they are prohibited.
The relationship between crime and drug prohibition is also much more complex than the legalizers would have us believe. It is certainly true that gangs quickly form that try to control drug distribution in certain areas, and that conflict between the aspirant gangs leads to violence…. But here I would point out two things: first that the violence of such criminal gangs was largely confined to the subculture from which they emerged, so that other people were not much endangered by it; and second that, in my dealings with such people, I did not form the impression that, were it not for the illegality of drugs, they would otherwise be pursuing perfectly respectable careers. If my impression is correct, then the illegality of drugs might protect the rest of society from their criminality: the illegal drug trade being the occasion, but not the cause, of their violence.…
What about Prohibition, is the natural reply? It is true that the homicide rate in the United States fell dramatically in the wake of repeal. By the 1960s, however, when alcohol was not banned, it had climbed higher than during Prohibition…. Moreover, what is less often appreciated, the homicide rate in the United States rose faster in the thirteen years before than in the thirteen years during Prohibition. (In other respects, Prohibition was not as much of a failure as is often suggested: alcohol-related problems such as liver disease declined during it considerably. But no consequences by themselves can justify a policy, otherwise the amputation of thieves’ hands would be universal.) Al Capone was not a fine upstanding citizen before Prohibition turned him into a gangster. [“Ditching Drug Prohibition: A Dissent”, Library of Law and Liberty, July 23, 2015, and the second in a series; see also “The Simple Truth about J.S. Mill’s Simple Truth”, op. cit., July 20, 2015; “Myths and Realities of Drug Addiction, Consumption, and Crime”, op. cit., July 31, 2015; and “Closing Argument on the Drug Issue”, op. cit., August 4, 2015]
Although eugenics is not mentioned in “Prohibition”, it looms in the background. For eugenics — like prohibition of alcohol and, later, the near-prohibition of smoking — is symptomatic of the “progressive” mentality. That mentality is paternalistic, through and through. And “progressive” paternalism finds its way into the daily lives of Americans through the regulation of products and services — for our own good, of course. If you can think of a product or service that you use (or would like to use) that is not shaped by paternalistic regulation or taxes levied with regulatory intent, you must live in a cave.
However, the passing acknowledgement of “progressivism” as a force for the prohibition of alcohol is outweighed by the attention given to the role of “evangelicals” in the enactment of prohibition. I take this as a subtle swipe at the anti-abortion stance of fundamentalist Protestants and adherents of the “traditional” strands of Catholicism and Judaism. Here is the “logic” of this implied attack on pro-lifers: Governmental interference in a personal choice is wrong with respect to the consumption of alcohol and similarly wrong with respect to abortion.
By that “logic,” it is wrong for government to interfere in or prosecute robbery, assault, rape, murder and other overtly harmful acts, which — after all — are merely the consequences of personal choices made by their perpetrators. Not even a “progressive” would claim that robbery, assault, etc., should go unpunished, though he would quail at effective punishment.
“Liberals” of both kinds (“progressive” fascists and faux-libertarians) just don’t know when to smack camels on the nose. Civilization depends on deep-seated and vigorously enforced social norms. They reflect eons of trial and error, and can’t be undone peremptorily without unraveling the social fabric — the observance of mores and morals that enable a people to coexist peacefully and beneficially because they are bound by mutual trust, mutual respect, and mutual forbearance.
A key function of those norms is to inculcate self-restraint. For it is the practice of self-restraint that underlies peaceful, beneficial coexistence: What goes around comes around.
[I]f conservatives want to save the country they are going to have to rebuild and in a sense re-found it, and that means getting used to the idea of wielding power, not despising it. Why? Because accommodation or compromise with the left is impossible. One need only consider the speed with which the discourse shifted on gay marriage, from assuring conservatives ahead of the 2015 Obergefell decision that gay Americans were only asking for toleration, to the never-ending persecution of Jack Phillips.
The left will only stop when conservatives stop them, which means conservatives will have to discard outdated and irrelevant notions about “small government.” The government will have to become, in the hands of conservatives, an instrument of renewal in American life — and in some cases, a blunt instrument indeed.
To stop Big Tech, for example, will require using antitrust powers to break up the largest Silicon Valley firms. To stop universities from spreading poisonous ideologies will require state legislatures to starve them of public funds. To stop the disintegration of the family might require reversing the travesty of no-fault divorce, combined with generous subsidies for families with small children. Conservatives need not shy away from making these arguments because they betray some cherished libertarian fantasy about free markets and small government. It is time to clear our minds of cant.
In other contexts, wielding government power will mean a dramatic expansion of the criminal code. It will not be enough, for example, to reach an accommodation with the abortion regime, to agree on “reasonable limits” on when unborn human life can be snuffed out with impunity. As Abraham Lincoln once said of slavery, we must become all one thing or all the other. The Dobbs decision was in a sense the end of the beginning of the pro-life cause. Now comes the real fight, in state houses across the country, to outlaw completely the barbaric practice of killing the unborn.
Conservatives had better be ready for it, and Republican politicians, if they want to stay in office, had better have an answer ready when they are asked what reasonable limits to abortion restrictions they would support. The answer is: none, for the same reason they would not support reasonable limits to restrictions on premeditated murder.
On the transgender question, conservatives will have to repudiate utterly the cowardly position of people like David French, in whose malformed worldview Drag Queen Story Hour at a taxpayer-funded library is a “blessing of liberty.” Conservatives need to get comfortable saying in reply to people like French that Drag Queen Story Hour should be outlawed; that parents who take their kids to drag shows should be arrested and charged with child abuse; that doctors who perform so-called “gender-affirming” interventions should be thrown in prison and have their medical licenses revoked; and that teachers who expose their students to sexually explicit material should not just be fired but be criminally prosecuted.
If all that sounds radical, fine. It need not, at this late hour, dissuade conservatives in the least. Radicalism is precisely the approach needed now because the necessary task is nothing less than radical and revolutionary. [John Daniel Davidson, “We Need to Stop Calling Ourselves Conservatives”, The Federalist, October 20, 2022]
As Davidson says later in his post (and as I have said in the past), conservatives are at war with the left. Therefore, as Davidson says,
there are only two paths open to conservatives. Either they awake from decades of slumber to reclaim and re-found what has been lost, or they will watch our civilization die. There is no third road.
It is impossible to conserve what has been destroyed. But it is possible to rebuild what has been destroyed. That is what conservatives must strive to do. And they must be ruthless about it because the left cannot be reasoned with — it must be overpowered.
In that spirit, I will expand on the idea of a conservative counter-revolution, drawing on posts of mine that are several years old. (Davidson is late to the party.)
THE “FREE-SPEECH” PROBLEM
The left, in its drive to impose its agenda on the nation, has become censor-at-large. Two can play that game.
I am not a free-speech absolutist; nor do I subscribe to the conceit that the “best” ideas will emerge triumphant in the so-called marketplace of ideas. (See this, for example.) The “marketplace of ideas” ensures only that the most popular ideas or those with the strongest political backing will prevail. Nor is science immune to persistent error.
Democracy in practice has never, and could never, interpret the right of free speech in an absolute and unrestricted sense. No one, for example, is allowed to advocate, and organize for, mass murder, rape, and arson. No one feels that such prohibitions are anti-democratic….
We may generalize as follows. The principles of an organized society cannot be interpreted in practice in such a way as to make organized society impossible. The special principles of a special form of government, in this case democratic government, cannot be interpreted in practice in such a way as to make that form of government impossible.
Liberalism [of the kind that prevailed in the early 1960s] defines free speech and the related freedoms of assembly and association, as it does “peace” and “disarmament,” in abstraction, without tying them to specific persons and circumstance. For liberalism, these freedoms are the procedural rules sustaining a democratic society that rests on the will of the majority and solves its internal conflicts of interest and opinion through continuous discussion, negotiation and compromise. But this meaning of free speech and the related freedoms is significant and operable only for those who share the wish or at least willingness to have and preserve some sort of free and constitutional society. For those others— and they are not few among us— whose aim is to subvert, overthrow and replace free and constitutional society, these freedoms of speech, assembly and the rest are merely convenient levers to use in accomplishing their purpose.
The liberal ideologue is thus caught in the inescapable dilemma of his own making that we have previously examined. If he extends the freedoms to the subverters, they will use them, as they have done in one nation after another, to throw the free society into turmoil and in the end to destroy it. But if he denies the freedoms to anyone, he will feel, does feel, that he has betrayed his own principles, “imitated the methods of the enemy,” and thus joined the company of subverters. So, when a showdown with the subverters comes, as it comes from time to time to all nations, the liberals are demoralized in advance, if they do finally forget ideology and decide to resist, by the guilt generated from this feeling of self-betrayal. Let us note that this is a purely ideological trap. Common sense, unlike ideology, understands that you can play a game only with those who accept the rules; and that the rules’ protection does not cover anyone who does not admit their restrictions and penalties.
Bear in mind that Burnham was writing when “liberals” actually subscribed to the notion of unfettered speech — in principle, at least. The ACLU, a leading “liberal” institution, had consistently defended the speech rights of so-called hate groups and political figures deemed unpalatable by the left. I say “had” because the ACLU has joined the ban-wagon against “hate” speech, that is, speech which offends the sensitivities of “liberals”.
If there is one idea that today’s “liberals” (leftists) share with conservatives, it is that absolute freedom of speech can undermine liberty. The rub is that leftists mean something other than liberty when they use the word. Their idea of liberty includes, among many anti-libertarian things (e.g., coerced redistribution of income, race-based discrimination), the rejection and suppression of facts and opinions because they threaten the attainment of the left’s agenda (e.g., biased reports and web-search results about the left’s anti-scientific, fraudulent, and economically devastating push to eliminate fossil fuels).
In sum, the left’s stance on freedom of speech has nothing to do with the preservation of liberty and everything to do with the advancement of an anti-libertarian agenda.
THE CLEAR AND PRESENT DANGER
Events of the past two years have confirmed my long-held view (captured here) that when the White House is in the hands of a left-wing Democrat (is there any other kind now?) and there is an aggressive left-wing majority in Congress, these things (and much more) will come to pass:
Freedom of speech, freedom of association, and property rights will become not-so-distant memories.
“Affirmative action” (a.k.a. “diversity”) will be enforced on an unprecedented scale of ferocity.
The nation will become vulnerable to foreign enemies while billions of dollars are wasted on the hoax of catastrophic anthropogenic global warming and “social services” for the indolent.
The economy, already buckling under the weight of statism, will teeter on the brink of collapse as the regulatory regime goes into high gear and entrepreneurship is all but extinguished by taxation and regulation.
All of that will be secured by courts dominated by left-wing judges — from here to eternity. (The U.S. Supreme Court will be “packed” as necessary to ensure that it no longer stands in the way of the left’s march toward totalitarianism.)
The left’s game plan is threatened by those who speak against such things. Thus the left’s virulent, often violent, and increasingly overt attacks on conservatives and the suppression of conservative discourse.
This is coming to pass in large part because of free-speech absolutism. Unfettered speech isn’t necessary to liberty. In fact, it can undermine it, given that liberty, properly understood, is not a spiritual state of bliss. It is, as I have written,
a modus vivendi, not the result of a rational political scheme. Though a rational political scheme, such as the one laid out in the Constitution of the United States, could promote liberty.
The key to a libertarian modus vivendi is the evolutionary development and widespread observance of social norms that foster peaceful coexistence and mutually beneficial cooperation.
Unfettered speech, obviously, can undermine the modus vivendi. It can do so directly, by shredding social norms — the bonds of mutual trust, respect, and forbearance that underlie the modus vivendi that is liberty. And it can do so indirectly by subverting the institutions that preserve the modus vivendi.
One of those institutions is the rule of law under a Constitution that was meant to limit the power of government, leaving people free to govern themselves in accordance with the norms of civil society. The steady rise of governmental power has in fact shredded social norms and subverted civil society. Which is precisely what the left wants, so that it can remake “society” to its liking.
TIT FOR TAT
It follows, therefore, that liberty can be rescued only by suppressing the left’s anti-libertarian actions. If that seems anti-libertarian, I refer you back to James Burnham. Or you can consider this: If you kill an intruder who has tried to kill you, you are doing what you must do — and what the law allows you to do (except where it is perverted by leftists). Killing a would-be murderer and suppressing those who would suppress you are equally valid instances of self-defense.
Winning and preserving liberty is not for the faint of heart, or for free-speech absolutists whose rationalism clouds their judgment. They are morally equivalent to pacifists who declare that preemptive war is always wrong, and who would wait until the enemy has struck a mortal blow before acting against the enemy — if then.
The left is at war against liberty in America — and against many Americans — and has been for a long time. Preemptive war against the left is therefore long overdue. If the left wins, will there be freedom of speech and a “marketplace of ideas” (however flawed)? Of course not.
Leftists are violent and hateful toward those who disagree with them. The left will suppress and criminalize anything and anyone standing in the way of its destructive agenda (even a former president of the United States). The left’s continuation in power — or return to it — will result in the complete destruction of the social norms and civilizing institutions that held this country more or less together between the end of the Civil War and the early 1960s.
A SHOOTING CIVIL WAR ISN’T THE ANSWER
Will a new (shooting) civil war result if (when) the left takes firm control of the central government? There is much talk about that possibility, accompanied by inflated rhetoric about the people with guns (mainly conservatives) “kicking ass” of the people without guns (mainly leftists). But that is wishful and probably suicidal thinking.
Firm left-wing control of the central government would mean control of the surveillance apparatus, troops, and weapons for which a mostly untrained “army” of rifle-toting patriots would be no match. Terrorist acts by the patriots, unless carefully aimed at government installations and troops actually engaged in suppressive operations, would only backfire and cause the silent majority to scurry into the protective arms of the central government.
GRABBING THE GOLD RING BEFORE IT’S TOO LATE
With that in mind, it is clear that drastic actions — but constitutionally defensible ones — must be taken as soon as Republicans regain firm control of the White House and Congress. (Whether that will happen is another matter. But if it does, it may well be the last time, barring drastic action.)
The rationale and prescriptions for action are detailed here. In what follows, I focus on two prescriptions.
An Anti-Federalist Revival
One prescription is to attack — on a broad front — all statutes, regulations, and judicial decisions of the United States government that grievously countermand the Constitution:
Compile a catalog of all anti-constitutional actions, which would include (but be far from limited to) enactments like Social Security, Medicare, Medicaid, and Obamacare that aren’t among the limited and enumerated powers of Congress, as listed in Article I, Section 8.
Prioritize the list, roughly according to the degree of damage each item does to the liberty and prosperity of Americans.
Re-prioritize the list, to eliminate or reduce the priority of items that would be difficult or impossible to act on quickly. For example, although Social Security, Medicare, and Medicaid are unconstitutional, they have been around so long that it would be too disruptive and harmful to eliminate them without putting in place a transition plan that takes many years to execute.
Of the remaining high-priority items, some will call for action (e.g., restoring the integrity of America’s southern border); some will call for passivity (e.g., allowing individual States to opt out of federal programs without challenging those States in court).
Mount a public-relations offensive to explain the broad plan and how the people will benefit, socially and economically, as it is executed.
Announce specific actions to be taken with regard to each high-priority item. There would be — for general consumption — a simplified version that explains the benefits to individuals and the country as a whole. There would also be a full, legal explanation of the constitutional validity of each action. The legal explanation would be “for the record”, in the likely event of a serious attempt to impeach the president and his “co-conspirators”. The legal version would be the administration’s only response to judicial interventions, which the administration would ignore.
A key result would be the drastic pruning of the size and scope of the central government, accompanied by far fewer regulations and lower taxes. The economy would grow at a rate that might equal the post-Civil War boom. The felt need to wrap up in Uncle Sam’s security blanket would diminish greatly.
A more important result would be the resurgence of liberty along many dimensions, not the least of which would be this:
Enforcement of the First Amendment’s Guarantee of Freedom of Speech
One of the actions would be to enforce the First Amendment against the information-entertainment-media-academic complex. This would begin with action against high-profile targets (e.g., Google and a few large universities that accept federal money). That should be enough to bring the others into line. If it isn’t, keep working down the list until the miscreants cry uncle. (As someone once said, “Grab ’em by the throat. Their hearts and minds will follow.”)
What kind of action do I have in mind? This is a delicate matter because the action must be seen as rescuing the First Amendment, not suppressing it. And it must be supported by Congress. Even then, the hue and cry will be deafening, as will the calls for impeachment. It will take nerves of steel to proceed on this front.
Here’s a way to do it:
EXECUTIVE ORDER NO. __________
The Constitution is the supreme law of the land. (Article VI.)
Amendment I to the Constitution says that “Congress shall make no law … abridging the freedom of speech”.
Major entities in the telecommunications, news, entertainment, and education industries have exerted their power to suppress speech because of its content. (See appended documentation.) The collective actions of these entities — many of them government-licensed and government-funded, encouraged by previous administrations — effectively constitute a governmental violation of the Constitution’s guarantee of freedom of speech. (See Smith v. Allwright, 321 U.S. 649 (1944) and Marsh v. Alabama, 326 U.S. 501 (1946).)
As President, it is my duty to “take Care that the Laws be faithfully executed”. The Constitution’s guarantee of freedom of speech is a fundamental law of the land.
Therefore, by the authority vested in me as President by the Constitution and in accordance with a delegation of emergency power by Congress to me as President, it is hereby ordered as follows:
1. The United States Marshals Service shall monitor the activities of the entities listed in the appendix to this Executive Order, for the sole purpose of ascertaining whether those entities are discriminating against persons or groups based on the views, opinions, or facts expressed by those persons or groups.
2. Wherever the Marshals Service observes effective discrimination against certain views, opinions, or facts, it shall immediately countermand such discrimination and order remedial action by the offending entity.
3. Officials and employees of the entities in question who refuse to cooperate with the Marshals Service, or to follow its directives pursuant to this Executive Order, shall be suspended from duty but will continue to be compensated at their normal rates during their suspensions, however long they may last.
4. This order shall terminate with respect to a particular entity when I, as President, am satisfied that the entity will no longer discriminate against views, opinions, or facts on the basis of their content.
5. This order shall terminate in its entirety when I, as President, am satisfied that freedom of speech has been restored to the land.
NOTHING TO LOSE BY TRYING
The drastic actions recommended here are necessary because of the imminent danger to what is left of Americans’ liberty and prosperity. The alternative is to do nothing and watch liberty and prosperity vanish from view. There is nothing to be lost, and much to be regained.
And if the actions succeed in the suppression of the left — which is their aim, frankly — the bandwagon effect will do the rest. And there will be a rebirth of liberty throughout the land.
In large, complex, and dynamic systems (e.g., war, economy, climate) there is much uncertainty about the relevant parameters, about how to characterize their interactions mathematically, and about their numerical values.
… Consider, for example, a simple model with only 10 parameters. Even if such a model doesn’t omit crucial parameters or mischaracterize their interactions, its results must be taken with large doses of salt. Simple mathematics tells the cautionary tale: an error of about 12 percent in the value of each parameter can produce a result that is off by a factor of 3 (a hemibel); an error of about 25 percent in the value of each parameter can produce a result that is off by a factor of 10. (Remember, this is a model of a relatively small system.)
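Those factors are nothing more than compounded parameter errors, and the arithmetic is easy to verify. Here is a minimal sketch in Python, assuming the simplest case, in which the ten parameter errors compound multiplicatively:

```python
# Compounding of errors in a simple 10-parameter multiplicative model:
# if each parameter is off by a fraction e, the result is off by (1 + e)**10.

n_params = 10

for err in (0.12, 0.25):
    factor = (1 + err) ** n_params
    print(f"{err:.0%} error per parameter -> result off by a factor of {factor:.1f}")

# Output:
#   12% error per parameter -> result off by a factor of 3.1
#   25% error per parameter -> result off by a factor of 9.3
# A factor of about 3 is a hemibel: half an order of magnitude, 10**0.5 ~ 3.16.
```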
If you think that models and “data” about such things as macroeconomic activity and climatic conditions cannot be as inaccurate as that, you have no idea how such models are devised or how such data are collected and reported. It would be kind to say that such models are incomplete, inaccurate guesswork. It would be fair to say that all too many of them reflect their developers’ biases. A large study, in which dozens of research teams independently analyzed the same data, bears this out:
This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to include conscious and unconscious decisions that researchers make during data analysis and that may lead to diverging results. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data [emphasis added] to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of research based on secondary data, we find that research teams reported both widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers’ expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 90% of the total variance in numerical results remains unexplained even after accounting for research decisions identified via qualitative coding of each team’s workflow. This reveals a universe of uncertainty that is otherwise hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a new explanation for why many scientific hypotheses remain contested. These results call for epistemic humility and clarity in reporting scientific findings.
Later:
The scientific process confronts researchers with a multiplicity of seemingly minor, yet nontrivial, decision points, each of which may introduce variability in research outcomes. An important but underappreciated fact is that this even holds for what is often seen as the most objective step in the research process: working with the data after it has come in. Researchers can take literally millions of different paths in wrangling, analyzing, presenting, and interpreting their data. The number of choices grows exponentially with the number of cases and variables included….
A bias-focused perspective implicitly assumes that reducing ‘perverse’ incentives to generate surprising and sleek results would instead lead researchers to generate valid conclusions. This may be too optimistic. While removing these barriers leads researchers away from systematically taking invalid or biased analytical paths … , this alone does not guarantee validity and reliability. For reasons less nefarious, researchers can disperse in different directions in what Gelman and Loken call a ‘garden of forking paths’ in analytical decision-making….
There are two primary explanations for variation in forking decisions. The competency hypothesis posits that researchers may make different analytical decisions because of varying levels of statistical and subject expertise that leads to different judgments as to what constitutes the ‘ideal’ analysis in a given research situation. The confirmation bias hypothesis holds that researchers may make reliably different analytical choices because of differences in preexisting beliefs and attitudes, which may lead to justification of analytical approaches favoring certain outcomes post hoc. However, many other covert influences, large and small, may also lead to unreliable – and thus unexplainable idiosyncratic – variation in analytical decision pathways…. Crucially, even when distinct pathways appear equally reasonable to outsiders, seemingly minor variations between them may add up and interact to produce widely varying outcomes.
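The “literally millions of different paths” claim is just combinatorics. A minimal sketch in Python (the counts of decision points and options are hypothetical, chosen for illustration only):

```python
# Garden of forking paths: with d independent decision points and k defensible
# options at each point, the number of distinct analysis paths is k**d.

for decisions, options in [(10, 2), (10, 3), (20, 2)]:
    paths = options ** decisions
    print(f"{decisions} decisions x {options} options -> {paths:,} possible analyses")

# Output:
#   10 decisions x 2 options -> 1,024 possible analyses
#   10 decisions x 3 options -> 59,049 possible analyses
#   20 decisions x 2 options -> 1,048,576 possible analyses
```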
How much variation was there in the results reported by 71 research teams (2 of the original 73 dropped out)? This much:
Fig. 1 [below] visualizes the substantial variation of numerical results reported by 71 researcher teams who analyzed the same data. Results are diffuse: Little more than half the reported estimates were statistically not significantly different from zero at 95% CI, while a quarter were significantly different and negative, and 16.9 percent were statistically significant and positive.
In summary:
In a model that accounts for more than a few variables, uncertainty about the values of those variables yields a wide range of possible results, even when the relationships among the variables are well-known and modeled rigorously.
Even when there is certainty about the values of variables, the results of modeling will vary widely according to the assumptions and unconscious biases of the modelers.
Anyone who relies on global climate models — with their many inadequately measured variables and many inadequately accounted-for variables (e.g., cloud formation) — to predict future “global” temperatures is either a fool or a charlatan. He is not a scientist. Nor is he a believer in “science” because he doesn’t know what it is.
Revisiting a guest-blogging stint from w-a-a-y back.
Many years ago, I was asked by Timothy Sandefur to guest-blog for a week at Freespace. By combing the archives of Mr. Sandefur’s blog and using The Wayback Machine, I reconstructed that week and its sequel, in which Sandefur and I continued an exchange that began during my guest-blogging stint. I reproduce the entire sequence of posts below.
Finally, before diving into the exchange, I should note that my political views have matured in the intervening years. I then thought of myself as a libertarian, but I came to understand that I am really conservative by disposition and only libertarian in some of my political views — and a long way from being an ideological libertarian. See my posts “On Liberty”, “Political Ideologies”, and “Disposition and Ideology”.
Sandefur
Thanks to John Lanius for fantastic guest blogging. Be sure to check out his regular blog at TexasBestGrok….
This week’s guest blogger is X, who regularly blogs at Liberty Corner…. Welcome, Mr. X.
* * *
Me
What Realignment?
My thanks to Timothy Sandefur for inviting me to guest-blog at Freespace. This is my ice-breaking post.
Several of the blogs that I follow have commented on a recent Washington Post article by John F. Harris, “Was Nov. 2 a Realignment — Or a Tilt?” Harris opens by saying this:
By any measure, President Bush and his fellow Republicans had a good night on Nov. 2. The question now is whether the election results set the GOP up for a good decade — or more.
Harris then goes on to cite various experts, but only one whose views seem to be grounded in historical fact:
Yale political scientist David R. Mayhew two years ago wrote a book calling the entire notion of realignments a fiction, at least at the presidential level. In the 15 presidential elections since World War II, he noted, the incumbent party has kept power eight times and lost it seven times. “You can’t get any closer to a coin toss than this,” he said. “At the presidential level, the traits of the candidates are so important that they blot out party identification.”
Mayhew is on to something, but his perspective is too short. Let’s go back to 1868, the year of the first presidential election following the Civil War. That span of 136 years covers 35 presidential elections and three more-or-less distinct eras of dominance by one party or the other. Here’s the big picture:
In the first era, Republicans dominated presidential politics from 1868 through 1928, winning 12 of 16 elections. The GOP might have made it 14 of 16 had Theodore Roosevelt’s “Bull Moose” candidacy of 1912 not enabled the election (and re-election) of Woodrow Wilson. Following the Wilsonian interregnum, the Republican era resumed resoundingly in 1920, when the GOP “realigned” itself with its pre-Theodore Roosevelt theme of limited government.
The brief era of Democratic dominance began in 1932 and barely survived the 1948 election. Franklin D. Roosevelt’s victory in 1932 had everything to do with the Great Depression and nothing to do with political philosophy. FDR wasn’t offering “big government” in 1932; he was merely offering a fresh face. Southern Democrats continued to be Democrats, and many Northern Republicans simply switched sides out of despair. This second era ended as abruptly as it had begun, with the Dixiecrat rebellion of 1948, which nearly cost Harry Truman the election.
And so the third era — a new Republican era — began with Eisenhower’s victory in 1952. That victory — built on Ike’s popularity and Southern Democrats’ repugnance for the national party’s stance on civil rights — marked the beginning of a long realignment in presidential politics. That realignment didn’t end until the 1980s. Since then, Republicans and Democrats have been fighting border skirmishes over personalities and the issues of the day, just as they have in the past.
But there is no doubt that we are still in a Republican era. Republican control of Congress is as secure as it has been since the 1920s, as is Republican control of State governments. The current Republican era will end — if and when it does — in the aftermath of an economic, social, or military trauma whose nature and timing are unpredictable. [It turned out to be economic trauma, which began near the end of Bush’s second term with a stock-market crash and continued on into the Great Recession.]
Will the continuation of the current Republican era be good or bad for the cause of libertarianism? Republicans, for the most part, seem to have given up on “limited government” for the sake of winning elections. Now, the GOP is the party of “relatively limited government.” But the alternative — a return to Democratic dominance — is probably worse. What’s a libertarian to do?
I’ll end this post with that question.
* * *
Sandefur
Realignment tactics
I like Mr. X’s question about what a libertarian is to do in light of the Republican Party’s purge of its Goldwater elements. It seems to me in retrospect that the notion of libertarians being part of the conservative coalition is a massive accident. “Fusionism,” as promulgated in In Defense of Freedom, for example, seems to be built on a major misunderstanding of libertarianism, if not of conservatism. As I argued just the other day, libertarianism is a variety of liberalism. Its primary concern is with the liberation of the individual. Conservatism, properly understood—I mean, real, honest to god conservatism of the Russell Kirk, Richard Weaver, Robert Nisbet variety—is nothing like this. It is about the stability of society. Ken Masugi’s comment today that the Raich case represents a “clash of conservatisms” is typical of this misunderstanding. Social conservatives—who, again, I think are real conservatives—believe in the Drug War because their primary political concern is the health of “Society” (which they abstract into a sort of God, with rights valid against individuals). But libertarians are opposed to the Drug War because their primary political concern is the freedom of individuals. The surface issue of drug policy is just a cover for a profound difference over essential elements of political philosophy.
Libertarianism began dating the Republican Party because of Barry Goldwater. His opposition to government programs, and defense of individual freedom, upset the genuine conservatives within the Republican Party (whom we loosely call Rockefeller Republicans). And as long as the Goldwater element was ascendant in the Republican Party, as with Ronald Reagan, we felt more or less at home, although we were irked by the inclusion of the Kirkian platitudes in the speeches.
Now that Dole and the Bushes have almost perfected the elimination of the Goldwater faction of the GOP—to such a degree that party members ridicule Goldwater’s latter-day defense of gay rights as though it was evidence of senility—there is an ever-diminishing role for us in that party. Some large libertarian segments, most notably Reason magazine, have simply given up on the right wing, and are overtly courting the left, hoping that social issues will draw the left into greater embrace of economic freedom. I’m really not sure whether that strategy will work—I think the left is as resolutely hostile to individualism as the conservatives are—but do we really have anything to lose? “Libertarian” has become an epithet within the controlling faction of the Republican Party. I for one am sick of it, and were it not for the war, as I’ve said, I would have voted Democrat this year. And I suspect at least some leftists will be drawn to our side if we tell our story right: if we show that the liberation of previously oppressed people must include economic liberty.
But in the end, I can’t say. As Washington said, “If to please the people, we offer what we ourselves disapprove, how can we afterward defend our work? Let us raise a standard to which the wise and the honest can repair; the event is in the hand of God!”
* * *
Me
Who Defines Reality?
On reading Timothy Sandefur’s recent posts about creationism vs. evolution (here and here), I’m prompted to ask who is the “we” who decides what to teach? Toward the end of the post linked second above, Mr. Sandefur says this about the teaching of evolution:
I believe that all men are created equal, and that they deserve to be treated like responsible adults—which means, confronted with the reality, and charged with the obligation to recognize it, or evade it and bear the consequences….
Who defines reality, and who decides to confront us with it? The state?
* * *
Sandefur
Defining? No, teaching
Mr. X’s question is more rhetorical than substantive. Reality is not “defined” by some entity standing outside of it and determining its contents; it simply is. It is discovered, and observed, by all of us—some more skillfully and carefully than others. These people can choose to confront us with that reality. I believe that respect for people as thinking beings requires them to do so at least sometimes (this is one reason for blogging). Obviously you have no right to intrude on a person’s seclusion, and force them to confront something they don’t want to—just as you have no right to break into their seclusion and force them to do anything they don’t want to do. So obviously no, the state does not confront us in that sense. I have repeatedly stated my opposition to government-run education, so much so that I don’t think I need to do so again.
(However, the state clearly has the right to confront us with reality in some situations. For instance, if a parent believes that blood transfusions violate the will of God, and therefore refuses to get a blood transfusion for his ailing child, the state may legitimately require that the child receive a blood transfusion. If a parent believes that sexual molestation or other abuse of a child is the will of God, the state has the right to stop that. The state also has the right to say that a person cannot simply evade responsibility for torts by refusing to believe they exist. So yes, the state does have the right to confront us with reality as a side effect of its pursuits of other legitimate goals.)
My post, however, assumes that we have a government school system in place. My question is, if it is okay for people to wander around believing whatever makes them feel good, then why not abandon the attempt to teach them evolution at all (even in private schools)? Moreover, my point remains even if we abolish government schools. It is a scientist’s professional obligation—as well, I think, as an obligation of honor—to confront people with reality. Again, that does not mean intruding on their privacy, obviously. But a scientist who sits idly by while nonsense is propagated, is betraying something essential about his profession and about his mind. The same, of course, is true of lawyers, and to put it in a lawyerly way, if we are going to educate, then it is incumbent upon us to educate people reasonably—not to do so negligently. And it is negligent to tell people that they can believe in fairy tales.
I might turn Mr. X’s rhetoric back on him, to make my real point clearer: Who defines the myth that we are going to allow people to believe, so as to soothe their fragile little hearts? And who decides to propagate it? The state? That, at least, is what many ID proponents believe.
* * *
Me
Reality and Government Schools
In my previous post here I commented on two of Timothy Sandefur’s posts … about creationism vs. evolution. I closed my post by asking: “Who defines reality, and who decides to confront us with it? The state?” Mr. Sandefur responds thusly:
…Reality is not “defined” by some entity standing outside of it and determining its contents; it simply is. It is discovered, and observed, by all of us—some more skillfully and carefully than others….
All right, then, who decides which of us is the more skillful and careful observer of reality? It shouldn’t be the state. (I believe that Mr. Sandefur and I are firmly agreed on that point.) But, we do have government-run schools, and they do dominate education in the United States. Perforce, it is those schools, in their vast inadequacy, that decide what to teach as “reality.”
I share Mr. Sandefur’s concern that proponents of “intelligent design” would use the state to compel the teaching of ID as an alternative to evolution. But government schools that teach evolution are also the schools that teach a lot of things that skillful observers like Mr. Sandefur and I do not recognize as truth — things that might be wrapped up in the phrase “government as ultimate problem-solver.”
Now, I do not mean to suggest that government schools might just as well go for broke and teach more untruth by adding ID to their curricula. What I mean to suggest is that government schools already teach — and have long taught — ideas that are far more subversive of liberty and the pursuit of happiness than ID.
It’s annoying to think that “creationism” is widely believed, and it’s galling to think that it might be taught in public schools. But I find that far less threatening than the widespread belief in government as ultimate problem-solver. That is why, given the limited amount of time I have for blogging, I tend to shoot at the left and ignore the right.
* * *
Sandefur
Teaching myths
Mr. X has a point. Consider, for instance, socialism. Socialism is as flawed an economic theory as creationism is a biological theory. It no more deserves to be taught as if it were true than creationism does. Yet, obviously, there are many people who think otherwise, just as there are many people who think that creationism is true, and that it should be taught. If the government is going to teach, then that means these folks will be disappointed. And if they are disappointed, then they would be just as willing to command that the government schools not teach classical liberalism, since in these folks’ minds, classical liberalism is as flawed as creationists believe evolution to be. Government schooling—like all government redistributionary schemes—is subject to the public choice effect.
That’s a good argument against the existence of government schools, not against insisting that schools (whatever their form) teach things that are true as true, and teach things that are false as false. It is certainly true that “government schools already teach—and have long taught—ideas that are far more subversive of liberty and the pursuit of happiness than ID.” But that doesn’t mean that we may throw up our hands and say “well, fine, teachers can tell kids whatever they want.” No, they can’t. A school that teaches kids socialism and never mentions the price problem, for example, is committing exactly the same wrong as a school that teaches kids that evolution isn’t true.
But, again, my point isn’t about the content of the material taught in the classroom. It’s about the real purpose of evolution in the classroom. The real purpose of evolution, like the real purpose of physics or anything else, really, is to inculcate in students the habit of thinking rationally and demanding reasons for believing things. That is far more important than the actual substance of the things a student learns, and forgets, and can look up in an almanac after he graduates. It’s the habit of mind that’s important. Carl Sagan puts it well:
If we teach only the findings and products of science—no matter how useful and even inspiring they may be—without communicating its critical method, how can the average person possibly distinguish science from pseudoscience? Both then are presented as unsupported assertion. In Russia and China, it used to be easy. Authoritative science was what the authorities taught. The distinction between science and pseudoscience was made for you. No perplexities needed to be muddled through. But when profound political changes occurred and strictures on free thought were loosened, a host of confident or charismatic claims—especially those that told us what we wanted to hear—gained a vast following. Every notion, however improbable, became authoritative….
It is enormously easier to present in an appealing way the wisdom distilled from centuries of patient and collective interrogation of Nature than to detail the messy distillation apparatus. The method of science, as stodgy and grumpy as it may seem, is far more important than the findings of science.
* * *
Me
The First Amendment says that “Congress shall make no law…abridging the freedom of speech, or of the press….” Great stuff. I buy it. But then there’s this, from a story at latimes.com:
On the evening of Oct. 14, a young Marine spokesman near Fallouja appeared on CNN and made a dramatic announcement.
“Troops crossed the line of departure,” 1st Lt. Lyle Gilbert declared, using a common military expression signaling the start of a major campaign. “It’s going to be a long night.” CNN, which had been alerted to expect a major news development, reported that the long-awaited offensive to retake the Iraqi city of Fallouja had begun.
In fact, the Fallouja offensive would not kick off for another three weeks. Gilbert’s carefully worded announcement was an elaborate psychological operation — or “psy-op” — intended to dupe insurgents in Fallouja and allow U.S. commanders to see how guerrillas would react if they believed U.S. troops were entering the city, according to several Pentagon officials.
In the hours after the initial report, CNN’s Pentagon reporters were able to determine that the Fallouja operation had not, in fact, begun.
“As the story developed, we quickly made it clear to our viewers exactly what was going on in and around Fallouja,” CNN spokesman Matthew Furman said.
Officials at the Pentagon and other U.S. national security agencies said the CNN incident was not an isolated feint — the type used throughout history by armies to deceive their enemies — but part of a broad effort underway within the Bush administration to use information to its advantage in the war on terrorism….
Surely the viewers of CNN included our enemies, or persons friendly to them who passed along the information broadcast by CNN.
I know the arguments about undermining the credibility of the news media — and the government — by using the media to broadcast disinformation. But those are just arguments. The fact is that the U.S. is engaged in a legal war against a determined and ruthless enemy, and the use of disinformation is a time-honored tactic of warfare. Why not risk undermining the credibility of the media — to the extent that the media have much credibility left — if it helps to win the war?
Unless CNN’s report and the news story I’ve quoted are part of a disinformation campaign, it seems that the media may be undermining the war effort by revealing particular instances of disinformation and giving the enemy hints as to the shape of our disinformation campaign.
That leads to my question: Is there an interpretation of the Constitution that would make it illegal for the media to publish information that compromises military operations?
ADDENDUM: If there is a compelling governmental interest in the regulation of political speech (i.e., campaign-finance “reform”) and a compelling governmental interest in allowing publicly funded universities to pursue “diversity” (a concept that I cannot find in the Constitution), why not a compelling governmental interest in the suppression of media reports that undermine the prosecution of a constitutional war?
I’m being provocative here because I hope to draw out my host and some of his readers on this issue.
* * *
Sandefur
National security and the First Amendment
In answer to Mr. X’s question about the First Amendment, preventing the press from publishing military information that could compromise military success is the quintessential compelling government interest justifying limits on freedom of the press. The most famous case on the issue is New York Times v. United States, 403 U.S. 713 (1971), also known as the “Pentagon Papers case.” There the Supreme Court held that the government could not prevent the publication of military documents on the history of the Vietnam War, which the Nixon Administration said would compromise national security if they were published. The decision was a “per curiam” decision—meaning it was not signed by an individual justice, but issued in the name of the Court. The decision reads, in its entirety:
We granted certiorari…in these cases in which the United States seeks to enjoin the New York Times and the Washington Post from publishing the contents of a classified study entitled “History of U.S. Decision-Making Process on Viet Nam Policy.”
“Any system of prior restraints of expression comes to this Court bearing a heavy presumption against its constitutional validity.” Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 70 (1963); see also Near v. Minnesota ex rel. Olson, 283 U.S. 697 (1931). The Government “thus carries a heavy burden of showing justification for the imposition of such a restraint.” Organization for a Better Austin v. Keefe, 402 U.S. 415, 419 (1971). The District Court for the Southern District of New York in the New York Times case, 328 F.Supp. 324, and the District Court for the District of Columbia and the Court of Appeals for the District of Columbia Circuit, 446 F.2d 1327, in the Washington Post case held that the Government had not met that burden. We agree.
The judgment of the Court of Appeals for the District of Columbia Circuit is therefore affirmed. The order of the Court of Appeals for the Second Circuit is reversed, 444 F.2d 544, and the case is remanded with directions to enter a judgment affirming the judgment of the District Court for the Southern District of New York. The stays entered June 25, 1971, by the Court are vacated. The judgments shall issue forthwith.
So ordered.
What does this mean? Well, the Justices then wrote separate concurring and dissenting opinions on the degree to which the Constitution allows government to censor speech that might compromise military operations. The case mostly hinged on the term “prior restraint,” which means a government action forbidding the publication of material. This is generally considered the most severe form of censorship, and the one the First Amendment was primarily written to prevent. Justices Hugo Black and William Douglas, notoriously First Amendment “absolutists” (who liked to say “no law means no law”), argued that the government may never prohibit publication of information, apparently regardless of the military effect of such dissemination. As Erwin Chemerinsky notes, it’s hard to believe their absolutism was really sincere: “one wonders whether even they would allow such restrictions if there were compelling proof of a need to protect national security. For example, if a newspaper during World War II were going to report that America had broken the Nazi code, probably even Black and Douglas would have allowed an injunction to stop that information from being published….” Erwin Chemerinsky, Constitutional Law: Principles And Policies 778 (1997). But hard as it is to believe, I see no reason they would have.
Justice William Brennan’s concurring opinion argued for strict scrutiny of prior restraints, meaning that they would need to be “narrowly tailored to advance a compelling government interest.” That is, he would allow them, but rarely, and only if a judge was convinced of a powerfully good reason for them. Moreover, “[o]ur cases have thus far indicated that such cases may arise only when the Nation ‘is at war.’” New York Times, 403 U.S. at 726 (Brennan, J., concurring). Justices Byron White and Thurgood Marshall argued that courts lacked the statutory power to issue an injunction against the publication of the Pentagon Papers. A good argument, but it doesn’t really say what the First Amendment’s limitations are. Justice John Harlan, Justice Harry Blackmun, and Chief Justice Warren Burger wrote dissents, arguing that the case had been heard so fast that they didn’t really know whether these materials were a threat to national security or not, and that an injunction should at least be granted to allow them to figure that out.
Justice Blackmun wrote that
I cannot subscribe to a doctrine of unlimited absolutism for the First Amendment at the cost of downgrading other provisions. First Amendment absolutism has never commanded a majority of this Court…. What is needed here is a weighing, upon properly developed standards, of the broad right of the press to print and of the very narrow right of the Government to prevent. Such standards are not yet developed. The parties here are in disagreement as to what those standards should be. But even the newspapers concede that there are situations where restraint is in order and is constitutional. Mr. Justice Holmes gave us a suggestion when he said in Schenck, “It is a question of proximity and degree. When a nation is at war many things that might be said in time of peace are such a hindrance to its effort that their utterance will not be endured so long as men fight and that no Court could regard them as protected by any constitutional right.”
Because the per curiam opinion isn’t very clear on what degree of evidence would justify a prior restraint, and because there was no solid majority of justices supporting Douglas’ and Black’s opinion that prior restraints are entirely forbidden, the New York Times case seems to stand for the proposition that government may censor the publication of information, even in the most extreme manner, but only when there is really, really, really strong reason to believe that the publication of the information would harm the national interest, especially by causing “‘the death of soldiers, the destruction of alliances, the greatly increased difficulty of negotiation with our enemies, the inability of our diplomats to negotiate,’ to which list I might add the factors of prolongation of the war and of further delay in the freeing of United States prisoners,” id. at 763 (Blackmun, J., dissenting). But, as Chemerinsky notes, “[n]o Supreme Court case has dealt with these issues since the Pentagon Papers case.” Supra at 780. So the answer is, nobody really knows.
I suspect that the very high standard of proof that seems to be required by the Pentagon Papers case would not be met if the government tried to stop CNN from informing our enemies that the Fallujah offensive was not really starting. For one thing, the Pentagon Papers case seems to allow censorship only if the press is going to report information, like, say, troop movements, that the government has tried to keep secret, and which will result in harm to our military. In the CNN situation, though, the government would be attempting to stop the media from reporting the fact that there aren’t American troops in the field—that is, it would be trying to get CNN to participate in the promulgation of false information. As far as tactics are concerned, that may be the same thing, but as far as the Pentagon Papers case is concerned, I don’t think it is. The publication of the information by CNN would not directly result in the deaths of American soldiers (theoretically). So I doubt the First Amendment would allow the government to prohibit CNN from publishing this information.
Incidentally, in the foregoing, I’ve assumed that the current war is a declared war, which I do believe. Someone who thinks not might think the government would have an even harder time justifying such censorship of CNN.
* * *
Me
Affirmative Action: A Modest Proposal
Recent posts by Alex Tabarrok at Marginal Revolution discuss a study that reveals the effects of nature and nurture on income. (Tabarrok’s original post is here. He has posted some clarifying remarks here.) The study shows that the income of a Korean orphan who was adopted in the U.S. between 1970 and 1980, through a process of random selection, is about the same regardless of the income of the adoptive parents. On the other hand, the income of the biological children of the same parents is highly correlated with the parents’ income; that is, low-income parents tend to produce low-income children, whereas high-income parents tend to produce high-income children. The obvious implication of these findings is that intelligence (and hence income) is a heritable trait, one that remains differentiated along racial lines (a consistent but controversial finding discussed here, for example). Thus the findings give further evidence, if any were needed, that affirmative action policies — whether government-prescribed or voluntarily adopted — tend to undermine the quality of workplaces and educational institutions. (I am speaking here of the quality of effort and thought, not the value of workers and students as human beings.)
The premise of affirmative action finds expression in a 1986 speech to the Second Circuit Judicial Conference by Justice Thurgood Marshall, where he
urged Americans to “face the simple fact that there are groups in every community which are daily paying the cost of the history of American injustice. The argument against affirmative action is… an argument in favor of leaving that cost to lie where it falls. Our fundamental sense of fairness, particularly as it is embodied in the guarantee of equal protection under the laws, requires us,” Marshall said, “to make an effort to see that those costs are shared equitably while we continue to work for the eradication of the consequences of discrimination. Otherwise,” Marshall concluded, “we must admit to ourselves that so long as the lingering effects of inequality are with us, the burden will [unfairly] be borne by those who are least able to pay.” [From “Looking Ahead: The Future of Affirmative Action after Grutter and Gratz,” by Professor Susan Low Bloch, Georgetown University Law Center.]
In sum, affirmative action is a way of exacting reparations from white Americans for the sins of their slave-owning, discriminating forebears — even though most of those forebears didn’t own slaves and many of them didn’t practice discrimination. Those reparations come at a cost, aside from the resentment toward the beneficiaries of affirmative action and doubt about their qualifications for a particular job or place in a student body. As I wrote here:
Because of affirmative action — and legal actions brought and threatened under its rubric — employers do not always fill every job with the person best qualified for the job. The result is that the economy produces less than it would in the absence of affirmative action….
[A]ffirmative action reduces GDP by about 2 percent. That’s not a trivial amount. In fact, it’s just about what the federal government spends on all civilian agencies and their activities — including affirmative action….
Moreover, that effect is compounded to the extent that affirmative action reduces the quality of education at universities, which it surely must do. But let us work with 2 percent of GDP, which comes to about $240 billion a year, or more than $6,000 a year for every black American.
Thus my modest proposal to improve the quality of education and the productivity of the workforce: End affirmative action and give every black American an annual voucher for, say, $5,000 (adjusted annually for inflation). The vouchers could be redeemed for educational expenses (tuition, materials, books, room and board, and mandatory fees). Recipients who didn’t need or want their vouchers could sell them to others (presumably at a discount), give them away, or bequeath them for use by later generations. The vouchers would be issued for a limited time (perhaps the 25 years envisioned by Justice O’Connor in Grutter), but they would never expire.
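The arithmetic behind the proposal is easy to check. A minimal sketch in Python follows; the GDP and population figures are rough assumptions appropriate to the era of the original post, not data drawn from it:

```python
# Rough check of the numbers behind the voucher proposal.
# Assumptions (circa the original post, mid-2000s): U.S. GDP ~ $12 trillion;
# ~37 million black Americans. Neither figure appears in the post itself.

gdp = 12e12                  # assumed U.S. GDP, dollars per year
cost = 0.02 * gdp            # the claimed 2%-of-GDP cost of affirmative action
black_population = 37e6      # assumed number of black Americans

print(f"Annual cost: ${cost / 1e9:,.0f} billion")                   # ~$240 billion
print(f"Per black American: ${cost / black_population:,.0f}/year")  # ~$6,500
# A $5,000 annual voucher would thus cost less than the estimated GDP loss
# that ending affirmative action is claimed to recover.
```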
That settles affirmative action, reparations, and school vouchers (for blacks), at a stroke. If only I could solve the Social Security mess as easily.
* * *
Me
Nonsense and Sense about Social Security
E.J. Dionne Jr., writing in The Washington Post on November 30, opined that
…President Bush carries a heavy burden in trying to sell the country on his plan to carve private accounts out of Social Security. Bush has been pushing privatization since he first ran for the presidency in 2000. But he keeps changing his explanation of how the program will be paid for and what its effect on the deficit will be….
Dionne goes on in that vein throughout his column, using what seems to be a discrepancy between what Bush said four years ago and what he and his aides are saying now to play “gotcha.” Worse than that, however, Dionne — who is a Washington insider of sorts — spends much of his column spreading confusion about Social Security; for example:
The big cost of privatization comes from allowing individuals to keep a share of the Social Security taxes they now pay into the system and use it for private investment accounts. This reduces the amount of money available to pay current beneficiaries. Since Bush has promised the retired and those near retirement that their benefits won’t be cut, he needs to find cash somewhere. The only options are to raid the rest of the budget, to raise taxes or to borrow big time….
[During the 2000 presidential campaign] Gore…challenged Bush on his numbers. “He has promised a trillion dollars out of the Social Security trust fund for young working adults to invest and save on their own, but he’s promised seniors that their Social Security benefits will not be cut and he’s promised the same trillion dollars to them,” Gore said at that third presidential debate. “Which one of those promises will you keep and which will you break, Governor?”
…Bush is about to offer an easy answer to Gore’s challenge: More borrowing….
…Last week The Post’s Jonathan Weisman reported that Republicans were considering moving the costs of social security reform “off-budget” so that, on paper at least, they wouldn’t inflate the deficit. And Joshua B. Bolten, the director of the White House’s Office of Management and Budget, let the cat out of the bag over the weekend in an interview with Richard W. Stevenson of the New York Times. “The president does support personal accounts, which need not add over all to the cost of the program but could in the short run require additional borrowing to finance the transition,” Bolten said. “I believe there’s a strong case that this approach not only makes sense as a matter of savings policy, but is also fiscally prudent.”
A huge new borrowing — “from hundreds of billions to trillions of dollars over a decade,” as Stevenson notes — is suddenly “fiscally prudent” in the administration’s eyes….
Dionne betrays such stupendous misunderstanding of the issue that the only way to deal with his ignorance is to explain the whole megillah, step-by-step:
1. The cost of Social Security is the cost of the benefits paid out, not the payroll taxes or borrowing required to finance those benefits. There are two basic issues: how much to pay in benefits and how to finance those benefits.
2. Assuming, for the moment, that benefits will be paid to future retirees (today’s workers) in accordance with the present formula for computing benefits — which today’s workers believe is a “promise” they have been made — something must “give” when payroll taxes no longer cover benefits, beginning in 2018.*
3. No matter how you slice it, someone will pay for those future benefits. The question is: who and when? There are three conventional ways to do it:
Raise future workers’ payroll taxes by enough to cover benefits.
Borrow enough to cover benefits, thus shifting the immediate burden from future workers to willing lenders, who are also the “future generations” that “bear the burden” of the debt. The cost of borrowing (i.e., interest) raises the cost of the program a bit, but interest is also income to those who lend money to the government. In other words, borrowing — on balance — doesn’t create a burden, it merely shifts it, voluntarily.**
Raise taxes and borrow, in combination.
4. There’s an “unconventional” way to deal with the looming deficit in Social Security: invest payroll taxes in real assets (i.e., stocks, corporate bonds, mortgages). Why? Because money invested in real assets yields a real return that’s far higher than the “return” today’s workers will receive on their payroll taxes. (See, for example, figure 2 in this paper.) There are three ways to “privatize” Social Security by investing in real assets:
Abolish Social Security and make individuals responsible for their retirement (perhaps with a minimal “safety net” funded by general taxation).
Let the government do it, through a “blind trust” run by an independent agency.
Let individuals do it, through mandatory private accounts.
5. I assume that the first option is off the table, for now, even though Social Security (like so many other government programs and activities) is unconstitutional. Given the large sums of money involved, the second and third options would yield about the same result, on average. I’ll continue by outlining the third option, which is the proposal that has drawn the ire of E.J. Dionne and so many other anti-privatization leftists.
6. Workers would invest some (or all) of their payroll taxes in real assets (private accounts). Those same workers would agree to receive lower Social Security benefits when they retire. (The precise tradeoff would depend on the age at which a worker opens a private account and how much the worker has already paid into Social Security. Workers who are over a certain age — say 50 or 55 — when privatization begins wouldn’t be allowed to drop out, but would receive the Social Security benefits they expect to receive.) That leads to a series of questions and answers:
Q: What happens when the shift of payroll taxes to private accounts results in a deficit, that is, when payroll tax receipts are less than benefit payments? A: The government borrows to make up the difference. (See the discussion of borrowing in point 3 and the second footnote, below.)
Q: What happens to the money invested in private accounts? A: It would belong to the workers who invested it. They’d receive smaller payments from “regular” Social Security, but those smaller payments would be more than made up by the income they’d receive from their private accounts.
Q: When does it all end? A: It would depend on how much workers are allowed to invest in private accounts and how much those private accounts earn. If workers were allowed to invest all of their payroll taxes in private accounts, and if all workers elected to do so, Social Security — as we know it — would wither away. Every worker would have his or her own source of retirement income. That income would come from earnings on real assets, not from taxes paid by those who are then working. And that income would exceed what the retiree would have received in Social Security benefits — even for private accounts invested “safely” in high-grade corporate bonds or mortgage-backed securities.
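The arithmetic behind that answer is simple compounding. Here is a minimal sketch, in Python, of the comparison made in point 4. The contribution amount, career length, and real rates of return are hypothetical illustrations of my own choosing (estimates of the implicit “return” on payroll taxes and of historical market returns vary by study), not figures from any actual proposal.

```python
# A minimal sketch of the compounding argument in point 4 and the Q&A above.
# Every number here is a hypothetical illustration, not a figure from any
# Social Security proposal or from the posts quoted in this exchange.

def future_value(annual_contribution: float, real_rate: float, years: int) -> float:
    """Future value of a level annual contribution compounded at a real rate."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1.0 + real_rate)
    return balance

contribution = 3_000.0  # hypothetical payroll taxes diverted each year
career_years = 40

# Roughly 1-2 percent is often cited as the implicit real "return" on payroll
# taxes, and roughly 3-7 percent as the historical real return on high-grade
# bonds and diversified stocks; treat all three rates as assumptions.
for label, rate in [("implicit Social Security 'return'", 0.015),
                    ("high-grade corporate bonds", 0.03),
                    ("diversified stocks", 0.065)]:
    nest_egg = future_value(contribution, rate, career_years)
    print(f"{label:35s} at {rate:4.1%}: ${nest_egg:,.0f}")
```

On those assumed rates, the same forty years of contributions yield a nest egg several times larger when invested in real assets than when credited at the implicit payroll-tax “return,” which is the crux of the case for diverting taxes to private accounts.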
In sum, whether or not Bush is telling the same “story” now that he told four years ago, there is no shell game of the kind suggested by Dionne, and Gore before him. Dionne (and Gore) are simply unable to grasp the notion that by diverting payroll taxes to real investments, with real returns, no one would be made worse off, and many would be better off. They’re hung up on the borrowing that must take place in the initial stage of privatization, and they overlook the return on that borrowing, namely, higher income for future retirees and lower payroll taxes on future workers. And the threat of borrowing, as I have explained, is a bogeyman, which the economically illiterate use to scare the economically illiterate.
__________ * As I’ve explained here, here, and here, the so-called Social Security trust fund, which won’t be exhausted (on paper) until 2042 [now 2034], is just a myth.
shows that the income of a Korean orphan who was adopted in the U.S. between 1970 and 1980, through a process of random selection, is about the same regardless of the income of the adoptive parents. On the other hand, the income of the biological children of the same parents is highly correlated with the parents’ income; that is, low-income parents tend to produce low-income children, whereas high-income parents tend to produce high-income children….
I went on to say this:
…The obvious implication of these findings is that intelligence (and hence income) is a heritable trait, one that remains differentiated along racial lines (a consistent but controversial finding discussed here, for example). Thus the findings give further evidence, if any were needed, that affirmative action policies — whether government-prescribed or voluntarily adopted — tend to undermine the quality of workplaces and educational institutions. (I am speaking here of the quality of effort and thought, not the value of workers and students as human beings.)
A reader objects — sort of. He begins by saying:
[T]here’s a flaw in your guest blogger’s logic. He takes the adoption study as evidence that intelligence is a heritable trait and thus passed through racial lines (fine). He then says since affirmative action rewards racial minorities who may be less qualified (fine), that affirmative action tends to undermine quality of work.
[H]is conclusion may be correct, but is only tenuously related to the first premise. [H]e seems to be saying that, on average, if you give preference to minorities, the quality of work will suffer, because on average minorities are less intelligent….
Let’s stop right there and take things one step at a time. What I said is that intelligence “is a heritable trait, one that remains differentiated along racial lines (a consistent but controversial finding discussed here, for example).” There is less controversy about the persistence of the racial differential and more controversy about race, per se, being the underlying cause of that differential. For a sample of the controversy, go to the linked article and follow the many links in the article. One of those links leads to a statement by Charles Murray, co-author of the infamous The Bell Curve, who says in a footnote:
Intelligence is known to be substantially heritable in human beings as a species, but this does not mean that group differences are also heritable. Despite our explicit treatment of the issue, it is perhaps the single most widespread source of misstatement about The Bell Curve.
How is it that intelligence is “substantially heritable” and yet “group differences” may not be heritable? Here is Professor Richard E. Nisbett of the University of Michigan, a noted opponent of the notion of inherent racial disparity:
Estimates of heritability within a given population tell us nothing about the degree to which differences between populations are genetically determined. The classic example is an experiment in which a random mix of wheat seeds is grown on two different plots of land. Within either plot, the environment is kept uniform, so the height of the different plants is largely or entirely genetically-determined. Yet the average difference between the two plots is still entirely environmental, because the mix of genotypes in each plot is identical….
In other words, there’s a school of thought that a racial group that starts out “behind” because of environmental causes (e.g., nutrition and exposure to education and other experiences that “stretch” the mind) stays behind, even as the average intelligence of all racial groups seems to advance over time (a phenomenon known as the Flynn effect). In any event, inter-racial differences in intelligence seem to be real and persistent, and racially related genetic causes cannot be ruled out. (Again, refer to this article.)
The distribution of those differences does not follow the pattern supposed by the reader, who goes on to say this:
[I]f I understand the studies correctly, they say that each race has members that represent the full spectrum of intelligence, and that it’s only on average that the scores are lower.
I’m not sure that the reader correctly understands the distribution of intelligence and its implications for the labor market. Let’s say there’s a pool of 200 “typical” black applicants and 1,200 “typical” white applicants for a “typical” job that requires an IQ of 100. (I use 200 blacks and 1,200 whites because the 1:6 ratio reflects the relative numbers of blacks and whites in the U.S. I take an IQ of 100 because that’s about the mean for whites, whereas the mean for blacks is about 85. IQs are assumed to be normally distributed around those means, with a standard deviation of 15 IQ points.) Now, of the “typical” applicants for this “typical” job, only 32 (16 percent) of the blacks would have an IQ of at least 100, whereas 600 (one-half) of the whites would have an IQ of at least 100. Thus the ratio of qualified blacks to qualified whites would be about 1:19 for the “typical” job.
Bump it up a notch and set the intelligence qualification at an IQ of 115. Then, only 5 (2.5 percent) of the 200 black applicants would qualify, whereas 192 (16 percent) of the 1,200 white applicants would qualify — a ratio of about 1:38. In other words, it gets harder and harder to find qualified blacks as jobs require more intelligence (not to mention specific kinds of education and training). So, it’s irrelevant that there are some blacks at the higher end of the spectrum of intelligence. Why? Because there are proportionately few of them, and fewer still who have the requisite education and training for the kinds of jobs that are associated with high intelligence (e.g., astrophysics, computer engineering, advanced mathematics).
To look at it another way, take 200 randomly selected blacks and 200 randomly selected whites: 100 of the blacks and 168 of the whites would have an IQ of at least 85 (a ratio of 1:1.7); 5 of the blacks and 32 of the whites would have an IQ of at least 115 (a ratio of 1:6.4).
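For readers who want to check those figures, here is a minimal sketch, in Python, of the same tail-of-the-distribution arithmetic, using only the stated assumptions (means of 100 and 85, a standard deviation of 15). The helper functions are my own illustrative names, and the small differences from the ratios quoted above reflect rounding in the prose.

```python
# A sketch of the tail-of-the-distribution arithmetic above, under the stated
# assumptions: IQ normally distributed, mean 100 for whites and 85 for blacks,
# standard deviation 15 for both. The helper functions are illustrative.
from math import erfc, sqrt

def share_above(threshold: float, mean: float, sd: float = 15.0) -> float:
    """Fraction of a normal(mean, sd) population scoring at or above threshold."""
    return 0.5 * erfc((threshold - mean) / (sd * sqrt(2.0)))

def counts(n_black: int, n_white: int, threshold: float) -> tuple[float, float]:
    """Expected number in each pool at or above the given IQ threshold."""
    return (n_black * share_above(threshold, 85.0),
            n_white * share_above(threshold, 100.0))

# The 200-black, 1,200-white applicant pools from the job example:
for threshold in (100, 115):
    b, w = counts(200, 1_200, threshold)
    print(f"Applicant pools, IQ >= {threshold}: "
          f"{b:5.1f} blacks, {w:6.1f} whites (about 1:{w / b:.0f})")

# The 200-and-200 random samples from the paragraph above:
for threshold in (85, 115):
    b, w = counts(200, 200, threshold)
    print(f"Random samples, IQ >= {threshold}: "
          f"{b:5.1f} blacks, {w:6.1f} whites (about 1:{w / b:.1f})")
```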
The black-white difference in average intelligence is meaningful, despite what the reader seems to think, because it reflects a significant difference in the distribution of intelligence. University slots and jobs that require at least average (white) intelligence can’t be filled in proportion to the number of blacks in the population, or in proportion to the number of black applicants, without tending to dilute the quality of universities and workplaces. (Again, I am speaking of the quality of effort and thought, not the value of workers and students as human beings.)
That leads me to affirmative action, about which the reader says:
Thus [because there are some blacks at the high end of the spectrum of intelligence], affirmative action can be structured in such a way as to give special preference to the higher-achieving members of any minority, who face the difficult task of not being stereotyped by the lower scores of their fellow minorities. I.e., If a white person and a black person have the same or nearly the same qualifications, then you pick the black person.
Yes, as I have just shown, there are blacks at the high end of the spectrum of intelligence, and those blacks are courted assiduously by universities and employers. Why? Because universities and employers are anxious to demonstrate their commitment to affirmative action, diversity, racial equality, or whatever you want to call it. What better way to do that than to admit or hire the “best and brightest” blacks? It’s a relatively risk-free proposition for universities and employers. What happens to those blacks who aren’t in the higher reaches of the spectrum of intelligence? Well, that’s where affirmative action, as most Americans know it, kicks in.
Here’s how it seems to work at universities: Blacks get preferential treatment for being black, to the extent that universities can concoct and defend affirmative-action plans that allow them to give preferential treatment. Sometimes a university fails (as in Gratz v. Bollinger), and sometimes it succeeds (as in Grutter v. Bollinger). But if there’s a prevailing tendency among the left-dominated universities of the United States, it’s to allow blacks to meet a lower standard of intelligence, thus displacing some whites who would have made better students and, eventually, better employees. So, at universities, affirmative action isn’t just about “picking the black person” who has “the same or nearly the same qualifications.”
What about affirmative action in the workplace? Here, I speak from long experience. (See my credentials.) Affirmative action, in theory, is supposed to be about hiring and promoting regardless of race, among other attributes. As an example, here’s the Department of Labor’s summary of its guidelines for federal contractors and subcontractors:
Each contracting agency in the Executive Branch of government must include the equal opportunity clause in each of its nonexempt government contracts. The equal opportunity clause requires that the contractor will take affirmative action to ensure that applicants are employed, and that employees are treated during employment, without regard to their race, color, religion, sex or national origin….
It doesn’t say “If a white person and a black person have the same or nearly the same qualifications, then you pick the black person,” as the reader would have it. What it says, in effect, is this: Faced with two equally qualified candidates for hiring or promotion, you can’t discriminate against a black person or a person who belongs to any of the other protected groups. To act in the way that the reader suggests would amount to blatant discrimination in favor of black job candidates over white job candidates, and that’s facially illegal, even though universities sometimes get away with similar discrimination in the name of “diversity.”
Nevertheless, what happens, in practice, is what the reader suggests, and then some: If a black person seems to have something like the minimum qualifications for a job, and if the black person’s work record and interviews aren’t off-putting, the black person is likely to be hired or promoted ahead of equally or better-qualified whites. Why?
Pressure from government affirmative-action offices, which focus on percentages of minorities hired and promoted, not on the qualifications of applicants for hiring and promotion.
The ability of those affirmative-action offices to put government agencies and private employers through the pain and expense of extensive audits, backed by the threat of adverse reports to higher ups (in the case of government agencies) and fines and the loss of contracts (in the case of private employers).
The ever-present threat of complaints to the EEOC (or its local counterpart) by rejected minority candidates for hiring and promotion. Those complaints can then be followed by costly litigation, settlements, and court judgments.
Boards of directors and senior managers who (a) fear the adverse publicity that can accompany employment-related litigation and (b) push for special treatment of minorities because they think it’s “the right thing to do.”
Managers down the line who learn to go along and practice just enough reverse discrimination to keep affirmative-action offices and upper management happy.
The following case, about an employee who was victimized by reverse discrimination, illustrates just about everything I’ve said about the practice of affirmative action in the workplace:
A Federal Aviation Administration employee recently settled an employment discrimination case where he said he was passed over for promotions because of his gender and race.
Michael C. Ryan of Toms River, N.J., who worked at an FAA research and development facility as a GS-14 manager, said that between 1995 and 1997 he was denied eight promotions to GS-15.
After complaining to the FAA, Ryan went to the Equal Employment Opportunity Commission. Nine years later, a formal consent order gives Ryan, a 28-year FAA worker, the managerial and supervisory position he wanted. The order also begins a three-year agencywide policy review intended to reform FAA’s affirmative action policies.
Ryan, a white male, said he was qualified for the promotions he applied for at the William J. Hughes Technical Center in Atlantic City, N.J., but was passed over by people with less experience because he was not a woman or a minority. During the trial, Ryan’s attorney, Hanan Isaacs, argued that four of the seven minority candidates who were promoted were not selected using merit principles, including one person that Ryan trained who had 13 years less seniority.
According to Isaacs, the 22-day trial showed that the candidates were promoted ahead of Ryan so that minority and women promotion quotas could be met. Isaacs said FAA’s 1988 affirmative action plan, which called for “a workforce that looks like America by 2000,” started to go afoul when it compared the racial and gender composition of technical positions to the general population rather than to the minority composition of the comparable workforce.
Isaacs said that an unwritten but well-publicized “50-50” policy required FAA managers to promote women and minorities at least 50 percent of the time in order to get career and financial incentives. This type of affirmative action has no end-plan and perpetually discriminates against nonminorities, Isaacs argued.
Ryan was offered a settlement a year ago that would have given him back pay – which could total about $100,000 – and the promotion, but Isaacs said Ryan refused because he wanted to see the agency’s policy change.
John G. Larsen, an FAA senior policy analyst, testified during the trial that the agency was not in compliance with the law after 1992 and that its affirmative action program would “almost always come up with the appearance of under-representation.”
Larsen, a 36-year FAA employee, said that after a 1995 Supreme Court ruling which found that preferential treatment based on race almost always is unconstitutional, even when it is intended to benefit minority groups that suffered injustices in the past, the agency’s affirmative action policies became illegal.
He said the FAA refused to conduct a review requested by the Clinton administration following the ruling that would have brought the agency back into compliance with affirmative action laws. “The culture of the agency was one, in my opinion, that did not entertain challenges or disagreement … and nothing changed,” Larsen said.
The agency did not admit liability in the settlement, but did agree to start a three-step comprehensive review of its programs and policies on hiring and promotion to put them into compliance.
A Justice Department spokesman said the department was happy to resolve the nine-year-old case. He said that with the assistance of the court, the department was able to reach a settlement that is fair to both parties and upholds the FAA’s commitment to ensure a workplace free of unlawful discrimination of any form.
That’s the real, illegal, world of affirmative action. And here is the price tag:
Because of affirmative action — and legal actions brought and threatened under its rubric — employers do not always fill every job with the person best qualified for the job. The result is that the economy produces less than it would in the absence of affirmative action….
[A]ffirmative action reduces GDP by about 2 percent. That’s not a trivial amount. In fact, it’s just about what the federal government spends on all civilian agencies and their activities — including affirmative action….
Moreover, that effect is compounded to the extent that affirmative action reduces the quality of education at universities, which it surely must do. But let us work with 2 percent of GDP, which comes to about $240 billion a year, or more than $6,000 a year for every black American….
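Those last two figures are easy to verify with a back-of-the-envelope calculation. In this Python sketch, the GDP and population inputs are round assumptions of roughly 2004 vintage, supplied by me rather than taken from the quoted post.

```python
# A back-of-the-envelope check of the cost figures quoted above. Both inputs
# are assumptions (roughly 2004 values), not numbers from the quoted post.
gdp = 12_000_000_000_000       # assumed U.S. GDP: about $12 trillion
black_population = 37_000_000  # assumed black American population: about 37 million

cost = 0.02 * gdp              # the estimated 2 percent of GDP
per_capita = cost / black_population

print(f"2 percent of GDP: ${cost / 1e9:,.0f} billion a year")
print(f"Per black American: ${per_capita:,.0f} a year")
```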
So, the reader has it about right when he says, in his closing sentence,
It may be true that in practice affirmative action tends to downgrade quality….
But he glosses over the high price we pay for affirmative action, in dollars and divisiveness. And then he closes with this:
…but this [downgrading of quality] doesn’t follow necessarily from the heritability premise, and I find [the guest blogger’s] attempt to use this to bolster his argument inflammatory and intellectually dishonest.
The downgrading of quality — and the price we pay for that — follows directly from the demonstrable premise that affirmative action — as it is practiced — puts race ahead of quality in the selection of students and workers. Putting race first affects quality because of the unequal distribution of intelligence between the races, as intelligence is usually measured. The cause of the unequal distribution of intelligence may be controversial, but as far as I can tell there is no settled science in the matter. The notion of inherent racial differences in intelligence is still on the table, and it carries with it stark implications for the long-term success of blacks in an economy that increasingly demands more intellectual skills and fewer physical skills.
Therefore, it isn’t “intellectually dishonest” to raise the issue of inherent racial differences in intelligence. Nor is it “inflammatory,” except to those who — unlike me — are unwilling to review dispassionately the evidence on all sides of the issue. But dispassion is hard to come by in any discussion of race or affirmative action. That is why I offered my “modest proposal” — which I mean to be taken seriously. It cuts through all the cant and controversy about race, intelligence, and affirmative action. Here it is, again:
…End affirmative action and give every black American an annual voucher for, say, $5,000 (adjusted annually for inflation). The vouchers could be redeemed for educational expenses (tuition, materials, books, room and board, and mandatory fees). Recipients who didn’t need or want their vouchers could sell them to others (presumably at a discount), give them away, or bequeath them for use by later generations. The vouchers would be issued for a limited time (perhaps the 25 years envisioned by Justice O’Connor in Grutter), but they would never expire.
That settles affirmative action, reparations, and school vouchers (for blacks), at a stroke….
* * *
Me
The State of Nature
Is the “state of nature” literal or metaphorical? I have always thought it metaphorical, but I may have to think again after reading a review by Denis Dutton of Paul H. Rubin’s Darwinian Politics: The Evolutionary Origin of Freedom. Here’s a relevant sample of Dutton’s very long review:
The scene of evolution is the Environment of Evolutionary Adaptedness, the EEA, essentially the Pleistocene, the whole, long period lasting from 1.6 million years ago up until the shift to the Holocene with the invention of agriculture and large settlements 10,000 years ago. Our present intellectual constitution was achieved by about 50,000 years ago, or 40,000 before the Holocene….It was in the earlier, much longer period that selective pressures created genetically modern humans….
Pleistocene evolution is often associated with the savannahs of East Africa, but human evolution occurred in many places out of Africa — in Europe, Asia, and the Near East. It was going on in the Ice Ages and during interglacial periods. The wide-ranging, hunter-gatherer species we became did not evolve in a single habitat, but adapted itself to all sorts of environmental extremes….It is all of these forces acting in concert that eventually produced the intensely social, robust, love-making, murderous, convivial, organizing, squabbling, friendly, upright walking, omnivorous, knowledge-seeking, arguing, clubby, raiding-party, language using, versatile species of primate we became: along the way to developing all of this, politics was born.
Rubin begins with that bracing idea that the often-coercive political control placed on human beings since the advent of cities is characteristic only of the Holocene. The human desire for freedom, he argues, is an older, deeper prehistoric adaptation: for most of their existence, human beings have experienced relative freedom from political coercion. Many readers will find Rubin’s thesis counterintuitive: we tend to assume that political liberty is a recent development, having appeared for a while with the Greeks, only to be reborn in the eighteenth century, after millennia of despotisms, for the benefit of the modern world. This is a false assumption, a bias produced by the fact that what we know best is recorded history, those 500 generations since the advent of cities and writing.
Our more durable social and political preferences emerged in prehistory, during the 80,000 hunter-gatherer generations that took us from apes to humans….
…In what follows, I’ll review a few basic components of hunter-gatherer political structures as described by Rubin.
Group size. Hunter-gatherer bands in the EEA were in the range of 25 to 150 individuals: men, women, and children….
This group size for hunting parties remains a persistent unit of organization even in mass societies of millions of people — or, say, industrial firms or college faculties of thousands. It is in fact the default “comfortable” size for human working groups….We can try as a thought experiment to imagine alternative default group sizes: under different conditions….In our actual world, however, hunting with two hundred people would be an organizational challenge, if not a nightmare, as are most working parties of that size: that is why working groups such as company boards, university committees, and fielded soccer, football, and baseball teams tend to be hunting-band size.
Dominance Hierarchies. The formation of hierarchies, common among animals and found in all primates, is another trait universal in human societies. In the EEA, Rubin surmises, social life was generally organized by so-called dominance or pecking-order principles….
Dominance hierarchies of the Pleistocene did not feature strong coercion from the top of the order, what we might term dictatorship, but required cooperation down the line….A desire for freedom, then, for relative personal autonomy within the group, is a powerful Pleistocene adaptation pitted against extreme coercive hierarchy….
Envy in a zero-sum society. One difference between a hunter-gatherer mentality and understandings needed today involves the nature of hierarchy itself. Hierarchies in the EEA evolved for a zero-sum resource environment: whatever was available was divided according to power or status. Trading in such circumstances is a zero-sum game: every bit of resource one person or family owns is something another family does not own. This default Pleistocene view of a zero-sum economy dogs our thinking today and results for the modern world in two undesirable features. First, we are prone to envy, to feeling dispossessed or cheated by the mere fact that others own what we do not own….Second, zero-sum thinking….makes it hard for us easily to understand how trade and investment of capital can increase the sum total of wealth available to all. We are therefore not well adapted to make sense of today’s economic system….
Risk and welfarism. Rubin speculates that in the EEA, resource availability fluctuated unpredictably (owing to weather change, disease, and natural events beyond a group’s control). Skill and hard work could help to meet these threats when they occurred, but individuals still would be “subject to significant variations in income” that could be fatal. Such risks, Rubin argues, predisposed humans to look for ways to insure survival through periods of hardship. An evolved moral preference for resource sharing is one form of such insurance, one way of handling risk. Societies of families, which is what we were in the EEA, are generally risk-averse….
Such… conservatism goes along with two other impulses. The first is our impulse to share as a form of insurance for lean times. The second, intrinsically connected with envy, is our desire to knock down pecking-order hierarchies, to foil the concentration of too much wealth at the top of the order. The first tendency, part of ancestral altruism, is a source of welfare in the modern state, but so is the second, which inclines us to tax the rich: an impulse toward income redistribution for the poor is a deeply Pleistocene adaptation, according to Rubin.
These preferences produce much tension in modern polity….
Youth, defense, and monogamy. Sports teams gather in stadiums over the world to engage in combat, cheered by their home fans….Despite the odd, wasteful way organized team sport consumes time and resources for very little utility beyond amusement, it is a human universal. This seems less strange, Rubin says, if we consider two aspects of sport: “First, the actions of the players are closely related to what would have been military actions in the evolutionary environment. Running, throwing projectiles (balls), kicking, hitting with clubs (bats, hockey sticks), and knocking down opponents — all of these actions are direct modifications of ancestral actions that would have been related to defense from others or offense against them.” The second aspect gets down to the evolutionary use of strong, aggressive young men: “the lives of our ancestors often depended on the strength and prowess of their young males….”
…Young fighters have a place in a general pattern of thinking in the Pleistocene: “human tastes for defense, and sometimes offense, are natural. . . . Pacifism is not a belief that would have been selected for in the EEA.”…
Untenable libertarianism. Rubin’s summary of the political impulses and preferences of the Pleistocene presents a mixed and contradictory picture. This makes it possible for most political theorists to find inspiration for a favored point of view somewhere in hunter-gatherer psychology. Looking at life in the EEA, fascists and militarists can take heart, and so can Rawlsian egalitarians, Peter Singer socialists, and liberals of either the free-market or welfarist stripe. Still, the big picture for Rubin shows behavioral tendencies that we ignore at our peril. One, for example, is that as practiced in recent U.S. history, affirmative action programs are liable to create social friction and undermine the legitimacy of the state, perhaps outweighing benefits of such programs in the long term….
Before anyone jumps to the conclusion that Rubin is using evolutionary psychology merely to support his own political predispositions (an antipathy to affirmative action being one of them), we should note what he says about libertarianism. Rubin confesses that libertarianism — the minimal interference by the state in the life of the individual — appeals to him personally: “in a libertarian regime, government would define and protect property rights, enforce contracts, and provide true public goods, but would do nothing else.” That is obviously not what people want, or there would have been more libertarian governments, Rubin says. Libertarianism was not a viable strategy for the EEA. The actions of individuals produce by-products that affect whole communities, and “we have evolved preferences to control these actions.” We are genetically predisposed, it seems, “to interfere in the behavior of others,” even where the behavior has little demonstrable adverse effect on a community….We are fundamentally meddlesome creatures.
Rubin speculates that this impulse to control our fellows, even in matters that have little or no material effect on living standards or resource allocation, is an adaptation designed to increase group solidarity….
Darwinian Politics in its way exemplifies Kant’s famous remark that “from the crooked timber of humanity no truly straight thing can be made.” It is not, to play on Kant’s metaphor, that no beautiful carving or piece of furniture can be produced from twisted wood; it is rather that whatever is finally created will only endure if it takes into account the grain, texture, natural joints, knotholes, strengths and weaknesses of the original material. Social constructionism in politics treats human nature as indefinitely plastic, a kind of fiberboard building material for utopian political theorists. Evolutionary psychology advises that political architects consider the intrinsic qualities of the wood before they build….
If Dutton correctly interprets Rubin, and if Rubin is on the right track, it’s no wonder that libertarianism seems to succeed only at the margin. For every success (e.g., deregulation of airlines and telephone service, abolition of the draft) there have been many countervailing and costly failures (e.g., Social Security, Medicare, excessive environmentalism, campaign-finance “reform”, affirmative action, gross abuse of the Commerce Clause, and on and on). And that may be the best we can hope for. The instincts ingrained in a long-ago state of nature may be far more powerful than libertarian rationality.
* * *
Me
Libertarianism and Conservatism
Timothy Sandefur said this in a recent post:
…As I argued just the other day, libertarianism is a variety of liberalism. Its primary concern is with the liberation of the individual. Conservatism, properly understood—I mean, real, honest to god conservatism of the Russell Kirk, Richard Weaver, Robert Nisbet variety—is nothing like this. It is about the stability of society….
…Some large libertarian segments, most notably Reason magazine, have simply given up on the right wing, and are overtly courting the left, hoping that social issues will draw the left into greater embrace of economic freedom. I’m really not sure whether that strategy will work—I think the left is as resolutely hostile to individualism as the conservatives are—but do we really have anything to lose?…
I’ve been pondering that question, because I am rather a Hayekian, whereas Mr. Sandefur is an Objectivist. (Libertarianism is a big tent, isn’t it?) I, too, am concerned with the liberation of the individual — but I view a stable society as a necessary condition of liberation. Stability helps to ensure that we keep the liberation we’ve gained as individuals, without sacrificing other values, such as the prosperity we enjoy because of somewhat free markets and the security we enjoy because we remain resolute about fighting criminals and terrorists.
Of course, there is such a thing as too much stability. For example, a society that frowns on actions that do no harm to others (e.g., a white person’s trading with or marrying a black person) and then uses the government to bar and penalize such actions is not conducive to liberty.
But efforts to secure personal liberation can be destabilizing, and even damaging to “liberated” groups, when “liberation” proceeds too swiftly or seems to come at the expense of other groups (e.g., the use of affirmative action to discriminate in favor of blacks, the insistence that marriage between man and woman is “nothing special” compared with homosexual marriage). For, as I said here, “[t]he instincts ingrained in a long-ago state of nature may be far more powerful than libertarian rationality.”
Where does that leave libertarians? Well, it leaves this libertarian rather more sympathetic to conservatives, who are more reliable than leftists about defending life and economic liberty. As I said here:
…Social freedom has advanced markedly in my lifetime, in spite of rearguard efforts by government to legislate “morality.” Government control of economic affairs through taxation and regulation has advanced just as markedly, especially under Democrats.
In sum, libertarians may be repulsed by the moralists who have taken over the Republican Party, but that moralizing, I think, is a lesser threat to liberty than regulation and taxation. For that reason — and because Republicans are more likely than Democrats to defend my life — I’m not ready to give up on the GOP.
When I say “defend my life,” I mean on city streets as well as overseas.
So, yes, in answer to Mr. Sandefur’s question, I think libertarians have a lot to lose by throwing in with leftists. And they probably have nothing to gain that won’t be gained anyway, as society proceeds — in its glacial way — to liberate individuals from the bonds of repressive laws.
Why should libertarians make a Faustian bargain with the left to achieve personal liberation — which, with persistence, will come in due time — when the price of that bargain is further economic enslavement and greater insecurity?
* * *
Me
No Way Out?
The three branches of the federal government, individually and severally, have been harassing the Constitution since 1789, and raping it since the New Deal. When the legislative and executive branches aren’t conspiring to infuse new meaning into the Constitution, the judicial branch seems to take up the slack. What to do?
Secede and form a more libertarian union? Even if secession were a realistic option, the alternatives are stark: the quasi-theocracy of the Republic of Red or the quasi-socialist paradise of the Republic of Blue. The idea of “taking over” a State, propounded by the Free State Project, seems to be going nowhere. And besides, what’s the good of taking over a State when the central government already has usurped most of the powers of the States and many of the liberties of their citizens?
Nullify disagreeable statutes and court rulings? That’s been tried, but it’s no more likely to succeed than secession. Anyway, nullification is a recipe for legal chaos. It would yield lucrative, lifetime employment for yet another army of lawyers, who would advise individuals and businesses with interests in several States as to their rights and obligations, and who would represent those individuals and businesses in endless litigation.
Strip courts of jurisdiction or invoke the doctrine of departmentalism? Those might be good solutions if courts were the only problem. But jurisdiction stripping and departmentalism, to the extent they’re constitutionally valid, leave us defenseless against legislative and executive fiat. The courts aren’t entirely useless; it’s just that you never know when they’re going to stop the rape of the Constitution or join in.
Promote federalism? Well, that’s where the Supreme Court could help the cause of liberty. But to get there, the president must nominate the right judges and the Senate must confirm them. I don’t think that the left is really ready to accept devolution of power to the States (even to Blue States), especially if it seems likely that a federalism-minded Supreme Court would overturn Roe v. Wade.
That’s my list of not-so-serious and serious options for restoring the law to something resembling the meaning of the Constitution. Promoting federalism seems the most promising option, but it requires an unlikely (unholy?) alliance between left and right.
Thoughts, anyone?
* * *
Me
Rights and Obligations
When it comes to the origin of rights,* I’m with Maxwell Borders, who — in the course of a long, delightful post at Jujitsui Generis — says this in reply to another blogger:
…“Real rights are conferred by political institutions” is not the same as saying “real rights are conferred by a sovereign.” The former expresses the complex relationship in a social contract between agents, their laws, and their government. So, yes, they are both conferred and protected by such institutions, unless you are one of these anarcho-capitalists who lives in a fantasy world where private Team Americas will go off and protect us from the baddies….
I would put it just a bit differently: Human beings — having a primordial yearning for rights — form a political institution and adopt a constitution for the purpose of defining and securing those rights, as they define them through bargaining.** The U.S. Constitution, as amended, therefore amounts to a contract. (It’s an unusual sort of contract, to be sure, in that breaches are hard to remedy and those who inherit it can amend it only by an arduous process.)
A contract that grants rights usually assigns obligations, as well. What obligations does the U.S. Constitution implicitly or explicitly assign to Americans, as citizens? Here’s my list, in no particular order:
Obey the law, generally
Pay taxes
Accept the money of the United States as legal tender
Respect patents, copyrights, and other recognized forms of intellectual property
Refrain from rebellion and insurrection
Serve in the armed forces (if the law requires it)
Refrain from committing treason
Serve on juries
Refrain from taking anyone into slavery or involuntary servitude
The list doesn’t seem onerous, until I think about some of the laws we must obey and the burden of taxation we bear. That line of thinking enables me to understand what drove a brave band of men to rebel against British rule, create a new nation, and establish the Constitution — which has been so badly breached.
__________ * I’ve addressed the nature and origin of rights in several posts at Liberty Corner: here, here, here, here, here, and here. (Please overlook the somewhat sloppy treatment of natural rights in the earlier posts.)
** Of course, things don’t always work out as intended. See here, here, and here, for example.
* * *
Me
Farewell and Thanks
I’ve posted my final post as a guest at Freespace. I’m truly grateful to Timothy Sandefur for inviting me to guest-blog, and for commenting so generously and graciously on several of my offerings.
Being a guest carries with it an obligation to be on one’s best behavior. (That’s the way I was brought up.) In this instance, being on my best behavior meant striving to be substantive, lucid, and provocative. I say “provocative” because the kind of blogging that Mr. Sandefur and I do is meant to promote the clash of ideas. For, the free clash of ideas advances truth — and truth does set us free.
I hope that I have been substantive, lucid, and — above all — provocative. You’ll find plenty more to provoke thought at my home blog: Liberty Corner. Please visit me there.
* * *
Me
An Emerging Left-Right Consensus?
Timothy Sandefur, in his recent response to this post, said:
Now that Dole and the Bushes have almost perfected the elimination of the Goldwater faction of the GOP…there is an ever-diminishing role for us [libertarians] in that party. Some large libertarian segments, most notably Reason magazine, have simply given up on the right wing, and are overtly courting the left, hoping that social issues will draw the left into greater embrace of economic freedom. I’m really not sure whether that strategy will work—I think the left is as resolutely hostile to individualism as the conservatives are—but do we really have anything to lose? “Libertarian” has become an epithet within the controlling faction of the Republican Party. I for one am sick of it, and were it not for the war, as I’ve said, I would have voted Democrat this year. And I suspect at least some leftists will be drawn to our side if we tell our story right: if we show that the liberation of previously oppressed people must include economic liberty….
Perhaps Mr. Sandefur is on to something. Here’s Jonah Goldberg, writing at NRO yesterday:
Federalism! It’s not just for conservatives anymore! That’s right. All of a sudden, liberals have discovered federalism and states’ rights. I discovered this while listening to a recent episode of NPR’s Talk of the Nation, in which host Neal Conan and various callers discussed the idea as if some lab had just invented it….It’s not surprising that liberals would suddenly be interested in federalism, given that a sizable fraction of them think George Bush is an evangelical mullah, determined to convert America to his brand of Christianity. As conservatives have known for decades, federalism is the defense against an offensive federal government….
The problem with the last half-century of public policy is that liberals have abused the moral stature of the civil rights struggle to use the federal government to impose their worldview — not just on racial issues but on any old issue they pleased. But now, all of a sudden, because they can’t have their way at the federal level anymore, the incandescently brilliant logic of federalism has become apparent: Liberals in blue states can live like liberals! Wahoo! (Whereas, according to liberals, conservatives could never have been sincere when they talked about states’ rights; surely, they meant only to “restore Jim Crow” or some such.)
The bad news, alas, is that conservative support for federalism has waned at exactly the moment they could have enshrined the ideal in policy. Just this week, the Bush administration argued against California’s medical-marijuana law. Bush is also moving ahead toward a constitutional prohibition on gay marriage (which many conservatives, including National Review, support). After decades of arguments that Washington should stay out of education, Bush has made it his signature domestic issue.
It’s not that the White House doesn’t have good arguments for its policies. But it is impossible to restore federalism unless you start by allowing states to make decisions you dislike. Otherwise, it’s not federalism, it’s opportunism.
If large numbers of liberals (or leftists, as I prefer) begin to understand that a powerful federal government can do things they don’t like — as well as things they like — those leftists might just get on board with federalism. I imagine there are still enough pro-federalism conservatives out there to forge a formidable, pro-federalism coalition.
Now, federalism isn’t libertarianism, by any means. Some States might have strict gun-control laws and other States might have none at all, for example. But, to the extent that individual States can’t repeal the Bill of Rights and related law, federalism strikes me as a good second-best to the present regime, in which Washington seems willing and able to micro-manage almost all social and economic activity.
Libertarian purists argue that government should have almost no power. Libertarian pragmatists argue that government power should be devolved to the lowest practical level. The pragmatists’ case is the better one, given that the urge to regulate social and economic practices is especially strong where people (and votes) are concentrated….
…City dwellers prefer more government because they “need” more; country folk feel less “need” for government because they don’t rub up against each other as much as city dwellers.
Thus the ultimate argument for devolution: Push government functions to the lowest practical level and allow citizens to express their preferences by voting with their feet.
To extend the caricature, those who like guns and oppose abortion can move to Texas, and those who hate guns and approve abortion can move to New York. A typical Austinite (which I am not) might prefer New York’s policies but Austin’s weather. Well, it’s a tough choice, but at least it’s a choice.
ADDENDUM: Jesse Walker, writing at Tech Central Station on November 8 (“The War Between the Statists”) offered this bit of wisdom about federalism:
…The authoritarian conservative wants to maintain the old taboos. The authoritarian liberal wants to introduce some new ones, and he’s had a lot more success. The religious right may despise homosexuality and pornography, but the gay movement is thriving, despite last week’s losses, and porn is more freely available than ever before.
The liberal puritans, by contrast, are riding high in the media and in the courts. For many Americans, the Democrats are the party that hates their guns, cigarettes, and fatty foods (which is worse: to rename a french fry or to take it away?); that wants to impose low speed limits on near-abandoned highways; that wants to tell local schools what they can or can’t teach. There is no party of tolerance in Washington — just a party that wages its crusades in the name of Christ and a party that wages its crusades in the name of Four Out Of Five Experts Agree. Sometimes they manage to work together. I say fie on both. Since Election Day, a series of satiric proposals for blue-state secession have been floating around the Internet. Here’s an idea for liberals looking for a more realistic political project: Team up with some hard-core conservatives and make a push for states’ rights and local autonomy. If you have to get the government involved in everything under the sun, do it on a level where you’ll have more of a popular consensus. Aim for a world where it won’t matter what Washington has to say about who can marry who and whether they can smoke after sodomy….
* * *
Sandefur
Mr. X and Douglas
There are many reasons to object to Mr. X’s view that rights are conferred by “political institutions.” One of the primary objections is the circularity of this notion. If rights originate in “a social contract between agents, their laws, and their government,” or from “bargaining” through the “political institution[s]” that the people create “for the purpose of defining…[their] rights,” then what justifies them in doing so? On what grounds do they presume to engage in such bargains, or create the institutions where such bargaining can take place? Pursuing this “bargaining” analogy, consider a stock market. It is legitimately created by people who own assets which they wish to trade. But if rights are not pre-political, on what grounds do the people create the “market” through which they bargain for the creation of rights? X, who calls himself a Hobbesian, would probably answer, mere physical force—which reveals the fact that for the Hobbesian, might makes right, and there is simply no getting around that. If X acknowledges that he believes that might makes right, that’s fair enough, but then he must accept all the problems that inescapably attend that belief: that is, that in a society where might makes right, Josef Stalin cannot be blamed for failing to provide a marketplace where people might bargain for the “creation” of their rights, because they do not have a right to such a marketplace in the first instance.
X insists that believing that rights are created by political institutions is not the same thing as believing that rights are created by sovereigns. (By the way, Don Boudreaux has a great post here pointing out the similarities between that view and creationism.) But while it is true that the former view is more sophisticated, it commits the same wrong: it assumes that morality is whatever the authority says it is: that there is no moral limit on political behavior. And if there is no moral limit on political behavior, and since political behavior is not distinguishable on principle from individual behavior, then there is nothing the state cannot rightfully do. And this is the conclusion that X would call libertarian?
In 1858, Abraham Lincoln and Stephen Douglas were debating the issue of the expansion of slavery into the Western territories. Douglas defended a proposition he called “popular sovereignty,” which meant that the white citizens in these territories (Kansas and Nebraska) should be allowed to decide by vote whether to have slavery or not. Lincoln vigorously objected to this, insisting that no majority ever had the right to enslave people. In Lincoln’s view, there were moral limits on what sort of political bargaining the people could engage in—he attacked what he called Douglas’ “‘gur-reat pur-rinciple’ that ‘if one man would enslave another, no third man should object,’ fantastically called ‘Popular Sovereignty.’” In Douglas’ view, such limits were irrelevant, because “the people” did not include blacks. As Lincoln noted,
Judge Douglas frequently, with bitter irony and sarcasm, paraphrases our argument by saying “The white people of Nebraska are good enough to govern themselves, but they are not good enough to govern a few miserable negroes!!”
Well I doubt not that the people of Nebraska are, and will continue to be as good as the average of people elsewhere. I do not say the contrary. What I do say is, that no man is good enough to govern another man, without that other’s consent. I say this is the leading principle—the sheet anchor of American republicanism. Our Declaration of Independence says:
“We hold these truths to be self evident: that all men are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty and the pursuit of happiness. That to secure these rights, governments are instituted among men, DERIVING THEIR JUST POWERS FROM THE CONSENT OF THE GOVERNED.”
Now, X says that the American Constitution assigns us obligations in addition to “granting” us rights: among these obligations are paying taxes, obeying the military draft, and refraining from rebellion. On what grounds does the Constitution assign these obligations? What moral right does it have to impose these upon us? The mere fact that it exists, apparently—that is, he denies that it is based on pre-political notions, such as equality or consent: he has denied the sheet anchor of American Republicanism. So how is X to object when the “political institutions” which create our rights are drawn in such a way as to exclude blacks, or other politically unpopular minorities from the “bargaining” process? X has (I hope, unknowingly) adopted the view of Stephen Douglas, which is as far from libertarianism as one can go. True libertarianism reveres the freedom of the individual. X, however, has adopted a principle that reveres the freedom of states. It is not neo-libertarian; it is paleo-conservative.
* * *
Me
Sandefur and God
Timothy Sandefur has responded to “Rights and Obligations”…. I will not address Sandefur’s post point-by-point here. Rather, I will address two of Sandefur’s key points; then, in a later post, I will state systematically what I believe about rights, government, and governance. I have written dozens of posts about my views on those subjects, but I have never strung my ideas together in a single post, which may explain Sandefur’s apparent misunderstanding of my views.

Sandefur opens by saying that “[t]here are many reasons to object to [my] view that rights are conferred by ‘political institutions’.” I have many reasons to object to Sandefur’s mischaracterization of my political philosophy throughout his post, not the least of which is the allegation that I deny that the Constitution “is based on pre-political notions, such as equality or consent.” To the contrary, as I said in the very post that he attacks,
[h]uman beings — having a primordial yearning for rights — form a political institution and adopt a constitution for the purpose of defining and securing those rights, as they define them through bargaining.
He has a problem with the final clause of that sentence, which I’ll come to. For now, I want to set the record straight about my view of the origin of rights. When I agree with Maxwell Borders (whom I was quoting) where he says that “[r]eal rights are conferred by political institutions,” I agree because the operative word is “real” — as opposed to “dreamt of” or “hoped for” or “recognized and enforced by common consent within a band of hunter-gatherers, only to be violated by a rival band of hunter-gatherers.” The rights of Americans are not “real” unless they are secured (to the extent practicable) through the police, courts, and armed forces — and sometimes even by Americans acting in their own defense.
Sandefur might object to the sense in which I am using “real.” For, he says that “[r]eality is not ‘defined’ by some entity standing outside of it and determining its contents; it simply is.” Regardless of where rights come from, I don’t think they’re “real” until they’re actually recognized and enforced (realized) — be that by a band of hunter-gatherers that’s able to police itself and repel marauders; be that by the police, courts, and armed forces of the United States of America; or be that by those relatively few Americans who have the wherewithal to defend themselves against direct attacks on their persons and property. If Sandefur means to imply that rights are “real” in a Platonic sense, that is, existing independent of the human mind, then he simply believes in a different kind of god than that of the religionists whose beliefs he rejects. (I apologize if I’m misinterpreting him as badly as he misinterprets me.)
The second point I will address here is Sandefur’s suggestion that my view about the role of political institutions in the realization of rights makes me something other than (less than?) a libertarian:
[H]ow is [he] to object when the “political institutions” which create our rights are drawn in such a way as to exclude blacks, or other politically unpopular minorities from the “bargaining” process? [He] has (I hope, unknowingly) adopted the view of Stephen Douglas, which is as far from libertarianism as one can go. True libertarianism reveres the freedom of the individual. [He], however, has adopted a principle that reveres the freedom of states. It is not neo-libertarian; it is paleo-conservative.
Here, Sandefur conflates the ideal and the real. Libertarianism is an ideal (perhaps a Platonic ideal in Sandefur’s mind). Its tenets can be realized only through political bargaining — whether that’s in a band of hunter-gatherers or in the United States of America — which sometimes takes the extreme form of warfare. The ideal and the real would be identical only in a world in which almost everyone believed and practiced the tenets of libertarianism. (The holdouts could be bribed or coerced into going along.) There is no such world. To believe otherwise is to believe in a vision of human nature that is belied by history and current events. (Hobbesianism is merely a realistic view of the world.)
None of that means that libertarians must accept the status quo. Political bargaining led to the recognition of slavery in the original Constitution and left the question of slavery to the States. But political bargaining — in the extreme form of warfare — led to the abolition of slavery. Further political bargaining led to Brown v. Board of Education, its enforcement, and the Civil Rights Act of 1964, and so on. The end of slavery and the recognition of equal rights for blacks couldn’t have been attained without political bargaining.
Do I want to devolve some power to the States and, thereby, to the people? You bet (as I have discussed here and here). But the power I would devolve wouldn’t include the power to roll back those rights now recognized in the Constitution. Rather, I would devolve legislation, regulation, and taxation to the maximum extent consistent with preserving those rights. (For more about my view of the respective powers and rights of the central government, the States, and the people, read this, this, and this.)
So, you can call me a classical liberal, a libertarian, a neo-libertarian, or a Hobbesian libertarian (and I have called myself each of those things at one time or another, in an effort to label my principles) — but I don’t see how anyone can suggest that I might be a paleo-conservative. That would be as off-target as suggesting that Sandefur is a Christian.
Sandefur may choose to comment on this post; that’s his prerogative. But he might want to wait until I’ve systematically set out my views about rights, government, and governance in the follow-up post.
* * *
Me
A Final Volley
Timothy Sandefur’s latest post in our exchange of views about the origin of rights is helpful because it enables me to pinpoint the source of our apparent disagreement. We have been talking past each other. Sandefur has insisted that “rights are real even when they are not being enforced.” He finally makes clear (to me) that his basis for saying so rests on the proposition of self-ownership. That is, if we own ourselves — and I agree that we do — then our right to be left alone, as long as we leave others alone, arises from within and is not a grant from anyone else — not a family, a Pleistocene hunting party, a tribe, or a formally constituted state.

I’ve been taking all of that for granted. I’ve expressed the concept of self-ownership — ineptly, it seems — in my notion of a primordial yearning for rights among humans. Sandefur asks, “Is their ‘yearning’ based on the fact that, as human beings, there are certain things that may be done and not done to them…?” Yes, of course.
What I’ve been talking about in my exchange with Sandefur isn’t whether rights are real, or whence they flow, but how they are given force. There is a difference between having a right and being able to exercise that right. That’s where groups come in, be they families, Pleistocene hunting parties, tribes, or formally constituted states. Even though there are certain things that may not, by right, be done to an individual, the individual may not be able to prevent those things without help from others. And it often takes political bargaining to procure that help. (Politics precedes the state and goes on independently of it, for politics is “the process and method of decision-making for groups of human beings. Although it is generally applied to governments, politics is also observed in all human group interactions….”)
In the extreme case, a multitude of individuals will band together to overthrow a government and form a nation-state, for the purpose of preventing the old government from suppressing their rights. That was the rationale for the American Revolution. It was also the ostensible rationale for the Russian Revolution (among many others of its ilk), and look where that led.
So, I don’t insist — and have never insisted — that the state is the source of rights. All I’ve been trying to say is that the state may (or may not) enable persons to exercise their rights. In the United States, for example, the central government was formed so that Americans could exercise certain rights, but the same government was an accessory to the practice of slavery for almost 80 years following the end of the Revolutionary War. Then, with Lincoln’s accession to the presidency, the central government not only opposed slavery but fought a war that ended it. The central government has also since acted to recognize and enforce rights in other instances (e.g., securing votes for women, securing votes for blacks, and ending the military draft).
Yet that same central government has done much through taxation, legislation, and regulation — especially in the last 70 years — to suppress the free exercise of rights. The big question is how to reverse that suppression, as I discuss here. The two most promising ways, in my view, are through the appointment of Supreme Court justices who are federalists (in the contemporary meaning of the word) and the devolution of power to the States. As I said in my previous post,
the power I would devolve wouldn’t include the power to roll back those rights now recognized in the Constitution. Rather, I would devolve legislation, regulation, and taxation to the maximum extent consistent with preserving those rights.
There’s a lot more to be said about federalism and devolution than I have time to say right now. I’ll save it — and many other things — for the promised post in which I will state systematically what I believe about rights, government, and governance.