War, Slavery, and Reparations

A tangled ethical web.

The centuries-old practice of forcing a nation that lost a war to pay reparations to the victor seems to have ended after World War II, with the exception of payments by Iraq to Kuwait in the aftermath of the Gulf War of 1990-91.

Was there ever an ethical case for the payment of reparations? Not really. War reparations are a form of victor’s justice. What about the citizens on the losing side who suffered great losses at the hands of the winning side? What about members of the winning side whose conduct of the war was atrocious at times (e.g., the USSR in World War II)?

Which brings me to reparations for slavery in the United States. Who pays? The descendants of Africans who sold other Africans to slave-traders? The descendants of the slave-traders who live in countries other than the United States? All Americans who pay federal income taxes, regardless of any benefits their distant ancestors might have derived from slavery? Only non-black Americans, even if their ancestors did not benefit from slavery and immigrated to this country long after slavery was abolished?

For the reasons implied in my questions — and other reasons that you can readily devise — it would be impossible to determine what living persons and estates benefited from slavery, and by how much. It would also be impossible to itemize the damage, given the tortuous path of personal circumstances and the (often counterproductive) “pro-black” government programs enacted since the ratification of the Thirteenth Amendment in 1865.

If there’s an ethical problem with reparations for slavery, what about reparations for the victims of Jim Crow laws in the South, which persisted for a century after slavery? Jim Crow is more recent, but the problems of identifying “winners” and “losers” and of tallying gains and losses remain, despite the shorter passage of time. And why should the descendants of Southerners who benefited from Jim Crow alone bear the burden of reparations for practices that, though not “official”, were condoned and encouraged in other parts of the country?

Ethically, reparations for slavery (or Jim Crow) would be on a par with war reparations: victor’s justice. When did American blacks become “victors” rather than “victims”? When, as individuals, they began to be excused — and even rewarded — for their personal failings and shortcomings (e.g., misbehaving in class, being insufficiently intelligent to merit admission to a college, rioting) because of the color of their skin.

But what about whites who enjoyed the same special treatment in the past, and even in the present (though less overtly)? Well, it was and is wrong. And I believe in the adage that “two wrongs don’t make a right”.

Penalizing an innocent white living today for the sins committed by a dead white 70 or 170 years ago isn’t justice. (It may be vengeance, but vengeance is justice only when it is visited upon a known wrong-doer.) Moreover, as I have explained, righting a past wrong on the basis of skin color (or any other general characteristic, such as country of citizenship) is an ethical impossibility.


Related post: The Myth of Social Welfare

Presidential Trivia

With some side commentary.

KEY DATES

BIRTHPLACE AND RELIGION

AGE AT DEATH

FREQUENCY OF BIRTH YEARS

The year that saw the births of the most presidents is 1946: Clinton, G.W. Bush, and Trump. There was a 24-year span between the inauguration of Clinton (the second-youngest elected president) and that of Trump (the oldest elected president).

RECURRENCE OF FIRST NAME

Eight different first names appear more than once in the list of presidents. Here are the names (listed in order of first appearance), with the middle and last names of the presidents to which the names are attached:

Presidents-repeated first names

Stephen counts as a multiple entry because, officially, Cleveland is the 22nd and 24th president. (Note that I carefully opened this section with the statement that “Eight different first names appear more than once in the list of presidents.”)

The unique first names (unique to a president, that is) are Martin, Zachary, Millard, Abraham, Ulysses (born Hiram), Rutherford, Chester, Benjamin, Theodore, Warren, Harry, Dwight (born David), Lyndon, Richard, Gerald (born Leslie), Ronald, Barack, Donald, and Joseph.

RECURRENCE OF FIRST LETTER OF LAST NAME

Counting Cleveland only once, and assigning V to Van Buren and M to McKinley (à l’américaine), here’s how many times each letter of the alphabet occurs as the first letter of a president’s last name:

You will note that several letters are as yet unused: D, I, Q, S, U, X, Y, and Z.
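The tally follows directly from the conventions just described. Here is a minimal sketch in Python; the list of surnames below is only a small illustrative subset of the full roster, not the complete data:

```python
from collections import Counter

# Illustrative subset of presidential surnames (NOT the full list).
# Per the conventions in the text: Cleveland appears only once,
# "Van Buren" counts under V, and "McKinley" counts under M.
surnames = [
    "Washington", "Adams", "Jefferson", "Madison", "Monroe",
    "Adams", "Van Buren", "McKinley", "Cleveland",
]

# Tally the first letter of each surname.
first_letters = Counter(name[0] for name in surnames)
print(first_letters.most_common())
```

Running the same tally over all 45 distinct presidents yields the distribution charted above.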

DEATHS DURING THE ADMINISTRATIONS OF SITTING PRESIDENTS

The chart below depicts the death years of presidents. The years are plotted in a saw-tooth pattern, from left to right — row 1, row 2, row 3, row 4, row 5, row 1, row 2, etc. The vertical green and white bands delineate presidential administrations. Washington’s is the first green band, followed by a white band for John Adams, and so on.

Many administrations didn’t experience any presidential deaths. Those administrations with more than one presidential death are as follows:

  • John Quincy Adams — Thomas Jefferson and John Adams

  • Andrew Jackson — James Monroe and James Madison

  • Abraham Lincoln — John Tyler, Martin Van Buren, and Abraham Lincoln (I consider the death of a sitting president to have occurred during his administration.)

  • Ulysses S. Grant — Franklin Pierce, Millard Fillmore, and Andrew Johnson

  • Grover Cleveland (first administration) — Ulysses S. Grant and Chester Alan Arthur

  • William McKinley — Benjamin Harrison and William McKinley

  • Herbert C. Hoover — William Howard Taft and Calvin Coolidge

  • Richard M. Nixon — Dwight D. Eisenhower, Harry S Truman, and Lyndon B. Johnson

  • George W. Bush — Ronald W. Reagan and Gerald R. Ford.

LIVING EX-PRESIDENTS AT THE START OF EACH ADMINISTRATION

Lincoln, Clinton, G.W. Bush, Trump, and Biden are tied for the most living ex-presidents (5 each):

HEIGHTS

No president has yet equaled or surpassed Lincoln’s 6′4″. Next are LBJ at 6′3-1/2″; Trump at 6′3″; Jefferson at 6′2-1/2″; Washington, FDR, G.H.W. Bush, and Clinton at 6′2″. Rounding out the 6′-and-over list are Jackson, Reagan, and Obama at 6′1″; and Monroe, Buchanan, Garfield, Harding, Kennedy, and Biden at 6′.

ELECTORAL FACTS ABOUT “MODERN” PRESIDENTS

The modern presidency began with the adored “activist”, Teddy Roosevelt. From TR to the present, there have been only four (of twenty-one) presidents who first competed in a general election as a candidate for the presidency: Taft, Hoover, Eisenhower, and Trump. Trump was alone in having had no previous governmental service before becoming president.

ELECTORAL RESULTS

The results of general elections from the birth of the Republican Party in 1856 to the election of 2020:

Note the unusual era from 1952 through 1988, when Republican presidential candidates outpolled their congressional counterparts.

The table below compares the GOP candidates’ shares of the two-party vote, by State, in the presidential elections of 2012, 2016, and 2020. The changes from 2012 to 2016 that resulted in the election of Trump are highlighted in red. In sum, Trump won by flipping Florida, Michigan, Ohio, Pennsylvania, and Wisconsin. The official (but still disputed) changes from 2016 to 2020 that resulted in the election of Biden are highlighted in blue. In sum, Biden won by reversing Trump’s wins in Michigan, Pennsylvania, and Wisconsin, and by flipping Arizona and Georgia.

Did the GOP Under-Perform in House Races?

Surprisingly, the answer is no.

The Red Ripple-Trickle-Fizzle may not have been as bad (for the GOP) as it seems. Expectations were inflated, which led to the deflation of partisans (like me) who wanted the Dems to get it good and hard.

As it turns out, the GOP may have done about as well as could be expected, based on the history of general elections since World War II. The following analysis draws on the official history of biennial elections through 2020 (here), an estimate of the GOP’s share of the two-party vote for House seats in 2022 (51.6 percent), and an informed guesstimate of the number of seats the GOP will hold when all of the votes are counted (221).

In the following graph, the point representing the 2022 election is circled. So, yes, winning 50.8 percent of House seats (i.e., 221) is less than expected when the GOP wins 51.6 percent of the two-party vote in a general election:

Why did the GOP under-perform in 2022 (and other years)? I derived a regression equation that explains (robustly) the differences between the estimated values (the straight line in the graph above) and the actual values (the data points). The equation has two explanatory variables:

  • the party of the incumbent president at the time of the election

  • whether or not the GOP holds a minority of House seats at the time of the election.

Relative to the estimates based solely on percentage of two-party vote (the regression equation in the graph above), the GOP does less well than expected when (a) a Democrat is in the White House and (b) the GOP is the minority party in the House. Both conditions prevailed this year.
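The post doesn’t reproduce the underlying data or the fitted equation, so the following is only a sketch of the kind of regression described: seat share explained by vote share plus two 0/1 dummy variables, fitted by ordinary least squares. The data here are synthetic stand-ins generated with assumed coefficients, not the author’s actual post-WWII election data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the post-WWII general-election data set.
n = 38
vote_share = rng.uniform(44, 56, n)    # GOP share of two-party House vote (%)
dem_president = rng.integers(0, 2, n)  # 1 if a Democrat holds the White House
gop_minority = rng.integers(0, 2, n)   # 1 if the GOP is the House minority

# Assumed "true" relationship for the synthetic data: seat share tracks
# vote share, with penalties when either condition holds.
seat_share = (-20.0 + 1.4 * vote_share
              - 1.5 * dem_president
              - 2.0 * gop_minority
              + rng.normal(0, 0.3, n))

# Ordinary least squares: intercept, vote share, and the two dummies.
X = np.column_stack([np.ones(n), vote_share, dem_president, gop_minority])
coef, *_ = np.linalg.lstsq(X, seat_share, rcond=None)
print(coef)  # should roughly recover [-20, 1.4, -1.5, -2.0]
```

With real data, the fitted dummy coefficients would quantify how much a Democrat president and minority status each shave off the GOP’s expected seat share.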

Here’s a graph of the record for every general election since World War II:

My method of adjusting the raw relationship between vote share and share of seats yields an estimate for 2022 that is very close to reality: actual = 50.8 percent; estimate = 50.4 percent.

Were other factors in play? Of course; to name some of them:

  • reapportionment of House seats and redistricting after the 2020 census, which probably helped Republicans

  • the perception of Biden as a senile and dangerous leftist, which should have energized Republicans

  • the Dobbs decision on abortion, which did energize Democrats and took some energy away from Republicans

  • Trump’s toxic, egoistic visibility, which also energized Democrats and may not have done much for Republicans

  • some loony pro-Trump GOP candidates, nominated with the help of Democrat funding and crossover votes in the GOP primaries (Looniness hurts Republicans more than Democrats — witness John Fetterman and the “Squad” — because it’s not expected of Republicans.)

How did those factors (and others) combine to affect the percentage of votes garnered by GOP House candidates? I have no idea and neither does anyone else. I will say that if the GOP’s percentage of the vote was lower than it could have been, it was probably because Trump is still on the political stage. He did a lot of good as president, and I defended him staunchly in my blog. But it’s time for him to remove himself from the public sphere, for the sake of his party and the country.

A final note: Given the leftward drift of the country since World War II, it’s almost miraculous that the GOP emerged from minority status in the 1990s and remains a force to be reckoned with in Congress. Granted, the GOP has also moved somewhat to the left, but it still espouses conservative values, even if it doesn’t always uphold them.

The Hardening of Political Affiliations in America

After its leftward lurch under the influence of Teddy Roosevelt, the Republican Party returned to “normalcy” in 1920 with the election of Warren G. Harding and his running mate-cum-successor, Calvin Coolidge. By “normalcy” I mean that Harding and subsequent GOP nominees have paid lip service, and sometimes actual service, to the project of limited, constitutional government. In any event, GOP presidential candidates, whatever their platforms and programs, have been consistently to the right of their Democrat opponents.

Given that, the division of the popular vote between the two major parties gives an approximation of the left-right divide in America:


The wide swings that prevailed through the 1980s have given way to much narrower ones. In fact, the outcomes of the presidential elections of 2008-2020 suggest that there is now a permanent and possibly growing tilt toward the left. Some States and regions will remain reliably on the right, for a long while, at least. But — barring the resurgence of a charismatic Republican or a national catastrophe that can be ascribed solely to Democrat policies — it’s beginning to look like there’s a permanent Democrat (leftist) majority in the nation as a whole. Party-switchers won’t disappear, but their number (relative to the rising number of voters) seems to have shrunk considerably.

The hardening of ideological positions strikes me as another reason to sue for a national divorce. “United we stand, divided we fall” has become a hollow slogan.

A formal division is preferable to the pretense of unity. The latter weakens the nation, emboldens its enemies, and enables the domestic enemies of liberty to trample on their foes. With a divorce, at least half of America would be able to mount a credible deterrent to economic and military blackmail, while also restoring liberty to part of the land.

What to Believe about Inflation?

It depends on which trend you follow.

The release today of “good” news about inflation sparked a huge stock-market rally. The S&P 500 is up by more than 4 percent as I write this.

How good was the news? The year-over-year change in the CPI-U for the 12 months ending in October 2022 was “only” 7.75 percent, down from 8.20 percent for the 12 months ending in September 2022. Both rates are below the recent peak of 9.06 percent for the 12 months ending in June 2022.

On the other hand, as an economist might say, there’s bad news in the month-to-month inflation figures. After the peak in June, prices dropped slightly in July and August: -0.14 percent and -0.42 percent, annualized. But prices rose again in September and October: 2.61 percent and 4.98 percent, annualized.
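The annualized month-to-month figures come from compounding one month’s price change over twelve months. A minimal sketch of that arithmetic (the index values below are placeholders, not actual CPI-U readings):

```python
def annualized_rate(index_prev: float, index_curr: float) -> float:
    """Compound a one-month index change over 12 months, as a percentage."""
    monthly_ratio = index_curr / index_prev
    return (monthly_ratio ** 12 - 1) * 100

# Placeholder values: a one-month rise of about 0.406 percent compounds
# to an annualized rate of about 4.98 percent, the order of magnitude
# of the October figure cited above.
print(round(annualized_rate(100.0, 100.406), 2))  # → 4.98
```

The year-over-year figures, by contrast, are the simple percentage change between an index value and its value twelve months earlier, so the two series can easily tell different stories about the same months.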

The October jump is — or should be — of concern to the Fed. Today’s stock-market exuberance may turn out to be irrational.

Philosophical Musings: Part VI

Beliefs, herds, and oppression.

This post, which is a lightly edited version of one that I wrote 17 months ago, reprises a theme of several of my recent posts: the dire outlook for America given its political direction. I am grateful for your indulgence, and I will reward it by retiring the theme for a while.


To come to the point of this series: Human beings can and will believe anything. And much of what they believe — even “science” — is either mistaken or beyond proof. Belief, at bottom, is a matter of faith; it is a matter of what one chooses to believe.

Why do we choose what to believe? We choose to believe those things that make us feel good about ourselves in one way or another. Here are four (not mutually exclusive) ways in which our beliefs serve that purpose:

  • Logical or epistemic consistency, which can be intellectually satisfying even if the logic is fatally flawed or the knowledge is cherry-picked to fit a worldview.

  • The (usually false) reassurance that a belief has been proclaimed “true” by an authority — “science”, religious leaders, political leaders, etc.

  • No skin in the game: The holding of views (for reasons listed above) that are inconsequential to the holder of the views but which (when put into action) are harmful to others (e.g., a rich person whose life and property are secured by private means but who calls for defunding the police).

  • Groupthink: Going along to get along, also known as “taking sides”.

On the last point, I defer to Michael Huemer:

There’s … a study that finds that political beliefs are heritable. (Alford et al, “Are Political Orientations Genetically Transmitted?”) They get a heritability estimate of 0.53 for political orientation (p. 162), much larger than the influence of either shared environment or unshared environment. That’s kind of weird, isn’t it — who knew that you could genetically transmit political beliefs? But of course, you don’t directly transmit beliefs; you genetically transmit personality traits, and people pick their political beliefs based on their personality traits.

But as Huemer notes,

the primary choice people make is not so much which propositions they want to be wedded to, but which group of people they want to affiliate with. Maybe there’s only a very tenuous link between some personality trait and some particular political position, but it’s enough to make that position slightly more prevalent, initially, among people with that trait. But once those people decide that they belong to “the same side” in society, there’s psychological pressure for individual members of the tribe to conform their beliefs to the majority of their tribe, and to oppose the beliefs of “the other side”.

So, e.g., you decide that fetuses don’t have rights because the fetus-rights position is associated with the other tribe, and you don’t want to be disloyal to your own side by embracing one of the other side’s positions. Of course, you never say this to yourself; you just automatically find all of your side’s arguments “more plausible”.

And, as we have seen, belonging to a “side” and signaling one’s allegiance to that “side” seems to have become the paramount desideratum among huge numbers of Americans. “Liberals”, who not long ago were ardent upholders of freedom of speech, are now its leading opponents. And many “liberals” — executives and employees of Big Tech companies, for example — demonstrate their opposition daily by suppressing the expression of ideas that they don’t like and denying the means of expression to persons whose views they oppose. They can conjure sophisticated excuses for their hypocrisy, but those excuses are obvious and shallow covers for their evident unwillingness to countenance “heretical” views.

This hypocrisy extends beyond partisan politics. It extends into discussions of race (i.e., the suppression of “bad news” about blacks and research findings about the intelligence of blacks). It extends into discussions of scientific matters (e.g., labeling as a “science denier” any scientist who writes objectively about the evidence against CO2 as the primary cause of a recent warming trend that is probably overstated, in any case, because of the urban-heat-island effect). It extends elsewhere, of course, but there’s no point in belaboring the odious.

The worst part of it is that the hypocrisy isn’t practiced just by lay persons who wish to signal their allegiance to “progressivism”. It’s practiced by scientists, academicians, and highly educated persons who hold important positions in the business world (witness Big Tech’s censorship practices and the “wokeness” of major corporations).

In other words, the herd instinct is powerful. It sweeps all before it. Even truth. Especially truth when it contravenes the herd’s dogmas — which are its “truths”.

And a herd that runs wild — driven hither and thither by ever-shifting “truths” — is dangerous, as we are seeing now in the suppression of actual truth, the suppression of political speech, firings for being associated with the wrong “side”, etc.

Today’s state of affairs is often likened to that which prevailed in the years leading up to the Civil War. There is a good reason for that comparison, for the two epochs are alike in a fundamental way: One side (Unionists then, the “woke” now) assumes the mantle of virtue and thus garbed presumes to dictate to the other side.

Yes, slavery was wrong. But that did not justify the (successful) attempt of the Unionists to prevent the Confederacy’s secession on the principle of self-determination — the very principle that inspired the American Revolution that led to the Union.

Yes, it is fitting and proper to treat the (relatively) poor, persons of color, and persons whose sexual proclivities are “unusual” with respect and equality under the law. But that does not justify the wholesale violation of immigration laws, the advancement of the “oppressed” at the expense of blameless others (who are mainly straight, white, males of European descent), the repudiation of America’s past (the good with the bad), or the destruction of the religious, social, and economic freedoms that have served all Americans well.

Ironically, the power of the central government, which was enabled by the victory of the Unionists, now enables “progressivism” to advance its dictatorial agenda through acquiescence and assistance.

Donald J. Trump did oppose that agenda, and opposed it with some success for four years. That is why it was imperative for the “progressive” establishment — abetted by pusillanimous “conservatives” and never-Trumpers — to undermine Trump from the outset and, in the end, to remove Trump from power by stealing the election of 2020. There has never, in American politics, been a more heinous case of wholesale corruption than was evidenced in the machinations against Trump.

Having said all of that, what will happen to America? The slide toward fascism, which has been underway (with interruptions) for more than a century, now seems to be near its destination: the dictation of myriad aspects of social and economic intercourse by our “betters” in Washington and their cronies in the academy, the media, public schools, and corporate America.

And most Americans — having been brainwashed by the “education system”, bought off by various forms of welfare, and cowed by officious officials and mobs — will simply acquiesce in their own enslavement.

Finito.


Related reading:

Matt, “Varieties of Opinion“, Imlac’s Journal, March 14, 2021

Frank Furedi, “Big Brother Comes to America“, Spiked, February 8, 2021

Victor Davis Hanson, “Our Animal Farm“, American Greatness, February 7, 2021

Arnold Kling, “Rationalist Epistemology“, askblog, February 26, 2021

Arnold Kling, “Cultural Brain Hypothesis“, askblog, March 5, 2021

Mark J. Perry, “Quotation of the Day on Truths That We Are No Longer Allowed to Speak About … “, Carpe Diem, February 2, 2021

Malcolm Pollack, “The Enemy Within“, American Greatness, February 13, 2021

Quillette editorial, “With a Star Science Reporter’s Purging, Mob Culture at The New York Times Enters a Strange New Phase“, Quillette, February 9, 2021

Philosophical Musings: Part V

Desiderata as beliefs.


How many things does a human being believe because he wants to believe them, and not because there is compelling evidence to support his beliefs? Here is a small sample of what must be an extremely long list:

  • There is a God. (1a)

  • There is no God. (1b)

  • There is a Heaven. (2a)

  • There is no Heaven. (2b)

  • Jesus Christ was the Son of God. (3a)

  • Jesus Christ, if he existed, was a mere mortal. (3b)

  • Marriage is the eternal union, blessed by God, of one man and one woman. (4a)

  • Marriage is a civil union, authorized by the state, of one or more consenting adults (or not) of any gender, as the participants in the marriage so define themselves to be. (4b)

  • All human beings should have equal rights under the law, and those rights should encompass both negative rights (e.g., not to be murdered or defrauded) and positive rights (e.g., to benefit from government-granted promotions, college admissions, and other people’s money). (5a)

  • Human beings are, at bottom, feral animals and cannot therefore be expected to abide always by artificial constructs, such as equal rights under the law. Accordingly, there will always be persons who use the law (or merely brute force) to set themselves above other persons. (5b)

  • The rise in global temperatures over the past 170 years has been caused primarily by a greater concentration of carbon dioxide in the atmosphere, which rise has been caused by human activity — and especially by the burning of fossil fuels. This rise, if it isn’t brought under control, will make human existence far less bearable and prosperous than it has been in recent human history. (6a)

  • The rise in global temperatures over the past 170 years has not been uniform across the globe, and has not been in lockstep with the rise in the concentration of atmospheric carbon dioxide. The temperatures of recent decades, and the rate at which they are supposed to have risen, are not unprecedented in the long view of Earth’s history, and may therefore be due to conditions that have not been given adequate consideration by believers in anthropogenic global warming (e.g., natural shifts in ocean currents that have different effects on various regions of Earth, the effects of cosmic radiation on cloud formation as influenced by solar activity and the position of the solar system and the galaxy with respect to other objects in the universe, the shifting of Earth’s magnetic field, and the movement of Earth’s tectonic plates and its molten core). In any event, the models of climate change have been falsified against measured temperatures (even when the temperature record has been adjusted to support the models). And predictions of catastrophe do not take into account the beneficial effects of warming (e.g., lower mortality rates, longer growing seasons), whatever causes it, or the ability of technology to compensate for undesirable effects at a much lower cost than the economic catastrophe that would result from preemptive reductions in the use of fossil fuels. (6b)

Not one of those assertions, even the ones that seem to be supported by facts, is true beyond a reasonable doubt. I happen to believe 1a (with some significant qualifications about the nature of God), 2b, 3b (given my qualified version of 1a), a modified version of 4a (monogamous, heterosexual marriage is socially and economically preferable, regardless of its divine blessing or lack thereof), 5a (but only with negative rights) and 5b, and 6b.  But I cannot “prove” that any of my beliefs is the correct one, nor should anyone believe that anyone can “prove” such things.

Take the belief that all persons are created equal. No one who has eyes, ears, and a minimally functioning brain believes that all persons are created equal, though they may (if they are law-abiding) deserve equal treatment under the law (restricted to the enforcement of their negative rights).

Abraham Lincoln, the Great Emancipator, didn’t believe that all persons are created equal:

On September 18, 1858 at Charleston, Illinois, Lincoln told the assembled audience:

I am not, nor ever have been, in favor of bringing about in any way the social and political equality of the white and black races, that I am not, nor ever have been, in favor of making voters or jurors of negroes, nor of qualifying them to hold office, nor to intermarry with white people; and I will say in addition to this that there is a physical difference between the white and black races which I believe will forever forbid the two races living together on terms of social and political equality … I will add to this that I have never seen, to my knowledge, a man, woman, or child who was in favor of producing a perfect equality, social and political, between negroes and white men….

This was before Lincoln was elected president and before the outbreak of the Civil War, but Lincoln’s speeches, writings, and actions after these events continued to reflect this point of view about race and equality.

African American abolitionist Frederick Douglass, for his part, remained very skeptical about Lincoln’s intentions and program, even after the president issued a preliminary emancipation proclamation in September 1862.

Douglass had good reason to mistrust Lincoln. On December 1, 1862, one month before the scheduled issuing of an Emancipation Proclamation, the president offered the Confederacy another chance to return to the union and preserve slavery for the foreseeable future. In his annual message to congress, Lincoln recommended a constitutional amendment, which if it had passed, would have been the Thirteenth Amendment to the Constitution.

The amendment proposed gradual emancipation that would not be completed for another thirty-seven years, taking slavery in the United States into the twentieth century; compensation, not for the enslaved, but for the slaveholder; and the expulsion, supposedly voluntary but essentially a new Trail of Tears, of formerly enslaved Africans to the Caribbean, Central America, and Africa….

Douglass’ suspicions about Lincoln’s motives and actions once again proved to be legitimate. On December 8, 1863, less than a month after the Gettysburg Address, Abraham Lincoln offered full pardons to Confederates in a Proclamation of Amnesty and Reconstruction that has come to be known as the 10 Percent Plan.

Self-rule in the South would be restored when 10 percent of the “qualified” voters according to “the election law of the state existing immediately before the so-called act of secession” pledged loyalty to the union. Since blacks could not vote in these states in 1860, this was not to be government of the people, by the people, for the people, as promised in the Gettysburg Address, but a return to white rule.

It is unnecessary, though satisfying, to read Charles Murray’s account in Human Diversity of the broad range of inherent differences in intelligence and other traits that are associated with the sexes, various genetic groups of geographic origin (sub-Saharan Africans, East Asians, etc.), and various ethnic groups (e.g., Ashkenazi Jews).

But even if all persons are not created equal, either mentally or physically, aren’t they equal under the law? If you believe that, you might just as well believe in the tooth fairy. As it says in 5b,

Human beings are, at bottom, feral animals and cannot therefore be expected to abide always by artificial constructs, such as equal rights under the law. Accordingly, there will always be persons who use the law (or merely brute force) to set themselves above other persons.

Yes, it’s only a hypothesis, but one for which there is ample evidence in the history of mankind. It is confirmed by every instance of theft, murder, armed aggression, scorched-earth warfare, mob violence as catharsis, bribery, election fraud, gratuitous cruelty, and so on into the night.

And yet, human beings (Americans especially, it seems) persist in believing tooth-fairy stories about the inevitable triumph of good over evil, self-correcting science, and the emergence of truth from the marketplace of ideas. Balderdash, all of it.

But desiderata become beliefs. And beliefs are what bind people – or make enemies of them.

The Meaning of the Red Ripple

Fasten your seat belts and get ready for a hard landing.

Like many other conservatives, I expected much more from the mid-term election than (perhaps) a slim majority in the House of Representatives. I said this (optimistically) in “The Bitter Fruits of America’s Disintegration”:

Is there hope for an American renaissance? The upcoming mid-term election will be pivotal but not conclusive. It will be a very good thing if the GOP regains control of Congress. But it will take more than that to restore sanity to the land.

A Republican (of the right kind) must win in 2024. The GOP majority in Congress must be enlarged. A purge of the deep state must follow, and it must scour every nook and cranny of the central government to remove every bureaucrat who has a leftist agenda and the ability to thwart the administration’s initiatives.

Beyond that, the American people should be rewarded for their (aggregate) return to sanity by the elimination of several burdensome (and unconstitutional) departments of the executive branch, by the appointment of dozens of pro-constitutional judges, and by the appointment of a string of pro-constitutional justices of the Supreme Court.

After that, the rest will take care of itself: Renewed economic vitality, a military whose might deters our enemies, and something like the restoration of sanity in cultural matters. (Bandwagon effects are powerful, and they can go uphill as well as downhill.)

But all of that is hope. The restoration of America’s greatness will not be easy or without acrimony and setbacks.

If America’s greatness isn’t restored, America will become a vassal state. And the leftists who made it possible will be the first victims of their new masters.

The election was pivotal, but not in the way that I expected it to be. It marked the end of hope for a restoration of sanity. All that happened is that some Red States got Redder, some Blue States got Bluer, and the loony left in the House gained several new members. The latter development is probably the best indication of what will happen in the coming elections.

If there was no “Red wave” this year, it’s unlikely that there’ll be one in the future. What’s more likely is that the election of 2024 will return a Democrat (not Biden) to the White House and Democrat control of Congress will be restored. From there, expect the following:

  • “Wokeness” will be in the saddle.

  • Influential institutions (Big Tech, the media, the academy, public “education”, and most government bureaucracies) will be more than ever dominated by the left.

  • Violent crime and the coddling of criminals will continue apace, or get worse.

  • There will be more suppression of conservative views through electronic censorship, financial blackmail, and selective enforcement of laws.

  • The insanity of replacing reliable and cheap fossil fuels with unreliable and therefore expensive “renewables” will continue with a vengeance.

  • The regulatory-welfare state will control more and more of the economy.

  • Inflation will continue to wreak economic havoc until price controls make things worse by further disincentivizing productive capital investments and entrepreneurship.

  • Defense spending will continue to be well below what is required to deter America’s enemies and protect Americans’ overseas interests.

In short, the decline of America — social, economic, and military — will continue. Our enemies will be able to dictate the terms on which America may survive — economically and politically. There will be no Chamberlain-esque moment of surrender; it will just happen gradually and with official approval.

The only hope for (some) Americans is a national divorce. But with the left in the saddle, that is no more likely to happen than was the emancipation of slaves by peaceful means.

I hope I’m wrong again, but I fear that I am right.

Philosophical Musings: Part IV

Irrational rationality.


I discussed Type 1 and Type 2 thinking in the previous entry. Type 2 thinking — deliberate reasoning — has two main branches: scientific and scientistic.

The scientific branch leads (often in roundabout ways) to improvements in the lot of mankind: better and more abundant food, better clothing, better shelter, faster and more comfortable means of transportation, better sanitation, a better understanding of diseases and more effective means of combating them, and on and on.

You might protest that not all of those things, and perhaps only a minority of them, emanated from formal scientific endeavors conducted by holders of Ph.D. and M.D. degrees working out of pristine laboratories or with delicate equipment. But science is much more than that. Science includes learning by doing, which encompasses everything from the concoction of effective home remedies to the hybridization of crops to the invention and refinement of planes, trains, and automobiles – and, needless to say, to the creation and development of much of the electronic technology and related software with which we are “blessed” today.

The scientific branch yields its fruits because it is based on facts about the so-called material universe. The essence of the universe may be unknown and unknowable, as discussed in an earlier entry, but it manifests itself in observable and often predictable ways.

The scientific branch, in sum, is inductive at its core. Observations of specific phenomena lead to guesses about the causes of those phenomena or the relationships between them. The guesses are codified as hypotheses, often in mathematical form. The hypotheses are tested against new observations of the same kinds of phenomena. If the hypotheses are found wanting, they are either rejected outright or modified to take into account the new observations. Revised hypotheses are then tested against newer observations, and so on. (There is nothing scientific about testing a new hypothesis against the observations that led to it; that is a scientistic trick used by, among others, climate “scientists” who wish to align their models with historical climate data.)
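The warning in the parenthesis is worth making concrete. In the toy sketch below (entirely illustrative; the data and the model are invented), a sufficiently flexible “hypothesis” — a degree-5 polynomial threaded through six noisy observations of a simple linear process — fits the observations that suggested it perfectly, yet stumbles on new observations drawn from the same process:

```python
import random

def lagrange_fit(xs, ys):
    """Return the unique degree-(n-1) polynomial through n points."""
    def poly(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return poly

random.seed(1)
truth = lambda x: 2.0 * x                        # the actual process
noisy = lambda x: truth(x) + random.gauss(0, 1)  # what we observe

xs = [0, 1, 2, 3, 4, 5]
ys = [noisy(x) for x in xs]
hypothesis = lagrange_fit(xs, ys)  # "model" tuned to historical data

# "Testing" against the data that produced the hypothesis: a perfect fit.
in_sample = max(abs(hypothesis(x) - y) for x, y in zip(xs, ys))

# Testing against genuinely new observations: the fit falls apart.
new_xs = [0.5, 1.5, 2.5, 3.5, 4.5]
out_of_sample = max(abs(hypothesis(x) - noisy(x)) for x in new_xs)

print(f"error on the old observations: {in_sample:.1e}")
print(f"error on new observations:     {out_of_sample:.2f}")
```

The in-sample error is zero by construction; only the out-of-sample test says anything about the hypothesis.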

If new observations are found to comport with a new hypothesis, the hypothesis is said to be confirmed. Confirmed doesn’t mean proven; it just means not disproved. Lay persons — and a lot of scientists, apparently — mistake confirmation, in the scientific sense, for proof. There is no such thing in science.

The scientistic branch of Type 2 thinking is deductive. It assumes general truths and then draws conclusions from those assumptions; for example:

  • All Cretans are liars, according to Epimenides (a Cretan who lived ca. 600 BC).

  • Epimenides was a Cretan.

  • Therefore, Epimenides was a liar.

But if Epimenides was lying, then not all Cretans are liars, and the first premise refutes itself. The argument is formally valid; the flaw lies in the unprovable (and self-undermining) generalization that all Cretans are liars.

The syllogism illustrates the fatuousness of deductive reasoning, that is, reasoning which proceeds from general statements that cannot be disproven (falsified).
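The self-undermining character of the first premise can even be checked mechanically. In this sketch (mine, purely illustrative), a “liar” is someone whose every statement is false; assuming the premise is true forces Epimenides’ own statement to be false, which is a contradiction:

```python
def premise_is_consistent(all_cretans_are_liars: bool) -> bool:
    """Check whether the assumed truth value of 'all Cretans are liars'
    is consistent with Epimenides (a Cretan) asserting it.
    A "liar" here means someone whose every statement is false."""
    statement_true = all_cretans_are_liars
    if all_cretans_are_liars:
        # Epimenides is then a liar, so his own statement must be false.
        return not statement_true
    # If not all Cretans are liars, his statement is simply false,
    # which is consistent (he may be one of the liars himself).
    return not statement_true

print(premise_is_consistent(True))   # the premise refutes itself
print(premise_is_consistent(False))  # its negation is consistent
```

The assignment “true” is the only one that contradicts itself, which is exactly what makes the premise unprovable.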

Though deductive reasoning can be useful in contriving hypotheses, it cannot be used to “prove” anything. But there are persons who claim to be scientists, or who claim to “believe” science, who do reason deductively. It starts when a hypothesis that has been advanced by a scientist becomes an article of faith to that scientist, to a group of scientists, or to non-scientists who use their belief to justify political positions – which they purport to be “scientific” or “science-based”.

There is no more “science” in such positions than there is in the belief that the Sun revolves around the Earth or that all persons are created equal. The Sun may seem to revolve around the Earth if one’s perspective is limited to the relative motions of Sun and Earth and anchored in the implicit assumption that Earth’s position is fixed. All persons may be deemed equal in a narrow and arbitrary way — as in the legal doctrine of equal treatment under the law — but that hardly makes all persons equal in every respect; for example, in intelligence, physical strength, athletic ability, attractiveness to the opposite sex, work ethic, conditions of birth, or proneness to various ailments. (I will say more about equality as a non-scientific desideratum in the next entry.)

This isn’t to say that some scientific hypotheses — and their implications — can’t be relied upon. If they couldn’t be, humans wouldn’t have benefited from the many things mentioned earlier in this post — and much more. But confirmed hypotheses can be relied upon because they are based on observed phenomena, tested in the acid of use, and — most important — employed with ample safeguards, which still may be inadequate to real-world conditions. Despite the best efforts of physicists, chemists, and engineers, airplanes crash, bridges collapse, and so on, because there is never enough knowledge to foresee all of the conditions that might arise in the real world.

This isn’t to say that human beings would be better off without science. Far from it. Science and its practical applications have made us far better off than we would be without them. But neither scientists nor those who apply the (tentative) findings of science are infallible.

Economists and Voting

Extreme economism on tap.

It’s the time of year when economists like to remind the unwashed that voting is a waste of time. And right on schedule there’s “Sorry, But Your Vote Doesn’t Count” by Pierre Lemieux, writing at EconLog. A classic of the genre appeared 17 years ago, in the form of “Why Vote?” by Stephen J. Dubner and Steven D. Levitt (of Freakonomics fame). Here are some relevant passages:

The odds that your vote will actually affect the outcome of a given election are very, very, very slim. This was documented by the economists Casey Mulligan and Charles Hunter, who analyzed more than 56,000 Congressional and state-legislative elections since 1898. For all the attention paid in the media to close elections, it turns out that they are exceedingly rare. The median margin of victory in the Congressional elections was 22 percent; in the state-legislature elections, it was 25 percent. Even in the closest elections, it is almost never the case that a single vote is pivotal. Of the more than 40,000 elections for state legislator that Mulligan and Hunter analyzed, comprising nearly 1 billion votes, only 7 elections were decided by a single vote, with 2 others tied. Of the more than 16,000 Congressional elections, in which many more people vote, only one election in the past 100 years – a 1910 race in Buffalo – was decided by a single vote….

Still, people do continue to vote, in the millions. Why? Here are three possibilities:

1. Perhaps we are just not very bright and therefore wrongly believe that our votes will affect the outcome.

2. Perhaps we vote in the same spirit in which we buy lottery tickets. After all, your chances of winning a lottery and of affecting an election are pretty similar. From a financial perspective, playing the lottery is a bad investment. But it’s fun and relatively cheap: for the price of a ticket, you buy the right to fantasize how you’d spend the winnings – much as you get to fantasize that your vote will have some impact on policy.

3. Perhaps we have been socialized into the voting-as-civic-duty idea, believing that it’s a good thing for society if people vote, even if it’s not particularly good for the individual. And thus we feel guilty for not voting. [The New York Times Magazine, November 6, 2005]

In true economistic fashion, Dubner and Levitt omit a key reason for voting: It makes a person feel good. Even if one’s vote will not change the outcome of an election, one attains a degree of satisfaction from taking an official (even if secret) stand in favor of or in opposition to a certain candidate, bond issue, or other issue on a ballot.

Dubner and Levitt (and their ilk) seem to inhabit a world in which a thing is not worth doing unless the payoff can be measured with some precision and compared with other, similarly quantifiable, uses of one’s time and money. I doubt that they govern their own lives accordingly. If they do, they must be missing out on a lot of life’s pleasures: sex and ice cream, to name only two.
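The economists’ arithmetic itself is easy to reproduce. Under a toy binomial model (my own sketch, not Mulligan and Hunter’s method), a single vote matters only if the other voters split exactly evenly, and the probability of that is minuscule even in a dead-even race — and it collapses entirely in a lopsided one:

```python
from math import exp, lgamma, log

def prob_pivotal(n_other: int, p: float) -> float:
    """Probability that n_other voters, each voting for candidate A
    with probability p, split exactly evenly, so that one more vote
    decides the race. Computed in log space to avoid underflow."""
    if n_other % 2 != 0:
        return 0.0  # an odd number of other voters cannot tie
    k = n_other // 2
    log_prob = (lgamma(n_other + 1) - 2 * lgamma(k + 1)
                + k * log(p) + k * log(1 - p))
    return exp(log_prob)

print(prob_pivotal(100_000, 0.50))  # roughly 0.0025 in a dead heat
print(prob_pivotal(100_000, 0.51))  # vanishingly small with a slight lean
```

Even a 51–49 lean cuts the odds by many orders of magnitude; at the 22-percent median margins that Mulligan and Hunter report, they are effectively zero.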

Their article continues on a different tack:

But wait a minute, you say. If everyone thought about voting the way economists do, we might have no elections at all. No voter goes to the polls actually believing that her single vote will affect the outcome, does she? And isn’t it cruel to even suggest that her vote is not worth casting?

This is indeed a slippery slope – the seemingly meaningless behavior of an individual, which, in aggregate, becomes quite meaningful. Here’s a similar example in reverse. Imagine that you and your 8-year-old daughter are taking a walk through a botanical garden when she suddenly pulls a bright blossom off a tree.

“You shouldn’t do that,” you find yourself saying.

“Why not?” she asks.

“Well,” you reason, “because if everyone picked one, there wouldn’t be any flowers left at all.”

“Yeah, but everybody isn’t picking them,” she says with a look. “Only me.”

Clever, what? Too clever by half. This argument overlooks the powerful effect of exemplary behavior — where “exemplary”, as used here, does not imply “laudable”. By Dubner and Levitt’s account, allowing a vandal to deface a public building would not encourage other vandals to do the same thing, and would not lead to the widespread defacement of buildings and other anti-social acts. (I refer, of course, to James Q. Wilson’s Broken Windows Theory, on which Levitt and Dubner tried to cast doubt in Freakonomics. They wrongly suggested that the onset of legalized abortion was instrumental in the reduction of crime rates.)

Dubner and Levitt’s argument also overlooks the key fact that when economists preach against voting, they are not just preaching to themselves. Dubner and Levitt’s sermon appeared in the pages of one of the country’s most widely read and influential publications. It was not addressed to an individual person, but to thousands upon thousands of persons. And I doubt that they would have objected if the article had appeared in every newspaper and magazine in the country. In effect, the Dubner-Levitt argument is not just an argument that the marginal vote makes little difference — it is advice to millions of Americans that they should abstain from voting.

That’s paradoxical advice. Abstention by millions of Americans could very well make a difference in the outcome of an election. The tendency to abstain might, in a particular election, be disproportionate to party affiliation. That’s why political campaigns try to counter apathy by whipping up enthusiasm. For example, Democrats might be able to pare their losses in the coming election if they can convince enough pro-Democrat voters that defeat isn’t inevitable, and that their votes will make a difference.

In any event, Levitt, Dubner, and their ilk are guilty of paternalism as well as economism.


Related posts:

The Rationality Fallacy

Externalities and Statism

Extreme Economism

Irrational Rationality

Not-So-Random Thoughts (III) (third item)

Obesity and Statism

“Libertarian Paternalism” Revisited

Philosophical Musings: Part III

Survival and thinking.


It’s true that instinctive (or impulsive) actions can be foolish, dangerous, and deadly. But they can also be beneficial. If, in your peripheral vision, you see an object hurtling toward you at high speed, you don’t deliberately compute its trajectory and decide whether to move out of its path. No, your brain does that for you without your having to “think” about it. And if your brain works quickly enough, you will have moved out of the object’s path before you would have finished “thinking” about what to do.

In sum, you (your brain) engaged in Type 1 thinking about the problem at hand and resolved it quickly. If you had engaged in deliberate, Type 2, thinking you might have been killed by the impact of the object that was hurtling toward you.

The distinction that I’m making here is one that Daniel Kahneman labors over in Thinking, Fast and Slow. But I won’t bore you with the details of that boring book. Life is too short, and certainly shorter for me than for most of you. Let’s just say that there’s nothing especially meritorious about Type 2 thinking, and that it can lead to actions that are as foolish, dangerous, and deadly as those that result from “instinct”.

I will go further and say that Type 2 thinking has brought Americans to the brink of bankruptcy, serfdom, and civil war. But to understand why I say that, you will have to follow this series to its bitter-sweet ending.

*     *     *

If the need to survive ever had anything to do with the advancement of human intelligence and knowledge, that day is long past for most human beings in “developed” nations.

Type 1 thinking is restricted mainly to combat, competitive sports, operating motorized equipment, playing video games, and reacting to photos of Donald Trump or Joe Biden. It is the key to survival in a narrow range of activities aside from combat, such as driving on a busy highway, ducking when a lethal projectile is headed your way, and instinctively avoiding persons whose actions or appearance seem menacing. The erosion of the avoidance instinct is due in part to the cosseted lives that most Westerners (and Japanese) lead, and in part to the barrage of propaganda that denies differences in the behavior of various classes, races, and ethnic groups. (Thus, for example, disruptive black children aren’t to be ejected from classrooms unless an equal proportion of white children, disruptive or not, is likewise ejected.)

Type 2 thinking of the kind that might advance useful knowledge and its beneficial application is a specialty of the educated, intermarrying elite – a class that dominates academia and the applied sciences (e.g., medicine, medical research, and the various fields of engineering). The same class also dominates the media (including so-called entertainment), “technology” companies (most of which don’t really produce technology), the upper echelons of major corporations, and the upper echelons of government.

But, aside from academicians and professionals whose work advances practical knowledge (how to build a better mousetrap, a more earthquake-resistant building, a less collapsible bridge, or an effective vaccine), the members of the aforementioned class have nothing on the yeomen who become skilled in sundry trades (construction, plumbing, electrical work) by the heuristic method — learning and improving by doing. That, too, is Type 2 thinking (though it often incorporates the sudden insights yielded by Type 1 thinking). But practical knowledge accumulates over years and is tested in the acid of use, unlike the kind of Type 2 thinking that produces intricate but wildly inaccurate climate models, which their designers believe in and defend because they are emotional human beings, like all of us.

Type 2 thinking, despite the stereotype that it is deliberate and dispassionate, is riddled with emotion. Emotion isn’t just rage, lust, and the like. Those are superficial manifestations of the thing that drives us all: egoism.

No matter how you slice it, everything that a person does deliberately — including Type 2 thinking — is done to bolster his own sense of well-being. Altruism is merely the act of doing good for others so that one may feel better about oneself. You cannot be another person and actually feel what another person is experiencing. You can only be a person whose sense of self is invested in loving another person or being thought of as loving mankind — whatever that means.

Type 2 thinking — the Enlightenment’s exalted “reason” — is both an aid to survival and a hindrance to it. It is an aid in ways such as those mentioned above, that is, in the advancement of practical knowledge to defeat disease, move people faster and more safely, build dwellings that will stand up against the elements, and so on.

It is a hindrance when, as Shakespeare’s Hamlet says, “the native hue of resolution Is sicklied o’er with the pale cast of thought”. Type 1 thinking causes us to smite an enemy. Type 2 thinking causes us to believe, quite wrongly, that by sparing an enemy we somehow become a law-abiding exemplar whose forbearance diminishes the level of violence in the world and the likelihood that violence will be visited upon us in the future.

Neville Chamberlain exemplified Type 2 thinking when he settled for Hitler’s empty promise of peace instead of gearing up to fight an inevitable war. Lyndon Johnson exemplified Type 2 thinking in his vacillating prosecution of the war in Vietnam, where he was more concerned with “world opinion” (whatever that is) and “public opinion” (i.e., the bleating of pundits and protestors) than he was with the real job of the commander-in-chief, which is to fight and win or don’t fight at all. George H.W. Bush exemplified Type 2 thinking when he declined to depose Saddam Hussein in 1991. Barack Obama exemplified Type 2 thinking when he made a costly deal with Iran’s ayatollahs that profited them greatly for an easily betrayed promise to refrain from the development of nuclear weapons. Type 2 thinking of the kind exemplified by Chamberlain, Johnson, Bush, and Obama is egoistic and delusional: It reflects and justifies the thinker’s inner view of the world as he wants it to be, not the world as it is.

Type 2 thinking is valuable to the survival of humanity when it passes the acid test of use. It is a danger to the survival of humanity when it arises from a worldview that excludes the facts of life. One of those facts of life is that predators exist and must be killed or somehow (and usually at greater expense) neutralized.

To be continued in Part IV.

Philosophical Musings: Part II

Evolution or devolution?


Evolution is simply change in organic (living) objects. Evolution, as a subject of scientific inquiry, is an attempt to explain how humans (and other animals) came to be what they are today.

Evolution (as a discipline) is as much scientism as it is science. Scientism, according to thefreedictionary.com, is “the uncritical application of scientific or quasi-scientific methods to inappropriate fields of study or investigation.” When scientists proclaim truths instead of propounding hypotheses, they are guilty of practicing scientism. Two notable scientistic scientists are Richard Dawkins and Peter Singer, and unsurprisingly so: both are strident atheists, and strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side.

Dawkins, Singer, and many other scientistic atheists share an especially “religious” view of evolution. In brief, they seem to believe that evolution rules out God. Evolution rules out nothing. Evolution may be true in outline, but it does not bear close inspection. On that point, I turn to David Gelernter’s “Giving Up Darwin” (Claremont Review of Books, Spring 2019):

Darwin himself had reservations about his theory, shared by some of the most important biologists of his time. And the problems that worried him have only grown more substantial over the decades. In the famous “Cambrian explosion” of around half a billion years ago, a striking variety of new organisms—including the first-ever animals—pop up suddenly in the fossil record over a mere 70-odd million years. This great outburst followed many hundreds of millions of years of slow growth and scanty fossils, mainly of single-celled organisms, dating back to the origins of life roughly three and a half billion years ago.

Darwin’s theory predicts that new life forms evolve gradually from old ones in a constantly branching, spreading tree of life. Those brave new Cambrian creatures must therefore have had Precambrian predecessors, similar but not quite as fancy and sophisticated. They could not have all blown out suddenly, like a bunch of geysers. Each must have had a closely related predecessor, which must have had its own predecessors: Darwinian evolution is gradual, step-by-step. All those predecessors must have come together, further back, into a series of branches leading down to the (long ago) trunk.

But those predecessors of the Cambrian creatures are missing. Darwin himself was disturbed by their absence from the fossil record. He believed they would turn up eventually. Some of his contemporaries (such as the eminent Harvard biologist Louis Agassiz) held that the fossil record was clear enough already, and showed that Darwin’s theory was wrong. Perhaps only a few sites had been searched for fossils, but they had been searched straight down. The Cambrian explosion had been unearthed, and beneath those Cambrian creatures their Precambrian predecessors should have been waiting—and weren’t. In fact, the fossil record as a whole lacked the upward-branching structure Darwin predicted.

The trunk was supposed to branch into many different species, each species giving rise to many genera, and towards the top of the tree you would find so much diversity that you could distinguish separate phyla—the large divisions (sponges, mosses, mollusks, chordates, and so on) that comprise the kingdoms of animals, plants, and several others—take your pick. But, as [David] Berlinski points out, the fossil record shows the opposite: “representatives of separate phyla appearing first followed by lower-level diversification on those basic themes.” In general, “most species enter the evolutionary order fully formed and then depart unchanged.” The incremental development of new species is largely not there. Those missing pre-Cambrian organisms have still not turned up. (Although fossils are subject to interpretation, and some biologists place pre-Cambrian life-forms closer than others to the new-fangled Cambrian creatures.)

Some researchers have guessed that those missing Precambrian precursors were too small or too soft-bodied to have made good fossils. Meyer notes that fossil traces of ancient bacteria and single-celled algae have been discovered: smallness per se doesn’t mean that an organism can’t leave fossil traces—although the existence of fossils depends on the surroundings in which the organism lived, and the history of the relevant rock during the ages since it died. The story is similar for soft-bodied organisms. Hard-bodied forms are more likely to be fossilized than soft-bodied ones, but many fossils of soft-bodied organisms and body parts do exist. Precambrian fossil deposits have been discovered in which tiny, soft-bodied embryo sponges are preserved—but no predecessors to the celebrity organisms of the Cambrian explosion.

This sort of negative evidence can’t ever be conclusive. But the ever-expanding fossil archives don’t look good for Darwin, who made clear and concrete predictions that have (so far) been falsified—according to many reputable paleontologists, anyway. When does the clock run out on those predictions? Never. But any thoughtful person must ask himself whether scientists today are looking for evidence that bears on Darwin, or looking to explain away evidence that contradicts him. There are some of each. Scientists are only human, and their thinking (like everyone else’s) is colored by emotion.

Yes, emotion, the thing that colors thought. Emotion is something that humans and other animals have. If Darwin and his successors are correct, emotion must be a faculty that improves the survival and reproductive fitness of a species.

But that can’t be true, because emotion is the spark that lights murder, genocide, and war. World War II, alone, is said to have occasioned the deaths of more than one hundred million humans. Prominent among those killed were six million Ashkenazi Jews, members of a distinctive branch of humanity whose members (on average) are significantly more intelligent than other branches, and who have contributed beneficially to science, literature, and the arts (especially music).

The evil by-products of emotion – such as the near-extermination of peoples (Ashkenazi Jews among them) – should cause one to doubt that the persistence of a trait in the human population means that the trait is beneficial to survival and reproduction.

David Berlinski, in The Devil’s Delusion: Atheism and Its Scientific Pretensions, addresses the lack of evidence for evolution before striking down the notion that persistent traits are necessarily beneficial:

At the very beginning of his treatise Vertebrate Paleontology and Evolution, Robert Carroll observes quite correctly that “most of the fossil record does not support a strictly gradualistic account” of evolution. A “strictly gradualistic” account is precisely what Darwin’s theory demands: It is the heart and soul of the theory….

In a research survey published in 2001, and widely ignored thereafter, the evolutionary biologist Joel Kingsolver reported that in sample sizes of more than one thousand individuals, there was virtually no correlation between specific biological traits and either reproductive success or survival. “Important issues about selection,” he remarked with some understatement, “remain unresolved.”

Of those important issues, I would mention prominently the question whether natural selection exists at all.

Computer simulations of Darwinian evolution fail when they are honest and succeed only when they are not. Thomas Ray has for years been conducting computer experiments in an artificial environment that he has designated Tierra. Within this world, a shifting population of computer organisms meet, mate, mutate, and reproduce.

Sandra Blakeslee, writing for The New York Times, reported the results under the headline “Computer ‘Life Form’ Mutates in an Evolution Experiment: Natural Selection Is Found at Work in a Digital World.”

Natural selection found at work? I suppose so, for as Blakeslee observes with solemn incomprehension, “the creatures mutated but showed only modest increases in complexity.” Which is to say, they showed nothing of interest at all. This is natural selection at work, but it is hardly work that has worked to intended effect.

What these computer experiments do reveal is a principle far more penetrating than any that Darwin ever offered: There is a sucker born every minute….

“Contemporary biology,” [Daniel Dennett] writes, “has demonstrated beyond all reasonable doubt that natural selection— the process in which reproducing entities must compete for finite resources and thereby engage in a tournament of blind trial and error from which improvements automatically emerge— has the power to generate breathtakingly ingenious designs” (italics added).

These remarks are typical in their self-enchanted self-confidence. Nothing in the physical sciences, it goes without saying— right?— has been demonstrated beyond all reasonable doubt. The phrase belongs to a court of law. The thesis that improvements in life appear automatically represents nothing more than Dennett’s conviction that living systems are like elevators: If their buttons are pushed, they go up. Or down, as the case may be. Although Darwin’s theory is very often compared favorably to the great theories of mathematical physics on the grounds that evolution is as well established as gravity, very few physicists have been heard observing that gravity is as well established as evolution. They know better and they are not stupid….

The greater part of the debate over Darwin’s theory is not in service to the facts. Nor to the theory. The facts are what they have always been: They are unforthcoming. And the theory is what it always was: It is unpersuasive. Among evolutionary biologists, these matters are well known. In the privacy of the Susan B. Anthony faculty lounge, they often tell one another with relief that it is a very good thing the public has no idea what the research literature really suggests.

“Darwin?” a Nobel laureate in biology once remarked to me over his bifocals. “That’s just the party line.”

In the summer of 2007, Eugene Koonin, of the National Center for Biotechnology Information at the National Institutes of Health, published a paper entitled “The Biological Big Bang Model for the Major Transitions in Evolution.”

The paper is refreshing in its candor; it is alarming in its consequences. “Major transitions in biological evolution,” Koonin writes, “show the same pattern of sudden emergence of diverse forms at a new level of complexity” (italics added). Major transitions in biological evolution? These are precisely the transitions that Darwin’s theory was intended to explain. If those “major transitions” represent a “sudden emergence of new forms,” the obvious conclusion to draw is not that nature is perverse but that Darwin was wrong….

Koonin is hardly finished. He has just started to warm up. “In each of these pivotal nexuses in life’s history,” he goes on to say, “the principal ‘types’ seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate ‘grades’ or intermediate forms between different types are detectable.”…

[H]is views are simply part of a much more serious pattern of intellectual discontent with Darwinian doctrine. Writing in the 1960s and 1970s, the Japanese mathematical biologist Motoo Kimura argued that on the genetic level— the place where mutations take place— most changes are selectively neutral. They do nothing to help an organism survive; they may even be deleterious…. Kimura was perfectly aware that he was advancing a powerful argument against Darwin’s theory of natural selection. “The neutral theory asserts,” he wrote in the introduction to his masterpiece, The Neutral Theory of Molecular Evolution, “that the great majority of evolutionary changes at the molecular level, as revealed by comparative studies of protein and DNA sequences, are caused not by Darwinian selection but by random drift of selectively neutral or nearly neutral mutations” (italics added)….

Writing in the Proceedings of the National Academy of Sciences, the evolutionary biologist Michael Lynch observed that “Dawkins’s agenda has been to spread the word on the awesome power of natural selection.” The view that results, Lynch remarks, is incomplete and therefore “profoundly misleading.” Lest there be any question about Lynch’s critique, he makes the point explicitly: “What is in question is whether natural selection is a necessary or sufficient force to explain the emergence of the genomic and cellular features central to the building of complex organisms.”

Survival and reproduction depend on many traits. A particular trait, considered in isolation, may seem to be helpful to the survival and reproduction of a group. But that trait may not be among the particular collection of traits that is most conducive to the group’s survival and reproduction. If that is the case, the trait will become less prevalent.

Alternatively, if the trait is an essential member of the collection that is conducive to survival and reproduction, it will survive. But its survival depends on the other traits. The fact that X is a “good trait” does not, in itself, ensure the proliferation of X. And X will become less prevalent if other traits become more important to survival and reproduction.

In any event, it is my view that genetic fitness for survival has become almost irrelevant in places like North America, Europe, and Japan. The rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.

In fact, there is a supportable hypothesis that humans in cosseted realms (i.e., the West) are, on average, becoming less intelligent. But, first, it is necessary to explain why it seemed for a while that humans were becoming more intelligent.

David Robson is on the case:

When the researcher James Flynn looked at [IQ] scores over the past century, he discovered a steady increase – the equivalent of around three points a decade. Today, that has amounted to 30 points in some countries.

Although the cause of the Flynn effect is still a matter of debate, it must be due to multiple environmental factors rather than a genetic shift.

Perhaps the best comparison is our change in height: we are 11cm (around 5 inches) taller today than in the 19th Century, for instance – but that doesn’t mean our genes have changed; it just means our overall health has changed.

Indeed, some of the same factors may underlie both shifts. Improved medicine, reducing the prevalence of childhood infections, and more nutritious diets, should have helped our bodies to grow taller and our brains to grow smarter, for instance. Some have posited that the increase in IQ might also be due to a reduction of the lead in petrol, which may have stunted cognitive development in the past. The cleaner our fuels, the smarter we became.

This is unlikely to be the complete picture, however, since our societies have also seen enormous shifts in our intellectual environment, which may now train abstract thinking and reasoning from a young age. In education, for instance, most children are taught to think in terms of abstract categories (whether animals are mammals or reptiles, for instance). We also lean on increasingly abstract thinking to cope with modern technology. Just think about a computer and all the symbols you have to recognise and manipulate to do even the simplest task. Growing up immersed in this kind of thinking should allow everyone [hyperbole alert] to cultivate the skills needed to perform well in an IQ test….

[Psychologist Robert Sternberg] is not alone in questioning whether the Flynn effect really represented a profound improvement in our intellectual capacity, however. James Flynn himself has argued that it is probably confined to some specific reasoning skills. In the same way that different physical exercises may build different muscles – without increasing overall “fitness” – we have been exercising certain kinds of abstract thinking, but that hasn’t necessarily improved all cognitive skills equally. And some of those other, less well-cultivated, abilities could be essential for improving the world in the future.

Here comes the best part:

You might assume that the more intelligent you are, the more rational you are, but it’s not quite this simple. While a higher IQ correlates with skills such as numeracy, which is essential to understanding probabilities and weighing up risks, there are still many elements of rational decision making that cannot be accounted for by a lack of intelligence.

Consider the abundant literature on our cognitive biases. Something that is presented as “95% fat-free” sounds healthier than “5% fat”, for instance – a phenomenon known as the framing bias. It is now clear that a high IQ does little to help you avoid this kind of flaw, meaning that even the smartest people can be swayed by misleading messages.

People with high IQs are also just as susceptible to the confirmation bias – our tendency to only consider the information that supports our pre-existing opinions, while ignoring facts that might contradict our views. That’s a serious issue when we start talking about things like politics.

Nor can a high IQ protect you from the sunk cost bias – the tendency to throw more resources into a failing project, even if it would be better to cut your losses – a serious issue in any business. (This was, famously, the bias that led the British and French governments to continue funding Concorde planes, despite increasing evidence that it would be a commercial disaster.)

Highly intelligent people are also not much better at tests of “temporal discounting”, which require you to forgo short-term gains for greater long-term benefits. That’s essential, if you want to ensure your comfort for the future.

Besides a resistance to these kinds of biases, there are also more general critical thinking skills – such as the capacity to challenge your assumptions, identify missing information, and look for alternative explanations for events before drawing conclusions. These are crucial to good thinking, but they do not correlate very strongly with IQ, and do not necessarily come with higher education. One study in the USA found almost no improvement in critical thinking throughout many people’s degrees.

Given these looser correlations, it would make sense that the rise in IQs has not been accompanied by a similarly miraculous improvement in all kinds of decision making.

So much for the bright people who promote and pledge allegiance to socialism and its various manifestations (e.g., the Green New Deal, and Medicare for All). So much for the bright people who suppress speech with which they disagree because it threatens the groupthink that binds them.

Robson also discusses evidence of dysgenic effects in IQ:

Whatever the cause of the Flynn effect, there is evidence that we may have already reached the end of this era – with the rise in IQs stalling and even reversing. If you look at Finland, Norway and Denmark, for instance, the turning point appears to have occurred in the mid-90s, after which average IQs dropped by around 0.2 points a year. That would amount to a seven-point difference between generations.
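A quick arithmetic check (mine, not Robson’s) shows the generation length implied by those figures:

```python
# Robson's figures: IQ falling about 0.2 points per year, amounting to a
# seven-point difference between generations. The implied generation
# length follows directly from dividing one by the other.
drop_per_year = 0.2
drop_per_generation = 7.0

generation_years = drop_per_generation / drop_per_year
print(generation_years)  # -> 35.0 years per generation
```

That is, his seven-point figure assumes a generation of about 35 years.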

Psychologist (and intelligence specialist) James Thompson has addressed dysgenic effects at his blog on the website of The Unz Review. In particular, he had a lot to say about the work of an intelligence researcher named Michael Woodley. Here’s a sample from a post by Thompson:

We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.

Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with say, your grandparents.

Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.

It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”. This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.

So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong?

From a later post by Thompson:

The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

Woodley’s claim is based on a set of papers written since 2013, which have been recently reviewed by [Matthew] Sarraf.

The review is unusual, to say the least. It is rare to read so positive a judgment on a young researcher’s work, and it is extraordinary that one researcher has changed the debate about ability levels across generations, and all this in a few years since starting publishing in psychology.

The table in that review which summarizes the main findings is shown below. As you can see, the range of effects is very variable, so my rough estimate of 1 point per decade is a stab at calculating a median. It is certainly less than the Flynn Effect in the 20th Century, though it may now be part of the reason for the falling of that effect, now often referred to as a “negative Flynn effect”….

Here are the findings which I have arranged by generational decline (taken as 25 years).

  • Colour acuity, over 20 years (0.8 generation) 3.5 drop/decade.

  • 3D rotation ability, over 37 years (1.5 generations) 4.8 drop/decade.

  • Reaction times, females only, over 40 years (1.6 generations) 1.8 drop/decade.

  • Working memory, over 85 years (3.4 generations) 0.16 drop/decade.

  • Reaction times, over 120 years (4.8 generations) 0.57-1.21 drop/decade.

  • Fluctuating asymmetry, over 160 years (6.4 generations) 0.16 drop/decade.

Either the measures are considerably different, and do not tap the same underlying loss of mental ability, or the drop is unlikely to be caused by dysgenic decrements from one generation to another. Bar massive dying out of populations, changes do not come about so fast from one generation to the next. The drops in ability are real, but the reasons for the falls are less clear. Gathering more data sets would probably clarify the picture, and there is certainly cause to argue that on various real measures there have been drops in ability. Whether this is dysgenics or some other insidious cause is not yet clear to me.…

My view is that whereas formerly the debate was only about the apparent rise in ability, discussions are now about the co-occurrence of two trends: the slowing down of the environmental gains and the apparent loss of genetic quality. In the way that James Flynn identified an environmental/cultural effect, Michael Woodley has identified a possible genetic effect, and certainly shown that on some measures we are doing less well than our ancestors.

How will they be reconciled? Time will tell, but here is a prediction. I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.
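As a rough companion to Thompson’s list (my illustration, using his stated 25-year generation; the reaction-times entry uses the midpoint of his 0.57-1.21 range), the per-decade drops convert to per-generation drops, and his span-to-generation figures fall out of the same division:

```python
# Convert Thompson's per-decade declines into per-generation declines,
# using his assumed 25-year generation.
GENERATION_YEARS = 25

# (measure, span in years, drop per decade) -- from Thompson's list.
findings = [
    ("Colour acuity",             20,  3.5),
    ("3D rotation ability",       37,  4.8),
    ("Reaction times (females)",  40,  1.8),
    ("Working memory",            85,  0.16),
    ("Reaction times",           120,  0.89),
    ("Fluctuating asymmetry",    160,  0.16),
]

for measure, span_years, drop_per_decade in findings:
    generations = span_years / GENERATION_YEARS            # e.g. 20/25 = 0.8
    drop_per_generation = drop_per_decade * GENERATION_YEARS / 10
    print(f"{measure}: {generations:.1f} generations, "
          f"{drop_per_generation:.2f} points per generation")
```

The spread is as wide as Thompson says: colour acuity loses nearly nine points per generation on this reckoning, working memory less than half a point.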

Here’s my hypothesis: The less-intelligent portions of the populace are breeding faster than the more-intelligent portions. As I said earlier, the rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.

Evolution — in the absence of challenges that ensure survival of the fittest — seems to result in devolution.

Philosophical Musings: Part I

Time, existence, and science.

In this post I use the academic “we”, as opposed to the royal “we” and the politician’s presumptuous “we”.


Before we can consider time and existence, we must consider whether they are illusions.

Regarding time, there’s a reasonable view that nothing exists but the present — the now — or, rather, an infinite number of nows. In the conventional view, one now succeeds another, which creates the illusion of the passage of time. A problem with the conventional view of time is that not everyone perceives the same now. The compilation of a comprehensive now is a practical impossibility, though it could be done in theory. (Einstein’s theories of relativity notwithstanding, the Lorentz transformation restores simultaneity.)

In the view of some physicists, however, all nows exist at once, and we merely perceive sequential slices of them. A problem with the view that all nows exist at once (the “block universe” view) is that it’s purely a mathematical concoction. Given the general agreement as to the contents of each slice, the only (very weak) evidence that many nows exist in parallel is provided by claims about such phenomena as clairvoyance, visions, and co-location. I won’t wander into that thicket.

What distinguishes one now from another now? The answer is change. If things didn’t change, there would be only a now, not an infinite series of them. More precisely, if things didn’t seem to change, time would seem to stand still. This is another way of saying that a succession of nows creates the illusion of the passage of time.

What happens between one now and the next now? Change, not the passage of time. What we think of as the passage of time is really an artifact of change.

Time is really nothing more than the awareness of events that supposedly occur at set intervals — the “ticking” of an atomic clock, for example. I say supposedly because there’s no absolute measure of time against which one can calibrate the “ticking” of an atomic clock, or any other kind of clock.

In summary: Clocks don’t measure time, which is an illusion caused by change. Clocks merely change (“tick”) at supposedly regular intervals, and those intervals are used in the representation of other things, such as the speed of an automobile or the duration of a 100-yard dash.

Change is real. But change in what — of what does reality consist?

There are two basic views of reality. One of them, posited by Bishop Berkeley and his followers, is that the only reality is that which goes on in one’s own mind. But that’s just another way of saying that humans don’t perceive the external world directly. Rather, it is perceived second-hand, through the senses that detect external phenomena and transmit signals to the brain, which is where a person’s “reality” is formed.

The sensible view, held by most humans (even most scientists), is that there is an objective reality out there, beyond the confines of one’s mind. How can so many people agree about the existence of certain things (e.g., Cleveland) if there’s not something out there? Mass psychosis, perhaps? No, because that arises from a desire to believe that a thing exists. Cleveland is real because it is and has been actually experienced by myriad persons. The widespread belief in catastrophic “climate change” is a kind of mass psychosis, triggered by a combination of scientific malfeasance; greed (for research grants and notoriety); politicians’ and bureaucrats’ naïveté and power-lust; and laypersons’ naïveté, virtue-signaling, and conformity to peers’ beliefs.

The big question is how reality came into being. This has been debated for millennia. There are two main schools of thought:

  • Things just exist and have always existed.

  • Things can’t come into existence on their own, so some non-thing must have caused things to exist. The non-thing must necessarily have always existed apart from things; that is, it is timeless and immaterial.

How can the issue be resolved? It can’t be resolved by logic alone, though logic is on the side of the second option. It can’t be resolved by facts because facts are about perceptible things. If it could be resolved by facts, there would be wide agreement about the answer. (Not perfect agreement because many human beings are impervious to facts.)

In sum, existence is a profound mystery.

Why is that? Can’t scientists someday trace the existence of things – call it the universe – back to a source? Isn’t that what the Big Bang Theory is all about? No and no. If the universe has always existed, there’s no source to be tracked down. And if the universe was created by a non-thing, how can scientists detect the non-thing if they’re only equipped to deal with things?

The Big Bang Theory posits a definite beginning, at a more or less definite point in time. But even if the theory is correct, it doesn’t tell us how that beginning began. Did things start from scratch, and if they did, what caused them to do so? And maybe they didn’t; maybe the Big Bang was just the result of the collapse of a previous universe, which was the result of a previous one, etc., etc., etc., ad infinitum. But that gets back to the question of what started it all.

Some scientists who think about such things don’t believe that the universe was created by a non-thing. But they don’t believe it because they don’t want to believe it. The much smaller number of similar scientists who believe that the universe was created by a non-thing hold that belief because they want to hold it, and because logic is on their side.

That’s life in the world of science, just as it is in the world of non-science, where believers, non-believers, and those who can’t make up their minds find all kinds of ways in which to rationalize what they believe (or don’t believe), even though they know less than scientists do about the universe.

Let’s just accept that and move on to another big question: What is it that exists? It’s not “stuff” as we usually think of it – like mud or sand or water droplets. It’s not even atoms and their constituent particles. Those are just convenient abstractions for what seem to be various manifestations of electromagnetic forces, or emanations thereof, such as light.

But what are electromagnetic forces? And what does their behavior (to be anthropomorphic about it) have to do with the way that the things like planets, stars, and galaxies move in relation to one another? Those are more big questions that probably won’t be answered, or answered definitively.

That’s the thing about science: It’s a process, not a particular result. Human understanding of the universe offers a good example. Here’s a short list of beliefs about the universe that were considered true and then rejected:

  • Thales (c. 624 – c. 546 BC): The Earth rests on water.

  • Anaximenes (c. 586 – c. 526 BC): Everything is made of air.

  • Heraclitus (c. 540 – c. 450 BC): All is fire.

  • Empedocles (c. 493 – c. 435 BC): There are four elements: earth, air, fire, and water.

  • Democritus (c. 460 – c. 370 BC): Atoms (basic elements of nature) come in an infinite variety of shapes and sizes.

  • Aristotle (384 – 322 BC): Heavy objects must fall faster than light ones. The universe is a series of crystalline spheres that carry the sun, moon, planets, and stars around Earth.

  • Ptolemy (c. 100 – c. 170 AD): Ditto the Earth-centric universe, with a mathematical description.

  • Copernicus (1473 – 1543): The planets revolve around the sun in perfectly circular orbits.

  • Brahe (1546 – 1601): The planets revolve around the sun, but the sun and moon revolve around Earth.

  • Kepler (1571 – 1630): The planets revolve around the sun in elliptical orbits, and their trajectory is governed by magnetism.

  • Newton (1642 – 1727): The course of the planets around the sun is determined by gravity, which is a force that acts at a distance. Light consists of corpuscles; ordinary matter is made of larger corpuscles. Space and time are absolute and uniform.

  • Rutherford (1871 – 1937), Bohr (1885 – 1962), and others: The atom has a center (nucleus), which consists of two kinds of elementary particles, neutrons and protons.

  • Einstein (1879 – 1955): The universe is neither expanding nor shrinking.

That’s just a small fraction of the mistaken and incomplete theories that have held sway in the field of physics. There are many more such mistakes and lacunae in the other natural sciences: biology, chemistry, and earth science — each of which, like physics, has many branches. And in all of the branches there are many unresolved questions. For example, the Standard Model of particle physics, despite its complexity, is known to be incomplete. And it is thought (by some) to be unduly complex; that is, there may be a simpler underlying structure waiting to be discovered.

Given all of this, it is grossly presumptuous to claim that climate science – to take a salient example — is “settled” when the phenomena that it encompasses are so varied, complex, often poorly understood, and often given short shrift (e.g., the effects of solar radiation on the intensity of cosmic radiation reaching Earth, which affects low-level cloud formation, which affects atmospheric temperature and precipitation).

Anyone who says that any aspect of science is “settled” is either ignorant, stupid, or freighted with a political agenda. Anyone who says that “science is real” is merely parroting an empty slogan.

Matt Ridley (quoted by Judith Curry) explains:

In a lecture at Cornell University in 1964, the physicist Richard Feynman defined the scientific method. First, you guess, he said, to a ripple of laughter. Then you compute the consequences of your guess. Then you compare those consequences with the evidence from observations or experiments. “If [your guess] disagrees with experiment, it’s wrong. In that simple statement is the key to science. It does not make a difference how beautiful the guess is, how smart you are, who made the guess or what his name is… it’s wrong.”…

In general, science is much better at telling you about the past and the present than the future. As Philip Tetlock of the University of Pennsylvania and others have shown, forecasting economic, meteorological or epidemiological events more than a short time ahead continues to prove frustratingly hard, and experts are sometimes worse at it than amateurs, because they overemphasize their pet causal theories….

Peer review is supposed to be the device that guides us away from unreliable heretics. Investigations show that peer review is often perfunctory rather than thorough; often exploited by chums to help each other; and frequently used by gatekeepers to exclude and extinguish legitimate minority scientific opinions in a field.

Herbert Ayres, an expert in operations research, summarized the problem well several decades ago: “As a referee of a paper that threatens to disrupt his life, [a professor] is in a conflict-of-interest position, pure and simple. Unless we’re convinced that he, we, and all our friends who referee have integrity in the upper fifth percentile of those who have so far qualified for sainthood, it is beyond naive to believe that censorship does not occur.” Rosalyn Yalow, winner of the Nobel Prize in medicine, was fond of displaying the letter she received in 1955 from the Journal of Clinical Investigation noting that the reviewers were “particularly emphatic in rejecting” her paper.

The health of science depends on tolerating, even encouraging, at least some disagreement. In practice, science is prevented from turning into religion not by asking scientists to challenge their own theories but by getting them to challenge each other, sometimes with gusto.

As I said, there is no such thing as “settled science”. Real science is a vast realm of unsettled uncertainty. Newton put it thus:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

Certainty is the last refuge of a person whose mind is closed to new facts and new ways of looking at old facts.

How uncertain is the real world, especially the world of events yet to come? Consider a simple, three-event model in which event C depends on the occurrence of event B, which depends on the occurrence of event A; in which the value of the outcome is the sum of the values of the events that occur; and in which the value of each event is binary – a value of 1 if it happens, 0 if it doesn’t happen. Even in a simple model like that, there is a wide range of possible outcomes; thus:

  • A doesn’t occur (B and C therefore don’t occur) = 0.

  • A occurs but B fails to occur (and C therefore doesn’t occur) = 1.

  • A occurs, B occurs, but C fails to occur = 2.

  • A occurs, B occurs, and C occurs = 3.

Even when A occurs, subsequent events (or non-events) will yield final outcomes ranging in value from 1 to 3. A factor of 3 is a big deal. It’s why .300 hitters make millions of dollars a year and .100 hitters sell used cars.
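The four outcomes can be enumerated mechanically; here is a minimal sketch (my illustration, not part of the original) of the chained-event model:

```python
from itertools import product

# Enumerate the chained-event model: B can occur only if A occurred,
# and C only if B occurred. The outcome's value is the number of
# events that actually occur.
outcomes = set()
for a, b, c in product([0, 1], repeat=3):
    b = b and a   # B depends on A
    c = c and b   # C depends on B
    outcomes.add(a + b + c)

print(sorted(outcomes))  # -> [0, 1, 2, 3]
```

Eight nominal combinations collapse to just four outcomes once the dependencies are enforced, which is the point: even a trivially small chain of contingencies spreads the possible results across a threefold range.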

Let’s leave it at that and move on.

The White House Brochures on Climate Change

Down the memory hole — well, not quite.

BACKGROUND

Roy Spencer posted this at Roy Spencer, Ph.D. on January 8, 2021:

White House Brochures on Climate (There is no climate crisis)

January 8th, 2021 by Roy W. Spencer, Ph. D.

Late last year, several of us were asked by David Legates (White House Office of Science and Technology Policy) to write short, easily understandable brochures that supported the general view that there is no climate crisis or climate emergency, and pointing out the widespread misinformation being promoted by alarmists through the media.

Below are the resulting 9 brochures, and an introduction by David. Mine is entitled “The Faith-Based Nature of Human-Caused Global Warming”.

David hopes to be able to get these posted on the White House website by January 20 (I presume so they will become a part of the outgoing Administration’s record) but there is no guarantee given recent events.

He said we are free to disseminate them widely. I list them in no particular order. We all thank David for taking on a difficult job in more hostile territory than you might imagine.

Introduction (Dr. David Legates)

The Sun Climate Connection (Drs. Michael Connolly, Ronan Connolly, Willie Soon)

Systematic Problems in the Four National Assessments of Climate Change Impacts on the US (Dr. Patrick Michaels)

Record Temperatures in the United States (Dr. John Christy)

Radiation Transfer (Dr. William Happer)

Is There a Climate Emergency? (Dr. Ross McKitrick)

Hurricanes and Climate Change (Dr. Ryan Maue)

Climate, Climate Change, and the General Circulation (Dr. Anthony Lupo)

Can Computer Models Predict Climate? (Dr. Christopher Essex)

The Faith-Based Nature of Human-Caused Global Warming (Dr. Roy Spencer)

Spencer followed up with this post on January 12, 2021:

At the White House, the Purge of Skeptics Has Started

January 12th, 2021 by Roy W. Spencer, Ph. D.

Dr. David Legates has been Fired by White House OSTP Director and Trump Science Advisor, Kelvin Droegemeier

[Image of the seal of the Executive Office of the President]

President Donald Trump has been sympathetic with the climate skeptics’ position, which is that there is no climate crisis, and that all currently proposed solutions to the “crisis” are economically harmful to the U.S. specifically, and to humanity in general.

Today I have learned that Dr. David Legates, who had been brought to the Office of Science and Technology Policy to represent the skeptical position in the Trump Administration, has been fired by OSTP Director and Trump Science Advisor, Dr. Kelvin Droegemeier.

The event that likely precipitated this is the invitation by Dr. Legates for about a dozen of us to write brochures that we all had hoped would become part of the official records of the Trump White House. We produced those brochures (no funding was involved), and they were formatted and published by OSTP, but not placed on the WH website. My understanding is that David Legates followed protocols during this process.

So What Happened?

What follows is my opinion. I believe that Droegemeier (like many in the administration with hopes of maintaining a bureaucratic career in the new Biden Administration) has turned against the President for political purposes and professional gain. If Kelvin Droegemeier wishes to dispute this, let him… and let’s see who the new Science Advisor/OSTP Director is in the new (Biden) Administration.

I would also like to know if President Trump approved of his decision to fire Legates.

In the meantime, we have been told to remove links to the brochures, which is the prerogative of the OSTP Director since they have the White House seal on them.

But their content will live on elsewhere, as will Dr. Droegemeier’s decision.

I have saved the ten brochures in their original (.pdf) format. The following links to the files are listed in the order in which Dr. Spencer listed them in his post of January 8, 2021:

Introduction

The Sun Climate Connection

Systematic Problems in the Four National Assessments of Climate Change Impacts on the US

Record Temperatures in the United States

Radiation Transfer

Is There a Climate Emergency?

Hurricanes and Climate Change

Climate, Climate Change, and the General Circulation

Can Computer Models Predict Climate?

The Faith-Based Nature of Human-Caused Global Warming

U.S. Supreme Court: Lines of Succession

From the beginning to the present.

Though there are now only nine seats on the U.S. Supreme Court, the tables below list eleven lines of succession. There is one for the chief justiceship and ten for the associate justiceships that Congress has created at one time or another as it has changed the size of the Court. In other words, two associate justiceships have “died out” in the course of the Court’s history. The present members of the Court, in addition to the chief justice, hold the first, second, third, fourth, sixth, eighth, ninth, and tenth associate justiceships created by Congress.

Reading across, there is a column for each president, a column for each chief justice, and columns for the ten associate justiceships. Justices who have held each seat are listed in chronological order, beginning with the justices nominated by the heroic George Washington and ending with the justices nominated by the lying, fear-mongering, chameleon-like Joe Whatshisname.

There are two horizontal divisions. The first, indicated by double red lines, delineates presidencies. The beginning of every justice’s term is associated with the president who nominated that person to a seat on the Court. The end of each justice’s term is associated with the president who was in office when the justice’s term ended by resignation or death.

The second horizontal division, indicated by alternating bands of gray and white, delineates chief justiceships. Thus the reader can see which justices served with a particular chief justice. The “Roberts Court”, for example, has thus far included Roberts and — in order of ascension to the Court — Stevens, O’Connor, Scalia, Kennedy, Souter, Thomas, Ginsburg, Breyer, Alito, Sotomayor, Kagan, Gorsuch, Kavanaugh, Barrett, and Brown-Jackson.

Because there is a separate line of succession for the chief justiceship, persons who were already on the Court and then elevated to the chief justiceship are listed in two different places. Also, the names of a few other justices appear in more than one place because they served non-consecutive terms on the Court.

The table is divided into three parts for ease of reading. (Zoom in if the type is too small for you.) Part I covers the chief justiceship (currently Roberts) and associate justice positions 1-3 (currently Sotomayor, Brown-Jackson, and Kavanaugh). Part II covers associate justice positions 4-7 (currently Kagan, 4; Barrett, 6). Part III covers associate justice positions 8-10 (currently Alito, Gorsuch, and Thomas).

Part I

Part II

Part III

Looking for Something?

I have moved many posts and pages to my new blog, Loquitur’s Letter. You may find what you’re looking for at my list of favorite posts.

Reflections on Aging

It’s better than the alternative — so far.

Aging is of interest to me because I am among the oldest five percent of Americans. Not that I feel old — I don’t — but objectively I am old.

I am probably also among the more solitary of Americans. But I am not lonely in my solitude, for it is and long has been of my own choosing.

This is so because of my strong introversion. I suppose that the seeds of my introversion are genetic, but the symptoms didn’t appear in earnest until I was in my early thirties. After that I became steadily more focused on a few friendships (which eventually dwindled to none) and decidedly uninterested in the aspects of work that required more than brief meetings (one-on-one preferred). Finally, enough became more than enough and I quit full-time work at the age of fifty-six. There followed, a few years later, a stint of part-time work that also became more than enough. And so, at the age of fifty-nine, I banked my final paycheck. Happily.

What does my introversion have to do with my aging? I suspected that my continued withdrawal from social intercourse (more about that, below) might be a symptom of aging. And I found this, in the Wikipedia article “Disengagement Theory”:

The disengagement theory of aging states that “aging is an inevitable, mutual withdrawal or disengagement, resulting in decreased interaction between the aging person and others in the social system he belongs to”. The theory claims that it is natural and acceptable for older adults to withdraw from society….

Disengagement theory was formulated by [Elaine] Cumming and [William Earl] Henry in 1961 in the book Growing Old, and it was the first theory of aging that social scientists developed….

The disengagement theory is one of three major psychosocial theories which describe how people develop in old age. The other two major psychosocial theories are the activity theory and the continuity theory, and the disengagement theory [is at] odds with both.

The continuity theory

states that older adults will usually maintain the same activities, behaviors, relationships as they did in their earlier years of life. According to this theory, older adults try to maintain this continuity of lifestyle by adapting strategies that are connected to their past experiences [whatever that means].

I don’t see any conflict between the continuity theory and the disengagement theory. A strong introvert like me, for example, finds it easy to maintain the same activities, behaviors, and relationships as I did before I retired. Which is to say that I had begun minimizing my social interactions before retiring, and continued to do so after retiring.

What about the activity theory? Well, it’s a normative theory, unlike the other two (which are descriptive), and it goes like this:

The activity theory … proposes that successful aging occurs when older adults stay active and maintain social interactions. It takes the view that the aging process is delayed and the quality of life is enhanced when old people remain socially active.

That’s just a social worker’s view of “appropriate” behavior for older persons. Take my word for it, introverts don’t need social activity, which is stressful for them, and they resent those who try to push them into it. The life of the mind is far more rewarding than chit-chat and bingo with geezers.

The life of the mind is certainly more rewarding than “social media”. My use of that peculiar institution was limited to Facebook. And my use of it dwindled from occasional to never a few years ago. And there it will stay.

Anyway, I mentioned my continued withdrawal from social intercourse. A particular, recent instance of withdrawal sparked this post. For about fifteen years I corresponded regularly with a former colleague. He has a malady that I have dubbed email-arrhea: several messages a day (links and jokes, nothing original) to a large mailing list, with many insipid replies from recipients who choose “reply all”. Enough of that finally became too much, and I declared to him my intention to refrain from correspondence until … whenever. (“Don’t call me, I’ll call you.”) So all of his messages and those of his other correspondents were dumped automatically into my email trash folder. He finally got the message, so to speak, and quit transmitting.

My withdrawal from that particular mode of social intercourse was eased by the fact that the correspondent is a “collaborator” with a deep-state mindset. So it was satisfying to terminate our relationship — and to devote more time to things that I enjoy, like blogging.

Daylight Saving Time Doesn't Kill

But “springing forward” does.

It’s almost time to “fall back”, which reminds me of the perennial controversy about daylight saving time (that’s “saving” not “savings”). The main complaint seems to be the stress that results from moving clocks ahead in March:

Springing forward may be hazardous to your health. The Monday following the start of daylight saving time (DST) is a particularly bad one for heart attacks, traffic accidents, workplace injuries and accidental deaths. Now that most Americans have switched their clocks an hour ahead, studies show many will suffer for it.

Most Americans slept about 40 minutes less than normal on Sunday night, according to a 2009 study published in the Journal of Applied Psychology…. Since sleep is important for maintaining the body’s daily performance levels, much of society is broadly feeling the impact of less rest, which can include forgetfulness, impaired memory and a lower sex drive, according to WebMD.

One of the most striking effects of this annual shift: Last year, Colorado researchers reported finding a 25 percent increase in the number of heart attacks that occur on the Monday after DST starts, as compared with a normal Monday…. A cardiologist in Croatia recorded about twice as many heart attacks as expected during that same day, and researchers in Sweden have also witnessed a spike in heart attacks in the week following the time adjustment, particularly among those who were already at risk.

Workplace injuries are more likely to occur on that Monday, too, possibly because workers are more susceptible to a loss of focus due to too little sleep. Researchers at Michigan State University used over 20 years of data from the Mine Safety and Health Administration to determine that three to four more miners than average sustain a work-related injury on the Monday following the start of DST. Those injuries resulted in 2,649 lost days of work, which is a 68 percent increase over the hours lost from injuries on an average day. The team found no effects following the nation’s one-hour shift back to standard time in the fall….

There’s even more bad news: Drivers are more likely to be in a fatal traffic accident on DST’s first Monday, according to a 2001 study in Sleep Medicine. The authors analyzed 21 years of data on fatal traffic accidents in the U.S. and found that, following the start of DST, drivers are in 83.5 accidents as compared with 78.2 on the average Monday. This phenomenon has also been recorded in Canadian drivers and British motorists.

If all that wasn’t enough, a researcher from the University of British Columbia who analyzed three years of data on U.S. fatalities reported that accidental deaths of any kind are more likely in the days following a spring forward. Their 1996 analysis showed a 6.5 percent increase, which meant that about 200 more accidental deaths occurred immediately after the start of DST than would typically occur in a given period of the same length.

I’m convinced. But the solution to the problem isn’t to get rid of DST. No, the solution is to get rid of standard time and use DST year-around.

I’m not arguing for year-around DST from an economic standpoint. The evidence about the economic advantages of DST is inconclusive.

I’m arguing for year-around DST as a way to eliminate “spring forward” stress and enjoy an extra hour of daylight in the winter.

Don’t you enjoy those late summer sunsets? I sure do, and a lot of other people seem to enjoy them, too. That’s why daylight saving time won’t be abolished.

But if you love those late summer sunsets, you should also enjoy an extra hour of daylight at the end of a drab winter day. I know that I would. And it’s not as if you’d miss anything if sunrise occurs an hour later in the winter, as it would with DST. Even with standard time, most working people and students have to be up and about before winter sunrise.

How would year-around DST affect you? The following table gives the times of sunrise and sunset on the longest and shortest days for nine major cities, north to south and west to east:

I report, you decide. If it were up to me, the decision would be year-around DST. I hate “spring forward”.
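The clock arithmetic behind the table is simple: year-around DST just adds one hour to every standard-time reading, winter included. Here is a minimal Python sketch (the sun times below are hypothetical, not taken from the table):

```python
from datetime import datetime, timedelta

def to_year_round_dst(standard_time: str) -> str:
    """Shift a standard-time clock reading ("HH:MM", 24-hour)
    forward one hour, as year-around DST would."""
    t = datetime.strptime(standard_time, "%H:%M")
    return (t + timedelta(hours=1)).strftime("%H:%M")

# Hypothetical winter-solstice sun times for a northern city, standard time:
print(to_year_round_dst("07:25"))  # sunrise becomes 08:25
print(to_year_round_dst("16:35"))  # sunset becomes 17:35
```

The point of the exercise: the “extra” evening hour in winter comes entirely out of the dark early morning, when most working people and students are up and about anyway.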

The Bitter Fruits of America's Disintegration

The lunatics are in charge of the asylum, and have set it on fire.

Almost 250 years ago, a relatively small but determined band of revolutionaries overthrew British rule of the colonies that became known as the United States of America. The act of defying the Crown and establishing a new government was an open conspiracy, but it was nevertheless a conspiracy because it arose from “an agreement to perform together … [a] subversive act.” That conspiracy, of course, was the American Revolution.

Now, twelve score and six years since that conspiracy was announced to the world in the Declaration of Independence, the resulting polity — the United States of America — is approaching a crisis that is the result of another conspiracy, which I have described here.

What the conspirators seek is a secular theocracy, in which they are the high priests and theologians. If that reminds you of Mussolini’s Italy, Hitler’s Germany, the USSR, Communist China, and similar regimes, that’s because it’s of the same ilk: leftism.

Leftists have a common trait: wishful thinking. Thomas Sowell calls it the unconstrained vision; I call it the unrealistic vision. It’s also known as magical thinking, in which “ought” becomes “is” and the forces of nature and human nature can be held in abeyance by edict; for example:

  • California wildfires caused by misguided environmentalism.

  • The excremental wasteland that is San Francisco. (And Blue cities, generally, because of the encouragement of homelessness.)

  • The killing of small businesses, especially restaurants, by minimum wage laws.

  • The killing of jobs for people who need them the most, by ditto.

  • Bloated pension schemes for Blue-State (and city) employees, which are bankrupting those States (and cities) and penalizing their citizens who aren’t government employees.

  • The idea that men can become women and should be allowed to compete with women in athletic competitions because the men in question have endured some surgery and taken some drugs.

  • The idea that it doesn’t and shouldn’t matter to anyone that a self-identified “woman” uses women’s rest-rooms where real women and girls become prey for prying eyes and worse.

  • Mass murder on a Hitlerian-Stalinist scale in the name of a “woman’s right to choose”, when she made that choice (in almost every case) by engaging in consensual sex.

  • Disrespect for and attacks on the police and military personnel who keep the spoiled children of capitalism safe in their cosseted existences.

  • The under-representation of women and blacks in certain fields is due to rank discrimination, not genetic differences (but it’s all right if blacks dominate certain sports and women now far outnumber men on college campuses).

  • Peace can be had without preparedness for war.

  • Regulation doesn’t reduce the rate of economic growth and foster “crony capitalism”.

  • The cost of health care will go down while the number of mandates is increased.

  • Every “right” under the sun can be granted without cost (e.g., affirmative action racial-hiring quotas, which penalize blameless whites; the Social Security Ponzi scheme, which burdens today’s workers and cuts into growth-inducing saving).

Closely related to magical thinking are the nirvana fallacy (hypothetical perfection always seems better than feasible reality), large doses of neurotic hysteria (e.g., the overpopulation fears of Paul Ehrlich, the AGW hoax of Al Gore et al.), and rampant adolescent rebelliousness (e.g., instant protests about everything, the post-election tantrum-riots of 2016).

But to say any of the foregoing about the left’s agenda, the assumptions and attitudes underlying it, the left’s strategic and tactical methods, or the psychological underpinnings of leftism, is to be “hateful”. (In my observation, nothing is more full of hate than a leftist who has been contradicted or thwarted.) So, through the magic of psychological projection, those who dare speak the truth about leftism are called “haters”, “racists”, “fascists”, “Nazis”, and other things that apply to leftists themselves.

Labeling anti-leftists as evil “justifies” the left’s violent enforcement of its agenda. The violence takes many forms, from riots (as in the George Floyd “protests”), to suppression by force (e.g., Stalin’s war on the Cossacks), to genocide (e.g., the Holocaust), to overtly peaceful but coercive state action (e.g., forced unionization of American industry, the J6 committee’s Stalinesque “show trial”, suppression of religious liberty and freedom of association in the name of same-sex “marriage”, and the vast accumulation of economic regulations).

In a word: disintegration.

THE “GREATEST GENERATION” AND THE WASP ESTABLISHMENT SET THE STAGE FOR DISINTEGRATION

Every line of human endeavor reaches a peak, from which decline is sure to follow if the things that caused it to peak are mindlessly rejected for the sake of novelty (i.e., rejection of old norms just because they are old). This is nowhere more obvious than in the arts.

I have written elsewhere that 1963 (or thereabouts) was a “year zero” in American history. It was then that the post-World War II promise of social and economic progress, built on a foundation of unity (or as much of it as a heterogeneous nation is likely to muster), began to crumble.

At first, the “adults in the room” forgot their main duty: to be exemplars for the next generation.

As I wrote here,

the world in which we live … seems more and more to resemble the kind of world in which parents have failed in their duty to inculcate in their children the values of honesty, respect, and hard work….

I subscribe to the view that the rot set in after World War II….

The[] sudden emergence [in the 1960s of “campus rebels”] was due to the failure of too many members of the so-called Greatest Generation to inculcate in their children the values of honesty, respect, and hard work. How does one do that? By being clear about expectations and by setting limits on behavior — limits that are enforced swiftly, unequivocally, and sometimes with the palm of a hand. When children learn that they can “get away” with dishonesty, disrespect, and sloth, guess what? They become dishonest, disrespectful, and slothful. They give vent to their disrespect through whining, tantrum-like behavior, and even violence.

But the rot goes deeper than that. Wallace S. Moyle, writing in The New Criterion (“Facing the Music: On the Decline of the WASP Establishment”, September 2022), takes Averell Harriman as an exemplar of his class:

[H]eir to an immense railroad fortune, a polo champion, and a Groton graduate, Harriman had been the first man in his class tapped for Yale’s Skull and Bones society…. Harriman … founded Brown Brothers Harriman, implemented Roosevelt’s National Industrial Recovery Act, and served as the U.S. ambassador to the Soviet Union. Later, he was elected governor of New York and advised Presidents Kennedy, Johnson, and Carter.

… E. Digby Baltzell, the preeminent exponent of the “Protestant Establishment” and the coiner of the term “wasp,” lamented in the early 1980s that the United States no longer boasted patricians with the moral authority to keep down the McCarthyite rabble, then newly ascendant, in Baltzell’s depressingly conventional opinion, in the figure of Ronald Reagan.

Harriman at the same time deplored Reagan as much as Baltzell did. In 1983, at the age of ninety-one, he traveled to Moscow just to reassure Communist Party General Secretary Yuri Andropov that not all Americans saw the Soviet Union as an “Evil Empire.” Before Reagan emerged, Harriman’s bête noire was Richard Nixon, whose success in exposing communists in the U.S. government was unforgiveable. Mercifully for their poor, quivering souls, neither Baltzell nor Harriman lived long enough to witness the rise of Donald Trump….

Exactly what Harriman and his brethren stood for politically is not easy to discern. Harriman’s politics shifted throughout his life. He voted for Harding in 1920. It was only at the urging of his fashionably engagée sister that he even entered the Roosevelt administration. Harriman went from Cold War hawk in 1946 and champion of George Kennan’s long telegram to dove in the aftermath of Vietnam. Unable to identify what principles wasp leaders stood for, their defenders frequently praise their dedication to what they call “service.”… [Harriman] sniffed at men who “didn’t do a damn thing.” “Service,” “giving back,” and “doing things”: these of course are polite euphemisms for exercising power. If power had to be wielded at all, America’s mid-twentieth-century Wise Men reasoned, it naturally ought to be in their hands.

Their upbringing made that an easy assumption. By the early twentieth century, the American upper class had constructed a cursus honorum as rigid as that faced by any Roman senator’s son….

The system was designed to produce custodians rather than leaders….

Having thrived in a system that rewarded conformity, the final generation of wasp leaders lacked what George H.W. Bush, the acknowledged last of their breed, called “the vision thing.”… When in the mid-1960s a guest expressed support for Vietnam protestors, Harriman denounced her as a traitor. Just a few years later, he was joining them….

From time to time, outsiders would plead with the Protestant Establishment to recover some moral fortitude. In God and Man at Yale, William F. Buckley Jr., a Catholic, warned that Yale was not only failing to uphold what Buckley called “individualism” (i.e., the free-enterprise system) and Christianity, but was also actively undermining them. For his trouble, Yale’s leaders denounced him as a reactionary bigot. McGeorge Bundy—yet another Bonesperson—called the book “dishonest in its use of facts, false in its theory, and a discredit to its author.” As a national security advisor in the Vietnam era, Bundy would go on to become literally the textbook example of policy failure. Later in life, he defended the spread of affirmative action….

[T]he wasps seemed to have erected institutions that uniquely selected for men who, as baseball scouts used to say, looked good in a uniform. Harriman may have earned middling grades and may not have been able to speak a single foreign language, but from adolescence on he looked the part of an ambassador. Harriman thus rose to the top of his Yale class. Thirty years later, he was negotiating with Stalin on behalf of the United States.

The 1960s generation is often blamed for contemporary woes. But it was the last generation of wasps that set in motion the forces that, as Buckley predicted, would lead the United States to ruin. Affirmative action, the tolerance of vagrancy (redubbed “homelessness” in the Lindsay era), the dishonoring of Christianity in public life, living constitutionalism in law, the ever-spreading blight of modern architecture, and the sacking of our cities by criminals: all of these features of the American regime were instituted by wasp patricians. America may have won the Cold War against communism, but within a generation it has fallen to a woke Marxian regime of its own making.

The wasp’s ancestors created the freest, most prosperous nation in history. By the time of the Protestant Establishment’s fading, its luminaries had left a nation ugly, depraved, and enthralled. They received a goodly heritage and squandered it.

THE NEW “ESTABLISHMENT” BECOMES THE ENEMY

The “establishment” has diversified over the years. As the “old boys” of Harriman’s generation died off, they were replaced by new men — and women. What we have is a new “elite” that has found a new way to distinguish itself from the “masses”.

Richard Hanania analyzes the phenomenon:

The American culture war is part of a global trend. The German far right marches against covid restrictions and immigration. In France, Le Pen wins the countryside and gets crushed in urban centers. Throughout the developed world you see the same cleavages opening up, with an educated urban elite that is more likely to support left-wing parties, and an exurban and rural populist backlash that looks strikingly similar across different societies….

  1. Increasing wealth causes class differentiation and segregation. One thing people with money buy is separation from poor people or others not like them, while assortative mating moves these trends along.

  2. With modern communications technology and women playing a larger role in intellectual life, genetic (i.e., true) explanations of class differentiation are disfavored, as is anything that would blame the poor or otherwise unfortunate for their own problems [i.e., leftist condescension].

  3. Despite social desirability bias leading to the triumph of egalitarian ideologies, the natural tendency towards a kind of class consciousness does not go away. The higher class therefore becomes more strenuous in defining itself as aesthetically and morally superior to the lower classes….

  4. The more egalitarian the official ideology, the harder the upper class has to work to find some other grounds on which to differentiate itself from the masses, leading to an exaggeration of the moral differences between the two tribes….

Thence the use of governmental power — directly and indirectly — to impose the left’s ideology on the “masses”. There is a government-corporate-technology-media-academic complex that moves together not just in matters of military spending or foreign policy, but in matters fundamental to the daily lives and livelihoods of Americans — “climate change”, energy policy, gender identity, the definition of marriage, immigration policy, the treatment of criminals, and much more. The approved positions on such matters are leftist, of course, and so the new establishment consists almost entirely of persons, corporations, foundations, and think-tanks that are effectively organs of the Democrat Party.

Thus did the establishment — old and new — allow, encourage, and abet the disintegration of America that is now in full spate.

In the remaining sections of this post I will trace a few of the symptoms and consequences of disintegration: military failure, economic rot, and the rise of pseudo-science in the service of leftist causes. There’s no need for me to say any more about social disintegration, the evidence of which is everywhere to be seen.

MILITARY FAILURE AS A SYMPTOM OF NATIONAL ROT

A critical element of America’s disintegration has been the unalloyed record of military futility and defeat since the end of World War II. No amount of belligerent talk can compensate for the fact that the enemies of America see that — with the exception of the Reagan and Trump years — America’s defense policy is to balk at doing what must be done to win, to disarm at the first hint of “peace”, and then fail to rearm quickly enough to prevent the next war.

The record of futility and fecklessness actually began at the end of World War II when an enfeebled FDR, guided by the Communists in his administration, gave away Eastern Europe to Stalin. The giveaway was unnecessary. The U.S. had been relatively unscathed by the war; the Soviet Union’s losses in life, property, and industrial capacity had been devastating. The U.S. (with Britain) was in a position to dictate to Stalin.

The Korean War was unnecessary, in that it was invited by the Truman administration’s policies: exclusion of Korea from the Asian defense perimeter (announced by another “old boy”) and massive cuts in the U.S. defense budget. But it was essential to defend South Korea so that the powers behind North Korea (Communist China and, by extension, the USSR) would grasp the willingness of the U.S. to maintain a forward defensive posture against aggression. That signal was blunted by Truman’s decision to sack MacArthur when the general persisted in his advocacy of attacking Chinese bases following the entry of China into the war. The end result was a stalemate, where a decisive victory might have broken the back of communistic adventurism around the globe. The Korean War, as it was fought by the U.S., became “a war to foment war”.

Anti-war propaganda disguised as journalism helped to snatch defeat from the jaws of victory in Vietnam. What was shaping up as a successful military campaign collapsed under the weight of the media’s overwrought and erroneous depiction of the Tet offensive as a Vietcong victory, the bombing of North Vietnam as “barbaric” (where the Tet offensive was given a “heroic cast”), and the deaths of American soldiers as somehow “in vain”, though many more deaths a generation earlier had not been in vain. (What a difference there was between Edward R. Murrow and Walter Cronkite and his sycophants.) Unlike in Korea, U.S. forces were withdrawn from Vietnam, and it took little time for North Vietnam to swallow South Vietnam.

The Gulf War of 1990-91 began with Saddam Hussein’s invasion of oil-rich Kuwait. U.S. action to repel the invasion was fully justified by the potential economic effects of Saddam’s capture of Kuwait’s petroleum reserves and oil production. The proper response to Saddam’s aggression would have been not only to defeat the Iraqi army but also to depose Saddam. The failure to do so further reinforced the pattern of compromise and retreat that had begun at the end of World War II, and necessitated the long, contentious Iraq War of the 2000s.

The quick victory in Iraq, coupled with the coincidental end of the Cold War, helped to foster a belief that the peace had been won. (That belief was given an academic imprimatur in Francis Fukuyama’s The End of History and the Last Man.) The stage was set for Clinton’s much-ballyhooed fiscal restraint, which was achieved by cutting the defense budget. Clinton’s lack of resolve in the face of terrorism underscored the evident unwillingness of American “leaders” to defend Americans’ interests, thus inviting 9/11.  (For more about Clinton’s foreign and defense policy, go here and scroll down to the section on Clinton.)

What can be said about the wars in Iraq and Afghanistan of 2001-2021 but that they were conducted in the same spirit as the wars in Korea and Vietnam and the earlier war in Iraq? Rather than reproduce a long post that I wrote at the mid-point of the futile, post-9/11 wars, I will point you to it: “The War on Terror as It Should Have Been Fought”. Subsequent events — and especially Biden’s disgraceful bugout from Afghanistan — only underscore the main point of that post: Going to war and failing to win only encourages America’s enemies.

The war in Ukraine is a costly sideshow that detracts from the ability of the U.S. to prepare for a real showdown with Russia, China, Iran, and North Korea — a showdown that has been made more likely by the rush to arrange an unnecessary confrontation with Putin. There are, in fact, good reasons to believe that (a) he is actually trying to protect Russia and Russians and (b) he has the facts of history on his side.

The axis of China, Russia, Iran, and North Korea can play the “long game”, which the U.S. and the West demonstrably cannot do because of their political systems and thrall to “public (elite) opinion”. By the time the axis is ready to bring the West to its knees, an outright attack of some kind probably won’t be necessary, as Putin has shown by cutting off vital fuel supplies to western Europe.

The only way to ensure that the U.S. isn’t cowed by the axis is to arm to the teeth, have a leader with moral courage, and dare the axis to harm vital U.S. interests. What is more likely to happen, given America’s present course, is a de facto surrender by the U.S. (and the West) — marked by significant concessions on trade and the scope of military operations and influence.

America — once an impregnable fortress — is on a path to becoming an isolated, subjugated, and exploited colony of the axis.

ECONOMIC ROT

The wisepersons who wrought America’s military decline are of the same breed as those who wrought its economic decline. In the first instance they rushed into wars that they were not willing to see through to victory. In the second instance they rushed into policy-making whose economic consequences they could have foreseen if they hadn’t been preoccupied with “social justice” and similar hogwash.

America’s economic rot can be traced to the early 1900s, when the toll of Progressivism (the original brand) began to be felt. It is no coincidence that a leading Progressive of the time was Teddy Roosevelt, a card-carrying member of the old establishment.

Consider the following graph, which is derived from estimates of constant-dollar GDP per capita that are available here:

There are four eras, as shown by the legend (1942-1946 omitted because of the vast economic distortions caused by World War II):

  • 1866-1907 — annual growth of 2.0 percent — A robust economy, fueled by (mostly) laissez-faire policies and the concomitant rise of industry, mass production, technological innovation, and entrepreneurship.

  • 1908-1941 — annual growth of 1.4 percent — A dispirited economy, shackled by the fruits of “progressivism”; for example, trust-busting; the onset of governance through regulation; the establishment of the income tax; the creation of the destabilizing Federal Reserve; and the New Deal, which prolonged the Great Depression.

  • 1947-2007 — annual growth of 2.2 percent — A rejuvenated economy, buoyed by the end of the New Deal and the fruits of advances in technology and business management. The rebound in the rate of growth meant that the earlier decline wasn’t the result of an “aging” economy, which is an inapt metaphor for a living thing that is constantly replenished with new people, new capital, and new ideas.

  • 2008-2021 — annual growth of 1.0 percent — An economy sagging under the cumulative weight of the fruits of “progressivism” (old and new); for example, the never-ending expansion of Medicare, Medicaid, and Social Security; and an ever-growing mountain of regulatory restrictions on business. (In a similar post, which I published in 2009, I wrote presciently that “[u]nless Obama’s megalomaniacal plans are aborted by a reversal of the Republican Party’s fortunes, the U.S. will enter a new phase of economic growth — something close to stagnation”.)

Had the economy of the U.S. not been deflected from the course that it was on from 1866 to 1907, per capita GDP would now be about 1.4 times its present level. Compare the position of the dashed green line in 2021 — $83,000 — with per capita GDP in that year — $58,000.

If that seems unbelievable to you, it shouldn’t. A growing economy is a kind of compound-interest machine; some of its output is invested in intellectual and physical capital that enables the same number of workers to produce more, better, and more varied products and services. (More workers, of course, will produce even more products and services.) As the experience of 1947-2007 attests, nothing other than government interventions (or a war far more devastating to the U.S. than World War II) could have kept the economy from growing along the path of 1866-1907. (I should add that economic growth in 1947-2007 would have been even greater than it was but for the ever-rising tide of government interventions.)
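The compounding at work here can be sketched in a few lines of Python. The starting value is hypothetical and the century horizon is a round number; only the two era growth rates are taken from the figures above:

```python
# Minimal sketch of the "compound-interest machine": a small difference in
# annual growth compounds into a large gap over a century.
# The starting value (10,000) is hypothetical; the rates are the era
# averages cited in the text (2.0 percent and 1.4 percent).

def compound(start, rate, years):
    """Value of `start` growing at `rate` per year for `years` years."""
    return start * (1 + rate) ** years

start = 10_000  # hypothetical per capita GDP in year 0

fast = compound(start, 0.020, 100)  # the 1866-1907 pace
slow = compound(start, 0.014, 100)  # the 1908-1941 pace

print(round(fast))            # 72446
print(round(slow))            # 40160
print(round(fast / slow, 2))  # 1.8
```

A gap of only 0.6 percentage point per year leaves the slower economy barely more than half as rich after a hundred years, which is why seemingly modest differences in era growth rates can produce enormous cumulative losses.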

The sum of the annual gaps between what could have been (the dashed green line) and the reality after 1907 (omitting 1942-1946) is almost $700,000 — that’s per person in 2012 dollars. It’s $800,000 per person in 2021 dollars, and even more in 2022 dollars.

That cumulative gap represents our mega-depression.

I have identified the specific causes of the mega-depression elsewhere. They are — unsurprisingly — government spending as a fraction of GDP, government regulatory activity, reductions in private business investment (resulting from the first two items), and the rate of inflation. Based on recent values of those variables, the rate of real GDP growth for the next 10 years will be about -6 percent. Yes, that’s minus 6 percent!

Is such a thing possible in the United States? Yes! The estimates of inflation-adjusted GDP available at the website of the Bureau of Economic Analysis (an official arm of the U.S. government) yield these frightening statistics: Constant-dollar GDP dropped at an annualized rate of -9.3 percent from 1929 to 1932, and at an annualized rate of -7.4 percent from 1929 to 1933.
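Annualized rates of the kind just cited follow from a standard endpoint calculation: the constant annual rate that carries the starting value to the ending value over the period. The sketch below uses illustrative numbers chosen to reproduce a -9.3 percent annualized rate; they are not BEA’s actual GDP figures:

```python
# How an annualized growth rate is computed from endpoint values:
# rate = (end / start) ** (1 / years) - 1.
# The GDP figures below are illustrative, not BEA's actual series.

def annualized_rate(start, end, years):
    """Constant annual rate that carries `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# A hypothetical economy that shrinks from 1,000 to 746 over three years...
rate = annualized_rate(1000, 746, 3)
print(round(rate * 100, 1))  # -9.3

# ...has lost about a quarter of its output: an annualized drop of "only"
# -9.3 percent compounds to -25.4 percent over the full period.
total = (1 + rate) ** 3 - 1
print(round(total * 100, 1))  # -25.4
```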

In any event, the outlook is gloomy.

PSEUDO-SCIENCE IN THE SADDLE: SOME EXAMPLES

The Keynesian Multiplier

It is fitting to begin this section with a summary of “The Keynesian Multiplier: Fiction vs. Fact”. When push comes to shove, the advocates of big government (which undermines economic growth) love to spend like drunken sailors (with other people’s money), claiming that such spending will stimulate the economy. And, by extension, they claim (against common sense and statistical evidence) that government spending is economically beneficial, as well as necessary (as long as it’s not for defense).

The Keynesian multiplier is a pseudo-scientific product of the pseudo-science of macroeconomics. It is nothing more than a descriptive equation without operational significance. What it is supposed to mean is that if spending rises by X, the rise in spending will cause GDP to rise by a multiple (k) of X. What it really means is that if the relationship between GDP and spending remains constant, when GDP rises by some amount spending will have necessarily risen by a fraction of that amount. This relationship holds true regardless of the kind of spending under discussion — private investment, private consumption, or government. But proponents of government spending prefer to put “government” in front of “spending”, and then pretend (or uncritically believe) that the causation runs from government spending to GDP and not the other way around.

“Climate Change”

Here is a case of scientists becoming invested in an invalid hypothesis. The hypothesis in question is that atmospheric CO2 is largely responsible for the rise in measured temperatures (averaged “globally”) by about 1.5 degrees Celsius since the middle of the 19th century. The hypothesis has been falsified (i.e., disproved) in so many ways that I have lost count (though one will do). You can read dozens of scientific rebuttals here, and some of my own contributions here, here, and here.

As one writer puts it,

the “science” behind the claim that human carbon emissions are heading us toward some kind of planetary catastrophe is not only not “settled,” but actually non-existent.

None of that matters — so far — because the “climatistas” have brainwashed Western political “leaders”, Western bureaucracies, and the media-information industry, which is doing its damnedest to suppress and discredit “climate deniers” (i.e., people who actually follow the science). The cost of having the “climatistas” in charge has been revealed: soaring fuel prices and freezing Europeans. There’s worse to come if the “climatistas” aren’t ejected from their positions of influence — vast economic destruction and the social disruption that goes with it.

The Response to COVID-19

I’ll start with a Washington Monthly article:

While most countries imposed draconian restrictions, there was an exception: Sweden. Early in the pandemic, Swedish schools and offices closed briefly but then reopened. Restaurants never closed. Businesses stayed open. Kids under 16 went to school.

That stood in contrast to the U.S. By April 2020, the CDC and the National Institutes of Health recommended far-reaching lockdowns that threw millions of Americans out of work. A kind of groupthink set in. In print and on social media, colleagues attacked experts who advocated a less draconian approach. Some received obscene emails and death threats. Within the scientific community, opposition to the dominant narrative was castigated and censored, cutting off what should have been vigorous debate and analysis.

In this intolerant atmosphere, Sweden’s “light touch,” as it is often referred to by scientists and policy makers, was deemed a disaster. “Sweden Has Become the World’s Cautionary Tale,” carped The New York Times. Reuters reported, “Sweden’s COVID Infections Among Highest in Europe, With ‘No Sign Of Decrease.’” Medical journals published equally damning reports of Sweden’s folly.

But Sweden seems to have been right. Countries that took the severe route to stem the virus might want to look at the evidence found in a little-known 2021 report by the Kaiser Family Foundation. The researchers found that among 11 wealthy peer nations, Sweden was the only one with no excess mortality among individuals under 75. None, zero, zip.

That’s not to say that Sweden had no deaths from COVID. It did. But it appears to have avoided the collateral damage that lockdowns wreaked in other countries. The Kaiser study wisely looked at excess mortality, rather than the more commonly used metric of COVID deaths. This means that researchers examined mortality rates from all causes of death in the 11 countries before the pandemic and compared those rates to mortality from all causes during the pandemic. If a country averaged 1 million deaths per year before the pandemic but had 1.3 million deaths in 2020, excess mortality would be 30 percent….

The Kaiser results might seem surprising, but other data have confirmed them. As of February, Our World in Data, a database maintained by the University of Oxford, shows that Sweden continues to have low excess mortality, now slightly lower than Germany, which had strict lockdowns. Another study found no increased mortality in Sweden in those under 70. Most recently, a Swedish commission evaluating the country’s pandemic response determined that although it was slow to protect the elderly and others at heightened risk from COVID in the initial stages, its laissez-faire approach was broadly correct….

One of the most pernicious effects of lockdowns was the loss of social support, which contributed to a dramatic rise in deaths related to alcohol and drug abuse. According to a recent report in the medical journal JAMA, even before the pandemic such “deaths of despair” were already high and rising rapidly in the U.S., but not in other industrialized countries. Lockdowns sent those numbers soaring.

The U.S. response to COVID was the worst of both worlds. Shutting down businesses and closing everything from gyms to nightclubs shielded younger Americans at low risk of COVID but did little to protect the vulnerable. School closures meant chaos for kids and stymied their learning and social development. These effects are widely considered so devastating that they will linger for years to come. While the U.S. was shutting down schools to protect kids, Swedish children were safe even with school doors wide open. According to a 2021 research letter, there wasn’t a single COVID death among Swedish children, despite schools remaining open for children under 16….

Of the potential years of life lost in the U.S., 30 percent were among Blacks and another 31 percent were among Hispanics; both rates are far higher than the demographics’ share of the population. Lockdowns were especially hard on young workers and their families. According to the Kaiser report, among those who died in 2020, people lost an average of 14 years of life in the U.S. versus eight years lost in peer countries. In other words, the young were more likely to die in the U.S. than in other countries, and many of those deaths were likely due to lockdowns rather than COVID.
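The excess-mortality metric that the quoted passage describes (and illustrates with 1 million baseline deaths versus 1.3 million observed) amounts to a one-line calculation. A minimal sketch, using the article’s illustrative numbers rather than real national data:

```python
# Excess mortality as described in the quoted passage: observed deaths in
# the pandemic year relative to a pre-pandemic baseline, from all causes.
# The inputs below are the article's illustrative figures, not real data.

def excess_mortality(baseline_deaths, observed_deaths):
    """Excess mortality as a fraction of the pre-pandemic baseline."""
    return (observed_deaths - baseline_deaths) / baseline_deaths

print(excess_mortality(1_000_000, 1_300_000))  # 0.3, i.e., 30 percent
```

Because the metric counts deaths from all causes, it captures both COVID deaths and the collateral damage of the policy response, which is why the Kaiser researchers preferred it to raw COVID death counts.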

And that isn’t all. There’s also this working paper from the National Bureau of Economic Research, which concludes:

The first estimates of the effects of COVID-19 on the number of business owners from nationally representative April 2020 CPS data indicate dramatic early-stage reductions in small business activity. The number of active business owners in the United States plunged from 15.0 million to 11.7 million over the crucial two-month window from February to April 2020. No other one-, two- or even 12-month window of time has ever shown such a large change in business activity. For comparison, from the start to end of the Great Recession the number of business owners decreased by 730,000 representing only a 5 percent reduction. In general, business ownership is relatively steady over the business cycle (Fairlie 2013; Parker 2018). The loss of 3.3 million business owners (or 22 percent) was comprised of large drops in important subgroups such as owners working roughly two days per week (28 percent), owners working four days a week (31 percent), and incorporated businesses (20 percent).

And that was more than two years ago, before the political panic had spawned a destructive tsunami of draconian measures. Such measures made the pandemic worse by creating the conditions for the evolution of more contagious strains of the coronavirus.

The correct (i.e., scientific) approach would have been to quarantine and care for the most vulnerable members of the populace: the old, those with compromised immune systems, and those with diseases that left them especially vulnerable (heart disease, COPD, morbid obesity, etc.). As for the rest of us, widespread exposure to the coronavirus would have naturally immunized the populace through the development of antibodies.

In the end, millions of people have been made poorer, deprived of education and beneficial human interactions, and suffered and died needlessly because politicians and bureaucrats couldn’t (and can’t) resist the urge to do something — especially when something means trying to conquer nature and suppress human nature.

(For much more on this subject, see David Stockman’s “The Macroeconomic Consequences Of Lockdowns & The Aftermath”, reproduced at ZeroHedge.)

The Wages of Pseudo-Science

The worst thing about fallacies such as the three that I have just discussed isn’t that they are widely accepted, even by scientists (if you can call economics a science). The worst thing is that they have been embraced by politicians and bureaucrats eager to “solve” a “problem” whether or not it is within their power to solve it. The result is the concoction and enforcement of economically and socially destructive policies. But that matters little to cosseted elites who — like their counterparts in the USSR — can live high on the hog while the masses are starving and freezing.

CODA

Is there hope for an American renaissance? The upcoming mid-term election will be pivotal but not conclusive. It will be a very good thing if the GOP regains control of Congress. But it will take more than that to restore sanity to the land.

A Republican (of the right kind) must win in 2024. The GOP majority in Congress must be enlarged. A purge of the deep state must follow, and it must scour every nook and cranny of the central government to remove every bureaucrat who has a leftist agenda and the ability to thwart the administration’s initiatives.

Beyond that, the American people should be rewarded for their (aggregate) return to sanity by the elimination of several burdensome (and unconstitutional) departments of the executive branch, by the appointment of dozens of pro-constitutional judges, and by the appointment of a string of pro-constitutional justices of the Supreme Court.

After that, the rest will take care of itself: renewed economic vitality, a military whose might deters our enemies, and something like the restoration of sanity in cultural matters. (Bandwagon effects are powerful, and they can go uphill as well as downhill.)

But all of that is hope. The restoration of America’s greatness will not be easy or without acrimony and setbacks.

If America’s greatness isn’t restored, America will become a vassal state. And the leftists who made it possible will be the first victims of their new masters.

Intelligence: Selected Readings

The mind is not a blank slate.

I have treated intelligence many times; for example:

These posts include and link to an abundance of supporting material. The additional material reproduced below consists of quotations from the cited sources. The quotations (and sources) are consistent with and confirm several points made in my earlier posts:

  • Intelligence has a strong genetic component; it is heritable.

  • Race is a real manifestation of genetic differences among sub-groups of human beings. Those subgroups are not only racial but also ethnic in character.

  • Intelligence therefore varies by race and ethnicity, though it is influenced by environment.

  • Specifically, intelligence varies in the following way: There are highly intelligent persons of all races and ethnicities, but the proportion of highly intelligent persons is highest among Ashkenazi Jews, followed in order by East Asians, Northern Europeans, Hispanics (of European/Amerindian descent), and sub-Saharan Africans — and the American descendants of each group.

  • Males are disproportionately represented among highly intelligent persons, relative to females. Males have greater quantitative skills (including spatio-temporal aptitude) than females, whereas females have greater verbal skills than males.

  • Intelligence is positively correlated with attractiveness, health, and longevity.

  • The Flynn effect (rising IQ) is a transitory effect brought about by environment (e.g., better nutrition) and practice (e.g., learning and application of technical skills). The Woodley effect is (probably) a long-term dysgenic effect among people whose survival and reproduction depend more on technology (devised by a relatively small portion of the populace) than on the ability to cope with environmental threats (i.e., intelligence).


Researchers of group differences have pointed out until they are blue in the face that believing in equal rights is not contingent on believing all people are born with the same abilities and that merely by discussing the causes of group differences in mean IQ they are not intending to question the moral basis for sexual or racial equality. You can believe that there are between-group IQ differences – you can even believe that these differences are 80% heritable – and still remain committed to equal rights….

But anti-hereditarians seem to have extraordinary difficulty grasping this point – it is as if they want their opponents to be making this false inference even though, by imagining this sin, they are unconsciously committing it themselves. If you argue that any research into group differences is ‘dangerous’ because it threatens to undermine the basis for equal rights, you are implicitly accepting the twisted logic of the racist’s argument, namely, that if people aren’t equal in their capabilities, then we would be justified in denying some groups their civil rights. It is this inference that is racist, not any claim about group differences, whether true or not, and it is not one that most intelligence researchers are guilty of. No doubt some hereditarians are racists, but then the beliefs of some cultural determinists are pretty toxic too, such as Joseph Stalin, Chairman Mao and Pol Pot.

Source: Toby Young, “Liberal Creationism“,
Intelligence
, January-February 2018


Where are we now, in the continuing story of the genetics of intelligence? Usually, one goes to a meta-analysis to discern the pattern of results.

A combined analysis of genetically correlated traits identifies 187 loci and a role for neurogenesis and myelination in intelligence. W. D. Hill, R. E. Marioni, O. Maghzian, S. J. Ritchie, S. P. Hagenaars, A. M. McIntosh, C. R. Gale, G. Davies & I. J. Deary

https://www.nature.com/articles/s41380-017-0001-5….

Seven novel biological systems associated with intelligence differences were found.

1 Neurogenesis, the process by which neurons are generated from neural stem cells.
2 Genes expressed in the synapse, consistent with previous studies showing a role for synaptic plasticity.
3 Regulation of nervous system development.
4 Neuron projection.
5 Neuron differentiation.
6 Central nervous system neuron differentiation.
7 Oligodendrocyte differentiation.

In addition to these novel results, the finding that regulation of cell development (gene-set size = 808 genes, P-value 9.71 × 10^−7) is enriched for intelligence was replicated.

In summary, if further proof were needed that these bits of the genetic code were associated with brainpower, the list homes in on everything likely to be required for a fast-thinking powerful biological system.…

They canter to a conclusion:

We found 187 independent associations for intelligence in our GWAS, and highlighted the role of 538 genes being involved in intelligence, a substantial advance on the 18 loci previously reported.…

Source: James Thompson, “More Genes for Intelligence: A Pattern Emerges“,
The Unz Review
, March 16, 2018


For the first time, scientists have discovered that smart people have bigger brain cells than their peers.

As well as being bulkier, the cells are better connected to their neighbours, allowing them to process more information at a faster rate….

The study is the first to ever show that the physical size and structure of brain cells is related to a person’s intelligence levels.

Christof Koch at the Allen Institute for Brain Science in Seattle told New Scientist: “We’ve known there is some link between brain size and intelligence. The team confirm this and take it down to individual neurons.”

Source: Joe Pinkstone, “Secret to Intelligence? New Link between Brain Cell Size and IQ May Help Scientists Find a Way to Enhance Human Intellect“,
DailyMail.com
, May 2, 2018


I’ve accumulated recent data on the average scores by race for five exams: the GRE for grad school, the LSAT for law school, the MCAT for medical school, the GMAT for business school, and the DAT for dental school.

To make all the numbers comprehensible, I’ve converted them to show where the mean for each race would fall in percentile terms relative to the distribution of scores among non-Hispanic white Americans….

Thus, for example, on the Graduate Management Admission Test (GMAT), the gatekeeper for the M.B.A. degree, the mean score for whites falls, by definition, at the 50th percentile of the white distribution of scores. The mean score for black test-takers would rank at the 13th percentile among whites. Asians average a little better than the typical white, scoring at the 55th percentile….

If we look at how many people of each group take the test, we can understand the variations in average score a little better.

Thus, for example, whites, who in 2007 made up 61.5 percent of the 20-24-year-old cohort, took 68.7 percent of the GMATs. Blacks took the GMAT at a per capita rate just under half (49 percent) of the white rate. Asians are more than twice (205 percent) as likely as whites to sit the GMAT. Mexicans are only a fifth (18 percent) as likely.

Source: Steve Sailer, “Graduate School Admissions, Race, and the White Status Game“, VDare.com, April 6, 2009
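The percentile conversion Sailer describes can be sketched under a normality assumption: a group’s mean score is placed within the reference group’s (assumed normal) distribution. The reference mean of 500, standard deviation of 100, and group means of 387 and 513 below are hypothetical values chosen to reproduce the 13th- and 55th-percentile figures in the quotation; they are not the actual GMAT data:

```python
import math

# Percentile of a group's mean score within a reference group's score
# distribution, assuming that distribution is normal. All score values
# below are hypothetical, chosen to match the percentiles in the quote.

def normal_percentile(group_mean, ref_mean, ref_sd):
    """Percentile of `group_mean` in a normal distribution with
    mean `ref_mean` and standard deviation `ref_sd`."""
    z = (group_mean - ref_mean) / ref_sd
    # Standard normal CDF via the error function.
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(normal_percentile(500, 500, 100)))  # 50: the reference mean itself
print(round(normal_percentile(387, 500, 100)))  # 13: about 1.13 s.d. below
print(round(normal_percentile(513, 500, 100)))  # 55: about 0.13 s.d. above
```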


[R]ace IS a social construct. But race does exist. Saying something is a “social construct” can be true and still yet not be really meaningful.

Think of it, the periodic table of chemical elements is a social construct. Do chemical elements then not exist? Or, much more relevant – in fact, exactly like race – Linnaean taxonomy is a social construct. Do kingdoms, classes, species not exist? Race is merely an extension of this.

In reality, genetic analysis can separate human populations into distinct groups. This works at the level of continental groups or even ethnic groups within a continent (or even groups within an ethnicity). At times the progression is smooth, with each group gradually giving way to the next, and at other times, the transition is abrupt….

[F]or those that accept that genetic analysis can indeed separate humanity into distinct populations, they then claim that “race” doesn’t exist because human variation is “clinal”, that is, continuous. Across continents, neighboring groups don’t separate into sharply distinct races but slowly give way from one group to the next, so they claim. Because of this, the claim is that different racial groups don’t exist….

[T]o say that a “smooth” clinal progression of human differences renders the individual groups non-existent is equivalent to looking at [the visible color spectrum] … and concluding that each individual color does not exist because they smoothly blend into one another. That’s patently ridiculous. Even if the distribution of human groups is continuous (and it often is), that wouldn’t render each group along the distribution non-existent – nor would it render the differences between each group insignificant. That would be tantamount to saying yellow is equivalent to orange.

[Further] the claim that the distribution of human populations is always clinal is not even true. Razib Khan once addressed this….

[Regarding the claim that intelligence and behavioral traits can’t be in any way inherited, because no one has found a “gene for intelligence” or for any behavioral trait]:  This is one of those things that’s not even wrong. It is a red herring, and reflects a fundamental misunderstanding of genetics and what the genes do. Firstly, the genome is not like a shopping list, where there is a 1-to-1 correspondence between each “gene” and some physiological feature. Rather, the genes are like a recipe, and it is only through the complex interaction of all the genes do physical (and hence behavioral) traits emerge….

[I]t is not necessary to know which genetic variants lead to variation in a trait to know that trait variation is affected by genetic variation. That’s like saying that you need to know the names of all the people who work in a factory to know that the people there produce widgets.

As we’ve seen, behavioral genetic methods confirm the very high heritability of intelligence and behavioral traits. “Classic” behavioral genetic methods, such as twin and adoption studies, were enough to establish this by themselves….

[Regarding the claim that non-whites score low on IQ tests because the tests are culturally biased]: No. Indeed, not all non-Whites score below “Whites” (as we’ve seen above, hardly a monolithic category itself). East Asians, specifically those from China, Korea, and Japan, tend to outscore Northern Europeans on IQ tests, scoring in the 105 or so range, on average. Ashkenazi Jews also are found to outscore non-Jewish Whites, the former possessing an average IQ around 112. In the case of Blacks (that is, specifically, those of West African descent), they tend to do the best on culture-“loaded” IQ tests, and do significantly worse on more “culture-free” tests like the Raven’s Progressive Matrices (which use test questions like the one seen here). “Fresh off the boat” East Asian immigrants to the West don’t seem to have a problem with either IQ tests or eventual real-world performance….

[Regarding the claim that poverty and/or discrimination are the causes of racial gaps in intelligence]: Partly true, mostly false. An adverse environment, especially when we’re talking severe poverty – of the type you find in sub-Saharan Africa today – likely does have a deleterious effect on IQ. Hence, average IQ in sub-Saharan Africa is likely quite a bit lower than it would be under optimum conditions. However, we can’t reduce all racial IQ differences to environmental deprivation.

For one, racial gaps in IQ and achievement persist even in developed countries. Interventions, like Head Start, meant to ameliorate any educational deficits do nothing for the gap, as a comprehensive study by the U.S. government showed. As well, while income is correlated with IQ and educational attainment for all races, the relationship between childhood SES and IQ is different for different racial groups….

[O]n the SAT (which is simply another IQ test), the poorest Whites collectively outscored the wealthiest Blacks. As well, as we see, Blacks whose parents have graduate degrees are matched by Whites whose parents are only high school grads.

Even more interestingly, the group IQ and achievement hierarchy visible in the U.S. is found all over the world. All across the world, Blacks, for example – as a group –  generally do poorly versus Europeans. East Asians and Ashkenazi Jews collectively do well all around the world, better than Northern Europeans do. Across the globe and across very different societies and different economic systems, you see roughly the same pattern you do in the United States. One could attempt to piece together some “cultural” explanation for any particular society, but how to explain this global consistency, then? This is true with populations who have been in these respective countries for many generations, as is the case in Brazil, for instance.

Source: JayMan, “JayMan’s Race, Inheritance, and IQ FAQ (F.R.B.)“,
JayMan’s Blog
, May 4, 2015 (This writer’s style is crude and sometimes ungrammatical, but I have included this excerpt because the writer has a good command of the relevant research and has summarized it well.)


Davide Piffer is a 34-year-old Italian anthropologist with a Master’s degree from England’s prestigious Durham University. He has an IQ of over 132. Piffer is currently studying for his PhD at Israel’s Ben Gurion University.

Piffer has written an analysis of a Genome Wide Association Study (GWAS). Putting it in lay terms, his “forbidden paper” explores the correlation between the percentage of people in a country who carry several dozen genetic variants that are significantly associated with very high educational attainment—based on this GWAS— and average national IQ.

National IQs are robust because they correlate very strongly, at about 0.8, with other national measures of cognitive ability, such as international assessment tests. (Intelligence: A Unifying Construct for the Social Sciences, by Richard Lynn and Tatu Vanhanen, 2012) Very high educational attainment is overwhelmingly a function of high IQ.

Piffer found that the correlation between the prevalence of the polygenic score (the average frequency of several genetic variants) in nations and national IQ was 0.9. This, of course, essentially proves that race differences in intelligence are overwhelmingly genetic.

Now, obviously, Piffer needs to get this in a high impact journal: because he deserves to, for his own career advancement, and also so that it can’t be fallaciously dismissed via an appeal to snobbery—not an insignificant factor in academic life.

And this is where the problems have arisen.

In late 2014, Piffer submitted his paper on this subject to the leading journal Intelligence. One would have assumed there’d be no problem, considering that the journal has published numerous articles on race differences in IQ and has even been condemned by SJWs for doing so [Racism is creeping back into mainstream science – we have to stop it, by Angela Saini, The Guardian, January 24, 2018]. But the editor, Doug Detterman, rejected the paper citing the reviews he received.  In fact, only one of two reviewers recommended rejection; the other was extremely positive. Nevertheless, the decision letter read as if both reviews were negative.

In 2015, Piffer re-submitted the paper to Intelligence. He had successfully dealt with all the criticisms, and the paper should have been accepted for publication.

However, in 2016 Detterman stepped down as head of ISIR and was replaced by Richard Haier. With new reviewers and a new editor, it was rejected out of hand.

Piffer doesn’t give up easily, that’s for sure. Tiring of Intelligence, he improved the paper once more, in light of the critical reviews, and sent it to Frontiers in Psychology, another highly-respected journal. It passed the review process after three rounds, with reviewers recommending publication. However, Piffer tells me, “the editor, after sitting on the reviews for three weeks, decided to reject it, overturning the reviewers’ recommendation.”

Piffer adds: “This decision was kind of unprecedented and especially weird for a journal like Frontiers, whose philosophy is based on transparent review and less editorial power.”

Despairing of getting it in anywhere worthwhile, Piffer posted the “forbidden paper” on a pre-print archive [Polygenic Selection, Polygenic Scores, Spatial Autocorrelation and Correlated Allele Frequencies. Can We Model Polygenic Selection on Intellectual Abilities?, January 27, 2017]. Still, it’s already been cited by a serious researcher in the field.  [Geographic centrality as an explanation for regional differences in intelligence. by Edward Miller, Mankind Quarterly, Spring 2018]

More recently, Piffer self-published another paper, this time on Rpubs, using data from the latest GWAS carried out on 1.1 million people [Correlation between PGS and environmental variables, ]. It confirms his earlier findings, extending them to 52 populations from all over the globe and showing what he calls “fascinating correlations with latitude and polygenic scores of other traits.”

The top place is occupied by East Asians, followed by Europeans and equatorial people further down. “Geographic or genetic distances don’t explain these findings,” stresses Piffer, “as Austronesians (e.g. Papuans and Melanesians) have scores comparable to Africans, despite being genetically more different from African than are Europeans.”

Similarly, Piffer observes that Native Americans score lower than Europeans, despite being genetically closer to East Asians. This suggests that, after the East Asian-Amerindian split, there were later selective pressures for cognitive abilities among Eurasians.

Nobody can fault the sample size. The latest GWAS boasts an army of 1.1 million people and 2400 genetic variants. Piffer has created a plot with scores for the populations from the Human Genome Diversity Project:

[Figure: educational attainment polygenic scores (“edupgs”) by population]

Piffer is now working on getting this into a good journal. He says: “It’s to be hoped that the next editor will have enough intellectual honesty to let my findings see the light of mainstream science.”

Let’s summarize: it has now been effectively proven that racial differences in intelligence are fundamentally genetic. The only counter-argument from our SJW friends is an appeal to authority: “Why hasn’t it been published in a top peer-reviewed journal, then?”

The answer: editors are so frightened of SJWs that they daren’t publish it.

But that won’t suppress results like it used to. Brave academics can simply self-publish their results until an equally brave journal editor can be found.

Postscript: Absurdly, recent developments suggest it is acceptable to note that there is a genetic explanation for the higher incidence of prostate cancer among some populations (e.g., West Africans) than in others.

Just not for educational attainment.

Source: Lance Welton, “‘This Will Not Stand’: Academic Establishment Suppresses Italian Anthropologist’s Proof That Race IQ Differences Are Genetic — For Now”, VDare.com, May 5, 2018


If one accepts the theory that modern humans first evolved in Africa and began colonizing the rest of the world 50,000 to 60,000 years ago, it is obvious that there has been enormous evolutionary change since that time. Zulus and Danes presumably had a common ancestor about the time humans left Africa, but are now so different from each other that standard taxonomies might well classify them as separate species….

People consciously direct the evolution of plants and animals, but [Gregory Cochran and Henry Harpending, writing in The 10,000 Year Explosion,] point out that the process is no different from the rigors of natural selection — just quicker. Much as the race deniers hate to admit it, humans in different environments evolved in sharply different directions. As the authors conclude, “We expect that differences between human ethnic groups are qualitatively similar to those between dog breeds.”

What, however, caused human evolution suddenly to speed up ten to twelve thousand years ago? For Professors Cochran and Harpending, the short answer is “agriculture.” It did so in two ways: by sharply increasing the number of people and by radically changing the environment in which they lived.

More humans meant more children, and therefore more mutations. Most babies are born with about 100 mutations, all but one or two of which are in DNA that does not seem to do anything and therefore have no effect. Those that make a difference are usually harmful or neutral but it is the occasional helpful mutation that drives evolution. Sixty thousand years ago, before the expansion out of Africa, there were perhaps only about 250,000 humans. By the Bronze Age, 3,000 years ago, there were 60 million, so a mutation that would have taken 100,000 years to occur could appear in just 400 years. Evolution was painfully slow among Paleolithic proto-humans because beneficial mutations show up so rarely in tiny populations.
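The waiting-time arithmetic here is easy to verify; a minimal sketch, using only the population figures quoted above and the authors’ assumption that the rate at which a given beneficial mutation arises scales with population size:

```python
# Back-of-the-envelope check: if a specific beneficial mutation arises at a
# rate proportional to population size, the expected waiting time shrinks by
# the same factor by which the population grows.

paleolithic_pop = 250_000      # humans ~60,000 years ago (per the text)
bronze_age_pop = 60_000_000    # humans ~3,000 years ago (per the text)
wait_small_pop = 100_000       # years to expect the mutation in the small population

growth_factor = bronze_age_pop / paleolithic_pop
wait_large_pop = wait_small_pop / growth_factor

print(f"population growth factor: {growth_factor:.0f}x")                 # 240x
print(f"expected wait in large population: {wait_large_pop:.0f} years")  # ~417 years
```

The 240-fold population increase turns a 100,000-year wait into roughly 400 years, matching the figure in the text.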

Large populations are therefore a reservoir of new mutations and their size hardly slows down the propagation of good genes. According to the authors, a genetic leg up is like the flu, and can sweep through a population of 100 million in only twice the time it takes to go through a population of just 10,000.
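The flu analogy reflects a standard population-genetics result: the time for a beneficial variant to sweep to fixation grows only with the logarithm of population size, roughly (2/s)·ln(2N) generations for selective advantage s. A quick check of the “only twice the time” claim; the formula and the value of s are standard-textbook assumptions, not given in the text:

```python
import math

def sweep_time(N, s=0.05):
    """Approximate generations for a beneficial allele with selective
    advantage s to sweep through a population of size N: t ~ (2/s) * ln(2N)."""
    return (2 / s) * math.log(2 * N)

t_small = sweep_time(10_000)
t_large = sweep_time(100_000_000)
print(f"ratio of sweep times: {t_large / t_small:.2f}")  # ~1.93, i.e. about twice
```

Because the dependence on N is logarithmic, a ten-thousand-fold larger population slows the sweep by only a factor of about two, which is the authors’ point.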

Agriculture also brought perhaps the most dramatic change in the biological and social environment our species has ever experienced. Farming meant that for the first time in their existence Homo sapiens stayed in one place, and could therefore own more things than they could carry with them. They could become wealthier than their neighbors, and had to guard possessions against theft. Farmers could produce more food than their families needed, and this gave rise to commerce, division of labor, artisans, and non-productive elites. This social environment was completely new.

Of particular significance from an evolutionary point of view were the change of diet, domestication of animals, and population densities….

Professors Cochran and Harpending point out that some groups took up farming long before others, and that this explains a lot. Australian aborigines never farmed, and the American Indians of Illinois and Ohio started farming only 1,000 years ago. Neither group drank alcohol before the white man showed up, and both are highly susceptible to alcoholism. Fetal alcohol syndrome is about 30 times more common in these groups than in whites.

Aborigines and American Indians suffer in other ways from only recently having adopted a farming diet. Type 2 diabetes is related to a sensitivity to carbohydrates and a metabolic tendency to obesity. It is four times more prevalent among Aborigines and 2.5 times more prevalent among Navajos than among whites.

Sub-Saharan Africa was also late to take up agriculture — 7,500 years after it arose in the Middle East — and this helps explain why intelligence differences alone do not explain differences in black and white behavior. When the two groups are matched for IQ, blacks are still more likely to be criminal or shiftless, or to have illegitimate children. This is probably due in part to the persistence of the smash-and-grab mentality that suits hunters but is gradually bred out of farmers….

The brain has evolved differently among different groups just as have skin color, body type, and facial features. The authors write that there are recent variants of genes that affect synapse formation, axon growth, formation of the layers of the cerebral cortex, and brain growth. “Again, most of these new variants are regional,” they add. “Human evolution is madly galloping off in all directions.”

Sometimes, even what appear to be racial similarities are actually differences that merely resemble each other. The authors point out, for example, that although both Asians and Caucasians have much lighter skins than ancestral Africans, the genetic mechanisms that shut down melanin production are different in the two races. In both Asia and Europe it was useful to let in more sunlight for vitamin D synthesis, but evolution found different ways to do it….

The 10,000 Year Explosion has a long chapter that proposes an explanation for how Ashkenazi Jews became the smartest people in the world. Trading and money-lending were high-IQ jobs, and in 1,000 years, or about 40 generations, European Jews appear to have increased their average IQs by about 12 points.
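A 12-point gain over roughly 40 generations implies only a modest response per generation. The breeder’s equation (R = h²S) is not cited in the text, but it gives a feel for the selection differential required; the heritability value below is an illustrative assumption, not a figure from the book:

```python
# Breeder's equation sketch: response per generation R = h^2 * S, where S is
# the selection differential. h2 = 0.8 is an assumed narrow-sense
# heritability, used for illustration only.

total_gain = 12.0      # IQ points (per the text)
generations = 40
h2 = 0.8               # assumed heritability

R = total_gain / generations   # response needed per generation
S = R / h2                     # required selection differential
print(f"response per generation: {R:.2f} IQ points")   # 0.30
print(f"selection differential:  {S:.3f} IQ points "   # 0.375
      f"({S / 15:.3f} SD per generation)")             # ~0.025 SD
```

A selection differential of a few hundredths of a standard deviation per generation is small enough to be plausible, which is why the authors treat the Ashkenazi scenario as feasible.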

Jewish intelligence seems to be genetically associated with such diseases as Tay-Sachs, Gaucher’s, and familial dysautonomia, which are up to 100 times more common among Jews than European gentiles. People with one copy of these genes appear to have an IQ advantage whereas two copies cause the disease. Professors Cochran and Harpending write that over time, advantageous mutations with such dangerous side effects are usually replaced by more benign mutations. The persistence of these odd mutations in Jews suggests they are recent.

One highly speculative but stimulating chapter considers the possibility that Neanderthals might have made crucial genetic contributions to Homo sapiens. There is no doubt that something important happened 30 to 40 thousand years ago. New tools, improved weapons, art, sculpture, and more efficient use of fire made big changes in what was still a Stone Age existence. These changes took place only in Eurasia — nowhere else — and Professors Cochran and Harpending are convinced they would not have come about without some important genetic change.

As it happens, this Stone Age flowering took place during the 10,000 years or so during which modern man and Neanderthals competed against each other in the same territory. Neanderthals are gone and we are not, so it is safe to assume Homo sapiens were superior — perhaps in intelligence, language, or resistance to disease. However, the authors believe there must have been genetic mixing with Neanderthals, and explain that even if just a few Neanderthal genes were useful to modern man, they would have spread through populations while the useless ones were eliminated. “It is highly likely that out of some 20,000 genes, at least a few of theirs [Neanderthal’s] were worth having,” they write. The authors concede that the genetic evidence is inconclusive — Neanderthal DNA is hard to come by — but they cite cases of “introgression,” in which wild species have acquired useful mutations from other populations.

Source: Thomas Jackson, “Science Refutes Orthodoxy — Again”, American Renaissance, May 2009 (a review of The 10,000 Year Explosion)


Source: James Thompson, “World IQ: Latest Update”, The Unz Review, May 15, 2018


[W]hat influence does intelligence measured at age 11 have on longevity? The good news is that a standard-deviation increase in IQ score is associated with a 24% decrease in mortality risk. So, at IQ 115 lifespan is 24% longer than average*. This is good news, together with the 60% increase in wages above the average level from an OECD study.
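The asterisk flags a subtlety worth making explicit: a 24% lower mortality risk is not the same thing as a 24% longer lifespan. As a rough sketch, assuming a Gompertz mortality curve in which adult mortality doubles about every eight years (an assumption not in the source), a proportional hazard reduction is equivalent to an age shift of −ln(hazard ratio)/b years:

```python
import math

# Gompertz sketch: adult mortality rises as mu(x) = a * exp(b*x), with the
# hazard doubling roughly every 8 years, so b ~ ln(2)/8 per year.
# (Illustrative assumption, not a figure from the study.)
b = math.log(2) / 8

hazard_ratio = 0.76   # 24% lower mortality risk per SD of IQ (per the text)
added_years = -math.log(hazard_ratio) / b

print(f"approximate gain in life expectancy: {added_years:.1f} years")  # ~3.2 years
```

Under these assumptions the 24% hazard reduction buys roughly three extra years of life, a real but much smaller effect than a literal “24% longer lifespan” reading would suggest.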

On the wages front, the effect of intelligence had already been shown by Charles Murray in his well-known 1998 “Income inequality and IQ” in which he compared the earnings of one child in a family with that of another sibling, showing that the effect of intelligence was powerful at creating later life differences even between siblings brought up within the same family environment.…

Iveson and colleagues have extended the within-family method by linking children in their 1947 national sample to younger siblings in a “Six day” sample, such that they had families tested at the same age with the same Moray House intelligence tests, and longevity measured up to November 2015….

Here are the results of the contrast between the living and the dead.

[Figure: Lifespan and IQ]

These are scary figures, and worth showing to friends who doubt that IQ has any practical meaning.

Source: James Thompson, “Vita Brevis, Dignitatis Inutilis”, The Unz Review, August 30, 2017


Digit Span must be one of the simplest tests ever devised. The examiner says a short string of digits at the rate of one digit a second in a monotone voice, and then the examinee repeats them. The examiner then tries a string which is one digit longer, and continues in this fashion with longer and longer strings of digits until the examinee fails both trials at that particular length. That determines the number of digits forwards.

Then the examiner explains that he will say a string of digits and the examinee has to repeat them backwards, that is, in reverse order. For example, 3 – 7 is to be said back to the examiner as 7 – 3. This continues until the examinee fails two trials at a particular length, which determines the number of digits backwards.…
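The administration procedure just described is simple enough to sketch directly. The stopping and scoring rules (two trials per length, stop after failing both) follow the text; the `answer` callback standing in for the examinee is a hypothetical stand-in:

```python
import random

def digit_span(answer, backwards=False, start_len=2, trials_per_len=2):
    """Administer a digit-span test. `answer(digits)` plays the examinee,
    returning the list of digits it recalls. Length increases until the
    examinee fails both trials at a length; the score is the longest length
    with at least one correct trial."""
    length, score = start_len, 0
    while True:
        failures = 0
        for _ in range(trials_per_len):
            digits = [random.randint(1, 9) for _ in range(length)]
            target = list(reversed(digits)) if backwards else digits
            if answer(digits) != target:
                failures += 1
        if failures == trials_per_len:   # failed both trials: stop
            return score
        score = length
        length += 1

# A hypothetical examinee with perfect recall up to 7 digits, tested forwards:
examinee = lambda digits: digits if len(digits) <= 7 else []
print(digit_span(examinee))  # 7
```

The same routine with `backwards=True` scores digits backwards, which is the more g-loaded condition discussed below.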

The test is not only bereft of intellectual content, but is also low on cultural content. Once you have learnt digit names you are ready to do the test. I assume that forwards and backwards are concepts understood by all cultures worthy of the name.…

If any group defined in genetic or cultural terms has a particular difficulty with digits backwards this is a strong indicator that they have difficulty with tasks as they get more intellectually demanding. The higher the g loading the more they should differ from brighter groups.

Hence the great interest in the most recent scores, to see if they conform to the usual pattern described by Jensen in The g Factor (p. 405, referring to work he did in 1975 with Figueroa; ref. on p. 614). Over at Human Varieties, Dalliard has tried to replicate those results using data from the CNLSY (these are the children of the female participants in NLSY79). Incidentally, this is a great follow-up survey: “My Mummy did your tests before I was born”. Gradually we are getting to understand the transmission of intelligence through the generations.…

Dalliard says: “That the black-white gap on forward digits is substantially smaller than on backwards digits is a robust finding confirmed in this new analysis. This poses a challenge to the argument that racial differences in exposure to the kinds of information that are needed in cognitive tests cause the black-white test score gap. The informational demands of the digit span tests are minimal, as only the knowledge of numbers from 1 to 9 is required. Forward digits is a simple memory test assessing the ability to store information and immediately recall it. The informational demands of backwards digits are the same as those of forward digits, but the requirement that the digits be repeated in the reverse order means that it is not simply a memory test but one that also requires mental transformation or manipulation of the information presented.”

Source: James Thompson, “Digit Span: The Modest Little Bombshell”, The Unz Review, March 4, 2014


You may recall that I wrote with great enthusiasm about the wonders of digit span, describing it as a modest bombshell. It is a true measure: every additional digit you can remember is of equal unit size to the next, a scaled score with a true zero. Few psychometric measures have that property (a ratio scale level of measurement in SS Stevens’s terms), so the results are particularly informative about real abilities, not just abilities in relation to the rubber ruler of normative samples. If there are any differences, including group differences, on such a basic test, then it is likely they are real.

Then last November Gilles Gignac dropped another bombshell. He found that if you looked at total digit span scores since 1923 there was not a glimmer of improvement in this very basic ability. This cast enormous doubt on the Flynn effect being a real effect, rather than an artefact of re-standardisation procedures. Gignac noted that the trends for digits forwards and backwards ran in opposite directions, but not significantly so.…

[Michael A.] Woodley tells me that Gignac’s “substantial and impressive body of normative data on historical means of various measures of digit span covering the period from 1923 to 2008” reveals a hidden finding: the not-very-g-loaded digits forwards scores have gone up and the g-loaded digits backwards scores have gone down. This suggests that the fluffy forwards repetition task has benefitted from secular environmental gains, while the harder reversal task reveals the inner rotting core of a dysgenic society….

Woodley has proposed a co-occurrence model: losses in national IQ brought about by the higher fertility of the less intelligent exist at the same time as IQ gains brought about by more favourable social environments.

Source: James Thompson, “Digit Span Bombshell”, The Unz Review, April 3, 2015


Woodley argues that general ability is falling because of dysgenic effects, but also becoming more specialized, which he calls the Co-Occurrence model. Should it be called “Duller but specialized”?

How do these theories fare in the light of a massive new meta-analysis?…

The paper [Peera Wongupparaj et al., “The Flynn Effect for Verbal and Visuospatial Short-Term and Working Memory: A Cross-Temporal Meta-Analysis”, Intelligence, September 2017] is massive in scope, has more study samples than previous publications on this topic, is extremely large with circa 140,000 subjects and is also a massive confirmation of Woodley’s reworking of Gignac’s data, on a far larger scale. It seems that over the last 43 years we have become able to repeat a bit more but manipulate a bit less. We can echo more, and analyze less….

The authors add that the increase on the less g loaded forwards conditions suggests environmental causes including practice effects, while the decrease on the more g loaded backwards conditions suggests dysgenic effects, probably the reduced fertility of brighter persons, but perhaps also an effect of ageing populations….

So, Woodley’s “co-occurrence” model gets a strong confirmation.

Source: James Thompson, “Working Memory Bombshell”, The Unz Review, August 13, 2017


In a previous post, I show, using an American sample from the National Longitudinal Study of Adolescent Health, that physically more attractive people are more intelligent. As I explain in a subsequent post, the association between physical attractiveness and intelligence may be due to one of two reasons. Genetic quality may be a common cause for both (such that genetically healthier people are simultaneously more beautiful and more intelligent). Alternatively, the association may result from a cross-trait assortative mating, where more intelligent and higher status men of greater resources marry more beautiful women. Because both intelligence and physical attractiveness are highly heritable, their children will be simultaneously more beautiful and more intelligent. Regardless of the reason for the association, the new evidence suggests that the association between physical attractiveness and general intelligence may be much stronger than we previously thought….

The halo-effect explanation for the association between physical attractiveness and intelligence, however, runs into three different problems. First, it presumes that the judgment of physical attractiveness is arbitrary and subjective. As I explain in an earlier post, however, beauty is not in the eye of the beholder; it is an objective, quantifiable trait of a person, like height or weight. Second, as I note in the previous post, the association between beauty and intelligence has been found in the American Add Health sample, where physical attractiveness of the respondents is assessed by the interviewer, who is unaware of their intelligence.

Most importantly, however, the halo-effect explanation simply leads to another question: Where does the teachers’ belief that more intelligent students are more attractive come from? The notion that more intelligent individuals are physically more attractive is a stereotype, and, just like all other stereotypes, it is empirically true, as both the American and British data show. Teachers (and everyone else in society) believe that more intelligent individuals are physically more attractive because they are.

Source: Satoshi Kanazawa, “Beautiful People Really Are More Intelligent”, Psychology Today, December 12, 2010


In this first whole population birth cohort study linking childhood intelligence test scores to cause of death, in a follow-up spanning age 11-79, we found inverse associations for all major causes of death, including coronary heart disease, stroke, cancer, respiratory disease, digestive disease, external causes of death, and dementia. For specific cancer types the results were heterogeneous, with only smoking related cancers showing an association with childhood ability. In general, the effect sizes were similar for women and men (albeit marginally greater for women), with the exception of death by suicide, which had an inverse association with childhood ability in men but not women. In a representative subsample with additional background data, there was evidence that childhood socioeconomic status and physical status indicators had no more than a modest confounding impact on the observed associations.

Source: Ian J. Deary et al., “Childhood Intelligence in Relation to Major Causes of Death in 68 Year Follow-Up: Prospective Population Study”, British Medical Journal, June 28, 2017


Sex differences are in the news. A male Google employee reviewed some of the literature on the topic in the context of his workplace practices, and got sacked. A book questioning the role of testosterone in sex differences, and more generally the veracity of innate biological sex differences, got the Royal Society Science Book prize, though it was not reviewed by Royal Society Fellows with expertise in that area of knowledge. More generally, there are frequent news items about the lack of women in STEM subjects, in technology jobs and in corporate boardrooms, and these discussions often blame a glass ceiling of misogyny impeding women’s progress. Meanwhile, with rather less publicity, Prof Richard Lynn has revisited his 1994 paper in the light of recent research, and invited critics to take his finding apart….

Prof Lynn begins with the following observation:

It is a paradox that males have a larger average brain size than females, that brain size is positively associated with intelligence, and yet numerous experts have asserted that there is no sex difference in intelligence. This paper presents the developmental theory of sex differences in intelligence as a solution to this problem. This states that boys and girls have about the same IQ up to the age of 15 years but from the age of 16 the average IQ of males becomes higher than that of females with an advantage increasing to approximately 4 IQ points in adulthood.

Source: James Thompson, “Men 4 Points Ahead?”, The Unz Review, October 5, 2017 (See also Toby Young, “Why Can’t a Woman Be More Like a Man?”, Quillette, May 24, 2018.)


Which way do the fair sex incline: to matters verbal or mathematical? Verbal, it would seem, and all the more so as you go up the ability spectrum.…

The authors [of the study summarized in the post] highlight the following findings:

Sex differences in math-verbal ability tilt in the right tail were examined across 35 years.
Sample included >2 million gifted adolescents across multiple measures in the U.S. and India.
Ability tilt favored males for math > verbal and females for verbal > math.
Sex differences in ability tilt remained fairly stable over time and replicated across measures….

[S]kipping a thousand words, here is the pictorial summary, which shows that sex differences increase as ability tilt increases:

[Figure: violin plots of math-verbal ability tilt by sex, from Wai et al.]

To my eye, starting from the bottom for all students, these violin plots show the following: women are almost perfectly balanced between verbal and mathematical ability, but men incline towards being better at maths than at verbal tasks. Men are more likely to calculate….

At the higher intellectual level of the top 1 in a 100 of the population [middle part of the graphic] both men and women incline more to mathematical thinking, but men predominate more.

At the eminent level of the top 1 in 10,000 of the population [top part of the graphic], men outnumber women by about 2.5 to 1.
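The widening male-female ratio toward the tail is what two normal distributions with slightly different means or spreads would produce. A sketch with illustrative parameters (the means and standard deviations below are assumptions, not figures from the study):

```python
import math

def upper_tail(cutoff, mean, sd):
    """P(X > cutoff) for a normal distribution with the given mean and sd."""
    return 0.5 * math.erfc((cutoff - mean) / (sd * math.sqrt(2)))

# Illustrative assumption: males slightly higher mean and slightly wider
# spread on the tilt measure, females standardized to mean 0, sd 1.
m_mean, m_sd = 0.1, 1.05
f_mean, f_sd = 0.0, 1.00

for cutoff in (1.0, 2.33, 3.72):  # roughly top 16%, top 1%, top 0.01%
    ratio = upper_tail(cutoff, m_mean, m_sd) / upper_tail(cutoff, f_mean, f_sd)
    print(f"cutoff {cutoff:>4}: male/female ratio ~ {ratio:.1f}")
```

Small mean or variance differences compound multiplicatively in the extreme tail, which is how the 1-in-10,000 ratio can reach 2.5 or so while the average difference stays modest.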

Source: James Thompson, “Tilting at Sex Differences”, The Unz Review, March 2, 2018


[From a review by Matt Ridley of Robert Plomin’s Blueprint: How DNA Makes Us Who We Are — Why Nature Always Trumps Nurture]

The evidence for genes heavily influencing personality, intelligence and almost everything about human behaviour got stronger and stronger as more and more studies of twins and adoption came through. However, the evidence implicating any particular gene in any of these traits stubbornly refused to emerge, and when it did, it failed to replicate.

Ten years ago I recall talking to Robert Plomin about this crisis in the science of which he was and is the doyen. He was as baffled as anybody. The more genes seemed to matter, the more they refused to be identified. Were we missing something about heredity? He came close to giving up research and retiring to a sailing boat.

Fortunately, he did not. With the help of the latest genetic techniques, Plomin has now solved the mystery and this is his book setting out the answer. It is a hugely important book — and the story is very well told. Plomin’s writing combines passion with reason (and passion for reason) so fluently that it is hard to believe this is his first book for popular consumption, after more than 800 scientific publications….

[M]ost measures of the “environment” show substantial genetic influence. That is, people adapt their environment better to suit their natures. For example, Plomin discovered that the amount of television adopted children watch correlates twice as well with the amount their biological parents watch as with the amount watched by their adoptive parents….

Our personalities are also influenced by the environment, but Plomin’s second key insight is that we are more influenced by accidental events of short duration than by family. Incredibly, children growing up in the same family are no more similar than children growing up in different families, if you correct for their genetic similarities. Parents matter, but they do not make a difference.

Plomin says these chance events can be big and traumatic things such as war or bereavement, but are mostly small but random things, like Charles Darwin being selected for HMS Beagle because Captain Robert Fitzroy believed in “phrenology” and thought he could read Darwin’s character from the shape of his nose. Environmental influences turn out to be “unsystematic, idiosyncratic, serendipitous events without lasting effects”, says Plomin….

… [H]eritability increases as we get older. The longer we live, the more we come to express our own natures, rather than the influences of others on us. We “grow into our genes”, as Plomin puts it. An obvious example is male-pattern baldness, which shows low heritability at 20 and very high heritability at 60.

Two other findings are that normal and abnormal behaviour are influenced by the same genes, and that genetic effects are general across traits; there are not specific genes for intelligence, schizophrenia or personality — they all share sets of genes.

Source: Matt Ridley, “The Genes That Contribute to Human Intelligence and Personality”, MattRidley Online, October 21, 2018


[Nassim Nicholas] Taleb has made sweeping assertions [about intelligence testing] with great confidence and surrounded by insulting language. Those assertions may well influence people who feel unsure about intelligence, and who assume that someone who is sure of themselves must know what they are talking about. That is understandable: an unsure person is aware they need to do more reading and thinking before feeling confident, and charitably assumes that only a knowledgeable person who had done the necessary reading would dare speak with confidence.

Yet, far from giving scientific references at the end of his essay, Taleb confidently asserts that he does not need to do so, because the field is broken because…. Convexity. This is presented as if it were an essential ingredient of statistical analysis, rather than one of his interesting ideas about research strategies. This is amusing, because even in the area which Taleb calls his own, as a financial instruments trader, it is easy to find a careful, long term, large sample study that shows the beneficial effects of intelligence on investment behaviour. On his own home ground he is down 1-0.

The other lapse is to ignore the decades of debate carried out by intelligence researchers, notably Jensen, to improve measures of intelligence so that they conform to the requirements set out by SS Stevens. Digit Span is such a measure. So is Digit Symbol and, if measured extensively, Vocabulary. Simple and complex reaction times are other examples. Overall, Taleb is not providing new or original insights that advance the field. But his aim does not appear to be constructive or even informative.

I don’t know why an able man is so ill-disposed to measures of ability, but can only assume he is well aware of his abilities, and regards himself as above such mundanities. He does not give references, but mentions a book he is about to publish. Better to stick to the facts.

Does Taleb’s boastful dismissal of a field he palpably does not master mean that we should dismiss his contributions to other fields? Probably not. Public figures sometimes stray out of their field of competence. It is an occupational hazard brought on by public adulation, known since Roman times. However, if he can be so bombastic when out of his depth, then it would be prudent to go back to his other writings with a slightly more critical eye. When I read his thoughts on probability I made positive assumptions about some of his pronouncements on risk on the very prudent grounds that I could not contest his mathematical excursions. Perhaps I was Fooled by Algebra. Perhaps I was not the only one.

Taleb describes himself as a flaneur, which is a stroller, the sort of person who swans about. No problem with that. Swans are beguiling, but beautiful shapes can lead us astray.

Source: James Thompson, “Swanning About: Fooled by Algebra?”, The Unz Review, January 3, 2019


Thank you to all those who commented on the “Swanning About: Fooled by Algebra” blog and associated tweets. A number of themes came up, so here are individual responses I made to some comments, and also some general points.

Since Taleb thought he could dismiss a century of psychometry, there are rather a lot of references I needed to give in reply [listed below without surrounding text or ellipses]….

http://www.unz.com/jthompson/dettermans-50-years-of-seeking/

https://www.unz.com/jthompson/intelligence-all-that-matters-stuart/

https://www.unz.com/jthompson/intelligence-in-2000-words

https://www.unz.com/jthompson/the-7-tribes-of-intellect

http://www.udel.edu/educ/gottfredson/reprints/1997whygmatters.pdf

http://www.unz.com/jthompson/multiple-emotional-intelligence/

http://www.unz.com/jthompson/can-tests-predict-academic-outcomes/

http://www.unz.com/jthompson/the-comparative-advantage-of-eminence/

http://www.unz.com/jthompson/bright-folk-do-community-stuff/

https://www.unz.com/jthompson/intelligent-brains/

https://www.unz.com/jthompson/the-well-tempered-clavichord/

https://www.unz.com/jthompson/the-nature-of-human-intelligence-a-textbook

https://www.unz.com/jthompson/heave-half-brick-at-creativity/

https://www.unz.com/jthompson/another-half-brick-of-creativity/

https://www.unz.com/jthompson/the-tricky-question-of-rationality

http://www.unz.com/jthompson/social-class-and-university-entrance_28/

Source: James Thompson, “In the Wake of the Swan”, The Unz Review, January 5, 2019


Why was it that [Stephen Jay] Gould had such an impact when he argued [in The Mismeasure of Man] that the [intelligence] tests were biased against working class and minority racial groups? Moreover, how did his views ever take hold when the issue of bias in intelligence testing had just been comprehensively evaluated in Arthur Jensen’s … Bias in Mental Testing? Jensen showed that, far from under-predicting African-American achievements, the tests perhaps slightly over-predicted them. I presume that Jensen’s volume was less often read, though it was written by an expert, not a polemicist. Perhaps precisely because it was written by an expert, in a restrained and far from folksy style, it had less impact on popular culture, which is what tends to determine public debates….

Gould’s book made a number of assertions. Two that stuck in people’s minds were: that measures of brain size derived from the study of skulls of different races had been biased, and that many items on the Army tests of intelligence were culturally biased.

The debate about the ancient skulls has raged to and fro for a long time, but it seems highly probable that the measures were taken correctly.

Now the redoubtable Russell Warne has taken a detailed look at what Gould said about the Army Beta test, and finds that on that topic he has been unreliable and incorrect….

Warne says:

Given these results from our replication, it seems that Gould’s criticism of time limits and his argument that the Army Beta did not measure intelligence are without basis. Despite the short time limits for each Army Beta subtest, the results of this replication support the World War I psychologists’ belief that the Army Beta measured intelligence.

… Bluntly, Gould misrepresented the test, and misled his readers. Gould probably achieved his objective, which was to trash intelligence testing in the eyes of a generation of academics.

Warne has shown that the Beta test still works. It is a good predictor of intelligence, which correlates with current measures of scholastic attainment, shows a positive manifold and resolves into a common factor. In a standard which Gould never attempted, Warne pre-registered his prior assumptions so that the results of his experiment could be plainly seen by the reader, and so that the facts could prove him wrong.

Warne’s achievement is to have shown that Gould got it wrong. [See also Warne’s article, “The Mismeasurement of Stephen Jay Gould”, Quillette, March 19, 2019.]

Source: James Thompson, “Gould Got It Wrong”, The Unz Review, February 25, 2019


Teachers loom large in most children’s lives, and are long remembered. At class reunions people often talk of the most charismatic teacher, the one whose words and helpfulness made a difference. Who could doubt that they can have an influence on children’s learning and future achievements?

Doug Detterman is one such doubter:

Education and Intelligence: Pity the Poor Teacher because Student Characteristics are more Significant than Teachers or Schools.

Douglas K. Detterman, Case Western Reserve University (USA)
The Spanish Journal of Psychology (2016), 19, e93, 1–11.
doi:10.1017/sjp.2016.88

https://pdfs.semanticscholar.org/2223/6c0ab2cee3dd9e25daa2236557b0912b799e.pdf

….

At least in the United States and probably much of the rest of the world, teachers are blamed or praised for the academic achievement of the students they teach. Reading some educational research it is easy to get the idea that teachers are entirely responsible for the success of educational outcomes. I argue that this idea is badly mistaken. Teachers are responsible for a relatively small portion of the total variance in students’ educational outcomes. This has been known for at least 50 years. There is substantial research showing this but it has been largely ignored by educators. I further argue that the majority of the variance in educational outcomes is associated with students, probably as much as 90% in developed economies. A substantial portion of this 90%, somewhere between 50% and 80% is due to differences in general cognitive ability or intelligence. Most importantly, as long as educational research fails to focus on students’ characteristics we will never understand education or be able to improve it.

Doug Detterman is a noble toiler in the field of intelligence, and has very probably read more papers on intelligence than anyone else in the world. He notes that the importance of student ability was known by Chinese administrators in 200 BC, and by Europeans in 1698.

The main reason people seem to ignore the research is that they concentrate on the things they think they can change easily and ignore the things they think are unchangeable.

Despite some experiments, the basics of teaching have not changed very much: the teacher presents stuff on a blackboard/projector screen which the students have to learn by looking at the pages of a book/screen, and then writing answers on a page/screen. By now you might have expected all lessons to have been taught by some computer driven correspondence tutorials, cheaply delivered remotely. There is some of that, but not as much as dreamed of decades ago.

Detterman reviews Coleman et al. (1966) and Jencks et al. (1972), which first brought to attention that 10% to 20% of variance in student achievement was due to schools and 80% to 90% due to students. He then looks at more recent reviews of the same issue.

Gamoran and Long (2006) reviewed the 40 years of research following the Coleman report but also included data from developing countries. They found that for countries with an average per capita income above $16,000 the general findings of the Coleman report held up well. Schools accounted for a small portion of the variance. But for countries with lower per capita incomes the proportion of variance accounted for by schools is larger. Heyneman and Loxley (1983) had earlier found that the proportion of variance accounted for by schools in poorer countries was related to the countries’ per capita income. This became known as the Heyneman-Loxley effect. A recent study by Baker, Goesling, and LeTendre (2002) suggests that the increased availability of schooling in poorer countries has decreased the Heyneman-Loxley effect so that these countries are showing school effects consistent with or smaller than those in the Coleman report.

The largest effect of schooling in the developing world is 40% of variance, and that includes “schooling” where children attend school inconsistently, and staff likewise.

After being destroyed during the Second World War, Warsaw came under control of a Communist government which allocated residents randomly to the reconstructed city, to eliminate cognitive differences by avoiding social segregation. The redistribution was close to random, so they expected that the Raven’s Matrices scores would not correlate with parental class and education, since the old class neighbourhoods had been broken up, and everyone attended the schools to which they had randomly been assigned. The authorities assumed that the correlation between student intelligence and the social class index of the home would be 0.0, but in fact it was R² = 0.97, almost perfect. The difference due to different schools was 2.1%. In summary, in this Communist heaven student variance accounted for 98% of the outcome.

Angoff and Johnson (1990) showed that the type of college or university attended by undergraduates accounted for 7% of the variance in GRE Math scores. Fascinatingly, a full 35% of students did not take up the offer from the most selective college they were admitted to, instead choosing to go to a less selective college. Their subsequent achievements were better predicted by the average SAT score of the college they turned down than the average SAT scores of the college they actually attended, the place where they received their teaching. Remember the Alma Mater you could have attended.

Twins attending the same classroom are about 8% more concordant than those with different teachers, which is roughly in line with the usual school effect of 10%.…

Given all that, why bother to choose a good school? Finding somewhere safe, friendly, and close to home could be important. Even if the particular school is not going to make a big scholastic difference, it can make a difference to satisfaction, belonging, and happiness. That is worth searching for.

Source: James Thompson, “Pity the Poor Teacher”, The Unz Review, July 1, 2019


Do bright people earn more than others? If not, it would strengthen the view that intelligence tests are no more than meaningless scores on paper and pencil tests composed of arbitrary items which have no relevance to real life.…

Dalliard argues that many of the low estimates for the correlation between intelligence and income are based on single year earning figures, and it is better to look at rolling averages over several years, and to note that very early in career and very late in career figures may be a poor reflection of overall career earnings. Better to calculate “permanent” earnings of the sort achieved between ages 25 and 55. He looks at NLSY79 data, wisely taking only earnings and wages (no welfare payments). Wages above the cutoff are set to the average of all wages above the cutoff. Using a log transform he shows that one additional IQ point predicts a 2.5% boost in income. The standardized effect size, or correlation, is 0.36 and the R squared is 13%.

Men’s income is more strongly related to intelligence:

For example, the expected permanent annual income of a man with an IQ of 100 is e^(8.004 + 0.027 * 100), or $44,530. The expected permanent annual income of a woman with the same IQ is e^(8.004 + 0.021 * 100), or $24,440.
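The quoted dollar figures follow directly from the log-linear model; here is a quick check of the arithmetic (the intercepts and slopes are Dalliard's, the function name is mine):

```python
import math

def permanent_income(iq, intercept=8.004, slope=0.027):
    """Log-linear model: ln(income) = intercept + slope * IQ (male NLSY79 fit)."""
    return math.exp(intercept + slope * iq)

print(round(permanent_income(100)))              # men at IQ 100: ≈ $44,530
print(round(math.exp(8.004 + 0.021 * 100)))      # women at IQ 100: ≈ $24,440
```

On this model each extra IQ point multiplies male income by e^0.027 ≈ 1.027, i.e. about 2.7%, consistent with the "one additional IQ point predicts a 2.5% boost" figure for the pooled sample.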

Below, income by racial group, which should be compared with group differences in intelligence; the parallels are close.

Looking at how to predict the effect of intelligence on each racial group, an interesting finding emerges:

Black men have a significantly lower intercept and a significantly higher slope coefficient: each additional IQ point predicts 3.6% (95% CI: 2.6%-4.5%) more income for black men.

This suggests that employers value intelligence, and pay higher wages to brighter employees across all groups, an effect which is bigger in a group with a lower average ability level.

There are other studies which could be added, and more detail which can be explained in each of these sources, but I have picked a selection of studies to make a general point: I think it is pretty clear that intelligence has real-world implications.

Source: James Thompson, “The Wages of Intellect (2)”, The Unz Review, September 2, 2019 (See also Thompson’s “Family Fortunes” of September 9, 2019, and “Family Misfortunes” of September 17, 2019.)


Generational changes in IQ (the Flynn Effect) have been extensively researched and debated. Within the US, gains of 3 points per decade have been accepted as consistent across age and ability level, suggesting that tests with outdated norms yield spuriously high IQs. However, findings are generally based on small samples, have not been validated across ability levels, and conflict with reverse effects recently identified in Scandinavia and other countries. Using a well-validated measure of fluid intelligence, we investigated the Flynn Effect by comparing scores normed in 1989 and 2003, among a representative sample of American adolescents ages 13–18 (n = 10,073). Additionally, we examined Flynn Effect variation by age, sex, ability level, parental age, and SES. Adjusted mean IQ differences per decade were calculated using generalized linear models. Overall the Flynn Effect was not significant; however, effects varied substantially by age and ability level. IQs increased 2.3 points at age 13 (95% CI = 2.0, 2.7), but decreased 1.6 points at age 18 (95% CI = −2.1, −1.2). IQs decreased 4.9 points for those with IQ ≤ 70 (95% CI = −4.9, −4.8), but increased 3.5 points among those with IQ ≥ 130 (95% CI = 3.4, 3.6). The Flynn Effect was not meaningfully related to other background variables. Using the largest sample of US adolescent IQs to date, we demonstrate significant heterogeneity in fluid IQ changes over time. Reverse Flynn Effects at age 18 are consistent with previous data, and those with lower ability levels are exhibiting worsening IQ over time. Findings by age and ability level challenge generalizing IQ trends throughout the general population.

Jonathan M. Platt et al., “The Flynn Effect for Fluid IQ May Not Generalize to All Ages or Ability Levels: A Population-Based Study of 10,000 US Adolescents”,
Intelligence, Volume 77, November–December 2019


It takes a certain courage to title a paper: Genetic “General Intelligence,” Objectively Determined and Measured.

Javier de la Fuente, Gail Davies, Andrew D. Grotzinger, Elliot M. Tucker-Drob, Ian J. Deary

doi: https://doi.org/10.1101/766600

The authors … investigate the genetic contribution of g to variation in each of the cognitive tests. Genetic correlation is simply the correlation between the genetic contributors to each of the measured abilities. It is correlation at the level of genes, not test scores. If the brain is made up of modules, then one would expect such genetic correlations to be low. On the other hand, a brain largely based on general ability would have strong correlations. In fact, the genetic correlations range from .14 to .87, with a mean of .53 and the first principal component accounted for a total of 62.17% of the genetic variance. The genetics of intelligence is largely g based, it would seem….
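The 62% figure is roughly what one would expect from the reported mean genetic correlation of .53. As a hedged illustration (the 4-test equicorrelation matrix below is hypothetical, not the authors' data), the first principal component of a uniform correlation matrix has eigenvalue 1 + (p - 1)r:

```python
import numpy as np

# Hypothetical battery of 4 tests with a uniform genetic correlation of .53.
p, r = 4, 0.53
C = np.full((p, p), r)
np.fill_diagonal(C, 1.0)

eigvals = np.linalg.eigvalsh(C)      # eigenvalues in ascending order
share = eigvals[-1] / eigvals.sum()  # first principal component's share of variance
print(share)                         # 1 + (p - 1) * r = 2.59, so 2.59 / 4 ≈ 0.65
```

With a realistic spread of correlations around .53 the share comes out a little lower, in line with the reported 62.17%.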

So, have the authors “got away with” their combative title? The best way to answer would be to ask: “What else do you want?” The claim is that intelligence is real, a real aspect of the brain. To show that this is the case, you can show a link between intelligence test scores and real life (done many, many times; some examples are shown below); a link between intelligence test scores and implied measures of genetic heritability via twin and family studies (also done many times); and now, finally, a link between intelligence test scores, broken up into general and specific factors, and measures of heritability via actual genomic studies identifying locations for general and specific factors of intelligence.

Here are some correlations between intelligence and real life measures
https://archive.ph/PCvgk

In my view this is a very important advance. It shows an underlying reality, at a genetic level, between general and specific aspects of cognitive ability. It allows investigations to proceed at two levels: the test score level and the genomic code level. Further studies will drill down into yet more detail.

It is fair to say that this is an objective approach, and ought to answer any reasonable critic of the reality of cognitive ability being based on brains which are under substantial genetic control.

Source: James Thompson, “Intelligence, Objectively”, The Unz Review, September 25, 2019


You know the story, but here we go again. The standard account of sex differences in intelligence is that there aren’t any. Or not significant ones, or perhaps some slight ones, but they counter-balance each other. The standard account usually goes on to concede that males are more variable than females, that is to say, they are more widely dispersed around the mean. Although this is an oft-repeated finding, in some circles it is still referred to as merely a hypothesis. There is a standardisation sample in Romania which did not show this difference, and other epidemiological samples where the differences are slight, but the usual finding is that men show a wider standard deviation of ability.

Against this orthodoxy, Irwing and Lynn (2006) have argued that boys and girls mature at different speeds, with girls ahead till about age 16 and with boys moving ahead thereafter, such that men are 2-4 IQ points ahead of women throughout adult life.

https://www.unz.com/jthompson/men-4-points-ahead/

Lynn further argues that if men are 4 points ahead, and have a standard deviation of 15 as opposed to women’s standard deviation of 14, those two findings almost fully explain the higher number of men in intellectually demanding occupations. There is no glass ceiling. Fewer women are capable of the higher levels required for the glittering prizes. Furthermore, this explains why men know more things. At the very highest levels of ability there are more men, and they have more knowledge, which is why they win general knowledge competitions.

https://www.unz.com/jthompson/sex-on-the-brain/

This, the seditious faction suggest, is just a fact of sexual dimorphism. Male brains are very much bigger than women’s, and each of the component regions of the male brain is bigger than the same region in women, and also more variable in size.

Standardization samples ought to be good, and often are so, but they are not as good as birth cohorts or major epidemiological samples, so the latter are to be favoured when looking for reliable sex differences.

However, here is another paper on standardization samples confirming the same pattern of male advantage, though not greater male variability in one of the samples.

Sex Differences on the WAIS-III in Taiwan and the United States
Hsin-Yi Chen and Richard Lynn. Pages 324-328.

https://drive.google.com/file/d/1N3qokmbgctVMktN5pU8dUbrzMmHns1CJ/view?usp=sharing

Sex differences are reported in the standardization samples of the WAIS-III in Taiwan and the United States. In Taiwan, men obtained a significantly higher Full Scale IQ than women of 4.35 IQ points and in the United States men obtained a significantly higher Full Scale IQ than women of 2.78 IQ points. The sex differences on the 14 subtests are generally similar with a correlation between the two of .65. In the Taiwan sample there were no consistent sex differences in variability.

The authors say:

There are three points of interest in the results. First, in the Taiwan sample males obtained a higher Full Scale IQ of .29d, the equivalent of 4.35 IQ points. This confirms the thesis advanced by Lynn (1994, 1998, 1999) that in adults, males have a higher average IQ than females of around 4-5 IQ points. Males obtained a higher Full Scale IQ in the American standardization sample of the WAIS-III of .185d (2.78 IQ points). These two results disconfirm the assertions of Haier et al. (2004) and Halpern (2012) that “Comparisons of general intelligence assessed with standard measures like the WAIS show essentially no differences between men and women” (Halpern, 2012, p. 115).

Second, the sex differences in the Taiwan and American WAIS-III are generally similar. On the 14 subtests the correlation between the two is .65 (p < .001). Thus, in both samples men obtained their greatest advantage on Information and their lowest advantage on Digit Symbol – Coding.

Third, there was no consistent sex difference in variability. On the Taiwan Full Scale IQ the VR of 1.02 is negligible, and males had greater variability in 9 of the 14 subtests while females had greater variability in 5 of the subtests. These results do not confirm the greater variability of males reported in numerous previous studies, e.g., Arden and Plomin (2006) and Dykiert, Gale and Deary (2009).
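A quick check of the unit conversions in the quoted passage: d values convert to IQ points by multiplying by the 15-point standard deviation, and a variance ratio converts to an SD ratio by taking its square root.

```python
import math

SD = 15  # standard deviation of the IQ scale

def d_to_iq(d):
    """Convert a Cohen's d effect size to IQ points."""
    return d * SD

print(round(d_to_iq(0.29), 2))    # Taiwan Full Scale advantage: 4.35 points
print(round(d_to_iq(0.185), 3))   # US Full Scale advantage: 2.775, quoted as 2.78

# A variance ratio (VR) of 1.02 corresponds to an SD ratio of sqrt(1.02):
print(round(math.sqrt(1.02), 3))  # ≈ 1.01, a negligible difference
```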

This study, on the gold standard Wechsler test, seems to confirm a male advantage in general intelligence. As discussed, standardisation samples are designed to be an excellent representation of the population on which the test will be used (with changes to make it culturally accurate), and there is no reason to believe that this balanced selection would favour males. Birth samples would be even better, but this is a good test of the male advantage proposal.

The Information subtest is a measure of very general General Knowledge, not requiring any specialist interests, but asking about the things which would generally be known in the general population. A .44 sd advantage on this subtest is enormous. The greater male representation in high level general knowledge competitions seems well founded. On the US sample there is almost as big a male advantage for Maths, and a large deficit for the digit symbol coding task, which measures simple processing speed.

The lack of a greater standard deviation in the Taiwanese sample goes against the general finding, as did the standardisation sample for Romania. Standardisation samples are not as representative as larger epidemiological surveys, but it is interesting nonetheless, in that it suggests some sampling restriction.

Source: James Thompson, “Sex Differences, Again”, The Unz Review, January 12, 2021


We know from twin studies that intelligence is heritable, and genome-wide association studies are trying to identify the genes responsible for this result. (We know that genetics is powerful in real life; now we need to show it in theory.) Part of the problem is that larger studies have put together results from disparate tests, so [Robert Plomin’s] team has designed a 40-item intelligence game which produces a reliable (internal consistency = .78, two-week retest reliability = .88) measure of g which they have given to 4,751 young adults from their twin study.

This very big sample of 4,751 25-year-olds shows significant sex differences in favour of men. The authors don’t comment on this, but it fits the emerging pattern of a male intellectual advantage in adulthood.…

The summary is that the team have created a good new 15-minute IQ test which correlates well with the many longer assessments used over the years on their very large sample of twins. It also has good predictive power. If more widely adopted, and the few bits of explanatory English language translated into other languages, it could be a very useful contribution to large GWAS investigation of the genetic basis of intelligence.

Source: James Thompson, “Game On: Pathfinder”, The Unz Review, August 14, 2021


Heritability studies cannot show definitively that race differences in intelligence have a genetic cause. It is always possible that there is some hidden environmental factor(s) – a so-called “X factor,” analogous to nitrates in the corn example – that explains differences between – but not within – races (an X factor that explains the Black–White IQ gap would have to affect all Blacks equally in order to preserve high heritability and shift the population mean downwards without changing the variance). Nevertheless, the high within-group heritability of IQ can be part of a package of evidence for between-group heritability. This point was made by Jensen (1969), who was widely misunderstood as naively inferring between- from within-group heritability (see Sesardic, 2005, pp. 128–138). Expounding on Jensen’s argument, Flynn (1980) writes that “the probability of a genetic hypothesis will be much enhanced if, in addition to evidencing high [heritability] estimates, we find we can falsify literally every plausible environmental hypothesis [i.e., X factor] one by one” (p. 40; quoted in Sesardic, 2005, p. 136). The obvious candidate X factor that could explain race differences is, of course, racism. If racism lowers IQ, this could explain why the mean is shifted downwards for victimized groups. However, as Flynn argues, attributing differences to racism

is simply an escape from hard thinking and hard research. Racism is not some magic force that operates without a chain of causality. Racism harms people because of its effects and when we list those effects, lack of confidence, low self-image, emasculation of the male, the welfare mother home, poverty, it seems absurd to claim that any one of them does not vary significantly within both black and white America. (Flynn, 1980, p. 60; quoted in Sesardic, 2005, pp. 141–142)

Now, after decades of intensive searching, the X factor remains elusive. The adult Black–White IQ gap has remained stubbornly constant at approximately one standard deviation (15 IQ points) among cohorts born since around 1970 (Murray, 2007). Dickens and Flynn (2006) report that “Blacks gained 4 to 7 IQ points on non-Hispanic Whites between 1972 and 2002” (p. 913), but these gains appear to be among Blacks born before the early seventies. Dickens and Flynn (2006, Figure 3) indicate that, in 2002, the Black–White IQ gap among 20-year-olds was approximately one standard deviation, or 15 points. Nisbett (2017) writes that “Dickens and Flynn found [the Black–White gap in IQ to be] around 9.5 points,” but this is only the gap if we include children (as R. Nisbett confirmed in a personal communication, December 24, 2018). More recent evidence indicates that the gap has persisted or even widened. Frisby and Beaujean (2015, Table 8) find a Black–White IQ gap of 1.16 standard deviations among a population-representative sample of adults used to norm the Wechsler Adult Intelligence Scale IV in 2007. Intensive interventions can raise IQ substantially during childhood when the heritability of IQ is low. But despite some misleading claims about the success of early intervention programs, gains tend to dissolve by late adolescence or early adulthood (Baumeister & Bacharach, 2000; Lipsey, Farran, & Durkin, 2018; Protzko, 2015). Adoption by white families – one of the most extreme interventions possible – has virtually no effect on the IQ of black adoptees by adulthood. Black children adopted by middle- and upper-middle-class white families in Minnesota obtained IQ scores at age 17 that were roughly identical to the African American average. Adoptees with one black biological parent obtained IQ scores that were intermediate between the black and white means (Loehlin, 2000, Table 9.3).

To reiterate, the high within-group heritability of IQ combined with the failure to find an environmental X factor to explain the IQ gap does not show decisively that race differences are genetic, because it is possible that an X factor will be discovered in the future. However, the environmentalist theory of race differences has not, by normal scientific standards, been an especially progressive research program (in the sense of Lakatos, 1970). Environmentalists never predicted that the Black–White IQ gap would, after reaching one standard deviation, remain impervious to early education, adoption, massive improvements in the socioeconomic status of Blacks, and the (apparent) waning of overt racism and discrimination. Commenting 45 years ago on environmentalist theories that appeal to an X factor, Urbach (1974) noted that “any data in the world can be made consistent with any theory by invoking nameless and untested factors” (p. 134). Nevertheless, we cannot technically say that the environmentalist explanation for the IQ gap has been falsified. The fact that the gap did narrow since the early twentieth century gives some credibility to the idea that environment is playing a role.

Let us turn to the second method for investigating the role of genes in development: genome-wide association studies (GWAS). Unlike heritability studies, GWAS can uncover specific genetic variants – or single-nucleotide polymorphisms (SNPs) – associated with IQ. In just the last couple years, GWAS has identified hundreds of such SNPs (Davies et al., 2018; Savage et al., 2018; Sniekers et al., 2017), which together explain around 11% of the variance in IQ (Allegrini et al., 2019).
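To make the "around 11% of variance" figure concrete, here is a toy simulation of a polygenic score (every parameter below is invented for illustration; real GWAS effect sizes are tiny, noisy estimates over millions of SNPs, not known values):

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 2000, 200                   # individuals, SNPs (toy sizes)
G = rng.binomial(2, 0.5, size=(n, m)).astype(float)  # genotypes coded 0/1/2
beta = rng.standard_normal(m)      # per-SNP effect sizes

g = G @ beta                       # aggregate genetic value
g = (g - g.mean()) / g.std()       # standardize for convenience
noise = rng.standard_normal(n) * np.sqrt((1 - 0.11) / 0.11)
y = g + noise                      # phenotype built so the score explains ~11%

score = G @ beta                   # the polygenic score itself
r = np.corrcoef(score, y)[0, 1]
print(round(r ** 2, 2))            # ≈ 0.11 of phenotypic variance explained
```

The score is a simple weighted sum of allele counts; "explaining 11% of the variance" just means the squared correlation between that sum and the phenotype is about 0.11.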

If we find that the SNPs implicated in IQ are differentially distributed across racial groups, this would not necessarily imply that race differences in intelligence are genetic. SNPs might have different effects across races and environments due to gene–gene and gene–environment interactions. SNPs with no causal relation to intelligence can be genetically linked to SNPs that do have a causal relation in some populations but not others, so SNP–intelligence correlations may not always hold across races (Rosenberg, Edge, Pritchard, & Feldman, 2019). But if we find that many of the same SNPs predict intelligence in different racial groups, a risky prediction made by the hereditarian hypothesis will have passed a crucial test. Even then, however, GWAS will only establish a correlation between SNPs and IQ without revealing the causal chain linking SNP to phenotype. It would still be theoretically possible that these SNPs lead to differences in intelligence as a consequence of environmental factors (e.g., parenting effects) that can be manipulated so as to eliminate race differences. But if work on the genetics and neuroscience of intelligence becomes sufficiently advanced, it may soon become possible to give a convincing causal account of how specific SNPs affect brain structures that underlie intelligence (Haier, 2017). If we can give a biological account of how genes with different distributions lead to race differences, this would essentially constitute proof of hereditarianism. As of now, there is nothing that would indicate that it is particularly unlikely that race differences will turn out to have a substantial genetic component. If this possibility cannot be ruled out scientifically, we must face the ethical question of whether we ought to pursue the truth, whatever it may be.

Source: Nathan Cofnas, “Research on Group Differences in Intelligence: A Defense of Free Inquiry”, Philosophical Psychology, Volume 33 2020 — Issue 1


As you may have noticed, it is not popular to suggest that genetics is a possible cause of individual differences, and distinctly unpopular to even hint that it might be a cause of genetic group differences….

Well, if facts exist, someone will notice them and, if brave, find a way of letting people know what they have noticed.

A Multimodal MRI-based Predictor of Intelligence and Its Relation to Race/Ethnicity. Emil O. W. Kirkegaard and John G.R. Fuerst. The Mankind Quarterly · March 2023.
DOI: 10.46469/mq.2023.63.3.2
https://drive.google.com/file/d/1iVjNT1Uv9fC3uZzwBPpVQDRGWbmiPXoV/view?usp=sharing

What have they found?

They say:

We used data from the Adolescent Brain Cognitive Development Study to create a multimodal MRI-based predictor of intelligence. We applied the elastic net algorithm to over 50,000 neurological variables. We find that race can confound models when a multiracial training sample is used, because models learn to predict race and use race to predict intelligence.

When the model is trained on non-Hispanic Whites only, the MRI-based predictor has an out-of-sample model accuracy of r = .51, which is 3 to 4 times greater than the validity of whole brain volume in this dataset. This validity generalized across the major socially-defined racial/ethnic groupings (White, Black, and Hispanic). There are race gaps on the predicted scores, even though the model is trained on White subjects only.

This predictor explains about 37% of the relation between both the Black and Hispanic classification and intelligence.

So, by looking at all the MRI measures they can predict IQ better than by using brain size alone. So, we should stop using the brain size measure (about 0.28) and move to this better measure (0.51) and eventually the even better ones that may be found later.

First of all, some summary data:

These are adolescents, so things may change a bit with age, but these are good sample sizes. Black adolescents have a somewhat lower than expected mean score, and a high standard deviation, the latter surprisingly so, since many previous black samples have a standard deviation of 13. I don’t know how to interpret this, but it might be due to the subtests used.

The intelligence tests used were not the best sample of skills (where were Maths, or Wechsler Vocabulary or Block Design?) and they over-represented working memory tests, which I think are weak measures, though fashionable. It may account for the large standard deviation finding. I predict that a more representative range of tests would lead to even higher predictive accuracy overall, and perhaps lower standard deviations.

The learning algorithm they employed was one suitable for use in the tricky setting where there are far more variables than individual subjects. When you apply the algorithm to the whole sample, leaving aside race, the correlation with IQ is 0.60, which is very high. Using this technique, a brain image gives a good guide to the power of the brain as a problem-solving organ.
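The elastic net is built for exactly this p >> n setting: its mixed L1/L2 penalty selects a sparse subset of many correlated predictors without blowing up. A minimal coordinate-descent sketch on synthetic data (this is not the authors' code, and the data are invented stand-ins for the MRI variables):

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator used in lasso / elastic-net updates."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=50):
    """Coordinate-descent elastic net; workable even when p >> n."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ b + X[:, j] * b[j]  # residual with feature j excluded
            rho = X[:, j] @ resid / n
            b[j] = soft_threshold(rho, alpha * l1_ratio) / (
                col_sq[j] + alpha * (1 - l1_ratio))
    return b

# Synthetic stand-in: 80 subjects, 200 variables, only 5 carrying real signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 200))
w_true = np.zeros(200)
w_true[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]
y = X @ w_true + 0.5 * rng.standard_normal(80)

b = elastic_net(X, y)
X_new = rng.standard_normal((40, 200))  # out-of-sample subjects
y_new = X_new @ w_true + 0.5 * rng.standard_normal(40)
r = np.corrcoef(X_new @ b, y_new)[0, 1]
print(r > 0.5)  # the sparse fit generalizes out of sample
```

The out-of-sample correlation is the relevant figure of merit, which is why the paper's r = .51 is reported on held-out subjects rather than the training set.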

Using an MRI-based predictive equation the authors did a better job of predicting a person’s IQ than was possible from knowing their parent-described race, despite the racial differences in intelligence being large. These “social race” labels were redundant for the purposes of predicting intelligence.

The correlations of MRI prediction with actual intelligence test results within each social race were pretty similar: white 0.51, black 0.53, Hispanic 0.54 and other 0.58.

They tried to see if their algorithm could use MRI data to predict the social race of the adolescents, and found they could do so with 73% accuracy. The distinction between blacks and whites could be drawn with almost complete accuracy, only a 2% error rate either way.

They then trained a model to predict genetic ancestry. You might want to call this race.

They say:

We find that MRI-based predictions of genetic ancestry are very accurate. The three correlations of interest are .91, .89, and .61, for European, African, and Amerindian ancestry, respectively. As expected, the correlations were higher for European and African ancestry than for White and Black social race, respectively.

So, you can study a brain image and predict the person’s race with very high accuracy.

A possible counter-argument is that the model has learned to spot race, and then uses that information to sharpen its intelligence predictions. The authors decided to test a predictive equation for whites only, so as to cut out this possible confounder. In fact, the correlation only drops from 0.51 on the full dataset to 0.48 for whites only.
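
The logic of that check can be sketched in a few lines (a synthetic illustration, not the authors' code): fit the predictive equation using one group only, so that group membership can contribute nothing, and see whether the within-group predictive correlation survives.

```python
import numpy as np

rng = np.random.default_rng(2)

# Same true brain-to-score weights for everyone; groups differ only in feature means
n, p = 300, 400
w_true = rng.normal(scale=0.2, size=p)
group = rng.integers(0, 2, size=n)                   # 0 = reference group, 1 = other
X = rng.normal(size=(n, p)) + 0.3 * group[:, None]   # group mean shift in features
y = X @ w_true + rng.normal(size=n)

def ridge_fit(X, y, lam=50.0):
    """Closed-form ridge regression: solves (X'X + lam*I) w = X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Train on half of group 0 only: group membership cannot inform the fit
g0 = np.where(group == 0)[0]
half = len(g0) // 2
w_hat = ridge_fit(X[g0[:half]], y[g0[:half]])
r_within = np.corrcoef(X[g0[half:]] @ w_hat, y[g0[half:]])[0, 1]
print(f"within-group predictive correlation: {r_within:.2f}")
```

If the correlation held up only because the model was keying on group labels, it would collapse under this design; a modest drop, as the authors report, suggests the prediction runs through the brain features themselves.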

As usual, I have left out some of the additional tests carried out to make sure the findings were robust.

They add:

The model learned to predict subjects’ social race/genetic ancestry based on the MRI data, and then used this information to predict intelligence. This finding is consistent with those of Gichoya et al. (2022), who report that machine learning can recognize self-reported race/ethnicity from a wide variety of medical imaging data. This is unsurprising because in the USA, socially identified race closely tracks continental genetic ancestry (Kirkegaard et al., 2021; Tang et al., 2005), which certainly is not “only skin deep”.

We further analyzed which aspects of MRI data were most useful in the prediction of intelligence. We found that functional MRI (fMRI) task data, which measures blood flow while performing tasks, had the highest validity for predicting intelligence. Additionally, MRI datasets which had more variables, which showed larger race differences, and which had higher correlations with polygenic scores, had higher validities.

So, what can we conclude here? Using a statistical technique to study the many scores produced by magnetic resonance images, it is possible to predict the subject’s intelligence and their race with high accuracy. The predictive equations work for all races, so they have not been damaged by some presumed test bias. Some of the MRI measures are more predictive than others, particularly the functional task data, which measure blood flow in the brain as tasks are carried out (and these tasks are not used to calculate IQ scores).

All this is brain deep, not skin deep.

Is there anywhere left for blank slate-ists to hide? They had trouble accepting that intelligence was heritable within genetic groups. After many decades of research some (but by no means all) made a grudging admission that heredity accounted for something. Then they argued that heredity was weaker in lower socio-economic-status groups. That was eventually shown to be unlikely, though the debate has been long and hard, so not all researchers accept that finding. Throughout, blank slate-ists denied that genetics could explain genetic group differences. They argued that each race showed the effects of heredity within race, but none between races (or not more than a bare 5%). They claimed that observed differences must (95% of them) be due to environmental differences, including those which are hard to detect, but have powerful effects. Will this paper change their minds? I think not. Not for 50 years, anyway.

This paper makes a simple point: brains differ between races, and these differences relate to differences in intelligence.

Source: James Thompson, “Not Unreasonable: Racial Differences Are Brain Deep”, The Unz Review, March 31, 2023