Is the Vaccine Making a Difference?

Perhaps. In the graph below I have plotted 7-day moving averages of the daily numbers of new COVID-19 cases and deaths. The plot of deaths is shifted to the right by 24 days because the highest correlation between cases and deaths occurs with a 24-day lag from cases to deaths. Although the case rate began to decline in mid-January 2021, the death rate held steady through early March, and began to drop only after about 10 percent of the populace had been fully vaccinated.
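The smoothing-and-lag computation described above can be sketched as follows. (The case and death series here are synthetic, invented purely to illustrate the method; they are not the actual COVID-19 data.)

```python
import numpy as np

def moving_average(x, window=7):
    """Trailing moving average computed by convolution."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def best_lag(cases, deaths, max_lag=40):
    """Find the lag (in days) at which cases and deaths correlate most."""
    best, best_r = 0, -2.0
    for lag in range(1, max_lag + 1):
        r = np.corrcoef(cases[:-lag], deaths[lag:])[0, 1]
        if r > best_r:
            best, best_r = lag, r
    return best, best_r

# Synthetic illustration: deaths echo cases 24 days later, plus noise.
rng = np.random.default_rng(0)
days = np.arange(300)
cases = 1000.0 + 500.0 * np.sin(days / 30.0) + rng.normal(0.0, 50.0, 300)
deaths = np.empty(300)
deaths[24:] = 0.02 * cases[:-24]
deaths[:24] = deaths[24]
deaths += rng.normal(0.0, 1.0, 300)

lag, r = best_lag(moving_average(cases), moving_average(deaths))
print(lag, r)  # the recovered lag should be close to 24
```

Scanning candidate lags for the peak correlation, as above, is how one arrives at a figure like the 24-day lag used in the plot.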

However, the leveling-off of the case and death rates suggests that the vaccination rate is too low given (a) the more infectious nature of new strains of COVID-19 and (b) the rate at which interpersonal social and economic activity is rising.

Modeling and Science Revisited

I have written a lot about modeling and science. (See the long list of posts at “Modeling, Science, and ‘Reason’”.) I have said, more than once, that modeling isn’t science. What I should have said — though it was always implied — is that a model isn’t scientific if it is merely synthetic.

What do I mean by that? Here is an example by way of contrast. The famous equation E = mc² is a synthetic model in that it is derived from Einstein’s special theory of relativity (and other physical equations). But it is also an empirical model in that the relationship between mass (m) and energy (E) can be confirmed by observation (given suitable instruments).

On the other hand, a complex model of the U.S. economy, a model of Earth’s “average” temperature (misleadingly called a climate model), or a model of combat (to give a few examples) is only synthetic.

Why do I say that a complex model (of the kind mentioned above) is only synthetic? Such a model consists of a large number of modules, each of which is a mathematical formulation of some aspect of the larger phenomenon being modeled. Here’s a simple example: an encounter between a submarine and a surface ship, where the outcome is expressed as the probability that the submarine will sink the surface ship. The outcome could be expressed in this way:

S = D × F × H × K × C, where S = probability that the submarine sinks the surface ship, which is the product of:

D = probability that submarine detects surface ship within torpedo range

F = probability that, given detection, submarine is able to “fix” the target and fire a torpedo (or salvo of them)

H = probability that, given the firing of a torpedo (or salvo), the surface ship is hit

K = probability that, given a hit (or hits), the surface ship is sunk

C = probability that the submarine survives efforts to find and nullify it before it can detect a surface ship

This is a simple model by comparison with a model of the U.S. economy, a global climate model, or a model of a battle involving large numbers of various kinds of weapons. In fact, it is a simplistic model of combat. Each of the modules could be decomposed into many sub-modules; for example, the module for D could consist of sub-modules for sonar accuracy, sonar operator acuity, acoustic conditions in the area of operation, countermeasures deployed by the target, etc. In any event, the module for D will consist of a mathematical relationship, based perhaps on some statistics collected from tests or exercises (i.e., not actual combat). The mathematical relationship will encompass many assumptions (mainly implicit ones) about sonar accuracy, sonar operator acuity, etc. The same goes for the other modules — C, in particular, which encompasses all of the effects of D, F, H, and K — at a minimum.

In sum, the number of unknowns completely swamps the number of knowns. There is nothing close to certainty about the model — or any model of its kind. (In the case of the model of S, for example, relatively small errors — say, 25 percent from the actual value of each variable — can yield an estimate of S that is three times greater than or one-third as much as the actual value of S.) The mathematical operations involved do nothing to resolve the uncertainty; they merely multiply it. But they nevertheless convey the appearance of certainty because they yield numbers. The numbers merely represent a lot of guesses, but they seem authoritative because numbers mesmerize most people — even scientists, who should always be skeptical of them.
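To make the error-compounding point concrete, here is a minimal sketch of the product model. (The “true” probabilities are assumed purely for illustration; nothing hangs on their particular values so long as the inflated values stay below 1.)

```python
# Illustration of how errors compound in the product model S = D*F*H*K*C.
# The "true" probabilities below are assumed purely for illustration.
true_vals = [0.5, 0.6, 0.7, 0.6, 0.5]  # D, F, H, K, C

def sink_probability(vals):
    """S is simply the product of the module probabilities."""
    s = 1.0
    for v in vals:
        s *= v
    return s

s_true = sink_probability(true_vals)

# If every module overestimates its factor by 25 percent ...
s_high = sink_probability([v * 1.25 for v in true_vals])
# ... or every module underestimates by the same proportion:
s_low = sink_probability([v / 1.25 for v in true_vals])

print(s_high / s_true)  # 1.25**5, about 3.05: three times the actual value
print(s_low / s_true)   # 0.8**5, about 0.33: one-third of the actual value
```

Multiplying five modules together raises each module’s proportional error to the fifth power, which is why a modest 25-percent error per module yields an estimate of S off by a factor of three in either direction.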

Despite all of that, analysts have for many decades been producing — and decision-makers have been consuming — the results of such models as the basis for choosing defense systems. Models of similar complexity have been and are being used in making decisions about a broad range of policies affecting the economy, health care, transportation, education, the environment, the climate (i.e., “global warming”), and on into the night.

The unfounded confidence that modelers have in their models, because the models produce numbers, captivates most decision-makers, who simply want answers. And so modelers will go to ridiculous extremes. One not-untypical example that I recall from my days as an in-house critic of analysts’ work is the model that purported to compare competing weapons (one of which was still in development) based on their relative contributions to the outcome of a hypothetical battle. The specific measure was the movement of the forward edge of the battle area (FEBA), to within a yard.

Global climate models are like that warfare model: Their creators pretend that they can estimate the change in the average temperature of the globe to within less than a tenth of a degree. If you believe that, I have a bridge to sell you.


Related pages:

Climate Change

Modeling, Science, and “Reason”

Old Wisdom Revisited

To paraphrase Kurt von Hammerstein-Equord (1878-1943), an anti-Nazi German general, there are four personality types:

Smart and hard-working (good middle manager)

Stupid and lazy (lots of these around, hire for simple, routine tasks and watch closely)

Smart and lazy (promote to senior management — delegates routine work and keeps his eye on the main prize)

Stupid and hard-working (dangerous to have around, screws up things, should be taken out and shot)

It would be fun to classify presidents accordingly, but my target today is a former boss. He wasn’t very smart, but he put up a good front by deploying rhetorical tricks (e.g., Socratic logic-chopping of a most irritating kind). But he was hard-working, if you call constant motion without a notion (my coinage) hard-working.

His rhetorical tricks and aimless energy impressed outsiders who couldn’t appreciate the damage that he did to the company. Both traits irritated intelligent insiders, who were smart enough to pierce his facade and understand the damage that he did to the company.

He kept his job for 25 years because he had only to impress outsiders — the board of directors and the senior officials of client agencies — who had no idea what he actually did from day to day. Good things were accomplished in spite of him, but the glory reflected on him, undeservedly.

The “Pause” Redux: The View from Austin

Christopher Monckton of Brenchley — who, contrary to Wikipedia, is not a denier of “climate change” but a learned critic of its scale and relationship to CO2 — posits a new “pause” in global warming:

At long last, following the warming effect of the El Niño of 2016, there are signs of a reasonably significant La Niña, which may well usher in another Pause in global temperature, which may even prove similar to the Great Pause that endured for 224 months from January 1997 to August 2015, during which a third of our entire industrial-era influence on global temperature drove a zero trend in global warming:

As we come close to entering the La Niña, the trend in global mean surface temperature has already been zero for 5 years 4 months:

There is not only a global pause, but a local one in a place that I know well: Austin, Texas. I have compiled the National Weather Service’s monthly records for Austin, which go back to the 1890s. More to the point here, I have also compiled daily weather records since October 1, 2014, for the NWS station at Camp Mabry, in the middle of Austin’s urban heat island. Based on those records, I have derived a regression equation that adjusts the official high-temperature readings for three significant variables: precipitation (which strongly correlates with cloud cover), wind speed, and wind direction (wind from the south has a marked, positive effect on Austin’s temperature).
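The adjustment described above can be sketched with ordinary least squares. (Everything here is a hypothetical stand-in, not the actual Camp Mabry records: the variable names, the assumed “true” coefficients, and the synthetic data exist only to show the shape of the computation.)

```python
import numpy as np

# Hypothetical stand-in for daily station records: the deviation of the
# high temperature from normal, explained by precipitation, wind speed,
# and a north-south wind component (+1 = due south, -1 = due north).
rng = np.random.default_rng(1)
n = 365
precip = rng.exponential(0.1, n)       # inches
wind_speed = rng.uniform(0.0, 20.0, n) # mph
south_wind = rng.uniform(-1.0, 1.0, n)

# Assumed "true" effects, for illustration only: rain and wind cool,
# southerly wind warms.
deviation = (-8.0 * precip - 0.1 * wind_speed + 3.0 * south_wind
             + rng.normal(0.0, 1.5, n))

# Ordinary least squares: deviation ~ intercept + the three regressors.
X = np.column_stack([np.ones(n), precip, wind_speed, south_wind])
coef, *_ = np.linalg.lstsq(X, deviation, rcond=None)

# The adjusted deviation removes the estimated weather effects, leaving
# a residual series in which any underlying trend is easier to see.
adjusted = deviation - X[:, 1:] @ coef[1:]
print(coef.round(2))
```

Once the weather effects are regressed out, cumulative plots of the raw and adjusted deviations can be compared, as in the graph that follows.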

Taking October 1, 2014, as a starting point, I constructed cumulative plots of the average actual and adjusted deviations from normal:

Both averages have remained almost constant since April 2017, that is, almost four years ago. The adjusted deviation is especially significant because the hypothesized effect of CO2 on temperature doesn’t depend on other factors, such as precipitation, wind speed, or wind direction. Therefore, there has been no warming in Austin — despite some very hot spells — since April 2017.

Moreover, Austin’s population grew by about 5 percent from 2017 to 2020. According to the relationship between population and temperature presented here, that increase would have induced a temperature increase of 0.1 degrees Fahrenheit. That’s an insignificant number in the context of this analysis — though one that would have climate alarmists crying doom — but it reinforces my contention that Austin’s “real” temperature hasn’t risen for the past 3.75 years.


Related page and posts:

Climate Change
AGW in Austin?
AGW in Austin? (II)
UHI in Austin Revisited

The White House Brochures on Climate Change

You will find working links to them later in this post. Also, I have created a page to memorialize the links and the back story about the preparation of the ten brochures and the attempt to send them down the memory hole.

Here is the page, reproduced in its entirety:

Post by Dr. Roy W. Spencer at Roy Spencer, Ph.D. on January 8, 2021:

White House Brochures on Climate (There is no climate crisis)

January 8th, 2021 by Roy W. Spencer, Ph. D.

Late last year, several of us were asked by David Legates (White House Office of Science and Technology Policy) to write short, easily understandable brochures that supported the general view that there is no climate crisis or climate emergency, and that pointed out the widespread misinformation being promoted by alarmists through the media.

Below are the resulting 9 brochures, and an introduction by David. Mine is entitled, “The Faith-Based Nature of Human Caused Global Warming”.

David hopes to be able to get these posted on the White House website by January 20 (I presume so they will become a part of the outgoing Administration’s record) but there is no guarantee given recent events.

He said we are free to disseminate them widely. I list them in no particular order. We all thank David for taking on a difficult job in more hostile territory than you might imagine.

Introduction (Dr. David Legates)

The Sun Climate Connection (Drs. Michael Connolly, Ronan Connolly, Willie Soon)

Systematic Problems in the Four National Assessments of Climate Change Impacts on the US (Dr. Patrick Michaels)

Record Temperatures in the United States (Dr. John Christy)

Radiation Transfer (Dr. William Happer)

Is There a Climate Emergency (Dr. Ross McKitrick)

Hurricanes and Climate Change (Dr. Ryan Maue)

Climate, Climate Change, and the General Circulation (Dr. Anthony Lupo)

Can Computer Models Predict Climate (Dr. Christopher Essex)

The Faith-Based Nature of Human-Caused Global Warming (Dr. Roy Spencer)

Post by Dr. Roy W. Spencer at Roy Spencer, Ph.D. on January 12, 2021:

At the White House, the Purge of Skeptics Has Started

January 12th, 2021 by Roy W. Spencer, Ph. D.

Dr. David Legates has been Fired by White House OSTP Director and Trump Science Advisor, Kelvin Droegemeier

[Image of the seal of the Executive Office of the President]

President Donald Trump has been sympathetic with the climate skeptics’ position, which is that there is no climate crisis, and that all currently proposed solutions to the “crisis” are economically harmful to the U.S. specifically, and to humanity in general.

Today I have learned that Dr. David Legates, who had been brought to the Office of Science and Technology Policy to represent the skeptical position in the Trump Administration, has been fired by OSTP Director and Trump Science Advisor, Dr. Kelvin Droegemeier.

The event that likely precipitated this is the invitation by Dr. Legates for about a dozen of us to write brochures that we all had hoped would become part of the official records of the Trump White House. We produced those brochures (no funding was involved), and they were formatted and published by OSTP, but not placed on the WH website. My understanding is that David Legates followed protocols during this process.

So What Happened?

What follows is my opinion. I believe that Droegemeier (like many in the administration with hopes of maintaining a bureaucratic career in the new Biden Administration) has turned against the President for political purposes and professional gain. If Kelvin Droegemeier wishes to dispute this, let him… and let’s see who the new Science Advisor/OSTP Director is in the new (Biden) Administration.

I would also like to know if President Trump approved of his decision to fire Legates.

In the meantime, we have been told to remove links to the brochures, which is the prerogative of the OSTP Director since they have the White House seal on them.

But their content will live on elsewhere, as will Dr. Droegemeier’s decision.

I have saved the ten brochures in their original (.pdf) format. The following links to the files are listed in the order in which Dr. Spencer listed them in his post of January 8, 2021:

Introduction

The Sun Climate Connection

Systematic Problems in the Four National Assessments of Climate Change Impacts on the US

Record Temperatures in the United States

Radiation Transfer

Is There a Climate Emergency?

Hurricanes and Climate Change

Climate, Climate Change, and the General Circulation

Can Computer Models Predict Climate?

The Faith-Based Nature of Human-Caused Global Warming

Thinking about Thinking — and Other Things: Desiderata As Beliefs

This is the fifth post in a series. (The previous posts are here, here, here, and here.) This post, like its predecessors, will leave you hanging. But despair not: the series will come to a point — eventually. In the meantime, enjoy the ride.

How many things does a human being believe because he wants to believe them, and not because there is compelling evidence to support his beliefs? Here is a small sample of what must be an extremely long list:

There is a God. (1a)

There is no God. (1b)

There is a Heaven. (2a)

There is no Heaven. (2b)

Jesus Christ was the Son of God. (3a)

Jesus Christ, if he existed, was a mere mortal. (3b)

Marriage is the eternal union, blessed by God, of one man and one woman. (4a)

Marriage is a civil union, authorized by the state, of one or more consenting adults (or not) of any gender, as the participants in the marriage so define themselves to be. (4b)

All human beings should have equal rights under the law, and those rights should encompass not only negative rights (e.g., the right not to be murdered) but also positive rights (e.g., the right to a minimum wage). (5a)

Human beings are, at bottom, feral animals and cannot therefore be expected to abide always by artificial constructs, such as equal rights under the law. Accordingly, there will always be persons who use the law (or merely brute force) to set themselves above other persons. (5b)

The rise in global temperatures over the past 170 years has been caused primarily by a greater concentration of carbon dioxide in the atmosphere, which rise has been caused by human activity – and especially by the burning of fossil fuels. This rise, if it isn’t brought under control, will make human existence far less bearable and prosperous than it has been in recent human history. (6a)

The rise in global temperatures over the past 170 years has not been uniform across the globe, and has not been in lockstep with the rise in the concentration of atmospheric carbon dioxide. The temperatures of recent decades, and the rate at which they are supposed to have risen, are not unprecedented in the long view of Earth’s history, and may therefore be due to conditions that have not been given adequate consideration by believers in anthropogenic global warming (e.g., natural shifts in ocean currents that have different effects on various regions of Earth, the effects of cosmic radiation on cloud formation as influenced by solar activity and the position of the solar system and the galaxy with respect to other objects in the universe, the shifting of Earth’s magnetic field, and the movement of Earth’s tectonic plates and its molten core). In any event, the models of climate change have been falsified against measured temperatures (even when the temperature record has been adjusted to support the models). And predictions of catastrophe do not take into account the beneficial effects of warming (e.g., lower mortality rates, longer growing seasons), whatever causes it, or the ability of technology to compensate for undesirable effects at a much lower cost than the economic catastrophe that would result from preemptive reductions in the use of fossil fuels. (6b)

Not one of those assertions, even the ones that seem to be supported by facts, is true beyond a reasonable doubt. I happen to believe 1a (with some significant qualifications about the nature of God), 2b, 3b (given my qualified version of 1a), a modified version of 4a (monogamous, heterosexual marriage is socially and economically preferable, regardless of its divine blessing or lack thereof), 5a (but only with negative rights) and 5b, and 6b. But I cannot “prove” that any of my beliefs is the correct one, nor should anyone believe that anyone can “prove” such things.

Take the belief that all persons are created equal. No one who has eyes, ears, and a minimally functioning brain believes that all persons are created equal. Abraham Lincoln, the Great Emancipator, didn’t believe it:

On September 18, 1858 at Charleston, Illinois, Lincoln told the assembled audience:

I am not, nor ever have been, in favor of bringing about in any way the social and political equality of the white and black races, that I am not, nor ever have been, in favor of making voters or jurors of negroes, nor of qualifying them to hold office, nor to intermarry with white people; and I will say in addition to this that there is a physical difference between the white and black races which I believe will forever forbid the two races living together on terms of social and political equality … I will add to this that I have never seen, to my knowledge, a man, woman, or child who was in favor of producing a perfect equality, social and political, between negroes and white men….

This was before Lincoln was elected president and before the outbreak of the Civil War, but Lincoln’s speeches, writings, and actions after these events continued to reflect this point of view about race and equality.

African American abolitionist Frederick Douglass, for his part, remained very skeptical about Lincoln’s intentions and program, even after the president issued a preliminary emancipation in September 1862.

Douglass had good reason to mistrust Lincoln. On December 1, 1862, one month before the scheduled issuing of an Emancipation Proclamation, the president offered the Confederacy another chance to return to the union and preserve slavery for the foreseeable future. In his annual message to congress, Lincoln recommended a constitutional amendment, which if it had passed, would have been the Thirteenth Amendment to the Constitution.

The amendment proposed gradual emancipation that would not be completed for another thirty-seven years, taking slavery in the United States into the twentieth century; compensation, not for the enslaved, but for the slaveholder; and the expulsion, supposedly voluntary but essentially a new Trail of Tears, of formerly enslaved Africans to the Caribbean, Central America, and Africa….

Douglass’ suspicions about Lincoln’s motives and actions once again proved to be legitimate. On December 8, 1863, less than a month after the Gettysburg Address, Abraham Lincoln offered full pardons to Confederates in a Proclamation of Amnesty and Reconstruction that has come to be known as the 10 Percent Plan.

Self-rule in the South would be restored when 10 percent of the “qualified” voters according to “the election law of the state existing immediately before the so-called act of secession” pledged loyalty to the union. Since blacks could not vote in these states in 1860, this was not to be government of the people, by the people, for the people, as promised in the Gettysburg Address, but a return to white rule.

It is unnecessary, though satisfying, to read Charles Murray’s account in Human Diversity of the broad range of inherent differences in intelligence and other traits that are associated with the sexes, various genetic groups of geographic origin (sub-Saharan Africans, East Asians, etc.), and various ethnic groups (e.g., Ashkenazi Jews).

But even if all persons are not created equal, either mentally or physically, aren’t they equal under the law? If you believe that, you might just as well believe in the tooth fairy. As it says in 5b,

Human beings are, at bottom, feral animals and cannot therefore be expected to abide always by artificial constructs, such as equal rights under the law. Accordingly, there will always be persons who use the law (or merely brute force) to set themselves above other persons.

Yes, it’s only a hypothesis, but one for which there is ample evidence in the history of mankind. It is confirmed by every instance of theft, murder, armed aggression, scorched-earth warfare, mob violence as catharsis, bribery, election fraud, gratuitous cruelty, and so on into the night.

And yet, human beings (Americans especially) persist in believing tooth-fairy stories about the inevitable triumph of good over evil, self-correcting science, and the emergence of truth from the marketplace of ideas. Balderdash, all of it.

But desiderata become beliefs. And beliefs are what bind people – or make enemies of them.

Thinking about Thinking — and Other Things: Irrational Rationality

This is the fourth post in a series. (The previous posts are here, here, and here.) This post, like its predecessors, will leave you hanging. But despair not: the series will come to a point — eventually. In the meantime, enjoy the ride.

Type 2 thinking has two main branches: scientific and scientistic.

The scientific branch leads (often in roundabout ways) to improvements in the lot of mankind: better and more abundant food, better clothing, better shelter, faster and more comfortable means of transportation, better sanitation, a better understanding of diseases and more effective means of combatting them, and on and on. (You might protest that not all of those things, and perhaps only a minority of them, emanated from formal scientific endeavors conducted by holders of Ph.D. and M.D. degrees working out of pristine laboratories or with delicate equipment. But science is much more than that. Science includes learning by doing, which encompasses everything from the concoction of effective home remedies to the hybridization of crops to the invention and refinement of planes, trains, and automobiles – and, needless to say, to the creation and development of much of the electronic technology and related software with which we are “blessed” today.)

The scientific branch yields its fruits because it is based on facts about the so-called material universe. The underlying constituents of that universe may be unknown and unknowable, as discussed earlier, but they manifest themselves in observable and seemingly predictable ways.

The scientific branch, in sum, is inductive at its core. Observations of specific things lead to guesses about the causes of those things or the relationships between them. The guesses are codified as hypotheses, often in mathematical form. The hypotheses are tested against new observations of the same kinds of things. If the hypotheses are found wanting, they are either rejected outright or modified to take into account the new observations. Revised hypotheses are then tested against newer observations, and on into the night. (There is nothing scientific about testing a new hypothesis against the observations that led to it; that is a scientistic trick used by, among others, climate “scientists” who wish to align their models with historic climate data.)
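The trick of testing a hypothesis against the observations that produced it can be shown with a toy example. (The data here are pure noise, invented for illustration.) A flexible model fitted to a set of observations seems “confirmed” by those same observations, but fails against fresh ones drawn from the same process:

```python
import numpy as np

rng = np.random.default_rng(2)

# Pure noise: there is no real relationship to be discovered.
x = np.linspace(0.0, 1.0, 20)
y = rng.normal(0.0, 1.0, 20)

# A flexible "hypothesis": a degree-9 polynomial fitted to those observations.
coef = np.polyfit(x, y, 9)

def r_squared(obs, pred):
    """Fraction of the variance in obs accounted for by pred."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Tested against the very observations that produced it, the "hypothesis"
# appears well confirmed ...
r2_in = r_squared(y, np.polyval(coef, x))

# ... but tested against new observations of the same (noise) process,
# it fails.
y_new = rng.normal(0.0, 1.0, 20)
r2_out = r_squared(y_new, np.polyval(coef, x))

print(r2_in, r2_out)  # in-sample fit is far better than out-of-sample
```

The in-sample fit flatters the model precisely because the model was shaped by those observations; only new observations constitute a test.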

If new observations are found to comport with a hypothesis (guess), the hypothesis is said to be confirmed. Confirmed doesn’t mean proven; it just means not proven to be wrong. Laypersons – and a lot of scientists, apparently – mistake confirmation, in the scientific sense, for proof. There is no such thing in science.

The scientistic branch of Type 2 thinking is deductive. It assumes truths and then generalizes from those assumptions; for example:

All Cretans are liars, according to Epimenides (a Cretan who lived ca. 600 BC).

Epimenides was a Cretan.

Therefore, Epimenides was a liar.

This syllogism exemplifies a self-referential paradox. If the major and minor premises are true, Epimenides was a liar. But if Epimenides was lying when he said that all Cretans are liars, then Epimenides, a Cretan, wasn’t necessarily a liar, though he might have been one, because it is plausible that some Cretans are liars, at least some of the time.

What the syllogism really exemplifies is the fatuousness of deductive reasoning, that is, reasoning which proceeds from general statements that cannot be subjected to scientific examination.

Though deductive reasoning can be useful in contriving hypotheses, it cannot be used to “prove” anything. But there are persons who claim to be scientists, or who claim to “believe” science, who do reason deductively. It starts when a hypothesis that has been advanced by a scientist becomes an article of faith to that scientist, to a group of scientists, or to non-scientists who use their belief to justify political positions – which they purport to be “scientific” or “science-based”.

There is no more “science” in such positions than there is in the belief that the Sun revolves around the Earth or that all persons are created equal. The Sun may seem to revolve around the Earth if one’s perspective is limited to the relative motions of Sun and Earth and anchored in the implicit assumption that Earth’s position is fixed. All persons may be deemed equal in a narrow and arbitrary way – as in a legal doctrine of equal treatment under the law – but that hardly makes all persons equal in every respect; for example, in intelligence, physical strength, athletic ability, attractiveness to the opposite sex, work ethic, conditions of birth, or proneness to various ailments. (I will say more about equality as a non-scientific desideratum in the next post.)

This isn’t to say that some scientific hypotheses – and their implications – can’t be relied upon. If they couldn’t be, humans wouldn’t have benefited from better and more abundant food, the many other things mentioned above, and much more. But they can be relied upon because they are based on observed phenomena, tested in the acid of use, and – most important – employed with ample safeguards, which still may be inadequate to real-world conditions. Airplanes crash, bridges collapse, and so on, because there is never enough knowledge to foresee all of the conditions that may arise in the real world.

An honest person would admit that an airplane crash falsifies the “science” of aircraft design and operation because it shows, irrefutably, that the “science” was incomplete in some crucial way. The same goes for collapsed bridges, collapsed buildings, and so on.

That isn’t to say that human beings would be better off without science. Far from it. Science and its practical applications have made us far better off than we would be without them. But neither scientists nor those who apply the (tentative) findings of science are infallible.

Thinking about Thinking — and Other Things: Survival and Thinking

This is the third post in a series. (The first post is here; the second post is here.) This post, like its predecessors, will leave you hanging. But despair not: the series will come to a point — eventually. In the meantime, enjoy the ride.

It’s true that instinctive (or impulsive) actions can be foolish, dangerous, and deadly ones. But they can also be beneficial. If, in your peripheral vision, you see an object hurtling toward you at high speed, you don’t deliberately compute its trajectory and decide whether to move out of its path. No, your brain does that for you without your having to “think” about it, and you move out of the object’s path before you have had time to “think” about doing it.

In sum, you (your brain) engaged in Type 1 thinking about the problem at hand and resolved it quickly. If you had engaged in deliberate, Type 2, thinking you might have been killed by the impact of the object that was hurtling toward you.

The distinction that I’m making here is one that Daniel Kahneman labors over in Thinking, Fast and Slow. But I won’t bore you with the details of that boring book. Life is too short, and certainly shorter for me than for most of you. Let’s just say that there’s nothing especially meritorious about Type 2 thinking, and that it can lead to actions that are as foolish, dangerous, and deadly as those that result from “instinct”.

I will go further and say that Type 2 thinking has brought Americans to the brink of bankruptcy, serfdom, and civil war. But to understand why I say that, you will have to follow this series to the bitter end.

*     *     *

If the need to survive ever had anything to do with the advancement of human intelligence and knowledge, that day is long past for most human beings in much of the world.

Type 1 thinking is restricted mainly to combat, competitive sports, operating motorized equipment, playing video games, and reacting to photos of Donald Trump. It is the key to survival in a narrow range of activities aside from combat, such as driving on a busy highway, ducking when a lethal projectile is headed your way, and instinctively avoiding persons whose actions or appearance seem menacing. The erosion of the avoidance instinct is due in part to the cosseted lives that most Westerners lead, and in part to the barrage of propaganda that denies differences in the behavior of various classes, races, and ethnic groups. (Thus, for example, disruptive black children aren’t to be ejected from classrooms unless an equal proportion of white children, disruptive or not, is likewise ejected.)

Type 2 thinking of the kind that might advance useful knowledge and its beneficial application is a specialty of the educated, intermarrying elite – a class that dominates academia and the applied sciences (e.g., medicine, medical research, and the various fields of engineering). The same class also dominates the media (including so-called entertainment), “technology” companies (most of which don’t really produce technology), the upper echelons of major corporations, and the upper echelons of government.

But, aside from academicians and professionals whose work advances practical knowledge (how to build a better mousetrap, a more earthquake-resistant building, a less collapsible bridge, or an effective vaccine), the members of the aforementioned class have nothing on the yeomen who become skilled in sundry trades (construction, plumbing, electrical work) by the heuristic method – learning and improving by doing. That, too, is Type 2 thinking. But it accumulates over years and is tested in the acid of use, unlike the Type 2 thinking that produces, for example, intricate and even elegant climate models, which their designers believe in and defend because they are emotional human beings, like all of us.

Type 2 thinking, despite the stereotype that it is deliberate and dispassionate, is riddled with emotion. Emotion isn’t just rage, lust, and the like. Those are superficial manifestations of the thing that drives us all: egoism.

No matter how you slice it, everything that a person does deliberately – including Type 2 thinking – is done to bolster his own sense of well-being. Altruism is merely the act of doing good for others so that one may feel better about oneself. You cannot be another person, and actually feel what another person is experiencing. You can only be a person whose sense of self is invested in loving another person or being thought of as loving mankind – whatever that means.

Type 2 thinking – the Enlightenment’s exalted “reason” – is both an aid to survival and a hindrance to it. It is an aid in ways such as those mentioned above, that is, in the advancement of practical knowledge to defeat disease, move people faster and more safely, build dwellings that will stand up against the elements, and so on.

It is a hindrance when, as Shakespeare’s Hamlet says, “the native hue of resolution Is sicklied o’er with the pale cast of thought”. Type 1 thinking causes us to smite an attacker. Type 2 thinking causes us to believe, quite wrongly, that by sparing him we somehow become a law-abiding exemplar whose forbearance diminishes the level of violence in the world and the likelihood that violence will be visited upon us in the future.

Harry S Truman exemplified Type 1 thinking when he loosed the atomic bombs on Japan, ended World War II, saved at least a million lives, and made Japan a peaceful nation for at least the next 75 years. In the vernacular, Truman followed his “gut”.

Neville Chamberlain exemplified Type 2 thinking when he settled for Hitler’s empty promise of peace instead of gearing up to fight an inevitable war. Lyndon Baines Johnson exemplified Type 2 thinking in his vacillating prosecution of the war in Vietnam, where he was more concerned with “world opinion” (whatever that is) and public opinion (i.e., the bleating of pundits and protestors) than he was with the real job of the commander-in-chief, which is to fight and win or don’t fight at all. George H.W. Bush exemplified Type 2 thinking when he declined to depose Saddam Hussein in 1991. Barack Hussein Obama exemplified Type 2 thinking when he made a costly deal with Iran’s ayatollahs that profited them greatly for an easily betrayed promise to refrain from the development of nuclear weapons. Type 2 thinking of the kind exemplified by Chamberlain, Bush, and Obama is egoistic and delusional: It reflects and justifies the thinker’s inner view of the world as he wants it to be, not the world as it is.

Type 2 thinking is valuable to the survival of humanity when it passes the acid test of use. It is a danger to the survival of humanity when it arises from a worldview that excludes the facts of life. One of those facts of life is that predators exist and can only be eliminated – one at a time – by killing them. This is as true of murderous thugs as it is of murderous dictators.

There’s much more to be said about the dangerous delusions fostered by Type 2 thinking.

Thinking about Thinking — and Other Things: Evolution

This is the second post in a series. (The first post is here.) This post, like its predecessor, will leave you hanging. But despair not: the series will come to a point — eventually. In the meantime, enjoy the ride.

Evolution is simply change in organic (living) objects. Evolution, as a subject of scientific inquiry, is an attempt to explain how humans (and other animals) came to be what they are today.

Evolution (as a discipline) is as much scientism as it is science. Scientism, according to thefreedictionary.com, is “the uncritical application of scientific or quasi-scientific methods to inappropriate fields of study or investigation.” When scientists proclaim truths instead of propounding hypotheses, they are guilty of practicing scientism. Two notable scientistic scientists are Richard Dawkins and Peter Singer. It is unsurprising that Dawkins and Singer are practitioners of scientism. Both are strident atheists, and strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side.

Dawkins, Singer, and many other scientistic atheists share an especially “religious” view of evolution. In brief, they seem to believe that evolution rules out God. Evolution rules out nothing. Evolution may be true in outline, but it does not bear close inspection. On that point, I turn to David Gelernter’s “Giving Up Darwin” (Claremont Review of Books, Spring 2019):

Darwin himself had reservations about his theory, shared by some of the most important biologists of his time. And the problems that worried him have only grown more substantial over the decades. In the famous “Cambrian explosion” of around half a billion years ago, a striking variety of new organisms—including the first-ever animals—pop up suddenly in the fossil record over a mere 70-odd million years. This great outburst followed many hundreds of millions of years of slow growth and scanty fossils, mainly of single-celled organisms, dating back to the origins of life roughly three and a half billion years ago.

Darwin’s theory predicts that new life forms evolve gradually from old ones in a constantly branching, spreading tree of life. Those brave new Cambrian creatures must therefore have had Precambrian predecessors, similar but not quite as fancy and sophisticated. They could not have all blown out suddenly, like a bunch of geysers. Each must have had a closely related predecessor, which must have had its own predecessors: Darwinian evolution is gradual, step-by-step. All those predecessors must have come together, further back, into a series of branches leading down to the (long ago) trunk.

But those predecessors of the Cambrian creatures are missing. Darwin himself was disturbed by their absence from the fossil record. He believed they would turn up eventually. Some of his contemporaries (such as the eminent Harvard biologist Louis Agassiz) held that the fossil record was clear enough already, and showed that Darwin’s theory was wrong. Perhaps only a few sites had been searched for fossils, but they had been searched straight down. The Cambrian explosion had been unearthed, and beneath those Cambrian creatures their Precambrian predecessors should have been waiting—and weren’t. In fact, the fossil record as a whole lacked the upward-branching structure Darwin predicted.

The trunk was supposed to branch into many different species, each species giving rise to many genera, and towards the top of the tree you would find so much diversity that you could distinguish separate phyla—the large divisions (sponges, mosses, mollusks, chordates, and so on) that comprise the kingdoms of animals, plants, and several others—take your pick. But, as [David] Berlinski points out, the fossil record shows the opposite: “representatives of separate phyla appearing first followed by lower-level diversification on those basic themes.” In general, “most species enter the evolutionary order fully formed and then depart unchanged.” The incremental development of new species is largely not there. Those missing pre-Cambrian organisms have still not turned up. (Although fossils are subject to interpretation, and some biologists place pre-Cambrian life-forms closer than others to the new-fangled Cambrian creatures.)

Some researchers have guessed that those missing Precambrian precursors were too small or too soft-bodied to have made good fossils. Meyer notes that fossil traces of ancient bacteria and single-celled algae have been discovered: smallness per se doesn’t mean that an organism can’t leave fossil traces—although the existence of fossils depends on the surroundings in which the organism lived, and the history of the relevant rock during the ages since it died. The story is similar for soft-bodied organisms. Hard-bodied forms are more likely to be fossilized than soft-bodied ones, but many fossils of soft-bodied organisms and body parts do exist. Precambrian fossil deposits have been discovered in which tiny, soft-bodied embryo sponges are preserved—but no predecessors to the celebrity organisms of the Cambrian explosion.

This sort of negative evidence can’t ever be conclusive. But the ever-expanding fossil archives don’t look good for Darwin, who made clear and concrete predictions that have (so far) been falsified—according to many reputable paleontologists, anyway. When does the clock run out on those predictions? Never. But any thoughtful person must ask himself whether scientists today are looking for evidence that bears on Darwin, or looking to explain away evidence that contradicts him. There are some of each. Scientists are only human, and their thinking (like everyone else’s) is colored by emotion.

Yes, emotion, the thing that colors thought. Emotion is something that humans and other animals have. If Darwin and his successors are correct, emotion must be a faculty that improves the survival and reproductive fitness of a species.

But that can’t be true because emotion is the spark that lights murder, genocide, and war. World War II, alone, is said to have occasioned the deaths of more than seventy million humans. Prominent among those killed were six million Ashkenazi Jews, members of a distinctive branch of humanity who (on average) are significantly more intelligent than members of other branches, and who have contributed beneficially to science, literature, and the arts (especially music).

The evil by-products of emotion – such as the near-extermination of peoples (Ashkenazi Jews among them) – should cause one to doubt that the persistence of a trait in the human population means that the trait is beneficial to survival and reproduction.

David Berlinski, in The Devil’s Delusion: Atheism and Its Scientific Pretensions, addresses the lack of evidence for evolution before striking down the notion that persistent traits are necessarily beneficial:

At the very beginning of his treatise Vertebrate Paleontology and Evolution, Robert Carroll observes quite correctly that “most of the fossil record does not support a strictly gradualistic account” of evolution. A “strictly gradualistic” account is precisely what Darwin’s theory demands: It is the heart and soul of the theory….

In a research survey published in 2001, and widely ignored thereafter, the evolutionary biologist Joel Kingsolver reported that in sample sizes of more than one thousand individuals, there was virtually no correlation between specific biological traits and either reproductive success or survival. “Important issues about selection,” he remarked with some understatement, “remain unresolved.”

Of those important issues, I would mention prominently the question whether natural selection exists at all.

Computer simulations of Darwinian evolution fail when they are honest and succeed only when they are not. Thomas Ray has for years been conducting computer experiments in an artificial environment that he has designated Tierra. Within this world, a shifting population of computer organisms meet, mate, mutate, and reproduce.

Sandra Blakeslee, writing for The New York Times, reported the results under the headline “Computer ‘Life Form’ Mutates in an Evolution Experiment: Natural Selection Is Found at Work in a Digital World.”

Natural selection found at work? I suppose so, for as Blakeslee observes with solemn incomprehension, “the creatures mutated but showed only modest increases in complexity.” Which is to say, they showed nothing of interest at all. This is natural selection at work, but it is hardly work that has worked to intended effect.

What these computer experiments do reveal is a principle far more penetrating than any that Darwin ever offered: There is a sucker born every minute….

“Contemporary biology,” [Daniel Dennett] writes, “has demonstrated beyond all reasonable doubt that natural selection— the process in which reproducing entities must compete for finite resources and thereby engage in a tournament of blind trial and error from which improvements automatically emerge— has the power to generate breathtakingly ingenious designs” (italics added).

These remarks are typical in their self-enchanted self-confidence. Nothing in the physical sciences, it goes without saying— right?— has been demonstrated beyond all reasonable doubt. The phrase belongs to a court of law. The thesis that improvements in life appear automatically represents nothing more than Dennett’s conviction that living systems are like elevators: If their buttons are pushed, they go up. Or down, as the case may be. Although Darwin’s theory is very often compared favorably to the great theories of mathematical physics on the grounds that evolution is as well established as gravity, very few physicists have been heard observing that gravity is as well established as evolution. They know better and they are not stupid….

The greater part of the debate over Darwin’s theory is not in service to the facts. Nor to the theory. The facts are what they have always been: They are unforthcoming. And the theory is what it always was: It is unpersuasive. Among evolutionary biologists, these matters are well known. In the privacy of the Susan B. Anthony faculty lounge, they often tell one another with relief that it is a very good thing the public has no idea what the research literature really suggests.

“Darwin?” a Nobel laureate in biology once remarked to me over his bifocals. “That’s just the party line.”

In the summer of 2007, Eugene Koonin, of the National Center for Biotechnology Information at the National Institutes of Health, published a paper entitled “The Biological Big Bang Model for the Major Transitions in Evolution.”

The paper is refreshing in its candor; it is alarming in its consequences. “Major transitions in biological evolution,” Koonin writes, “show the same pattern of sudden emergence of diverse forms at a new level of complexity” (italics added). Major transitions in biological evolution? These are precisely the transitions that Darwin’s theory was intended to explain. If those “major transitions” represent a “sudden emergence of new forms,” the obvious conclusion to draw is not that nature is perverse but that Darwin was wrong….

Koonin is hardly finished. He has just started to warm up. “In each of these pivotal nexuses in life’s history,” he goes on to say, “the principal ‘types’ seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate ‘grades’ or intermediate forms between different types are detectable.”…

[H]is views are simply part of a much more serious pattern of intellectual discontent with Darwinian doctrine. Writing in the 1960s and 1970s, the Japanese mathematical biologist Motoo Kimura argued that on the genetic level— the place where mutations take place— most changes are selectively neutral. They do nothing to help an organism survive; they may even be deleterious…. Kimura was perfectly aware that he was advancing a powerful argument against Darwin’s theory of natural selection. “The neutral theory asserts,” he wrote in the introduction to his masterpiece, The Neutral Theory of Molecular Evolution, “that the great majority of evolutionary changes at the molecular level, as revealed by comparative studies of protein and DNA sequences, are caused not by Darwinian selection but by random drift of selectively neutral or nearly neutral mutations” (italics added)….

Writing in the Proceedings of the National Academy of Sciences, the evolutionary biologist Michael Lynch observed that “Dawkins’s agenda has been to spread the word on the awesome power of natural selection.” The view that results, Lynch remarks, is incomplete and therefore “profoundly misleading.” Lest there be any question about Lynch’s critique, he makes the point explicitly: “What is in question is whether natural selection is a necessary or sufficient force to explain the emergence of the genomic and cellular features central to the building of complex organisms.”

Survival and reproduction depend on many traits. A particular trait, considered in isolation, may seem to be helpful to the survival and reproduction of a group. But that trait may not be among the particular collection of traits that is most conducive to the group’s survival and reproduction. If that is the case, the trait will become less prevalent.

Alternatively, if the trait is an essential member of the collection that is conducive to survival and reproduction, it will survive. But its survival depends on the other traits. The fact that X is a “good trait” does not, in itself, ensure the proliferation of X. And X will become less prevalent if other traits become more important to survival and reproduction.
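The dependence of a trait’s fate on the rest of the trait collection can be shown with a toy selection model. Everything here (the two-trait setup, the payoff numbers, the starting frequencies) is my own illustrative assumption, not a result from the selection literature:

```python
import random

random.seed(42)  # reproducible illustration

def fitness(x, y):
    # Hypothetical payoffs: trait x helps on its own (+0.2), is a mild
    # handicap alongside trait y (-0.1), and y itself helps a lot (+0.3).
    f = 1.0
    if x and not y:
        f += 0.2
    if x and y:
        f -= 0.1
    if y:
        f += 0.3
    return f

# Start with trait x in half the population and trait y in 90 percent of it.
pop = [(random.random() < 0.5, random.random() < 0.9) for _ in range(5000)]

for _ in range(30):  # thirty generations of fitness-proportional reproduction
    weights = [fitness(x, y) for x, y in pop]
    pop = random.choices(pop, weights=weights, k=len(pop))

x_freq = sum(x for x, _ in pop) / len(pop)
print(f"frequency of the 'good' trait x after 30 generations: {x_freq:.2f}")
```

Because y is nearly universal, most carriers of x pay the combination penalty, and x declines from its starting frequency even though it is a “good trait” considered in isolation.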

In any event, it is my view that genetic fitness for survival has become almost irrelevant in places like the United States. The rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.

In fact, there is a supportable hypothesis that humans in cosseted realms (i.e., the West) are, on average, becoming less intelligent. But, first, it is necessary to explain why it seemed for a while that humans were becoming more intelligent.

David Robson is on the case:

When the researcher James Flynn looked at [IQ] scores over the past century, he discovered a steady increase – the equivalent of around three points a decade. Today, that has amounted to 30 points in some countries.

Although the cause of the Flynn effect is still a matter of debate, it must be due to multiple environmental factors rather than a genetic shift.

Perhaps the best comparison is our change in height: we are 11cm (around 5 inches) taller today than in the 19th Century, for instance – but that doesn’t mean our genes have changed; it just means our overall health has changed.

Indeed, some of the same factors may underlie both shifts. Improved medicine, reducing the prevalence of childhood infections, and more nutritious diets, should have helped our bodies to grow taller and our brains to grow smarter, for instance. Some have posited that the increase in IQ might also be due to a reduction of the lead in petrol, which may have stunted cognitive development in the past. The cleaner our fuels, the smarter we became.

This is unlikely to be the complete picture, however, since our societies have also seen enormous shifts in our intellectual environment, which may now train abstract thinking and reasoning from a young age. In education, for instance, most children are taught to think in terms of abstract categories (whether animals are mammals or reptiles, for instance). We also lean on increasingly abstract thinking to cope with modern technology. Just think about a computer and all the symbols you have to recognise and manipulate to do even the simplest task. Growing up immersed in this kind of thinking should allow everyone [hyperbole alert] to cultivate the skills needed to perform well in an IQ test….

[Psychologist Robert Sternberg] is not alone in questioning whether the Flynn effect really represented a profound improvement in our intellectual capacity, however. James Flynn himself has argued that it is probably confined to some specific reasoning skills. In the same way that different physical exercises may build different muscles – without increasing overall “fitness” – we have been exercising certain kinds of abstract thinking, but that hasn’t necessarily improved all cognitive skills equally. And some of those other, less well-cultivated, abilities could be essential for improving the world in the future.

Here comes the best part:

You might assume that the more intelligent you are, the more rational you are, but it’s not quite this simple. While a higher IQ correlates with skills such as numeracy, which is essential to understanding probabilities and weighing up risks, there are still many elements of rational decision making that cannot be accounted for by a lack of intelligence.

Consider the abundant literature on our cognitive biases. Something that is presented as “95% fat-free” sounds healthier than “5% fat”, for instance – a phenomenon known as the framing bias. It is now clear that a high IQ does little to help you avoid this kind of flaw, meaning that even the smartest people can be swayed by misleading messages.

People with high IQs are also just as susceptible to the confirmation bias – our tendency to only consider the information that supports our pre-existing opinions, while ignoring facts that might contradict our views. That’s a serious issue when we start talking about things like politics.

Nor can a high IQ protect you from the sunk cost bias – the tendency to throw more resources into a failing project, even if it would be better to cut your losses – a serious issue in any business. (This was, famously, the bias that led the British and French governments to continue funding Concorde planes, despite increasing evidence that it would be a commercial disaster.)

Highly intelligent people are also not much better at tests of “temporal discounting”, which require you to forgo short-term gains for greater long-term benefits. That’s essential, if you want to ensure your comfort for the future.

Besides a resistance to these kinds of biases, there are also more general critical thinking skills – such as the capacity to challenge your assumptions, identify missing information, and look for alternative explanations for events before drawing conclusions. These are crucial to good thinking, but they do not correlate very strongly with IQ, and do not necessarily come with higher education. One study in the USA found almost no improvement in critical thinking throughout many people’s degrees.

Given these looser correlations, it would make sense that the rise in IQs has not been accompanied by a similarly miraculous improvement in all kinds of decision making.

So much for the bright people who promote and pledge allegiance to socialism and its various manifestations (e.g., the Green New Deal, and Medicare for All). So much for the bright people who suppress speech with which they disagree because it threatens the groupthink that binds them.

Robson also discusses evidence of dysgenic effects in IQ:

Whatever the cause of the Flynn effect, there is evidence that we may have already reached the end of this era – with the rise in IQs stalling and even reversing. If you look at Finland, Norway and Denmark, for instance, the turning point appears to have occurred in the mid-90s, after which average IQs dropped by around 0.2 points a year. That would amount to a seven-point difference between generations.

Psychologist (and intelligence specialist) James Thompson has addressed dysgenic effects at his blog on the website of The Unz Review. In particular, he had a lot to say about the work of an intelligence researcher named Michael Woodley. Here’s a sample from a post by Thompson:

We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.

Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with say, your grandparents.

Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence, Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.

It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”.  This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.

So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong?

From a later post by Thompson:

The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

Woodley’s claim is based on a set of papers written since 2013, which have been recently reviewed by [Matthew] Sarraf.

The review is unusual, to say the least. It is rare to read so positive a judgment on a young researcher’s work, and it is extraordinary that one researcher has changed the debate about ability levels across generations, and all this in a few years since starting publishing in psychology.

The table in that review which summarizes the main findings is shown below. As you can see, the range of effects is very variable, so my rough estimate of 1 point per decade is a stab at calculating a median. It is certainly less than the Flynn Effect in the 20th Century, though it may now be part of the reason for the falling of that effect, now often referred to as a “negative Flynn effect”….

Here are the findings which I have arranged by generational decline (taken as 25 years).

  • Colour acuity, over 20 years (0.8 generation) 3.5 drop/decade.
  • 3D rotation ability, over 37 years (1.5 generations) 4.8 drop/decade.
  • Reaction times, females only, over 40 years (1.6 generations) 1.8 drop/decade.
  • Working memory, over 85 years (3.4 generations) 0.16 drop/decade.
  • Reaction times, over 120 years (4.8 generations) 0.57-1.21 drop/decade.
  • Fluctuating asymmetry, over 160 years (6.4 generations) 0.16 drop/decade.

Either the measures are considerably different, and do not tap the same underlying loss of mental ability, or the drop is unlikely to be caused by dysgenic decrements from one generation to another. Bar massive dying out of populations, changes do not come about so fast from one generation to the next. The drops in ability are real, but the reasons for the falls are less clear. Gathering more data sets would probably clarify the picture, and there is certainly cause to argue that on various real measures there have been drops in ability. Whether this is dysgenics or some other insidious cause is not yet clear to me….

My view is that whereas formerly the debate was only about the apparent rise in ability, discussions are now about the co-occurrence of two trends: the slowing down of the environmental gains and the apparent loss of genetic quality. In the way that James Flynn identified an environmental/cultural effect, Michael Woodley has identified a possible genetic effect, and certainly shown that on some measures we are doing less well than our ancestors.

How will they be reconciled? Time will tell, but here is a prediction. I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.

Here’s my hypothesis: The less-intelligent portions of the populace are breeding faster than the more-intelligent portions. As I said earlier, the rise of technology and the “social safety net” (state-enforced pseudo-empathy) have enabled the survival and reproduction of traits that would have dwindled in times past.
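The two opposing trends can be put in simple arithmetic form. In this sketch, the Flynn (environmental) gain starts at roughly 3 IQ points per decade and fades, while the Woodley (genetic) loss holds steady at roughly 1 point per decade, figures taken from the quoted posts; the exponential fade rate, and the function and parameter names, are my own assumptions:

```python
def measured_iq_change(decade, flynn_start=3.0, fade=0.85, woodley=1.0):
    """Net change in measured IQ over one decade (decade 0 = the first).

    flynn_start: environmental gain, in IQ points, during the first decade
    fade: fraction of the environmental gain that survives each decade
    woodley: steady genetic loss, in IQ points per decade
    """
    return flynn_start * (fade ** decade) - woodley

cumulative = 0.0
for d in range(15):  # roughly the fifteen decades since 1870
    delta = measured_iq_change(d)
    cumulative += delta
    print(f"decade {d:2d}: net change {delta:+.2f}, cumulative {cumulative:+.1f}")
```

With these numbers, the net change turns negative after about seven decades (a “negative Flynn effect”), even though cumulative measured IQ remains above its starting level for much longer.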

Thinking about Thinking — and Other Things: Time, Existence, and Science

This is the first post in a series. It will leave you hanging. But despair not: the series will come to a point — eventually. In the meantime, enjoy the ride.

Before we can consider time and existence, we must consider whether they are illusions.

Regarding time, there’s a reasonable view that nothing exists but the present — the now — or, rather, an infinite number of nows. In the conventional view, one now succeeds another, which creates the illusion of the passage of time. In the view of some physicists, however, all nows exist at once, and we merely perceive a sequential slice of all the nows. Inasmuch as there seems to be general agreement as to the contents of the slice, the only evidence that many nows exist in parallel is claims about such phenomena as clairvoyance, visions, and co-location. I won’t wander into that thicket.

A problem with the conventional view of time is that not everyone perceives the same now at the same time. Well, not according to Einstein’s special theory of relativity, at least. A problem with the view that all nows exist at once (known as the block-universe, or eternalist, view) is that it’s purely a mathematical concoction. Unless you’re a clairvoyant, visionary, or the like.

Oh, wait, the special theory of relativity is also a mathematical concoction. Further, it doesn’t really show that not everyone perceives the same now at the same time. The key to special relativity – the Lorentz transformation – enables one to reconcile the various nows; that is, to be a kind of omniscient observer. So, in effect, there really is a now.
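For reference, the time part of the Lorentz transformation, for an observer moving at speed v along the x-axis, is:

```latex
t' = \gamma \left( t - \frac{v x}{c^{2}} \right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

Because t′ depends on position x as well as on t, two events that are simultaneous for one observer need not be simultaneous for another; the transformation is what lets an analyst convert between the two observers’ accounts.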

This leads to the question of what distinguishes one now from another now. The answer is change. If things didn’t change, there would be only a now, not an infinite series of them. More precisely, if things didn’t seem to change, time would seem to stand still. This is another way of saying that a succession of nows creates the illusion of the passage of time.

What happens between one now and the next now? Change, not the passage of time. What we think of as the passage of time is really an artifact of change.

Time is really nothing more than the counting of events that supposedly occur at set intervals — the “ticking” of an atomic clock, for example. I say supposedly because there’s no absolute measure of time against which one can calibrate the “ticking” of an atomic clock, or any other kind of clock.

In summary: Clocks don’t measure time. Clocks merely change (“tick”) at supposedly regular intervals, and those intervals are used in the representation of other things, such as the speed of an automobile or the duration of a 100-yard dash.

Time is an illusion. Or, if that conclusion bothers you, let’s just say that time is an ephemeral quality that depends on change.

Change is real. But change in what — of what does reality consist?

There are two basic views of reality. One of them, posited by Bishop Berkeley and his followers, is that the only reality is that which goes on in one’s own mind. But that’s just another way of saying that humans don’t perceive the external world directly. Rather, it is perceived second-hand, through the senses that detect external phenomena and transmit signals to the brain, which is where “reality” is formed.

There is an extreme version of the Berkeleyan view: Everything perceived is only a kind of dream or illusion. But even a dream or illusion is something, not nothing, so there is some kind of existence.

The sensible view, held by most humans (even most scientists), is that there is an objective reality out there, beyond the confines of one’s mind. How can so many people agree about the existence of certain things (e.g., Cleveland) unless there’s something out there?

Over the ages, scientists have been able to describe objective reality in ever more minute detail. But what is it? What is the stuff of which it consists? No one knows or is likely ever to know. All we know is that stuff changes, and those changes give rise to what we call time.

The big question is how things came to exist. This has been debated for millennia. There are two schools of thought:

1. Things just exist and have always existed.

2. Things can’t come into existence on their own, so some non-thing must have caused things to exist.

The second option leaves open the question of how the non-thing came into existence, and can be interpreted as a variant of the first option; that is, some non-thing just exists and has always existed.

How can the big question be resolved? It can’t be resolved by facts or logic. If it could be, there would be wide agreement about the answer. (Not perfect agreement because a lot of human beings are impervious to facts and logic.) But there isn’t and never will be wide agreement.

Why is that? Can’t scientists someday trace the existence of things – call it the universe – back to a source? Isn’t that what the Big Bang Theory is all about? No and no. If the universe has always existed, there’s no source to be tracked down. And if the universe was created by a non-thing, how can scientists detect the non-thing if they’re only equipped to deal with things?

The Big Bang Theory posits a definite beginning, at a more or less definite point in time. But even if the theory is correct, it doesn’t tell us how that beginning began. Did things start from scratch, and if they did, what caused them to do so? And maybe they didn’t; maybe the Big Bang was just the result of the collapse of a previous universe, which was the result of a previous one, etc., etc., etc., ad infinitum.

Some scientists who think about such things (most of them, I suspect) don’t believe that the universe was created by a non-thing. But they don’t believe it because they don’t want to believe it. The much smaller number of similar scientists who believe that the universe was created by a non-thing hold that belief because they want to hold it.

That’s life in the world of science, just as it is in the world of non-science, where believers, non-believers, and those who can’t make up their minds find all kinds of ways in which to rationalize what they believe (or don’t believe), even though they know less than scientists do about the universe.

Let’s just accept that and move on to another big question: What is it that exists? It’s not “stuff” as we usually think of it – like mud or sand or water droplets. It’s not even atoms and their constituent particles. Those are just convenient abstractions for what seem to be various manifestations of electromagnetic forces, or emanations thereof, such as light.

But what are electromagnetic forces? And what does their behavior (to be anthropomorphic about it) have to do with the way that the things like planets, stars, and galaxies move in relation to one another? There are lots of theories, but none of them has as yet gained wide acceptance by scientists. And even if one theory does gain wide acceptance, there’s no telling how long before it’s supplanted by a new theory.

That’s the thing about science: It’s a process, not a particular result. Human understanding of the universe offers a good example. Here’s a short list of beliefs about the universe that were considered true by scientists, and then rejected:

Thales (c. 624 – c. 546 BC): The Earth rests on water.

Anaximenes (c. 586 – c. 526 BC): Everything is made of air.

Heraclitus (c. 535 – c. 475 BC): All is fire.

Empedocles (c. 494 – c. 434 BC): There are four elements: earth, air, fire, and water.

Democritus (c. 460 – c. 370 BC): Atoms (basic elements of nature) come in an infinite variety of shapes and sizes.

Aristotle (384 – 322 BC): Heavy objects must fall faster than light ones. The universe is a series of crystalline spheres that carry the sun, moon, planets, and stars around Earth.

Ptolemy (c. 100 – c. 170 AD): Ditto the Earth-centric universe, with a mathematical description.

Copernicus (1473 – 1543): The planets revolve around the sun in perfectly circular orbits.

Brahe (1546 – 1601): The planets revolve around the sun, but the sun and moon revolve around Earth.

Kepler (1571 – 1630): The planets revolve around the sun in elliptical orbits, and their trajectory is governed by magnetism.

Newton (1642 – 1727): The course of the planets around the sun is determined by gravity, which is a force that acts at a distance. Light consists of corpuscles; ordinary matter is made of larger corpuscles. Space and time are absolute and uniform.

Rutherford (1871 – 1937), Bohr (1885 – 1962), and others: The atom has a center (nucleus), which consists of two elementary particles, the neutron and proton.

Einstein (1879 – 1955): The universe is neither expanding nor shrinking.

That’s just a small fraction of the mistaken and incomplete theories that have held sway in the field of physics. There are many more such mistakes and lacunae in the other natural sciences: biology, chemistry, and earth science — each of which, like physics, has many branches. And in all of the branches there are many unresolved questions. For example, the Standard Model of particle physics, despite its complexity, is known to be incomplete. And it is thought (by some) to be unduly complex; that is, there may be a simpler underlying structure waiting to be discovered.

Given all of this, it is grossly presumptuous to claim that climate science – to take a salient example — is “settled” when the phenomena that it encompasses are so varied, complex, often poorly understood, and often given short shrift (e.g., the effects of solar radiation on the intensity of cosmic radiation reaching Earth, which affects low-level cloud formation, which affects atmospheric temperature and precipitation).

Anyone who says that any aspect of science is “settled” is either ignorant, stupid, or freighted with a political agenda. Anyone who says that “science is real” is merely parroting an empty slogan.

Matt Ridley (quoted by Judith Curry) explains:

In a lecture at Cornell University in 1964, the physicist Richard Feynman defined the scientific method. First, you guess, he said, to a ripple of laughter. Then you compute the consequences of your guess. Then you compare those consequences with the evidence from observations or experiments. “If [your guess] disagrees with experiment, it’s wrong. In that simple statement is the key to science. It does not make a difference how beautiful the guess is, how smart you are, who made the guess or what his name is…it’s wrong….

In general, science is much better at telling you about the past and the present than the future. As Philip Tetlock of the University of Pennsylvania and others have shown, forecasting economic, meteorological or epidemiological events more than a short time ahead continues to prove frustratingly hard, and experts are sometimes worse at it than amateurs, because they overemphasize their pet causal theories….

Peer review is supposed to be the device that guides us away from unreliable heretics. Investigations show that peer review is often perfunctory rather than thorough; often exploited by chums to help each other; and frequently used by gatekeepers to exclude and extinguish legitimate minority scientific opinions in a field.

Herbert Ayres, an expert in operations research, summarized the problem well several decades ago: “As a referee of a paper that threatens to disrupt his life, [a professor] is in a conflict-of-interest position, pure and simple. Unless we’re convinced that he, we, and all our friends who referee have integrity in the upper fifth percentile of those who have so far qualified for sainthood, it is beyond naive to believe that censorship does not occur.” Rosalyn Yalow, winner of the Nobel Prize in medicine, was fond of displaying the letter she received in 1955 from the Journal of Clinical Investigation noting that the reviewers were “particularly emphatic in rejecting” her paper.

The health of science depends on tolerating, even encouraging, at least some disagreement. In practice, science is prevented from turning into religion not by asking scientists to challenge their own theories but by getting them to challenge each other, sometimes with gusto.

As I said, there is no such thing as “settled science”. Real science is a vast realm of unsettled uncertainty. Newton put it thus:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

Certainty is the last refuge of a person whose mind is closed to new facts and new ways of looking at old facts.

How uncertain is the real world, especially the world of events yet to come? Consider a simple three-event model in which event C depends on the occurrence of event B, which depends on the occurrence of event A; in which the value of the outcome is the sum of the values of the events that occur; and in which the value of each event is binary – 1 if it happens, 0 if it doesn’t. Even in a simple model like that, there is a wide range of possible outcomes; thus:

A doesn’t occur (B and C therefore don’t occur) = 0.

A occurs but B fails to occur (and C therefore doesn’t occur) = 1.

A occurs, B occurs, but C fails to occur = 2.

A occurs, B occurs, and C occurs = 3.

Even when A occurs, subsequent events (or non-events) will yield final outcomes ranging in value from 1 to 3. A factor of 3 is a big deal. It’s why .300 hitters make millions of dollars a year and .100 hitters sell used cars.
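The four outcomes above can be enumerated mechanically. Here is a minimal sketch in Python; the three probabilities are hypothetical placeholders of my own choosing, since the model assigns none:

```python
# Minimal sketch of the three-event chain described above. The outcome's
# value is the number of events that occur; C can occur only if B did,
# and B only if A did. The probabilities are hypothetical placeholders.

p_a, p_b, p_c = 0.8, 0.7, 0.6  # assumed P(A), P(B given A), P(C given B)

outcomes = {}  # maps outcome value -> probability

def record(value, prob):
    outcomes[value] = outcomes.get(value, 0.0) + prob

record(0, 1 - p_a)                # A fails; B and C therefore cannot occur
record(1, p_a * (1 - p_b))        # A occurs, B fails
record(2, p_a * p_b * (1 - p_c))  # A and B occur, C fails
record(3, p_a * p_b * p_c)        # all three occur

for value in sorted(outcomes):
    print(value, round(outcomes[value], 4))
# prints: 0 0.2 / 1 0.24 / 2 0.224 / 3 0.336
```

The probabilities sum to 1, and only about a third of the probability mass lands on the best outcome even with generous odds at each step, which illustrates how quickly uncertainty compounds.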

Let’s leave it at that and move on.

CO2 Fail

Anthony Watts of Watts Up With That? catches the U.N. in a moment of candor:

From a World Meteorological Organization (WMO) press release titled “Carbon dioxide levels continue at record levels, despite COVID-19 lockdown,” comes this statement about the effects of carbon dioxide (CO2) reductions during the COVID-19 lockdown:

“Preliminary estimates indicate a reduction in the annual global emission between 4.2% and 7.5%. At the global scale, an emissions reduction this scale will not cause atmospheric CO2 to go down. CO2 will continue to go up, though at a slightly reduced pace (0.08-0.23 ppm per year lower). This falls well within the 1 ppm natural inter-annual variability. This means that on the short-term the impact [of CO2 reduction] of the COVID-19 confinements cannot be distinguished from natural variability…”

Let this sink in: The WMO admits that reduced carbon dioxide emissions are having no effect on climate that is distinguishable from natural variability.

The WMO acknowledges that after our global economic lockdown, where CO2 emissions from travel, industry, and power generation were all curtailed, there wasn’t any measurable difference in global atmospheric CO2 levels. Zero, zilch, none, nada.

Of course, we already knew this and wrote about it on Climate at a Glance: Coronavirus Impact on CO2 Levels. An analysis by climate scientist Dr. Roy Spencer showed that despite crashing economies and large cutbacks in travel, industry, and energy generation, climate scientists have yet to find any hint of a drop in atmospheric CO2 levels.
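The WMO’s numbers are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes (my assumption, not the WMO’s) that net human influence adds roughly 2.5 ppm to atmospheric CO2 per year:

```python
# Rough sanity check of the WMO figures quoted above. Assumed input: net
# annual growth of atmospheric CO2 of about 2.5 ppm per year, a commonly
# cited recent figure (not taken from the press release itself).
annual_growth_ppm = 2.5

cuts = (0.042, 0.075)  # the WMO's 4.2%-7.5% emission-reduction range
slowdown_ppm = {cut: cut * annual_growth_ppm for cut in cuts}

for cut, ppm in slowdown_ppm.items():
    print(f"{cut:.1%} cut -> about {ppm:.2f} ppm/yr slower growth")
# Both results fall inside the WMO's quoted 0.08-0.23 ppm band, and well
# inside the ~1 ppm year-to-year natural variability it mentions.
```

Under that assumption, even the high end of the lockdown-driven emission cut slows the CO2 rise by well under a quarter of a ppm per year, which is consistent with the WMO’s statement.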

The graph in Watts’s post depicts CO2 readings only for Mauna Loa, and only through April 2020. The following graph covers CO2 readings for Mauna Loa (through October 2020) and for a global average of marine surface sites (through August 2020):

The bottom line remains the same: There’s nothing to see here, folks, just an uninterrupted pattern of seasonal variations.

Climate-change fanatics will have to look elsewhere than human activity for the rise in atmospheric CO2.


Data definitions and source:

https://www.esrl.noaa.gov/gmd/ccgg/trends/mlo.html

ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt

https://www.esrl.noaa.gov/gmd/ccgg/trends/global.html

ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_gl.txt

Further Thoughts about COVID-19

UPDATED AS NOTED BELOW.

It seems that the economy — and with it the livelihoods of millions of Americans — has been severely damaged for no good reason. Not only are face masks useless, or worse than that, but lockdowns are ineffective in halting the spread of COVID-19. If the reports that I’ve linked to are correct, COVID-19 is essentially unstoppable until an effective vaccine is produced and administered in vast quantities.

In the early going, there was a lot written about misdiagnosis and wrongful attribution of deaths. I haven’t seen much of that lately, but that doesn’t mean that those things aren’t still happening. Regarding misdiagnosis, consider what has happened this year to the official tally of flu cases, as against the tallies of the preceding five years:

This year, the cumulative number of flu cases was on track to set a new record, surpassing the totals of 2018 and 2019 (source). Then, just as word of COVID-19 was beginning to seep into the “news” media, the number of recorded cases suddenly stopped growing. How can that be? Did COVID-19 kill all of the flu germs that were floating in the air and lurking on surfaces? Hah! I wonder what other conditions are being misdiagnosed as COVID-19.

UPDATE 11/27/20: Well, I wonder no more. It seems that a lot of deaths have been attributed to COVID-19 when they should have been attributed to other causes. (See this and this.) In fact, for the period studied, the rise in COVID-19 deaths was almost exactly offset by the decline in deaths due to other causes.

UPDATE 11/28/20: For a detailed discussion of the meaning of “cause of death”, see this. It should reinforce your skepticism about the validity of official tallies of COVID-19 deaths.

If you were to believe the media — and I’m sure you don’t — COVID-19 is the worst scourge since the Black Plague. Well, it’s not even close:

As of yesterday, about 1 in 100 COVID-19 diagnoses becomes a death statistic. And the rate seems to be dropping. That can mean that the number of cases is overstated; that the most vulnerable persons have already been killed by COVID-19; or that both statements are true and the vast, healthy majority of the populace is being penalized out of political fear — fomented mainly by leftists, of course.

Finally, let’s put COVID-19 in perspective:

Enough said, for today.


Related reading: Kip Hansen, “Survey Results: Where Are All the Sick People?“, Watts Up With That?, November 21, 2020 (see especially the discussion of the number of flu cases)

Election 2020 and Occam’s Razor

Occam’s razor

is the problem-solving principle that “entities should not be multiplied without necessity” or, more simply, the simplest explanation is usually the right one…. This philosophical razor advocates that when presented with competing hypotheses about the same [phenomenon], one should select the solution with the fewest assumptions, and that this is not meant to be a way of choosing between hypotheses that make different predictions.

Similarly, in science, Occam’s razor is used as an abductive heuristic in the development of theoretical models rather than as a rigorous arbiter between candidate models. In the scientific method, Occam’s razor is not considered an irrefutable principle of logic or a scientific result; the preference for simplicity in the scientific method is based on the falsifiability criterion. For each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives. Since failing explanations can always be burdened with ad hoc hypotheses to prevent them from being falsified, simpler theories are preferable to more complex ones because they are more testable.

But simplicity isn’t a guarantee of correctness. More complexity may be necessary in order to explain a phenomenon or to make accurate predictions about it. Thus the weasel-words “without necessity”. If a thing is well explained by two independent variables, and a third independent variable adds nothing to the explanation, only two were necessary. But the number of “necessary” variables isn’t known ahead of time. It takes data-gathering, testing, and statistical analysis of the tests to determine how many are “necessary”.

Occam’s razor, in other words, is merely a tautology. The correct number of “necessary” explanatory variables is an empirical matter, not one that can be determined a priori by a vague and meaningless aphorism.
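To make that concrete, here is a minimal sketch (with fabricated data; the numbers are mine, purely for illustration) of how the number of “necessary” variables is found empirically: fit nested models and stop adding variables when the fit stops improving.

```python
# Minimal sketch of finding the "necessary" variables empirically.
# The data are fabricated: y depends on x1 and x2 exactly; x3 is irrelevant.

def fit(X, y):
    """Ordinary least squares via the normal equations (Gaussian elimination)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [arc - f * acc for arc, acc in zip(A[r], A[col])]
            b[r] -= f * b[col]
    w = [0.0] * k
    for r in reversed(range(k)):  # back substitution
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, k))) / A[r][r]
    return w

def sse(X, y, w):
    """Sum of squared residuals for weights w."""
    return sum((yi - sum(wi * xi for wi, xi in zip(w, row))) ** 2
               for row, yi in zip(X, y))

rows = [(1, 2, 5), (2, 1, 3), (3, 4, 1), (4, 3, 2), (5, 5, 4), (6, 2, 6)]
y = [2 * x1 + 3 * x2 for x1, x2, _ in rows]  # x3 plays no role in y

sses = {}
for k in (1, 2, 3):
    X = [list(r[:k]) for r in rows]
    sses[k] = sse(X, y, fit(X, y))
    print(k, "variable(s): residual SSE =", round(sses[k], 6))
# The residual collapses to ~0 at two variables; the third adds nothing.
```

Only the residuals tell us that two variables were “necessary”; nothing in the aphorism itself could have.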

With that in mind, let us apply Occam’s razor to the presidential election of 2020. The tentative outcome of that election is a victory for Joe Biden. There are at least four explanations for the tentative outcome:

1. Every State that Biden won, he won fair and square. There were no fraudulent votes, no fraudulent counting of votes, and no errors in the counting of votes.

2. Biden’s victories in key States, though perhaps tainted by some degree of fraud or error, are legitimate; that is, the victories would have occurred absent fraud or error.

3. Biden’s victories in at least some States are illegitimate; that is, the victories wouldn’t have occurred absent fraud or error. But overturning the fraudulent or erroneous victories in some States wouldn’t change the outcome; Biden would still have enough electoral votes to be elected president.

4. Biden’s victories in at least some States are illegitimate; that is, the victories wouldn’t have occurred absent fraud or error. And overturning the fraudulent or erroneous victories in those States would change the outcome; Biden wouldn’t have enough electoral votes to be elected president. But this explanation, if true, may not be confirmed in time to change the tentative outcome of the election.

The simplest explanation, number 1, is almost certainly false. So much for Occam’s razor. What about explanations 2, 3, and 4?

I honestly do not know which of them to believe because I am withholding judgement until all of the legal votes have been counted correctly. That probably won’t happen before January 20, 2021, and so it will never happen. And so I will forever suspend judgement — but I will also suspend belief that Biden was elected honestly.

Why won’t the facts emerge before January 20, 2021? Dov Fischer explains:

[F]or those who have actual real-life professional high-stakes litigation experience, people like Rudy Giuliani and those of us who know what’s what, the reality is that no one can just walk into a courtroom a week or two after a massive fraud has taken place and just lay all the fraud on the table. It takes weeks, months, and years to unpack this stuff. No experienced attorney can just show up with all the evidence in a week or two. For example, who among us, even a week ago, had ever heard of “Dominion Voting Systems”? In only a matter of days, we now know not only of them but of their software and that they donated to the Clinton Foundation. And, oh by the way, their equipment was used in the election by North Carolina, Nevada, Georgia, Michigan, Arizona, and Pennsylvania — comprising 84 electoral college votes in six of the tightest battleground states. On the other hand, Texas rejected using them.

Or take the dead voters. (Please.) Or the harvested and dumped ballots. Was it legal in the respective state to harvest the votes? If so, were the votes harvested by legally authorized harvesters — or by unauthorized out-of-state college kids who had nothing to do once it got too cold to march with Black Lives Matters thugs and threaten octogenarians at restaurants? It takes time — weeks, months — to unpeel that onion. And, again, what about the dead voters? It takes time to go through the voters’ rolls and to compare them with the rolls of the living.

And signatures. It is commonplace in litigations that, when disputes arise over signatures, handwriting experts are called in. One place to find handwriting experts is at a website called — take a seat for this one — www.handwritingexperts.com. But that is the point. When the stakes are high, you can’t just have volunteer housewives and househusbands comparing signatures. Not only valleys forge but people do, too. Could there be stakes greater than whether we have four more years of a Trump presidency or an alternative quadrennium of a Harris and Biden White House? Who is comparing the signatures on the mail-in envelopes with the actual signatures on registration rolls? How is it done? How carefully? How expertly?

It is wrong, unfair, and preposterous for media, including Fox News, regularly to parrot the Democrats and say that the Republicans so far do not have buckets and suitcases full of vote-fraud evidence. This kind of evidence — fraud — is the hardest to uncover and the hardest to gather. Mueller took two years. Durham, assuming there is such a person, already has been at it for a year and a half. States allow three years for claims of fraud. Usually, it takes document demands, demands for computer discs and drives, interrogatories, and depositions to root out the fraud and corruption. That is how long it takes. I know: I personally did this stuff for 10 years in matters entailing multi-million-dollar complex business disputes. Those of us who actually know the practice of law, not from Ally McBeal and from the 45 cable stations that simultaneously televise Law and Order reruns but from real life, know that the Trump team cannot possibly have all its evidence at hand yet.

Nevertheless,

they indeed are compiling anecdotal evidence, testimony of poll watchers who saw abuses and were kept away from monitoring ballot counting. The Trump team is gathering sworn affidavits, a recognized form of admissible evidence, and they are going as fast as they can. They report that they already have 234 sworn affidavits. Steve Cortes has published a wonderful piece raising four examples of circumstantial evidence arising from logical improbabilities:

1. Incomprehensibly high turnout in Wisconsin. For example, Milwaukee ended up with an 84 percent turnout, while a nearby Midwest city with a comparable demographic, Cleveland, had a 51 percent turnout. In all, Wisconsin reported voting by 90 percent of their registered voters. Numbers like that are off the charts. Biden inched ahead of Trump in Wisconsin by under 1 percent. By contrast, Trump’s lead in Ohio was too large to overcome with shenanigans.

2. The improbability of a lethargic Biden scoring significantly stronger voter turnouts than did an energetic Obama in certain battleground Obama districts.

3. The quirk of over 450,000 Biden-only ballots, on which the submitted ballots showed a vote for Biden but no one else at all, even in states where there were tight congressional and Senate contests down-ticket. That of course is technically possible, and certainly some such ballots could be expected. Curiously, Biden-only ballots were predominantly prevalent in battleground states like Georgia. By contrast, there were only 725 such ballots in Wyoming, which was a Trump–Republican blowout. For comparison, there was only a fraction of Trump-only ballots in Georgia.

4. The virtual absence of mail-in vetting. In New York, which tried large-scale mail-in balloting for the first time last June, the natural process of vetting saw 21 percent of ballots disqualified. Likewise, it is common that, among people mailing in ballots for the first time in their lives, usually some 3 percent get disqualified. That simply is the human nature of some who forget to sign, forget to date the ballot, fill it in wrong, and otherwise mess up. That’s people. Yet, in Pennsylvania only 0.03 percent of such ballots were rejected, 10 times fewer than all experience would have anticipated.

Circumstantial evidence matters and carries serious evidentiary weight. Murderers have been sentenced to death based solely on circumstantial evidence. Honest, reasonable minds cannot expect all evidence of fraud to be at hand only 10 days or even a month or two after the election has ended. Democrats had half a year and more to plan strategies for aspects of their fraud and ways to cover it up. If given enough time, enough production demands for computer drives and discs, enough time to read secret and deleted emails, the Trump team would have an opportunity to say “We have the evidence” or to present as Mueller did after his two-year investigation. Any shorter time frame is unrealistic.

Conclusion: The correct explanation of the tentative outcome of the presidential election of 2020 will never be known. Biden will be inaugurated on January 20, 2021 (if he lives that long), and the memory hole will swallow almost all doubts about how Biden won. Remaining doubts, and even hard evidence, will be dismissed as delusional and fabricated.

And so we will be frog-marched into a brave new world.

Election 2020: Liberty Is at Stake

I have written many times over the years about what will happen to liberty in America the next time a Democrat is in the White House and Congress is controlled by Democrats. Many others have written or spoken about the same, dire scenario. Recently, for example, Victor Davis Hanson and Danielle Pletka addressed the threat to liberty that lies ahead if Donald Trump is succeeded by Joe Biden, in tandem with a Democrat takeover of the Senate. This post reprises my many posts about the clear and present danger to liberty if Trump is defeated and the Senate flips, and adds some points suggested by Hanson and Pletka. There’s much more to be said, I’m sure, but what I have to say here should be enough to make every liberty-loving American vote for Trump — even those who abhor the man’s persona.

Court Packing

One of the first things on the agenda will be to enlarge the Supreme Court and fill the additional seats with justices who can be counted on to support the policies discussed below, should those policies get to the Supreme Court. (If they don’t, they will be upheld in lower courts or go unchallenged because challenges will be perceived as futile.)

Abolition of the Electoral College

The Electoral College helps to protect the sovereignty of less-populous States from oppression by more-populous States. This has become especially important with the electoral shift that has seen California, New York, and other formerly competitive States slide into leftism. The Electoral College therefore causes deep resentment on the left when it yields a Republican president who fails to capture a majority of the meaningless nationwide popular vote, as Donald Trump failed (by a large margin) in 2016, despite lopsided victories by H. Clinton in California, New York, etc.

The Electoral College could be abolished formally by an amendment to the Constitution. But amending the Constitution by that route would take years, and probably wouldn’t succeed because it would be opposed by too many State legislatures.

The alternative, which would succeed with Democrat control of Congress and a complaisant Supreme Court, is a multi-State compact to this effect: The electoral votes of each member State will be cast for the candidate with the most popular votes, nationwide, regardless of the popular vote in the member State. This would work to the advantage of a Democrat who loses narrowly in a State where the legislature and governor’s mansion are controlled by Democrats – which is the whole idea.

Some pundits deny that the scheme would favor Democrats, but the history of presidential elections contradicts them.

Electorate Packing

If you’re going to abolish the Electoral College, you want to ensure a rock-solid hold on the presidency and Congress. What better way to do that than to admit Puerto Rico and the District of Columbia? Residents of D.C. already vote in presidential elections, but they don’t have senators or a voting representative in the House. Statehood would give them those things. And you know which party’s banner the additional senators and representative would fly.

Admitting Puerto Rico would be like winning the trifecta (for Democrats): a larger popular-vote majority for Democrat presidential candidates, two more Democrat senators, and five more Democrat representatives in the House.

“Climate Change”

The “science” of “climate change” amounts to little more than computer models that can’t even “predict” recorded temperatures accurately because the models are based mainly on the assumption that CO2 (a minor greenhouse gas) drives the atmosphere’s temperature. This crucial assumption rests on a coincidence – rising temperatures from the late 1970s and rising levels of atmospheric CO2. But atmospheric CO2 has been far higher in earlier geological eras, while Earth’s temperature hasn’t been any higher than it is now. Yes, CO2 has been rising since the latter part of the 19th century, when industrialization began in earnest. Despite that, temperatures have fluctuated up and down for most of the past 150 years. (Some so-called scientists have resolved that paradox by adjusting historical temperatures to make them look lower than they really are.)

The deeper and probably more relevant causes of atmospheric temperature are to be found in the Earth’s core, magma flow, plate dynamics, ocean currents and composition, magnetic field, exposure to cosmic radiation, and dozens of other things that — to my knowledge — are ignored by climate models. Moreover, the complexity of the interactions of such factors, and of others that are usually included in climate models, cannot possibly be modeled.

The urge to “do something” about “climate change” is driven by a combination of scientific illiteracy, power-lust, and media-driven anxiety.

As a result, trillions of dollars have been and will be wasted on various “green” projects. These include but are far from limited to the replacement of fossil fuels by “renewables”, and the crippling of industries that depend on fossil fuels. Given that CO2 does influence atmospheric temperature slightly, it’s possible that such measures will have a slight effect on Earth’s temperature, even though the temperature rise has been beneficial (e.g., longer growing seasons; fewer deaths from cold weather, which kills more people than hot weather).

The main result of the futile effort to combat “climate change” will be greater unemployment and lower real incomes for most Americans — except for the comfortable elites who press such policies.

Freedom of Speech

Legislation forbidding “hate speech” will be upheld by the packed Court. “Hate speech” will be whatever the bureaucrats who are empowered to detect and punish it say it is. And the bureaucrats will be swamped with complaints from vindictive leftists.

When the system is in full swing (which will take only a few years) it will be illegal to criticize, even by implication, such things as illegal immigration, same-sex marriage, transgenderism, anthropogenic global warming, or the confiscation of firearms. Violations will be punished by huge fines and draconian prison sentences (sometimes in the guise of “re-education”).

Any hint of Christianity or Judaism will be barred from public discourse, and similarly punished. Islam will be held up as a model of unity and tolerance – at least until elites begin to acknowledge that Muslims are just as guilty of “incorrect thought” as persons of other religions and persons who uphold the true spirit of the Constitution.

Reverse Discrimination

This has been in effect for several decades, as jobs, promotions, and college admissions have been denied to the most capable persons in favor of certain “protected groups” – mainly blacks and women.

Reverse-discrimination “protections” will be extended to just about everyone who isn’t a straight, white male of European descent. And they will be enforced more vigorously than ever, so that employers will bend over backward to favor “protected groups” regardless of the effects on quality and quantity of output. That is, regardless of how such policies affect the general well-being of all Americans. And, of course, the heaviest burden – unemployment or menial employment – will fall on straight, white males of European descent. Except, of course, for the straight, white males of European descent who are among the political, bureaucratic, and management elites who favor reverse discrimination.

Rule of Law

There will be no need for protests or riots because police departments will become practitioners and enforcers of reverse discrimination (as well as of “hate speech” violations and attempts to hold onto weapons for self-defense). This will happen regardless of the consequences, such as a rising crime rate, greater violence against whites and Asians, and flight from the cities (which will do little good because suburban police departments will also be co-opted).

Sexual misconduct (as defined by the alleged victim), will become a crime, and any straight, male person will be found guilty of it on the uncorroborated testimony of any female who claims to have been the victim of an unwanted glance, touch (even if accidental), innuendo (as perceived by the victim), etc.

There will be parallel treatment of the “crimes” of racism, anti-Islamism, nativism, and genderism.

Health Care

All health care and health-care related products and services (e.g., drug research) will be controlled and rationed by an agency of the federal government. Private care will be forbidden, though ready access to doctors, treatments, and medications will be provided for high officials and other favored persons.

Drug research – and medical research, generally – will dwindle in quality and quantity. There will be fewer doctors and nurses who are willing to work in a regimented system.

The resulting health-care catastrophe that befalls most of the populace (like that of the UK) will be shrugged off as a residual effect of “capitalist” health care.

Regulation

The regulatory regime, which already imposes a deadweight loss of 10 percent of GDP, will rebound with a vengeance, touching every corner of American life and regimenting all businesses except those daring to operate in an underground economy. The quality and variety of products and services will decline – another blow to Americans’ general well-being.

Taxation

Incentives to produce more and better products and services will be further blunted by tax increases on corporate profits, a more “progressive” structure of marginal tax rates (i.e., soaking the “rich”), and — perhaps worst of all — taxes on wealth. Such measures will garner votes by appealing to economic illiterates, the envious, social-justice warriors, and guilt-ridden elites who can afford the extra taxes but don’t understand how their earnings and wealth foster economic growth and job creation. (A Venn diagram would depict almost the complete congruence of economic illiterates, the envious, social-justice warriors, and guilt-ridden elites.)

Government Spending and National Defense

The dire economic effects of the foregoing policies will be compounded by massive increases in government spending on domestic welfare programs, which reward the unproductive at the expense of the productive. All of this will suppress investment in business formation and expansion, and in professional education and training. As a result, the real rate of economic growth will approach zero, and probably become negative.

Because of the emphasis on domestic welfare programs, the United States will maintain token armed forces (mainly for the purpose of suppressing domestic uprisings). The U.S. will pose no threat to the new superpowers — Russia and China. They won’t threaten the U.S. militarily as long as the U.S. government acquiesces in their increasing dominance.

Immigration

Illegal immigration will become legal, and all illegal immigrants now in the country – and the resulting flood of new immigrants — will be granted citizenship and all associated rights. The right to vote, of course, is the right that Democrats most dearly want to bestow because most of the newly-minted citizens can be counted on to vote for Democrats. The permanent Democrat majority will ensure permanent Democrat control of the White House and both houses of Congress.

Future Elections and the Death of Democracy

Despite the prospect of a permanent Democrat majority, Democrats won’t stop there. In addition to the restrictions on freedom of speech discussed above, there will be election laws requiring candidates to pass ideological purity tests by swearing fealty to the “law of the land” (i.e., unfettered immigration, same-sex marriage, freedom of gender choice for children, etc., etc., etc.). Those who fail such a test will be barred from holding any kind of public office, no matter how insignificant.

COVID-19 and Probability

This was posted by a Facebook “friend” (who is among many on FB who seem to believe that figuratively hectoring like-minded friends on FB will instill caution among the incautious):

The point I want to make here isn’t about COVID-19, but about probability. It’s a point that I’ve made many times, but the image captures it perfectly. Here’s the point:

When an event has more than one possible outcome, a single trial cannot replicate the average outcome of a large number of trials (replications of the event).

It follows that the average outcome of a large number of trials — the probability of each possible outcome — cannot occur in a single trial.

It is therefore meaningless to ascribe a probability to any possible outcome of a single trial.

Suppose you’re offered a jelly bean from a bag of 100 jelly beans, and are told that two of them contain a potentially fatal poison. Do you believe that you have only a 2-percent chance of being poisoned, and would you bet accordingly? Or do you believe, correctly, that you might choose a poisoned jelly bean, and that the “probability” of choosing one is meaningless and irrelevant if you want to be certain of surviving the trial at hand (choosing a jelly bean or declining the offer)? That is, would you bet (your life) against choosing a poisoned jelly bean?

I have argued (futilely) with several otherwise smart persons who would insist on the 2-percent interpretation. But I doubt (and hope) that any of them would bet accordingly and then choose a jelly bean from a bag of 100 that contains even a single poisoned one, let alone two. Talk is cheap; actions speak louder than words.
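The all-or-nothing nature of a single trial is easy to see in a quick simulation (a sketch, using the numbers from the jelly-bean example; the function name is mine):

```python
import random

def draw_is_poisoned(n_beans=100, n_poisoned=2, rng=random):
    # One trial: pick a bean at random; the outcome is poisoned or not.
    return rng.randrange(n_beans) < n_poisoned

random.seed(1)

# A single trial has exactly one of two outcomes -- never "2 percent poisoned".
print(draw_is_poisoned())

# The 2-percent figure is a property of many trials, not of any one of them.
trials = 100_000
frequency = sum(draw_is_poisoned() for _ in range(trials)) / trials
print(frequency)  # close to 0.02
```

Each call returns only True or False; the 2-percent figure emerges only as a frequency across the whole run, which is the point of the argument above.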

COVID-19: The Disconnect between Cases and Deaths

As many (including me) have observed, COVID-19 case statistics don’t give a reliable picture of the spread of COVID-19 in the U.S. Just a few of the reasons are misdiagnosis; asymptomatic (and untested) cases; and wide variations in the timing, location, and completeness of testing. As a result, the once-tight correlation between reported cases and deaths has loosened to the point of meaninglessness:


Source: Derived from statistics reported here.

So when you hear about a “surge” in cases, do not assume that infections are actually surging; more cases are being discovered because more tests are being conducted. The death toll, overstated as it is, is a better indicator of the state of affairs. And the death toll continues to drop.

Reflections on Aging and Social Disengagement

Aging is of interest to me because I suddenly and surprisingly find myself among the oldest ten percent of Americans.

I also find myself among the more solitary of Americans. My wife and I rattle about in a house that could comfortably accommodate a family of six, with plenty of space in which to have sizeable gatherings (which we no longer do). But I am not lonely in my solitude, for it is and long has been of my own choosing. Lockdowns and self-isolation haven’t affected me a bit. Life, for me, goes on as usual and as I like it.

This is so because of my strong introversion. I suppose that the seeds of my introversion are genetic, but the symptoms didn’t appear in earnest until I was in my early thirties. After that I became steadily more focused on a few friendships (which eventually dwindled to none) and decidedly uninterested in the aspects of work that required more than brief meetings (one-on-one preferred). Finally, enough became more than enough and I quit full-time work at the age of fifty-six. There followed, a few years later, a stint of part-time work that also became more than enough. And so, at the age of fifty-nine, I banked my final paycheck. Happily.

What does my introversion have to do with my aging? I suspected that my continued withdrawal from social intercourse (more about that, below) might be a symptom of aging. And I found this, in the Wikipedia article “Disengagement Theory“:

The disengagement theory of aging states that “aging is an inevitable, mutual withdrawal or disengagement, resulting in decreased interaction between the aging person and others in the social system he belongs to”. The theory claims that it is natural and acceptable for older adults to withdraw from society….

Disengagement theory was formulated by [Elaine] Cumming and [William Earl] Henry in 1961 in the book Growing Old, and it was the first theory of aging that social scientists developed….

The disengagement theory is one of three major psychosocial theories which describe how people develop in old age. The other two major psychosocial theories are the activity theory and the continuity theory, and the disengagement theory [is at] odds with both.

The continuity theory

states that older adults will usually maintain the same activities, behaviors, relationships as they did in their earlier years of life. According to this theory, older adults try to maintain this continuity of lifestyle by adapting strategies that are connected to their past experiences [whatever that means].

I don’t see any conflict between the continuity theory and the disengagement theory. A strong introvert like me, for example, finds it easy to maintain the same activities, behaviors, and relationships as I did before I retired. Which is to say that I had begun minimizing my social interactions before retiring, and continued to do so after retiring.

What about the activity theory? Well, it’s a normative theory, unlike the other two (which are descriptive), and it goes like this:

The activity theory … proposes that successful aging occurs when older adults stay active and maintain social interactions. It takes the view that the aging process is delayed and the quality of life is enhanced when old people remain socially active.

That’s just a social worker’s view of “appropriate” behavior for older persons. Take my word for it: introverts don’t need social activity, which they find stressful, and they resent those who try to push them into it. The life of the mind is far more rewarding than chit-chat with geezers. Why do you suppose my wife and I will do everything in our power to stay in our own home until we die? It’s not just because we love our home so much (and we do), but because we can’t abide the idea of communal living, even in an upscale retirement community.

Anyway, I mentioned my continued withdrawal from social intercourse. A particular, recent instance of withdrawal sparked this post. For about fifteen years I corresponded regularly with a former colleague. He has a malady that I have dubbed email-arrhea: several messages a day to a large mailing list, with many insipid replies from recipients who choose “reply all”. Enough of that finally became too much, and I declared to him my intention to refrain from correspondence until … whenever. (“Don’t call me, I’ll call you.”) So all of his messages and those of his other correspondents are dumped automatically into my Gmail trash folder, and I no longer use Gmail.

My withdrawal from that particular node of social intercourse was eased by the fact that the correspondent is a collaborationist “conservative” with a deep-state mindset. So it was satisfying to terminate our relationship — and devote more time to things that I enjoy, like blogging.

Just Another Thing or Two about COVID-19

It’s tough to make predictions, especially about the future, and I sort of promised not to make any more predictions about the spread of COVID-19 in the United States because the data are unreliable (examples at the link and here). But I can’t resist saying a few more things about the matter.

Specifically, since my last substantive post about COVID-19 statistics, I now project 2 million cases and 135,000 deaths by mid-August, as against my earlier projections of 1.3 million and 90,000. The new estimates rely on the same database as the old ones, so they aren’t any more reliable than the old ones.

But I have revised my calculations so that they are based on 7-day average numbers of cases and deaths. This is an attempt to smooth over obvious lags in reporting (sudden drops in numbers of cases and deaths followed by sudden surges).
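The smoothing itself is just a trailing average over the most recent seven days. A minimal sketch (the sample counts are made up to mimic a reporting dip followed by a catch-up surge):

```python
def moving_average(series, window=7):
    # Trailing moving average; defined once `window` observations exist.
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Made-up daily case counts: a reporting dip followed by a catch-up surge.
daily = [100, 110, 10, 230, 120, 115, 125, 20, 240, 110, 130, 105, 120, 135]
print(moving_average(daily))
```

Because each smoothed value averages a full week, a one-day reporting dip and the next day’s catch-up surge largely cancel, which is why the 7-day series is steadier than the raw daily counts.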

The equations in these two graphs …

… yield these projections:

Those are nationwide numbers. The good news (pending the results of “re-opening”) is that the daily number of new cases has declined sharply from the peaks of late March and late April. But there’s still a long way to go. The first graph in this post is worrisome because recent observations are a bit above the trend line; that is, the incidence of new cases may not be declining quite as rapidly as the equation suggests.

The number of new deaths has declined also, from the peak 7-day average of 2,041 on April 21 to 1,430 as of May 15. Overall, the rate of new deaths per new case seems to have stabilized at 5.7 percent. (The overall percentage will be somewhat higher because the deaths-per-case rate was above 5.7 percent for quite a while.)

Of course, the situation varies widely from State to State (and, obviously, within each State):

Regional and state variations in death rates
(I am using the same assignment of States to regions as my data source.)

Nine of the 12 States of the Northeast (including D.C.) are among the top 12 in deaths per resident. The exceptions are the more rural Northeastern States: Maine, New Hampshire, and Vermont.

In general, States with large, densely populated metropolitan areas have fared worse than less-urbanized States with smaller cities. That’s unsurprising, of course. But it also underscores the resistance of large swaths of the populace to “New York” rules.


Other related posts:

Contagion Nation?
“Give Me Liberty or Give Me Death”

“It’s Tough to Make Predictions, Especially about the Future”

A lot of people have said it, or something like it, though probably not Yogi Berra, to whom it’s often attributed.

Here’s another saying, which is also apt here: History does not repeat itself. The historians repeat one another.

I am accordingly amused by something called cliodynamics, which is discussed at length by Amanda Rees in “Are There Laws of History?” (Aeon, May 2020). The Wikipedia article about cliodynamics describes it as

a transdisciplinary area of research integrating cultural evolution, economic history/cliometrics, macrosociology, the mathematical modeling of historical processes during the longue durée [the long term], and the construction and analysis of historical databases. Cliodynamics treats history as science. Its practitioners develop theories that explain such dynamical processes as the rise and fall of empires, population booms and busts, spread and disappearance of religions. These theories are translated into mathematical models. Finally, model predictions are tested against data. Thus, building and analyzing massive databases of historical and archaeological information is one of the most important goals of cliodynamics.

I won’t dwell on the methods of cliodynamics, which involve making up numbers about various kinds of phenomena and then making up models which purport to describe, mathematically, the interactions among the phenomena. Underlying it all is the practitioner’s broad knowledge of historical events, which he converts (with the proper selection of numerical values and mathematical relationships) into such things as the Kondratiev wave, a post-hoc explanation of a series of arbitrarily denominated and subjectively measured economic eras.

In sum, if you seek patterns you will find them, but pattern-making (modeling) is not science. (There’s a lot more here.)

Here’s a simple demonstration of what’s going on with cliodynamics. Using the RANDBETWEEN function of Excel, I generated two columns of random numbers ranging in value from 0 to 1,000, with 1,000 numbers in each column. I designated the values in the left column as x variables and the numbers in the right column as y variables. I then arbitrarily chose the first 10 pairs of numbers and plotted them:

As it turns out, the relationship, loose as it seems, would by the standard test be said to have only a 21-percent chance of being due to chance. In the language of statistics, two-tailed p = 0.21.

Of course, the relationship is due entirely to chance because it’s the relationship between two sets of random numbers. So much for statistical tests of “significance”.

Moreover, I could have found “more significant” relationships had I combed carefully through the 1,000 pairs of random numbers with my pattern-seeking brain.

But being an honest person with scientific integrity, I will show you the plot of all 1,000 pairs of random numbers:

I didn’t bother to find a correlation between the x and y values because there is none. And that’s the messy reality of human history. Yes, there have been many determined (i.e., sought-for) outcomes — such as America’s independence from Great Britain and Hitler’s rise to power. But they are not predetermined outcomes. Their realization depended on the surrounding circumstances of the moment, which were myriad, non-quantifiable, and largely random in relation to the event under examination (the revolution, the putsch, etc.). The outcomes only seem inevitable and predictable in hindsight.

Cliodynamics is a variant of the anthropic principle, which is that the laws of physics appear to be fine-tuned to support human life because we humans happen to be here to observe the laws of physics. In the case of cliodynamics, the past seems to consist of inevitable events because we are here in the present looking back (rather hazily) at the events that occurred in the past.

Cliodynametricians, meet Nostradamus. He “foresaw” the future long before you did.