100,000 to 240,000 COVID-19 Deaths in the U.S.?

REVISED ESTIMATES (SUPERSEDED FIGURES ARE NOTED IN PARENTHESES BELOW) TO ADJUST FOR THE ADDITION OF DATA FOR APRIL 1, 2020, AND 3 DAYS' WORTH OF MISSING DATA FOR THE STATE OF WASHINGTON.

Using some simple linear models of the rates of growth in U.S. coronavirus cases and deaths, I predicted that cases would top out at 250,000 and deaths wouldn’t exceed 10,000. I assumed that

lockdowns, quarantines, and social distancing continue for at least two more weeks, and assuming that there isn’t a second wave of COVID-19 because of early relaxation or re-infection.

I also said that, in any event,

the final numbers will be well below the totals for the swine-flu epidemic of 2009-10 (59 million and 12,000) but you won’t hear about it from the leftist media.

Is it time for me to back off my bold predictions, in light of the “official” estimate of 100,000 to 240,000 deaths that was announced at yesterday’s White House briefing? Probably.

First, let’s look at the numbers:

  • The Spanish flu pandemic of 1918-19 resulted in a maximum of 675,000 deaths out of 29 million U.S. cases — a fatality rate of 2.4 percent.
  • The H1N1 pandemic of 2009-10 resulted in 12,000 deaths out of 59 million U.S. cases — a fatality rate of 2/100 of one percent.
  • COVID-19 is (thus far) more lethal than the two earlier pandemics — deaths to date equal 4.6 percent of the number of cases recorded 5 days earlier. (A 5-day lag yields the strongest correlation between new cases and new deaths.)

Rounding up to 5 percent — and assuming that the rate remains constant — a total of 100,000 deaths means a total of 2 million cases, and a total of 240,000 deaths means a total of 4.8 million cases. As of today, the number of cases is somewhere above 200,000. Will the number of cases increase 10-fold to 24-fold, or more? Will the fatality rate rise?
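For concreteness, here is that arithmetic as a short Python sketch (illustrative only; the 200,000-case figure is approximate):

```python
# Illustrative arithmetic only: implied case totals for the official death
# estimates, assuming a constant 5 percent fatality rate.
for deaths in (100_000, 240_000):
    implied_cases = deaths / 0.05
    print(f"{deaths:,} deaths at 5% fatality -> {implied_cases:,.0f} cases")

current_cases = 200_000  # approximate U.S. total at the time of writing
low, high = 100_000 / 0.05 / current_cases, 240_000 / 0.05 / current_cases
print(f"implied growth in cases: {low:.0f}-fold to {high:.0f}-fold")
```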

Lacking detailed knowledge of how the official estimates are derived, I computed non-linear estimates of the rates at which new cases and new deaths will occur, based on statistics through April 1, 2020 (the earlier version used statistics through March 31, 2020, which were missing three days of data for Washington). The rate of increase in new cases declines gradually (the function is a decaying exponential) and never reaches zero, but approaches it by the end of June 2020. The rate at which new cases yield new deaths declines gradually, from a base of 6.6 percent (revised from 5.8 percent), as the number of new cases increases (the relationship is a power function with an exponent of less than 1).
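Here is a minimal sketch, with made-up daily counts (not my actual data or code), of the two kinds of fits just described: a decaying exponential for the daily growth rate, and a power function (exponent less than 1) relating new deaths to new cases lagged 5 days:

```python
# A sketch of the two fits described above, using made-up daily cumulative
# counts (these are NOT the actual data).
import numpy as np
from scipy.optimize import curve_fit

cases = np.array([4_000, 8_000, 15_000, 26_000, 44_000, 69_000,
                  104_000, 145_000, 190_000, 215_000], dtype=float)
deaths = np.array([60, 110, 200, 340, 550, 790,
                   1_200, 1_850, 2_850, 3_950], dtype=float)

days = np.arange(len(cases))
new_cases = np.diff(cases)
growth_rate = new_cases / cases[:-1]            # daily rate of increase

# 1. Decaying exponential for the daily growth rate: r(t) = a * exp(-b * t).
exp_decay = lambda t, a, b: a * np.exp(-b * t)
(a, b), _ = curve_fit(exp_decay, days[1:], growth_rate, p0=(1.0, 0.2))

# 2. Power function (exponent < 1) for new deaths vs. new cases 5 days earlier.
lag = 5
new_deaths = np.diff(deaths)
power_fn = lambda x, c, d: c * x ** d
(c, d), _ = curve_fit(power_fn, new_cases[:-lag], new_deaths[lag:], p0=(1.0, 0.7))

print(f"growth rate: r(t) = {a:.2f} * exp(-{b:.2f} t)")
print(f"new deaths = {c:.2f} * (lagged new cases)^{d:.2f}")
```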

The bottom line: By July 1, 2020, the total number of cases in the U.S. may reach 870,000 (revised from 800,000), resulting in 40,000 deaths (revised from 35,000). That’s a fatality rate of about 4.6 percent per case (revised from 4.3 percent) and 12/1,000 of 1 percent of the country’s population (revised from 1/100).

Again, I assume that lockdowns, quarantines, and social distancing continue (though for how long I can’t say). I also assume that there won’t be a second wave of COVID-19 because of re-infections or an early relaxation of precautions.

Why is my revised and more sophisticated estimate of the number of deaths so much lower than the official one? It’s probably true that the official estimate simulates the spread of the contagion, whereas my general model doesn’t do that. But my guess is that the official estimate has been inflated to scare people into staying at home, which will reduce the rate at which new cases arise and prevent the number of deaths from reaching 100,000 or more.

COVID-19 in the United States

I have created several charts based on official (State-by-State) statistics on COVID-19 cases in the U.S. that are reported here. The statistics exclude cases and deaths occurring among repatriated persons (i.e., Americans returned from other countries or cruise ships).

The source tables include the U.S. territories of Guam, the Northern Mariana Islands, Puerto Rico, and the Virgin Islands, but I have excluded them from my analysis. I would also exclude Alaska and Hawaii, given their distance from the coterminous U.S., but it would be cumbersome to do so. Further, both States have low numbers of cases and (as yet) only 3 deaths (in Alaska), so leaving them in has almost no effect on the results displayed below.

All of the following charts are current through March 30, 2020. Based on statistical relationships underlying the charts, I stand by the prediction that I made on March 29, 2020:

Assuming that lockdowns, quarantines, and social distancing continue for at least two more weeks, and assuming that there isn’t a second wave of COVID-19 because of early relaxation or re-infection:

  • The total number of COVID-19 cases in the U.S. won’t exceed 250,000.
  • The total number of U.S. deaths attributed to COVID-19 won’t exceed 10,000.

In any event, the final numbers will be well below the totals for the swine-flu epidemic of 2009-10 (59 million and 12,000) but you won’t hear about it from the leftist media.

UPDATE 03/31/20: Some sources are reporting higher numbers of U.S. cases and deaths than the source that I am using for my analysis and predictions. It is therefore possible that the final numbers (according to some sources) will be higher than my predictions. But I will be in the ballpark.

UPDATE 04/01/20: See my revised estimate.

*   *   *

As indicated by Figure 1, the number of cases is about 1/20th of 1 percent of the population of the U.S.; the number of deaths, about 1/1,000th of 1 percent. Only 1.8 percent of cases have thus far resulted in deaths. Note the logarithmic scale on the vertical axis. Every major division (e.g., 0.01%) is 10 times the preceding major division (e.g., 0.001%).

Figure 1

I have seen some comparisons of the U.S. with other countries, but they use raw numbers of cases and deaths rather than cases and deaths per unit of population. The comparisons therefore make the situation in the U.S. look far worse than it really is.

Nor do the publishers of such comparisons address cross-country differences in the completeness of data-collection or in the standards for identifying cases and deaths as resulting from COVID-19.

In any event, Figure 2 shows how the coronavirus outbreak compares with earlier pandemics when the numbers for those pandemics are adjusted upward to account for population growth since their occurrence. (Again, note that the vertical axis is logarithmic.) The number of COVID-19 cases is thus far only about 2 percent of the number of swine-flu cases; the number of COVID-19 deaths is thus far about 16 percent of the number of swine-flu deaths. So far, COVID-19 seems to kill a higher fraction of those infected than did the swine flu, but on present trends (discussed below) it may not prove to be any more lethal than the swine flu.

Figure 2

As shown in Figure 3, the daily percentage change in new cases is declining, as is the daily percentage change in new deaths.

Figure 3

However, new deaths necessarily lag new cases. As of yesterday, the best fit between new cases and new deaths is a 5-day lag (Figure 4). At the present rate, every 1,000 new cases will yield about 34 new deaths. This ratio has been declining daily, which is another bit of good news.
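For readers who want to see how the lag is chosen, here is a minimal sketch with hypothetical daily series (not the actual data behind Figure 4):

```python
# Hypothetical daily series (not the actual data) used to show how the
# best-fitting lag between new cases and new deaths can be chosen.
import numpy as np

new_cases = np.array([3_000, 4_200, 5_900, 8_100, 10_800, 14_000, 17_500,
                      21_300, 25_200, 29_000, 32_500, 35_400, 37_500, 38_600])
new_deaths = np.array([40, 55, 75, 100, 110, 150, 210,
                       290, 380, 480, 600, 730, 860, 980])

best_lag, best_r = 1, -1.0
for lag in range(1, 8):
    r = np.corrcoef(new_cases[:-lag], new_deaths[lag:])[0, 1]
    if r > best_r:
        best_lag, best_r = lag, r

ratio = new_deaths[best_lag:].sum() / new_cases[:-best_lag].sum()
print(f"best lag: {best_lag} days (r = {best_r:.3f})")
print(f"new deaths per 1,000 lagged new cases: {1_000 * ratio:.0f}")
```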

Figure 4

Figure 5 shows the tighter relationship between new cases and new deaths (especially in the past two weeks) when Figure 3 is adjusted to introduce the 5-day lag.

Figure 5

Figure 6 shows the similarly tight relationship that results from the removal of the 6 “hot spots” — Connecticut, the District of Columbia, Louisiana, Massachusetts, New Jersey, and New York — which have the highest incidence of cases per capita.

Figure 6

The good news here is the declining rate of increase in the incidence of new cases, both nationwide (including the “hot spots”) and in the States that have been less hard-hit by COVID-19. The rest of the good news is that if the rate of new cases continues to decline, so will the rate of new deaths (though with a lag). Thus the prediction at the top of this post.

COVID-19 Update and Prediction

I have updated my statistical analysis here. Note especially the continued decline in the daily rate of new cases and the low rate of new deaths per new case.

Now for the prediction. Assuming that lockdowns, quarantines, and social distancing continue for at least two more weeks, and assuming that there isn’t a second wave of COVID-19 because of early relaxation or re-infection:

  • The total number of COVID-19 cases in the U.S. won’t exceed 250,000.
  • The total number of U.S. deaths attributed to COVID-19 won’t exceed 10,000.

In any event, the final numbers will be well below the totals for the swine-flu epidemic of 2009-10 (59 million and 12,000) but you won’t hear about it from the leftist media.

UPDATE 03/31/20: Some sources are reporting higher numbers of U.S. cases and deaths than the source that I am using for my analysis and predictions. It is therefore possible that the final numbers (according to some sources) will be higher than my predictions. But I will be in the ballpark.

UPDATE 04/10/20: See my revised estimate.

What Is Natural?

Back-to-nature types, worriers about what “humans are doing to the planet”, and neurotics (leftists) generally take a dim view of the artifacts of human existence. There’s a lot of hypocrisy in that view, of course, mixed with goodly doses of envy and virtue-signalling.

A lot of the complaints heard from back-to-nature types, etc., are really esthetic. They just don’t like to envision a pipeline running across some open country, far away and well out of sight; ditto a distant and relatively small cluster of oil rigs. Such objections would seem to conflict with their preference for ugly, bird-killing, highway-straddling, skyline-cluttering wind farms. Chalk it up to economically ignorant indoctrination in the “evils” of fossil fuels.

At any rate, what makes a pipeline, an oil rig, or even a wind farm any less natural than the artifacts constructed by lower animals to promote their survival? The main difference between the artifacts of the lower animals — bird’s nests, bee hives, beaver dams, underground burrows, etc. — and those of human beings is that human artifacts are far more ingenious and complex. Moreover, because humans are far more ingenious than the lower animals, the number of different human artifacts is far greater than the number arising from any other species, or even all of them taken together.

Granted, there are artifacts that aren’t necessary to the survival of human beings (e.g., movies, TV, and electric guitars), but those aren’t the ones that the back-to-nature crowd and its allies find objectionable. No, they object to the artifacts that enable the back-to-earthers, etc., to live in comfort.

In sum, a pipeline is just as natural as a bird’s nest. Remember that the next time you encounter an aging “flower child”. And ask her if a wind farm is more natural than a pipeline, and how she would like it if she had to forage for firewood to stay warm and cook her meals.

Lesson from the Diamond Princess: Panic Is Unwarranted

As of today there have been 696 reported cases of coronavirus among the 3,711 passengers and crew who were aboard the Diamond Princess cruise ship. The ship was quarantined on February 1; all passengers and crew had disembarked by March 1. As of March 1, there were 6 deaths among those infected, and the number hasn’t grown (as of today).

Given the ease with which the virus could be transmitted on a ship, the Diamond Princess may represent an upper limit on contagion and mortality (a quick check of the arithmetic follows the list):

  • an infection rate of 19 percent of those onboard the ship
  • a fatality rate of less than 1 percent among those known to have contracted the disease
  • a fatality rate of less than 2/10 of 1 percent of the population potentially exposed to the disease.
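A quick check of the arithmetic behind those rates, using the counts above:

```python
# Simple arithmetic behind the three rates listed above.
aboard = 3_711   # passengers and crew
cases = 696
deaths = 6

print(f"infection rate among those aboard: {cases / aboard:.1%}")    # ~18.8%
print(f"fatality rate among known cases:   {deaths / cases:.2%}")    # ~0.86%
print(f"fatality rate among those aboard:  {deaths / aboard:.2%}")   # ~0.16%
```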

Conclusion: There is no question that coronavirus represents a significant threat to life, health, and economic activity. But the panic being fomented by the media and opportunistic politicians is unwarranted.

Preliminary Thoughts about “Theism, Atheism, and Big Bang Cosmology”

I am in the early sections of Theism, Atheism, and Big Bang Cosmology by William Lane Craig and Quentin Smith, but I am beginning to doubt that it will inform my views about cosmology. (These are spelled out with increasing refinement here, here, here, and here.) The book consists of alternating essays by Craig and Smith, in which Craig defends the classical argument for a creation (the Kalām cosmological argument) against Smith’s counter-arguments.

For one thing, Smith — who takes the position that the universe wasn’t created — seems to pin a lot on the belief prevalent at the time of the book’s publication (1993) that the universe was expanding but at a decreasing rate. It is now believed generally among physicists that the universe is expanding at an accelerating rate. I must therefore assess Smith’s argument in light of the current belief.

For another thing, Craig and Smith (in the early going, at least) seem to be bogged down in an arcane argument about the meaning of infinity. Craig takes the position, understandably, that an actual infinity is impossible in the physical world. Smith, of course, takes the opposite position. The problem here is that Craig and Smith argue about what is an empirical (if empirically undecidable) matter by resorting to philosophical and mathematical concepts. The observed and observable facts are on Craig’s side: Nothing is known to have happened in the material universe without an antecedent material cause. Philosophical and mathematical arguments about the nature of infinity seem beside the point.

For a third thing, Craig seems to pin a lot on the Big Bang, while Smith is at pains to deny its significance. Smith seems to claim that the Big Bang wasn’t the beginning of the universe; rather, the universe was present in the singularity from which the Big Bang arose. The singularity might therefore have existed all along.

Craig, on the other hand, sees the hand of God in the Big Bang. The presence of the singularity (the original clump of material “stuff”) had to have been created so that the Big Bang could follow. That’s all well and good, but what was God doing before the Big Bang, that is, in the infinite span of time before 15 billion years ago? (Is it presumptuous of me to ask?) And why should the Big Bang prove God’s existence any more than, say, a universe that came into being at an indeterminate time? The necessity of God (or some kind of creator) arises from the known character of the universe: material effects follow from material causes, which cannot cause themselves. In short, Craig pins too much on the Big Bang, and his argument would collapse if the Big Bang is found to be a figment of observational error.

There’s much more to come, I hope.

Fifty-Two Weeks on the Learning Curve

I first learned of the learning curve when I was a newly hired analyst at a defense think-tank. A learning curve

is a graphical representation of how an increase in learning (measured on the vertical axis) comes from greater experience (the horizontal axis); or how the more someone (or something) performs a task, the better they [sic] get at it.

In my line of work, the learning curve figured importantly in the estimation of aircraft procurement costs. There was a robust statistical relationship between the cost of making a particular model of aircraft and the cumulative number of such aircraft produced. Armed with the learning-curve equation and the initial production cost of an aircraft, it was easy to estimate the cost of producing any number of the same aircraft.
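To illustrate, here is a minimal sketch of the standard unit-cost learning-curve formula (purely illustrative numbers, not the equation or data my old shop used): each doubling of cumulative output reduces unit cost by a fixed percentage.

```python
# A standard learning-curve cost model: unit cost falls by a fixed percentage
# each time cumulative output doubles. Numbers below are purely illustrative.
import math

def unit_cost(first_unit_cost: float, unit_number: int,
              learning_rate: float = 0.85) -> float:
    """Cost of the nth unit on (e.g.) an 85-percent learning curve."""
    b = math.log(learning_rate, 2)      # negative exponent for rates < 1
    return first_unit_cost * unit_number ** b

def program_cost(first_unit_cost: float, quantity: int,
                 learning_rate: float = 0.85) -> float:
    """Total cost of producing the first `quantity` units."""
    return sum(unit_cost(first_unit_cost, n, learning_rate)
               for n in range(1, quantity + 1))

# Example: a $100 million first unit on an 85-percent curve.
print(f"unit 100 costs about ${unit_cost(100.0, 100):.0f} million")
print(f"the first 100 units cost about ${program_cost(100.0, 100):,.0f} million")
```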

The learning curve figures prominently in tests that purport to measure intelligence. Two factors that may explain the Flynn effect — a secular rise in average IQ scores — are aspects of learning: schooling and test familiarity, and a generally more stimulating environment in which one learns more. The Flynn effect doesn’t measure changes in intelligence; it measures changes in IQ scores resulting from learning. There is an essential difference between ignorance and stupidity. The Flynn effect is about the former, not the latter.

Here’s a personal example of the Flynn effect in action. I’ve been doing The New York Times crossword puzzle online since February 18, 2019. I have completed all 365 puzzles published by TNYT from that date through the puzzle for February 17, 2020, with generally increasing ease:

The fitted curve is a decaying exponential: progress continues, but at an ever-slower rate, which is typical of a learning curve.
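Here is a minimal sketch of that kind of fit, using made-up completion times rather than my actual log:

```python
# Made-up completion times (minutes) that start near 60 and level off near 20,
# fitted with a decaying exponential that has a floor.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
puzzle = np.arange(1, 366)                               # puzzles 1..365
times = 20 + 40 * np.exp(-puzzle / 90) + rng.normal(0, 3, size=puzzle.size)

model = lambda n, floor, span, k: floor + span * np.exp(-k * n)
(floor, span, k), _ = curve_fit(model, puzzle, times, p0=(15.0, 45.0, 0.01))

print(f"estimated floor: {floor:.1f} minutes; decay constant: {k:.4f} per puzzle")
```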

The difficulty of the puzzle varies from day to day, with Monday puzzles being the easiest and Sunday puzzles being the hardest (as measured by time to complete):

For each day of the week, my best time is more recent than my worst time, and the trend of time to complete is downward for every day of the week (as reflected in the first graph above). In fact:

  • My worst times were all recorded in March through June of last year.
  • Today I tied my best time for a Monday puzzle.
  • I set or tied my best time for the Wednesday, Friday, and Sunday puzzles in the last three weeks.
  • In the same three weeks, my times for the Tuesday puzzle have twice been only a minute higher than my best.

I know that I haven’t become more intelligent in the last 52 weeks. And being several decades past the peak of my intelligence, I am certain that it diminishes steadily, though in tiny increments (I hope). I have simply become more practiced at doing the crossword puzzle because I have learned a lot about it. For example, certain clues recur with some frequency, and they always have the same answers. Clues often have double meanings, which are hard to decipher at first, but which become easier to decipher with practice. There are other subtleties, all of which reflect the advantages of learning.

In a nutshell, I am no smarter than I was 52 weeks ago, but my ignorance of the TNYT crossword puzzle has diminished significantly.

(See also “More about Intelligence“, “Selected Writings about Intelligence“, and especially “Intelligence“, in which I quote experts about the Flynn Effect.)

Psychiatry Is a Disorder

I happened upon “Schizoid personality disorder” (SPD) at Wikipedia, and wondered why it is a disorder, that is, a “bad thing”. A footnote in the article leads to a summary of SPD. Here are some excerpts:

A person with schizoid personality disorder often:

  • Appears distant and detached
  • Avoids social activities that involve emotional closeness with other people
  • Does not want or enjoy close relationships, even with family members….

People with schizoid personality disorder often do well in relationships that don’t focus on emotional closeness. They tend to be better at handling relationships that focus on:

  • Work
  • Intellectual activities
  • Expectations

In other words, persons who “suffer” from SPD may in fact be highly productive in pursuits that demand (and reward) prowess in science, technology, engineering, and mathematics — a.k.a. STEM. But because they don’t conform strictly to a psychiatric definition of normality they are said to have a disorder.

What is the psychiatric definition of a normal personality? This is from a page at the website of the American Psychiatric Association (APA):

Personality is the way of thinking, feeling and behaving that makes a person different from other people. An individual’s personality is influenced by experiences, environment (surroundings, life situations) and inherited characteristics. A person’s personality typically stays the same over time. A personality disorder is a way of thinking, feeling and behaving that deviates from the expectations of the culture, causes distress or problems functioning, and lasts over time.

There are 10 specific types of personality disorders. Personality disorders are long-term patterns of behavior and inner experiences that differs significantly from what is expected. The pattern of experience and behavior begins by late adolescence or early adulthood and causes distress or problems in functioning. Without treatment, personality disorders can be long-lasting. Personality disorders affect at least two of these areas:

  • Way of thinking about oneself and others
  • Way of responding emotionally
  • Way of relating to other people
  • Way of controlling one’s behavior

Types of Personality Disorders

  • Antisocial personality disorder: a pattern of disregarding or violating the rights of others. A person with antisocial personality disorder may not conform to social norms, may repeatedly lie or deceive others, or may act impulsively.
  • Avoidant personality disorder: a pattern of extreme shyness, feelings of inadequacy and extreme sensitivity to criticism. People with avoidant personality disorder may be unwilling to get involved with people unless they are certain of being liked, be preoccupied with being criticized or rejected, or may view themselves as not being good enough or socially inept.
  • Borderline personality disorder: a pattern of instability in personal relationships, intense emotions, poor self-image and impulsivity. A person with borderline personality disorder may go to great lengths to avoid being abandoned, have repeated suicide attempts, display inappropriate intense anger or have ongoing feelings of emptiness.
  • Dependent personality disorder: a pattern of needing to be taken care of and submissive and clingy behavior. People with dependent personality disorder may have difficulty making daily decisions without reassurance from others or may feel uncomfortable or helpless when alone because of fear of inability to take care of themselves.
  • Histrionic personality disorder: a pattern of excessive emotion and attention seeking. People with histrionic personality disorder may be uncomfortable when they are not the center of attention, may use physical appearance to draw attention to themselves or have rapidly shifting or exaggerated emotions.
  • Narcissistic personality disorder: a pattern of need for admiration and lack of empathy for others. A person with narcissistic personality disorder may have a grandiose sense of self-importance, a sense of entitlement, take advantage of others or lack empathy.
  • Obsessive-compulsive personality disorder: a pattern of preoccupation with orderliness, perfection and control. A person with obsessive-compulsive personality disorder may be overly focused on details or schedules, may work excessively not allowing time for leisure or friends, or may be inflexible in their morality and values. (This is NOT the same as obsessive compulsive disorder.)
  • Paranoid personality disorder: a pattern of being suspicious of others and seeing them as mean or spiteful. People with paranoid personality disorder often assume people will harm or deceive them and don’t confide in others or become close to them.
  • Schizoid personality disorder: being detached from social relationships and expressing little emotion. A person with schizoid personality disorder typically does not seek close relationships, chooses to be alone and seems to not care about praise or criticism from others.
  • Schizotypal personality disorder: a pattern of being very uncomfortable in close relationships, having distorted thinking and eccentric behavior. A person with schizotypal personality disorder may have odd beliefs or odd or peculiar behavior or speech or may have excessive social anxiety.

Holy mackerel, Andy, there’s hardly a “normal” person alive. And certainly none of them is a psychiatrist. The very compilation of a list of personality traits that one considers “abnormal” is a manifestation of narcissistic personality disorder and obsessive-compulsive personality disorder, at the very least.

Other than an actual disease of the brain, there is only one kind of mental “disorder” that requires treatment — criminal behavior. And the proper treatment for it is the application of criminal justice, sans psychiatric intervention. (See the articles by Thomas Szasz at FEE.)


Related posts:

I’ll Never Understand the Insanity Defense
Does Capital Punishment Deter Homicide?
Libertarian Twaddle about the Death Penalty
Crime and Punishment
Saving the Innocent?
Saving the Innocent?: Part II
More Punishment Means Less Crime
More About Crime and Punishment
More Punishment Means Less Crime: A Footnote
Clear Thinking about the Death Penalty
Let the Punishment Fit the Crime
A Precedent for the Demise of the Insanity Defense?
Another Argument for the Death Penalty
Less Punishment Means More Crime
Clear Thinking about the Death Penalty
What Is Justice?
Why Stop at the Death Penalty?
In Defense of Capital Punishment
Lock ‘Em Up
Free Will, Crime, and Punishment
Stop, Frisk, and Save Lives
Poverty, Crime, and Big Government
Crime Revisited
Rush to Judgment?
Stop, Frisk, and Save Lives II

Intuition vs. Rationality

To quote myself:

[I]ntuition [is] a manifestation of intelligence, not a cause of it. To put it another way, intuition is not an emotion; it is the opposite of emotion.

Intuition is reasoning at high speed. For example, a skilled athlete knows where and when to make a move (e.g., whether and where to swing at a pitched ball) because he subconsciously makes the necessary calculations, which he could not make consciously in the split-second that is available to him once the pitcher releases the ball.

Intuition is an aspect of reasoning (rationality) that is missing from “reason” — the cornerstone of the Enlightenment. The Enlightenment’s proponents and defenders are always going on about the power of logic applied to facts, and how that power brought mankind (or mankind in the West, at least) out of the benighted Middle Ages (via the Renaissance) and into the light of Modernity.

But “reason” of the kind associated with the Enlightenment is of the plodding variety, whereby “truth” is revealed at the conclusion of deliberate, conscious processes (e.g., the scientific method). Yet those processes, as I point out in the preceding paragraph, are susceptible of error because they rest on errors and assumptions that are hidden from view — often wittingly, as in the case of “climate change“.

Science, for all of its value to mankind, requires abstraction from reality. That is to say, it is reductionist. A good example is the arbitrary division of continuous social and scientific processes into discrete eras (the Middle Ages, the Renaissance, the Enlightenment, etc.). This ought to be a warning that mere abstractions are often, and mistakenly, taken as “facts”.

Reductionism makes it possible to “prove” almost anything by hiding errors and assumptions (wittingly or not) behind labels. Thus: x + y = z only when x and y are strictly defined and commensurate. Otherwise, x and y cannot be summed, or their summation can result in many correct values other than z. Further, as in the notable case of “climate change”, it is easy to assume (from bias or error) that z is determined only by x and y, when there are good reasons to believe that it is also determined by other factors: known knowns, known unknowns, and unknown unknowns.

Such things happen because human beings are ineluctably emotional and biased creatures, and usually unaware of their emotions and biases. The Enlightenment’s proponents and defenders are no more immune from emotion and bias than the “lesser” beings whom they presume to lecture about rationality.

The plodding search for “answers” is, furthermore, inherently circumscribed because it dismisses or minimizes the vital role played by unconscious deliberation — to coin a phrase. How many times have you found the answer to a question, a problem, or a puzzle by putting aside your deliberate, conscious search for the answer, only to have it come to you in a “Eureka!” moment sometime later (perhaps after a nap or a good night’s sleep)? That’s your brain at work in ways that aren’t well understood.

This process (to put too fine a word on it) is known as combinatorial play. Its importance has been acknowledged by many creative persons. Combinatorial play can be thought of as slow-motion intuition, where the brain takes some time to assemble (unconsciously) existing knowledge into an answer to a question, a problem, or a puzzle.

There is also fast-motion intuition, an example of which I invoked in the quotation at the top of this post: the ability of a batter to calculate in a split-second where a pitch will be when it reaches him. Other examples abound, including such vital ones as the ability of drivers to maneuver lethal objects in infinitely varied and often treacherous conditions. Much is made of the number of fatal highway accidents; too little is made of their relative infrequency given the billions of daily opportunities for their occurrence.  Imagine the carnage if drivers relied on plodding “reason” instead of fast-motion intuition.

The plodding version of “reason” that has been celebrated since the Enlightenment is therefore just one leg of a triad: thinking quickly and unconsciously, thinking somewhat less quickly and unconsciously, and thinking slowly and consciously.

Wasn’t it ever thus? Of course it was. Which means that the Enlightenment and its sequel unto the present day have merely fetishized one mode of dealing with the world and its myriad uncertainties. I would have said arriving at the truth, but it is well known (except by ignorant science-idolaters) that scientific “knowledge” is provisional and ever-changing. (Just think of the many things that were supposed to be bad for you but are now supposed to be good for you, and conversely.)

I am not a science-denier by any means. But scientific “knowledge” must be taken with copious quantities of salt because it is usually inadequate in the face of messy reality. A theoretical bridge, for example, may hold up under theoretical conditions, but it is likely to collapse when built in the real world, where there is much uncertainty about present and future conditions (e.g., the integrity of materials, adherence to best construction practices, soil conditions, the cumulative effects of traffic). An over-built bridge — the best kind — is one that allows wide margins of error for such uncertainties. The same is true of planes, trains, automobiles, buildings, and much else that our lives depend on. All such things fail less frequently than in the past not only because of the advance of knowledge but also because greater material affluence enables the use of designs and materials that afford wider margins of error.

In any event, too little credit is given to the other legs of reason’s triad: fast-motion and slow-motion intuition. Any good athlete, musician, or warrior will attest to the value of the former. I leave it to Albert Einstein to attest to the value of the latter:

combinatory [sic] play seems to be the essential feature in productive thought — before there is any connection with logical construction in words or other kinds of signs which can be communicated to others….

[F]ull consciousness is a limit case which can never be fully accomplished. This seems to me connected with the fact called the narrowness of consciousness.


Related page and category:

Modeling and Science
Science and Understanding

An Implication of Universal Connectedness

A couple of provocative pieces (here and here) led me to an observation that is so obvious that I had never articulated it: Everything is connected.

The essence of it is captured in the first verse of “Dem Bones“:

Toe bone connected to the foot bone
Foot bone connected to the heel bone
Heel bone connected to the ankle bone
Ankle bone connected to the shin bone
Shin bone connected to the knee bone
Knee bone connected to the thigh bone
Thigh bone connected to the hip bone
Hip bone connected to the back bone
Back bone connected to the shoulder bone
Shoulder bone connected to the neck bone
Neck bone connected to the head bone …

The final line gets to the bottom of it (if you are a deist or theist):

… Now hear the word of the Lord.

But belief in the connectedness of everything in the universe doesn’t depend on one’s cosmological views. In fact, a strict materialist who holds that the universe “just is” will be obliged to believe in universal connectedness because everything is merely physical (or electromagnetic), and one thing touches other things, which touch other things, ad infinitum. (The “touching” may be done by light.)

Connectedness isn’t necessarily causality; it can be mere observation, though observation (which involves the electromagnetic spectrum) is thought to be causal with respect to sub-atomic particles. As I put it here:

There’s no question that [a] particle exists independently of observation (knowledge of the particle’s existence), but its specific characteristic (quantum state) is determined by the act of observation. Does this mean that existence of a specific kind depends on knowledge? No. It means that observation determines the state of the particle, which can then be known.

I should have been clear about the meaning of “determine”, as used above. In my view it isn’t that observation causes the quantum state that is observed. Rather, observation measures (determines) the quantum state at the instant of measurement. Here’s an illustration of what I mean:

A die is rolled. Its “quantum state” is determined (measured) when it stops rolling and is readily observed. But the quantum state isn’t caused by the act of observation. In fact, the quantum state can be observed (determined, measured) — but not caused — at any point while the die is rolling by viewing it, sufficiently magnified, with the aid of a high-speed camera.

Connectedness can also involve causality, of course. The difficult problem — addressed at the two links in the opening paragraph — is sorting out causal relationships given so much connectedness. Another term for the problem is “causal density“, which leads to spurious findings:

When there are many factors that have an impact on a system, statistical analysis yields unreliable results. Computer simulations give you exquisitely precise unreliable results. Those who run such simulations and call what they do “science” are deceiving themselves.

Is it any wonder that “scientists” tell us that one thing or another is bad for us, only to tell us at a later date that it isn’t bad for us and may even be good for us? This is a widely noted phenomenon (though insufficiently documented). But its implications for believing in, say, anthropogenic global warming seem to be widely ignored — most unfortunately.

(See also “Predicting ‘Global’ Temperatures — An Analogy with Baseball“.)

Existence and Knowledge

Philosophical musings by a non-philosopher which are meant to be accessible to other non-philosophers.

Ontology is the branch of philosophy that deals with existence. Epistemology is the branch of philosophy that deals with knowledge.

I submit (with no claim to originality) that existence (what really is) is independent of knowledge (proposition A), but knowledge is impossible without existence (proposition B).

In proposition A, I include in existence those things that exist in the present, those things that have existed in the past, and the processes (happenings) by which past existences either end (e.g., death of an organism, collapse of a star) or become present existences (e.g., an older version of a living person, the formation of a new star). That which exists is real; existence is reality.

In proposition B, I mean knowledge as knowledge of that which exists, and not the kind of “knowledge” that arises from misperception, hallucination, erroneous deduction, lying, and so on. Much of what is called scientific knowledge is “knowledge” of the latter kind because, as scientists know (when they aren’t advocates), scientific knowledge is provisional. Proposition B implies that knowledge is something that human beings and other living organisms possess, to widely varying degrees of complexity. (A flower may “know” that the Sun is in a certain direction, but not in the same way that a human being knows it.) In what follows, I assume the perspective of human beings, including various compilations of knowledge resulting from human endeavors. (Aside: Knowledge is self-referential, in that it exists and is known to exist.)

An example of proposition A is the claim that there is a falling tree (it exists), even if no one sees, hears, or otherwise detects the tree falling. An example of proposition B is the converse of Cogito, ergo sum (“I think, therefore I am”); namely: I am, therefore I (a sentient being) am able to know that I am (exist).

Here’s a simple illustration of proposition A. You have a coin in your pocket, though I can’t see it. The coin is, and its existence in your pocket doesn’t depend on my act of observing it. You may not even know that there is a coin in your pocket. But it exists — it is — as you will discover later when you empty your pocket.

Here’s another one. Earth spins on its axis, even though the “average” person perceives it only indirectly in the daytime (by the apparent movement of the Sun) and has no easy way of perceiving it (without the aid of a Foucault pendulum) when it is dark or when asleep. Sunrise (or at least a diminution of darkness) is a simple bit of evidence for the reality of Earth spinning on its axis without our having perceived it.

Now for a somewhat more sophisticated illustration of proposition A. One interpretation of quantum mechanics is that a sub-atomic particle (really an electromagnetic phenomenon) exists in an indeterminate state until an observer measures it, at which time its state is determinate. There’s no question that the particle exists independently of observation (knowledge of the particle’s existence), but its specific characteristic (quantum state) is determined by the act of observation. Does this mean that existence of a specific kind depends on knowledge? No. It means that observation determines the state of the particle, which can then be known. Observation precedes knowledge, even if the gap is only infinitesimal. (A clear-cut case is the autopsy of a dead person to determine his cause of death. The autopsy didn’t cause the person’s death, but came after it as an act of observation.)

Regarding proposition B, there are known knowns, known unknowns, unknown unknowns, and unknown “knowns”. Examples:

Known knowns (real knowledge = true statements about existence) — The experiences of a conscious, sane, and honest person: I exist; I am eating; I had a dream last night; etc. (Recollections of details and events, however, are often mistaken, especially with the passage of time.)

Known unknowns (provisional statements of fact; things that must be or have been but which are not in evidence) — Scientific theories, hypotheses, data upon which these are based, and conclusions drawn from them. The immediate causes of the deaths of most persons who have died since the advent of homo sapiens. The material process by which the universe came to be (i.e., what happened to cause the Big Bang, if there was a Big Bang).

Unknown unknowns (things that exist but are unknown to anyone) — Almost everything about the universe.

Unknown “knowns” (delusions and outright falsehoods accepted by some persons as facts) — Frauds, scientific and other. The apparent reality of a dream.

Regarding unknown “knowns”, one might dream of conversing with a dead person, for example. The conversation isn’t real, only the dream is. And it is real only to the dreamer. But it is real, nevertheless. And the brain activity that causes a dream is real even if the person in whom the activity occurs has no perception or memory of a dream. A dream is analogous to a movie about fictional characters. The movie is real but the fictional characters exist only in the script of the movie and the movie itself. The actors who play the fictional characters are themselves, not the fictional characters.

There is a fine line between known unknowns (provisional statements of fact) and unknown “knowns” (delusions and outright falsehoods). The former are statements about existence that are made in good faith. The latter are self-delusions of some kind (e.g., the apparent reality of a dream as it occurs), falsehoods that acquire the status of “truth” (e.g., George Washington’s false teeth were made of wood), or statements of “fact” that are made in bad faith (e.g., adjusting the historic temperature record to make the recent past seem warmer relative to the more distant past).

The moral of the story is that a doubting Thomas is a wise person.

Wicked Problems: The Pretense of Rationality

Arnold Kling points to a paper by Horst W. J. Rittel and Melvin M. Webber, “Dilemmas in a General Theory of Planning” (Policy Sciences, June 1973). As Kling says, the paper is “notable for the way in which it describes — in 1973 — the fallibility of experts relative to technocratic expectations”.

Among the authors’ many insights are these about government planning:

The kinds of problems that planners deal with — societal problems — are inherently different from the problems that scientists and perhaps some classes of engineers deal with. Planning problems are inherently wicked.

As distinguished from problems in the natural sciences, which are definable and separable and may have solutions that are findable, the problems of governmental planning — and especially those of social or policy planning — are ill-defined; and they rely upon elusive political judgment for resolution. (Not “solution.” Social problems are never solved. At best they are only re-solved — over and over again.) Permit us to draw a cartoon that will help clarify the distinction we intend.

The problems that scientists and engineers have usually focused upon are mostly “tame” or “benign” ones. As an example, consider a problem of mathematics, such as solving an equation; or the task of an organic chemist in analyzing the structure of some unknown compound; or that of the chessplayer attempting to accomplish checkmate in five moves. For each the mission is clear. It is clear, in turn, whether or not the problems have been solved.

Wicked problems, in contrast, have neither of these clarifying traits; and they include nearly all public policy issues — whether the question concerns the location of a freeway, the adjustment of a tax rate, the modification of school curricula, or the confrontation of crime….

In the sciences and in fields like mathematics, chess, puzzle-solving or mechanical engineering design, the problem-solver can try various runs without penalty. Whatever his outcome on these individual experimental runs, it doesn’t matter much to the subject-system or to the course of societal affairs. A lost chess game is seldom consequential for other chess games or for non-chess-players.

With wicked planning problems, however, every implemented solution is consequential. It leaves “traces” that cannot be undone. One cannot build a freeway to see how it works, and then easily correct it after unsatisfactory performance. Large public works are effectively irreversible, and the consequences they generate have long half-lives. Many people’s lives will have been irreversibly influenced, and large amounts of money will have been spent — another irreversible act. The same happens with most other large-scale public works and with virtually all public-service programs. The effects of an experimental curriculum will follow the pupils into their adult lives.

Rittel and Webber address a subject about which I know a lot, from first-hand experience — systems analysis. This is a loose discipline in which mathematical tools are applied to broad and seemingly intractable problems in an effort to arrive at “optimal” solutions to those problems. In fact, as Rittel and Webber say:

With arrogant confidence, the early systems analysts pronounced themselves ready to take on anyone’s perceived problem, diagnostically to discover its hidden character, and then, having exposed its true nature, skillfully to excise its root causes. Two decades of experience have worn the self-assurances thin. These analysts are coming to realize how valid their model really is, for they themselves have been caught by the very same diagnostic difficulties that troubled their clients.

Remember, that was written in 1973, a scant five years after Robert Strange McNamara — that supreme rationalist — left the Pentagon, having discovered that the Vietnam War wasn’t amenable to systems analysis. McNamara’s demise as secretary of defense also marked the demise of the power that had been wielded by his Systems Analysis Office (though it lives on under a different name, having long since been pushed down the departmental hierarchy).

My own disillusionment with systems analysis came to a head at about the same time as Rittel and Webber published their paper. A paper that I wrote in 1981 (much to the consternation of my colleagues in the defense-analysis business) was an outgrowth of a memorandum that I had written in 1975 to the head of the defense think-tank where I worked. Here is the crux of the 1981 paper:

Aside from a natural urge for certainty, faith in quantitative models of warfare springs from the experience of World War II, when they seemed to lead to more effective tactics and equipment. But the foundation of this success was not the quantitative methods themselves. Rather, it was the fact that the methods were applied in wartime. Morse and Kimball put it well [in Methods of Operations Research (1946)]:

Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. [p. 10]

Contrast this attitude with the attempts of analysts for the past twenty years to evaluate weapons, forces, and strategies with abstract models of combat. However elegant and internally consistent the models, they have remained as untested and untestable as the postulates of theology.

There is, of course, no valid test to apply to a warfare model. In peacetime, there is no enemy; in wartime, the enemy’s actions cannot be controlled….

Lacking pertinent data, an analyst is likely to resort to models of great complexity. Thus, if useful estimates of detection probabilities are unavailable, the detection process is modeled; if estimates of the outcomes of dogfights are unavailable, aerial combat is reduced to minutiae. Spurious accuracy replaces obvious inaccuracy; untestable hypotheses and unchecked calibrations multiply apace. Yet the analyst claims relative if not absolute accuracy, certifying that he has identified, measured, and properly linked, a priori, the parameters that differentiate weapons, forces, and strategies.

In the end, “reasonableness” is the only defense of warfare models of any stripe.

It is ironic that analysts must fall back upon the appeal to intuition that has been denied to military men — whose intuition at least flows from a life-or-death incentive to make good guesses when choosing weapons, forces, or strategies.

This generalizes to government planning of almost every kind, at every level, and certainly to the perpetually recurring — and badly mistaken — belief that an entire economy can be planned and its produce “equitably” distributed according to needs rather than abilities.

(For much more in this vein, see the posts listed at “Modeling, Science, and ‘Reason’“. See also “Why I Am Bunkered in My Half-Acre of Austin“.)

Not-So-Random Thoughts (XXV)

“Not-So-Random Thoughts” is an occasional series in which I highlight writings by other commentators on varied subjects that I have addressed in the past. Other entries in the series can be found at these links: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI, XXII, XXIII, and XXIV. For more in the same style, see “The Tenor of the Times” and “Roundup: Civil War, Solitude, Transgenderism, Academic Enemies, and Immigration“.

CONTENTS

The Real Unemployment Rate and Labor-Force Participation

Is Partition Possible?

Still More Evidence for Why I Don’t Believe in “Climate Change”

Transgenderism, Once More

Big, Bad Oligopoly?

Why I Am Bunkered in My Half-Acre of Austin

“Government Worker” Is (Usually) an Oxymoron


The Real Unemployment Rate and Labor-Force Participation

There was much celebration (on the right, at least) when it was announced that the official unemployment rate, as of November, is only 3.5 percent, and that 266,000 jobs were added to the employment rolls (see here, for example). The exultation is somewhat overdone. Yes, things would be much worse if Obama’s anti-business rhetoric and policies still prevailed, but Trump is pushing a big boulder of deregulation uphill.

In fact, the real unemployment rate is a lot higher than the official figure. I refer you to “Employment vs. Big Government and Disincentives to Work“. It begins with this:

The real unemployment rate is several percentage points above the nominal rate. Officially, the unemployment rate stood at 3.5 percent as of November 2019. Unofficially — but in reality — the unemployment rate was 9.4 percent.

The explanation is that the labor-force participation rate has declined drastically since peaking in January 2000. When the official unemployment rate is adjusted to account for that decline (and for a shift toward part-time employment), the result is a considerably higher real unemployment rate.
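Here is a rough sketch of that kind of participation adjustment (my illustrative arithmetic, with rounded participation figures; it ignores the part-time adjustment mentioned above):

```python
# A rough participation adjustment (illustrative, rounded figures): treat those
# who left the labor force since participation peaked as unemployed, then
# recompute the unemployment rate. The part-time adjustment is ignored here.
def adjusted_unemployment_rate(official_rate_pct: float,
                               current_participation_pct: float,
                               peak_participation_pct: float) -> float:
    unemployed = official_rate_pct / 100 * current_participation_pct
    dropouts = peak_participation_pct - current_participation_pct
    return 100 * (unemployed + dropouts) / peak_participation_pct

# Official rate ~3.5%; participation ~63.2% (late 2019) vs. ~67.3% (Jan 2000 peak).
print(f"{adjusted_unemployment_rate(3.5, 63.2, 67.3):.1f}%")   # ~9.4%
```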

Arnold Kling recently discussed the labor-force participation rate:

[The] decline in male labor force participation among those without a college degree is a significant issue. Note that even though the unemployment rate has come down for those workers, their rate of labor force participation is still way down.

Economists on the left tend to assume that this is due to a drop in demand for workers at the low end of the skill distribution. Binder’s claim is that instead one factor in declining participation is an increase in the ability of women to participate in the labor market, which in turn lowers the advantage of marrying a man. The reduced interest in marriage on the part of women attenuates the incentive for men to work.

Could be. I await further analysis.


Is Partition Possible?

Angelo Codevilla peers into his crystal ball:

Since 2016, the ruling class has left no doubt that it is not merely enacting chosen policies: It is expressing its identity, an identity that has grown and solidified over more than a half century, and that it is not capable of changing.

That really does mean that restoring anything like the Founders’ United States of America is out of the question. Constitutional conservatism on behalf of a country a large part of which is absorbed in revolutionary identity; that rejects the dictionary definition of words; that rejects common citizenship, is impossible. Not even winning a bloody civil war against the ruling class could accomplish such a thing.

The logical recourse is to conserve what can be conserved, and for it to be done by, of, and for those who wish to conserve it. However much force of what kind may be required to accomplish that, the objective has to be conservation of the people and ways that wish to be conserved.

That means some kind of separation.

As I argued in “The Cold Civil War,” the natural, least stressful course of events is for all sides to tolerate the others going their own ways. The ruling class has not been shy about using the powers of the state and local governments it controls to do things at variance with national policy, effectively nullifying national laws. And they get away with it.

For example, the Trump Administration has not sent federal troops to enforce national marijuana laws in Colorado and California, nor has it punished persons and governments who have defied national laws on immigration. There is no reason why the conservative states, counties, and localities should not enforce their own view of the good.

Not even President Alexandria Ocasio-Cortez would order troops to shoot to re-open abortion clinics were Missouri or North Dakota, or any city, to shut them down. As Francis Buckley argues in American Secession: The Looming Breakup of the United States, some kind of separation is inevitable, and the options regarding it are many.

I would like to believe Mr. Codevilla, but I cannot. My money is on a national campaign of suppression, which will begin the instant that the left controls the White House and Congress. Shooting won’t be necessary, given the massive displays of force that will be ordered from the White House, ostensibly to enforce various laws, including but far from limited to “a woman’s right to an abortion”. Leftists must control everything because they cannot tolerate dissent.

As I say in “Leftism“,

Violence is a good thing if your heart is in the “left” place. And violence is in the hearts of leftists, along with hatred and the irresistible urge to suppress that which is hated because it challenges leftist orthodoxy — from climate skepticism and the negative effect of gun ownership on crime to the negative effect of the minimum wage and the causal relationship between Islam and terrorism.

There’s more in “The Subtle Authoritarianism of the ‘Liberal Order’“; for example:

[Quoting Sumantra Maitra] Domestically, liberalism divides a nation into good and bad people, and leads to a clash of cultures.

The clash of cultures was started and sustained by so-called liberals, the smug people described above. It is they who — firmly believing themselves to be smarter, on the side of science, and on the side of history — have chosen to be the aggressors in the culture war.

Hillary Clinton’s remark about Trump’s “deplorables” ripped the mask from the “liberal” pretension to tolerance and reason. Clinton’s remark was tantamount to a declaration of war against the self-appointed champion of the “deplorables”: Donald Trump. And war it has been, much of it waged by deep-state “liberals” who cannot entertain the possibility that they are on the wrong side of history, and who will do anything — anything — to make history conform to their smug expectations of it.


Still More Evidence for Why I Don’t Believe in “Climate Change”

This is a sequel to an item in the previous edition of this series: “More Evidence for Why I Don’t Believe in Climate Change“.

Dave Middleton debunks the claim that 50-year-old climate models correctly predicted the subsequent (but not steady) rise in the globe’s temperature (whatever that is). He then quotes a talk by Dr. John Christy of the University of Alabama-Huntsville Climate Research Center:

We have a change in temperature from the deep atmosphere over 37.5 years, we know how much forcing there was upon the atmosphere, so we can relate these two with this little ratio, and multiply it by the ratio of the 2x CO2 forcing. So the transient climate response is to say, what will the temperature be like if you double CO2– if you increase at 1% per year, which is roughly what the whole greenhouse effect is, and which is achieved in about 70 years. Our result is that the transient climate response in the troposphere is 1.1 °C. Not a very alarming number at all for a doubling of CO2. When we performed the same calculation using the climate models, the number was 2.31°C. Clearly, and significantly different. The models’ response to the forcing – their ∆t here, was over 2 times greater than what has happened in the real world….

There is one model that’s not too bad, it’s the Russian model. You don’t go to the White House today and say, “the Russian model works best”. You don’t say that at all! But the fact is they have a very low sensitivity to their climate model. When you look at the Russian model integrated out to 2100, you don’t see anything to get worried about. When you look at 120 years out from 1980, we already have 1/3 of the period done – if you’re looking out to 2100. These models are already falsified [emphasis added], you can’t trust them out to 2100, no way in the world would a legitimate scientist do that. If an engineer built an aeroplane and said it could fly 600 miles and the thing ran out of fuel at 200 and crashed, he might say: “I was only off by a factor of three”. No, we don’t do that in engineering and real science! A factor of three is huge in the energy balance system. Yet that’s what we see in the climate models….

Theoretical climate modelling is deficient for describing past variations. Climate models fail for past variations, where we already know the answer. They’ve failed hypothesis tests and that means they’re highly questionable for giving us accurate information about how the relatively tiny forcing … will affect the climate of the future.

For a lot more in this vein, see my pages “Climate Change” and “Modeling and Science“.


Transgenderism, Once More

Theodore Dalrymple (Anthony Daniels, M.D.) is on the case:

The problem alluded to in [a paper in the Journal of Medical Ethics] is, of course, the consequence of a fiction, namely that a man who claims to have changed sex actually has changed sex, and is now what used to be called the opposite sex. But when a man who claims to have become a woman competes in women’s athletic competitions, he often retains an advantage derived from the sex of his birth. Women competitors complain that this is unfair, and it is difficult not to agree with them….

Man being both a problem-creating and solving creature, there is, of course, a very simple way to resolve this situation: namely that men who change to simulacra of women should compete, if they must, with others who have done the same. The demand that they should suffer no consequences that they neither like nor want from the choices they have made is an unreasonable one, as unreasonable as it would be for me to demand that people should listen to me playing the piano though I have no musical ability. Thomas Sowell has drawn attention to the intellectual absurdity and deleterious practical consequences of the modern search for what he calls “cosmic justice.”…

We increasingly think that we live in an existential supermarket in which we pick from the shelf of limitless possibilities whatever we want to be. We forget that limitation is not incompatible with infinity; for example, that our language has a grammar that excludes certain forms of words, without in any way limiting the infinite number of meanings that we can express. Indeed, such limitation is a precondition of our freedom, for otherwise nothing that we said would be comprehensible to anybody else.

That is a tour de force typical of the good doctor. In the span of three paragraphs, he addresses matters that I have treated at length in “The Transgender Fad and Its Consequences” (and later in the previous edition of this series), “Positive Rights and Cosmic Justice“, and “Writing: A Guide” (among other entries at this blog).


Big, Bad Oligopoly?

Big Tech is giving capitalism a bad name, as I discuss in “Why Is Capitalism Under Attack from the Right?“, but it’s still the best game in town. Even oligopoly and its big brother, monopoly, aren’t necessarily bad. See, for example, my posts, “Putting in Some Good Words for Monopoly” and “Monopoly: Private Is Better than Public“. Arnold Kling makes the essential point here:

Do indicators of consolidation show us that the economy is getting less competitive or more competitive? The answer depends on which explanation(s) you believe to be most important. For example, if network effects or weak resistance to mergers are the main factors, then the winners from consolidation are quasi-monopolists that may be overly insulated from competition. On the other hand, if the winners are firms that have figured out how to develop and deploy software more effectively than their rivals, then the growth of those firms at the expense of rivals just shows us that the force of competition is doing its work.


Why I Am Bunkered in My Half-Acre of Austin

Randal O’Toole takes aim at the planners of Austin, Texas, and hits the bullseye:

Austin is one of the fastest-growing cities in America, and the city of Austin and Austin’s transit agency, Capital Metro, have a plan for dealing with all of the traffic that will be generated by that growth: assume that a third of the people who now drive alone to work will switch to transit, bicycling, walking, or telecommuting by 2039. That’s right up there with planning for dinner by assuming that food will magically appear on the table the same way it does in Hogwarts….

[W]hile Austin planners are assuming they can reduce driving alone from 74 to 50 percent, it is actually moving in the other direction….

Planners also claim that 11 percent of Austin workers carpool to work, an amount they hope to maintain through 2039. They are going to have trouble doing that as carpooling, in fact, only accounted for 8.0 percent of Austin workers in 2018.

Planners hope to increase telecommuting from its current 8 percent (which is accurate) to 14 percent. That could be difficult as they have no policy tools that can influence telecommuting.

Planners also hope to increase walking and bicycling from their current 2 and 1 percent to 4 and 5 percent. Walking to work is almost always greater than cycling to work, so it’s difficult to see how they plan to magic cycling to be greater than walking. This is important because cycling trips are longer than walking trips and so have more of a potential impact on driving.

Finally, planners want to increase transit from 4 to 16 percent. In fact, transit carried just 3.24 percent of workers to their jobs in 2018, down from 3.62 percent in 2016. Changing from 4 to 16 percent is an almost impossible 300 percent increase; changing from 3.24 to 16 is an even more formidable 394 percent increase. Again, reality is moving in the opposite direction from planners’ goals….

Planners have developed two main approaches to transportation. One is to estimate how people will travel and then provide and maintain the infrastructure to allow them to do so as efficiently and safely as possible. The other is to imagine how you wish people would travel and then provide the infrastructure assuming that to happen. The latter method is likely to lead to misallocation of capital resources, increased congestion, and increased costs to travelers.

Austin’s plan is firmly based on this second approach. The city’s targets of reducing driving alone by a third, maintaining carpooling at an already too-high number, and increasing transit by 394 percent are completely unrealistic. No American city has achieved similar results in the past two decades and none are likely to come close in the next two decades.

Well, that’s the prevailing mentality of Austin’s political leaders and various bureaucracies: magical thinking. Failure is piled upon failure (e.g., more bike lanes crowding out traffic lanes, a hugely wasteful curbside composting plan) because to admit failure would be to admit that the emperor has no clothes.

You want to learn more about Austin? You’ve got it:

Driving and Politics (1)
Life in Austin (1)
Life in Austin (2)
Life in Austin (3)
Driving and Politics (2)
AGW in Austin?
Democracy in Austin
AGW in Austin? (II)
The Hypocrisy of “Local Control”
Amazon and Austin


“Government Worker” Is (Usually) an Oxymoron

In “Good News from the Federal Government” I sarcastically endorse the move to grant all federal workers 12 weeks of paid parental leave:

The good news is that there will be a lot fewer civilian federal workers on the job, which means that the federal bureaucracy will grind a bit more slowly when it does the things that it does to screw up the economy.

The next day, Audacious Epigone put some rhetorical and statistical meat on the bones of my informed prejudice in “Join the Crooks and Liars: Get a Government Job!“:

That [the title of the post] used to be a frequent refrain on Radio Derb. Though the gag has been made emeritus, the advice is even better today than it was when the Derb introduced it. As he explains:

The percentage breakdown is private-sector 76 percent, government 16 percent, self-employed 8 percent.

So one in six of us works for a government, federal, state, or local.

Which group does best on salary? Go on: see if you can guess. It’s government workers, of course. Median earnings 52½ thousand. That’s six percent higher than the self-employed and fourteen percent higher than the poor shlubs toiling away in the private sector.

If you break down government workers into two further categories, state and local workers in category one, federal workers in category two, which does better?

Again, which did you think? Federal workers are way out ahead, median earnings 66 thousand. Even state and local government workers are ahead of us private-sector and self-employed losers, though.

Moral of the story: Get a government job! — federal for strong preference.

….

Though it is well known that a government gig is a gravy train, opinions of the people with said gigs are embarrassingly low, as the results from several additional survey questions show.

First, how frequently can the government be trusted “to do what’s right”? [“Just about always” and “most of the time” badly trail “some of the time”.]

….

Why can’t the government be trusted to do what’s right? Because the people who populate it are crooks and liars. Asked whether “hardly any”, “not many” or “quite a few” people in the federal government are crooked, the following percentages answered with “quite a few” (“not sure” responses, constituting 12% of the total, are excluded). [Responses of “quite a few” range from 59 percent to 77 percent across an array of demographic categories.]

….

Accompanying a strong sense of corruption is the perception of widespread incompetence. Presented with a binary choice between “the people running the government are smart” and “quite a few of them don’t seem to know what they are doing”, a solid majority chose the latter (“not sure”, at 21% of all responses, is again excluded). [The “don’t know what they’re doing” responses ranged from 55 percent to 78 percent across the same demographic categories.]

Are the skeptics right? Well, most citizens have had dealings with government employees of one kind and another. The “wisdom of crowds” certainly applies in this case.

“Human Nature” by David Berlinski: A Review

I became a fan of David Berlinski, who calls himself a secular Jew, after reading The Devil’s Delusion: Atheism and Its Scientific Pretensions, described on Berlinski’s personal website as

a biting defense of faith against its critics in the New Atheist movement. “The attack on traditional religious thought,” writes Berlinski, “marks the consolidation in our time of science as the single system of belief in which rational men and women might place their faith, and if not their faith, then certainly their devotion.”

Here is most of what I say in “Atheistic Scientism Revisited” about The Devil’s Delusion:

Berlinski, who knows far more about science than I do, writes with flair and scathing logic. I can’t do justice to his book, but I will try to convey its gist.

Before I do that, I must tell you that I enjoyed Berlinski’s book not only because of the author’s acumen and biting wit, but also because he agrees with me. (I suppose I should say, in modesty, that I agree with him.) I have argued against atheistic scientism in many blog posts (see below).

Here is my version of the argument against atheism in its briefest form (June 15, 2011):

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

There is no reasonable basis — and certainly no empirical one — on which to prefer atheism to deism or theism. Strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.

As for scientism, I call upon Friedrich Hayek:

[W]e shall, wherever we are concerned … with slavish imitation of the method and language of Science, speak of “scientism” or the “scientistic” prejudice…. It should be noted that, in the sense in which we shall use these terms, they describe, of course, an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed. The scientistic as distinguished from the scientific view is not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it. [The Counter Revolution Of Science]

As Berlinski amply illustrates and forcibly argues, atheistic scientism is rampant in the so-called sciences. I have reproduced below some key passages from Berlinski’s book. They are representative, but far from exhaustive (though I did nearly exhaust the publisher’s copy limit on the Kindle edition). I have forgone the block-quotation style for ease of reading, and have inserted triple asterisks to indicate (sometimes subtle) changes of topic. [Go to my post for the excerpts.]

On the strength of The Devil’s Delusion, I eagerly purchased Berlinski’s latest book, Human Nature. I have just finished it, and cannot summon great enthusiasm for it. Perhaps that is so because I expected a deep and extended examination of the title’s subject. What I got, instead, was a collection of 23 disjointed essays, gathered (more or less loosely) into seven parts.

Only the first two parts, “Violence” and “Reason”, seem to address human nature, but often tangentially. “Violence” deals specifically with violence as manifested (mainly) in war and murder. The first essay, titled “The First World War”, is a tour de force — a dazzling (and somewhat dizzying) reconstruction of the complex and multi-tiered layering of the historical precedent, institutional arrangements, and personalities that led to the outbreak of World War I.

Aha, I thought to myself, Berlinski is warming to his task, and will flesh out the relevant themes at which he hints in the first essay. And in the second and third essays, “The Best of Times” and “The Cause of War”, Berlinski flays the thesis of Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. But my post, “The Fallacy of Human Progress“, does a better job of it, thanks to the several critics and related sources quoted therein.

Berlinski ends the third essay with this observation:

Men go to war when they think that they can get away with murder.

Which is tantamount to an admission that Berlinski has no idea why men go to war, or would rather not venture an opinion on the subject. There is much of that kind of diffident agnosticism throughout the book, which is captured in his reply to an interviewer’s question in the book’s final essay:

Q. Would you share with us your hunches and suspicions about spiritual reality, the trend in your thinking, if not your firm beliefs?

A. No. Either I cannot or I will not. I do not know whether I am unable or unwilling. The question elicits in me a stubborn refusal. Please understand. It is not an issue of privacy. I have, after all, blabbed my life away: Why should I call a halt here? I suppose that I am by nature a counter-puncher. What I am able to discern of the religious experience often comes about reactively. V. S. Naipaul remarked recently that he found the religious life unthinkable.

He does? I was prompted to wonder. Why does he?

His attitude gives rise to mine. That is the way in which I wrote The Devil’s Delusion: Atheism and Its Scientific Pretensions.

Is there anything authentic in my religious nature?

Beats me.

That is a legitimate reply, but — I suspect — an evasive one.

Returning to the book’s ostensible subject, the second part, “Reason”, addresses human nature mainly in a negative way, that is, by pointing out (in various ways) flaws in the theory of evolution. There is no effort to knit the strands into a coherent theme. The following parts stray even further from the subject of the book’s title, and are even more loosely connected.

This isn’t to say that the book fails to entertain, for it often does that. For example, buried in a chapter on language, “The Recovery of Case”, is this remark:

Sentences used in the ordinary give-and-take of things are, of course, limited in their length. Henry James could not have constructed a thousand-word sentence without writing it down or suffering a stroke. Nor is recursion needed to convey the shock of the new. Four plain-spoken words are quite enough: Please welcome President Trump.

(I assume, given Berlinski’s track record for offending “liberal” sensibilities, that the italicized words refer to the shock of Trump’s being elected, and are not meant to disparage Trump.)

But the book also irritates, not only by its failure to deliver what the title seems to promise, but also by Berlinski’s proclivity for using the abstruse symbology of mathematical logic where words would do quite nicely and more clearly. In the same vein — showing off — is the penultimate essay, “A Conversation with Le Figaro“, which reproduces (after an introduction by Berlinski) a transcript of the interview — in French, with not a word of translation. Readers of the book will no doubt be more schooled in French than the typical viewer of prime-time TV fare, but many of them will be in my boat. My former fluency in spoken and written French has withered with time, and although I could still manage with effort to decipher the meaning of the transcript, it proved not to be worth the effort, so I gave up on it.

There comes a time when once-brilliant persons can summon flashes of their old, brilliant selves but can no longer emit a sustained ray of brilliance. Perhaps that is true of Berlinski. I hope not, and will give him another try if he gives writing another try.

“Hurricane Hysteria” and “Climate Hysteria”, Updated

In view of the persistent claims about the role of “climate change” as the cause of tropical cyclone activity (i.e., tropical storms and hurricanes), I have updated “Hurricane Hysteria“. The bottom line remains the same: Global measures of accumulated cyclone energy (ACE) do not support the view that there is a correlation between “climate change” and tropical cyclone activity.
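For readers unfamiliar with the metric: ACE is conventionally computed by summing the squares of a storm’s estimated maximum sustained wind speed (in knots), recorded at six-hour intervals while the system is at or above tropical-storm strength, and dividing by 10,000. Here is a minimal sketch; the wind records in it are hypothetical, not actual best-track data.

```python
# Minimal sketch of the conventional ACE calculation. The wind records below are
# hypothetical; real calculations use six-hourly best-track data for each storm.

def accumulated_cyclone_energy(six_hourly_winds_kt):
    """Sum of squared six-hourly maximum sustained winds (knots), counted only
    while the system is at or above tropical-storm strength (about 35 kt),
    divided by 10,000."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 35) / 10_000

# One hypothetical storm's six-hourly maximum sustained winds, in knots:
storm = [30, 35, 45, 60, 75, 70, 50, 40, 30]
print(f"ACE contribution: {accumulated_cyclone_energy(storm):.2f}")

# A basin's (or the globe's) seasonal ACE is the sum of such contributions over
# all of its storms.
```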

I have also updated “Climate Hysteria“, which borrows from “Hurricane Hysteria” but also examines climate patterns in Austin, Texas, where our local weather nazi peddles his “climate change” balderdash.

Climate Hysteria

UPDATED 01/23/20

Recent weather events have served to reinforce climate hysteria. There are the (usual) wildfires in California, which have nothing to do with “climate change” (e.g., this, this, and this), but you wouldn’t know it if you watch the evening news (which I don’t but impressionable millions do).

Closer to home, viewers have been treated to more of the same old propaganda from our local weather nazi, who proclaims it “nice” when daytime high temperatures are in the 60s and 70s, and who bemoans higher temperatures. (Why does he stay in Austin, then?) We watch him because when he isn’t proselytizing “climate change” he delivers the most detailed weather report available on Austin’s TV stations.

He was in “climate change” heaven when in September and part of October (2019) Austin endured a heat wave that saw many new high temperatures for the relevant dates. To top it off, tropical storm Imelda suddenly formed in mid-September near the gulf coast of Texas and inundated Houston. According to him, both events were due to “climate change”. Or were they just weather? My money’s on the latter.

Let’s take Imelda, which the weather nazi proclaimed to be an example of the kind of “extreme” weather event that will occur more often as “climate change” takes us in the direction of catastrophe. Those “extreme” weather events, when viewed globally (which is the only correct way to view them), aren’t occurring more often. This is from “Hurricane Hysteria“, which I have just updated to include statistics compiled as of today (11/19/19):

[T]he data sets for tropical cyclone activity that are maintained by the Tropical Meteorology Project at Colorado State University cover all six of the relevant ocean basins as far back as 1972. The coverage goes back to 1961 (and beyond) for all but the North Indian Ocean basin — which is by far the least active.

Here is NOAA’s reconstruction of ACE in the North Atlantic basin through November 19, 2019, which, if anything, probably understates ACE before the early 1960s:

The recent spikes in ACE are not unprecedented. And there are many prominent spikes that predate the late-20th-century temperature rise on which “warmism” is predicated. The trend from the late 1800s to the present is essentially flat. And, again, the numbers before the early 1960s must understate ACE.

Moreover, the metric of real interest is global cyclone activity; the North Atlantic basin is just a sideshow. Consider this graph of the annual values for each basin from 1972 through November 19, 2019:

Here’s a graph of stacked (cumulative) totals for the same period:

The red line is the sum of ACE for all six basins, including the Northwest Pacific basin; the yellow line is the sum of ACE for the next five basins, including the Northeast Pacific basin; etc.
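As an aside on how a stacked chart of that kind is built: each line is a running sum across basins, taken in a fixed order, so that the topmost line is the global total. The sketch below uses made-up annual values for illustration, the two Southern Hemisphere basin labels are my guesses, and pandas is assumed.

```python
import pandas as pd

# Hypothetical annual ACE values by basin (illustrative numbers only; the real
# series come from the Tropical Meteorology Project's records). Columns are
# ordered so that the most active basin, the Northwest Pacific, is added last.
ace = pd.DataFrame(
    {
        "North Indian": [15, 30, 20],
        "South Pacific": [70, 85, 60],
        "South Indian": [110, 95, 125],
        "North Atlantic": [90, 230, 60],
        "Northeast Pacific": [120, 140, 100],
        "Northwest Pacific": [310, 290, 330],
    },
    index=[2017, 2018, 2019],
)

# Running sums across basins: the last column (all six basins) corresponds to
# the "red line" of the graph; the next-to-last (all but the Northwest Pacific)
# to the "yellow line"; and so on down to the least active basin alone.
stacked = ace.cumsum(axis=1)
print(stacked)
```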

I have these observations about the numbers represented in the preceding graphs:

  • If one is a believer in CAGW (the G stands for global), it is a lie (by glaring omission) to focus on random, land-falling hurricanes hitting the U.S. or other parts of the Western Hemisphere.
  • The overall level of activity is practically flat between 1972 and 2019, with the exception of spikes that coincide with strong El Niño events.
  • There is nothing in the long-term record for the North Atlantic basin, which is probably understated before the early 1960s, to suggest that global activity in recent decades is unusually high.

Imelda was an outlier — an unusual event that shouldn’t be treated as a typical one. Imelda happened along in the middle of a heat wave and accompanying dry spell in central Texas. This random juxtaposition caused the weather nazi to drool in anticipation of climate catastrophe.

There are some problems with the weather nazi’s reaction to the heat wave. First, the global circulation models (GCMs) that forecast ever-rising temperatures have been falsified. (See the discussion of GCMs here.) Second, the heat wave and the dry spell should be viewed in perspective. Here, for example, are annualized temperature and rainfall averages for Austin, going back to the decade in which “global warming” began to register on the consciousnesses of climate hysterics:

[Charts: annualized averages of temperature and rainfall in Austin since 1970]

What do you see? I see a recent decline in Austin’s average temperature from the El Niño effect of 2015-2016. I also see a decline in rainfall that doesn’t come close to being as severe as the dozen or so declines that have occurred since 1970.

In fact, abnormal heat is to be expected when there is little rain and a lot of sunshine. Temperature data, standing by themselves, are of little use because of the pronounced urban-heat-island (UHI) effect (discussed here). Drawing on daily weather reports for Austin for the past five years, I find that Austin’s daily high temperature is significantly affected by rainfall, wind speed, wind direction, and cloud cover. For example (everything else being the same; a sketch of the kind of regression involved follows the list):

  • An additional inch of rainfall induces a temperature drop of 1.4 degrees F.
  • A wind of 10 miles an hour from the north induces a temperature drop of about 5.8 degrees F relative to a 10-mph wind from the south.
  • Going from 100-percent sunshine to 100-percent cloud cover induces a temperature drop of 0.5 degrees F. (The combined effect of an inch of rain and complete loss of sunshine is therefore 1.9 degrees F, even before other factors come into play.)
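For the curious, here is a minimal sketch of the kind of regression just described. The file name and column names are hypothetical stand-ins for my spreadsheet of daily weather reports, the wind-direction encoding is one plausible choice (not necessarily the one I used), and statsmodels is assumed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily records: one row per day, with the day's high temperature
# (deg F), rainfall (inches), wind speed (mph), wind direction (degrees, the
# direction the wind blows from), and cloud cover (fraction of the day, 0 to 1).
daily = pd.read_csv("austin_daily_weather.csv")  # placeholder file name

# One way to encode wind direction: a north-south component, so that a north
# wind and a south wind of equal speed take opposite signs.
daily["wind_ns"] = daily["wind_speed_mph"] * np.cos(np.radians(daily["wind_dir_deg"]))

predictors = sm.add_constant(daily[["rain_in", "wind_ns", "cloud_cover"]])
model = sm.OLS(daily["high_temp_f"], predictors).fit()
print(model.summary())

# Reading the coefficients: the one on rain_in plays the role of the "drop per
# additional inch of rainfall" in the list above; the one on wind_ns, applied to
# the 20-mph swing between a 10-mph north wind and a 10-mph south wind, plays
# the role of the second bullet; and the one on cloud_cover (0 to 1) the third.
```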

The combined effects of variations in rainfall, wind speed, wind direction, and cloud cover are far more than enough to account for the molehill temperature anomalies that “climate change” hysterics magnify into mountains of doom.

Further, there is no systematic bias in the estimates, as shown by the following plot of regression residuals:

[Chart: plot of the regression residuals]

Summer is the most predictable of the seasons; winter, the least predictable. Over- and under-estimates seem to be evenly distributed across the seasons. In other words, the regression doesn’t mask changes in seasonal temperature patterns. Note, however, that this fall (which includes both the hot spell and cold snap discussed above) has been dominated by below-normal temperatures, not above-normal ones.

Anyway, during the spell of hot, dry weather in the first half of the meteorological fall of 2019, the maximum temperature went as high as 16 degrees F above the 30-year average for the relevant date. Two days later, the maximum temperature was 12 degrees F below the 30-year average for the relevant date. Those extremes tell us a lot about the variability of weather in central Texas and nothing about “climate change”.

However, the 16-degree deviation above the 30-year average was far from the greatest during the period under analysis; above-normal deviations have ranged as high as 26 degrees F above 30-year averages. By contrast, during the subsequent cold snap, deviations reached their lowest levels for the period under analysis. The down-side deviations (latter half of meteorological fall, 2019) are obvious in the preceding graph. The pattern suggests that, if anything, fall 2019 in Austin was abnormally cold rather than abnormally hot.

Winter 2019-2020 has started out on the warm side, but not abnormally so. Further, the warmth can be attributed in part to weak El Niño conditions.

What’s in a Trend?

I sometimes forget myself and use “trend”. Then I see a post like “Trends for Existing Home Sales in the U.S.” and am reminded why “trend” is a bad word. This graphic is the centerpiece of the post:

There was a sort of upward trend from June 2016 until August 2017, but the trend stopped. So it wasn’t really a trend, was it? (I am here using “trend” in the way that it seems to be used generally, that is, as a direction of movement into the future.)

After a sort of flat period, the trend turned upward again, didn’t it? No, because the trend had been broken, so a new trend began in the early part of 2018. But it was a trend only until August 2018, when it became a different trend — mostly downward for several months.

Is there a flat trend now, or, as the author of the piece puts it, “Existing home sales in the U.S. largely continued treading water through August 2019”? Well, that was the trend — temporary pattern is a better descriptor — but it doesn’t mean that the value of existing-home sales will continue to hover around $1.5 trillion.

The moral of the story: The problem with “trend” is that it implies a direction of movement into the future — a future that will look a lot like the past. But a trend is only a trend for as long as it lasts. And who knows how long it will last, that is, when it will stop?

I hope to start a trend toward the disuse of “trend”. My hope is futile.