Insidious Algorithms

Michael Anton inveighs against Big Tech and pseudo-libertarian collaborators in “Dear Avengers of the Free Market” (Law & Liberty, October 5, 2018):

Beyond the snarky attacks on me personally and insinuations of my “racism”—cut-and-paste obligatory for the “Right” these days—the responses by James Pethokoukis and (especially) John Tamny to my Liberty Forum essay on Silicon Valley are the usual sorts of press releases that are written to butter up the industry and its leaders in hopes of . . . what?…

… I am accused of having “a fundamental problem with capitalism itself.” Guilty, if by that is meant the reservations about mammon-worship first voiced by Plato and Aristotle and reinforced by the godfather of capitalism, Adam Smith, in his Theory of Moral Sentiments (the book that Smith himself indicates is the indispensable foundation for his praise of capitalism in the Wealth of Nations). Wealth is equipment, a means to higher ends. In the middle of the last century, the Right rightly focused on unjust impediments to the creation and acquisition of wealth. But conservatism, lacking a deeper understanding of the virtues and of human nature—of what wealth is for—eventually ossified into a defense of wealth as an end in itself. Many, including apparently Pethokoukis and Tamny, remain stuck in that rut to this day and mistake it for conservatism.

Both critics were especially appalled by my daring to criticize modern tech’s latest innovations. Who am I to judge what people want to sell or buy? From a libertarian standpoint, of course, no one may pass judgment. Under this view, commerce has no moral content…. To homo economicus any choice that does not inflict direct harm is ipso facto not subject to moral scrutiny, yet morality is defined as the efficient, non-coercive, undistorted operation of the market.

Naturally, then, Pethokoukis and Tamny scoff at my claim that Silicon Valley has not produced anything truly good or useful in a long time, but has instead turned to creating and selling things that are actively harmful to society and the soul. Not that they deny the claim, exactly. They simply rule it irrelevant. Capitalism has nothing to do with the soul (assuming the latter even exists). To which I again say: When you elevate a means into an end, that end—in not being the thing it ought to be—corrupts its intended beneficiaries.

There are morally neutral economic goods, like guns, which can be used for self-defense or murder. But there are economic goods that undermine morality (e.g., abortion, “entertainment” that glamorizes casual sex) and fray the bonds of mutual trust and respect that are necessary to civil society. (How does one trust a person who treats life and marriage as if they were unworthy of respect?)

There’s a particular aspect of Anton’s piece that I want to emphasize here: Big Tech’s alliance with the left in its skewing of information.

Continuing with Anton:

The modern tech information monopoly is a threat to self-government in at least three ways. First its … consolidation of monopoly power, which the techies are using to guarantee the outcome they want and to suppress dissent. It’s working….

Second, and related, is the way that social media digitizes pitchforked mobs. Aristocrats used to have to fear the masses; now they enable, weaponize, and deploy them…. The grandees of Professorville and Sand Hill Road and Outer Broadway can and routinely do use social justice warriors to their advantage. Come to that, hundreds of thousands of whom, like modern Red Guards, don’t have to be mobilized or even paid. They seek to stifle dissent and destroy lives and careers for the sheer joy of it.

Third and most important, tech-as-time-sucking-frivolity is infantilizing and enstupefying society—corroding the reason-based public discourse without which no republic can exist….

But all the dynamism and innovation Tamny and Pethokoukis praise only emerge from a bedrock of republican virtue. This is the core truth that libertarians seem unable to appreciate. Silicon Valley is undermining that virtue—with its products, with its tightening grip on power, and with its attempt to reengineer society, the economy, and human life.

I am especially concerned here with the practice of tinkering with AI algorithms to perpetuate bias in the name of eliminating it (e.g., here). The bias to be perpetuated, in this case, is blank-slate bias: the mistaken belief that there are no inborn differences between blacks and whites or men and women. It is that belief which underpins affirmative action in employment, which penalizes the innocent, reduces the quality of products and services, and incurs heavy enforcement costs; “head start” programs, which waste taxpayers’ money; and “diversity” programs at universities, which penalize the innocent and set blacks up for failure. Those programs and many more of their ilk are generally responsible for heightening social discord rather than reducing it.

In the upside-down world of “social justice” an algorithm is considered biased if it is unbiased; that is, if it reflects the real correlations between race, sex, and ability in certain kinds of endeavors. Charles Murray’s Human Diversity demolishes the blank-slate theory with reams and reams of facts. Social-justice warriors will hate it, just as they hated The Bell Curve, even though they won’t read the later book, just as they didn’t read the earlier one.

Flattening the Curve

What does it mean to “flatten the curve”, in the context of an epidemic? Here is Willis Eschenbach’s interpretation:

What does “flattening the curve” mean? It is based on the hope that our interventions will slow the progress of the disease. By doing so, we won’t get as many deaths on any given day. And this means less strain on a city or a country’s medical system.

Be clear, however, that this is just a delaying tactic. Flattening the curve does not reduce the total number of cases or deaths. It just spreads out the same amount over a longer time period. Valuable indeed, critical at times, but keep in mind that these delaying interventions do not reduce the reach of the infection. Unless your health system is so overloaded that people are needlessly dying, the final numbers stay the same.

I beg to differ. Or, at least, to offer a different interpretation: Flattening the curve — reducing its peak — can also reduce the total number of persons who are potentially exposed to the disease, thereby reducing the total number of persons who contract it. How does that work? It requires not only reducing the peak of the curve — the maximum number of active cases — but also reducing the length of the curve — the span of time in which a population is potentially exposed to the contagion.

Consider someone who has randomly contracted a virus from a non-human source. If that person is a hermit, the virus may kill him, or he may recover from whatever illness it causes him, but he can’t infect anyone else. Low peak, short duration.

Here’s an example of a higher peak but a relatively short duration: A person who randomly contracts a virus from a non-human source then infects many other persons in quick succession by breathing near them, sneezing on them, touching them, etc., in a short span of time (e.g., meeting and greeting at a business function). But … if the originator of the contagion and those whom he initially infects are identified and quarantined quickly enough, the contagion will spread no further.

In both cases, the “curve” will peak at some number lower than the number that would have been reached without isolation or quarantine. Moreover, and more important, the curve will terminate (go to zero) more quickly than it would have without isolation or quarantine.

The real world is more complicated than either of my examples because almost all humans aren’t hermits, and infections usually aren’t detected until after an infected person has had many encounters with uninfected persons. But the principle remains the same: The total number of persons who contract a contagious disease can be reduced through isolation and quarantine — and the sooner isolation and quarantine take effect, the lower the total number of infected persons.
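
For readers who want to see the logic in miniature, here is a bare-bones SIR (susceptible-infected-recovered) simulation with hypothetical parameters, not a calibrated model. Lowering the transmission rate, which is what isolation and quarantine accomplish, reduces not only the peak number of active cases but also the cumulative number of persons ever infected.

```python
# Minimal SIR sketch (hypothetical parameters, not a calibrated model).
# A lower beta stands in for isolation/quarantine; compare peak and final size.

def run_sir(beta, gamma=0.1, n=1_000_000, i0=10, days=730):
    s, i, r = n - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, r  # peak active cases, cumulative recovered (~ total ever infected)

for beta in (0.30, 0.15):  # "no intervention" vs. "isolation/quarantine"
    peak, total = run_sir(beta)
    print(f"beta={beta}: peak={peak:,.0f}  total infected={total:,.0f}")
```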

Climate Hysteria: An Update

I won’t repeat all of “Climate Hysteria“, which is long but worth a look if you haven’t read it. A key part of it is a bit out of date, specifically, the part about the weather in Austin, Texas.

Last fall’s heat wave in Austin threw our local weather nazi into a tizzy. Of course it did; he proclaims it “nice” when daytime high temperatures are in the 60s and 70s, and complains about anything above 80. I wonder why he stays in Austin.

The weather nazi is also a warmist. He was in “climate change” heaven when, on several days in September and October, the official weather station in Austin reported new record highs for the relevant dates. To top it off, tropical storm Imelda suddenly formed in mid-September near the gulf coast of Texas and inundated Houston. According to the weather nazi, both events were due to “climate change”. Or were they just weather? My money’s on the latter.

Let’s take Imelda, which the weather nazi proclaimed to be an example of the kind of “extreme” weather event that will occur more often as “climate change” takes us in the direction of catastrophe. Those “extreme” weather events, when viewed globally (which is the only correct way to view them) aren’t occurring more often, as I document in “Hurricane Hysteria“.

Here, I want to focus on Austin’s temperature record.

There are some problems with the weather nazi’s reaction to the heat wave. First, the global circulation models (GCMs) that forecast ever-rising temperatures have been falsified. (See the discussion of GCMs here.) Second, the heat wave and the dry spell should be viewed in perspective. Here, for example, are annualized temperature and rainfall averages for Austin, going back to the decade in which “global warming” began to register on the consciousnesses of climate hysterics:

What do you see? I see a recent decline in Austin’s average temperature from the El Nino effect of 2015-2016. I also see a decline in rainfall that doesn’t come close to being as severe as the dozen or so declines that have occurred since 1970.

Here’s a plot of the relationship between monthly average temperature and monthly rainfall during the same period. The 1-month lag in temperature gives the best fit. The equation is statistically significant, despite the low correlation coefficient (r = 0.24) because of the large number of observations.

Abnormal heat is to be expected when there is little rain and a lot of sunshine. In other words, temperature data, standing by themselves, are of little use in explaining a region’s climate.
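
A side note on the statistics: a low correlation coefficient can still be statistically significant because the t-statistic for a correlation grows with the square root of the number of observations. The sketch below makes the point with assumed, illustrative sample sizes, not the actual count of monthly observations.

```python
# Why r = 0.24 can be "significant": t grows with sqrt(n).
# The sample sizes here are illustrative, not the actual number of observations.
import math

def t_stat(r, n):
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

r = 0.24
for n in (30, 100, 600):  # 600 ~ 50 years of monthly data (assumed)
    print(f"n={n:4d}: t = {t_stat(r, n):.2f}")
# Roughly, |t| > 2 is significant at the 5-percent level for large n.
```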

Drawing on daily weather reports for the past five-and-a-half years in Austin, I find that Austin’s daily high temperature is significantly affected by rainfall, wind speed, wind direction, and cloud cover. For example (everything else being the same):

  • An additional inch of rainfall induces a temperature drop of 1.4 degrees F.
  • A wind of 10 miles an hour from the north induces a temperature drop of about 5.9 degrees F relative to a 10-mph wind from the south.
  • Going from 100-percent sunshine to 100-percent cloud cover induces a temperature drop of 0.3 degrees F.
  • The combined effect of an inch of rain and complete loss of sunshine is therefore 1.7 degrees F, even before other factors come into play (e.g., rain accompanied by wind from the north or northwest, as is often the case in Austin).

The combined effects of variations in rainfall, wind speed, wind direction, and cloud cover are far more than enough to account for the molehill temperature anomalies that “climate change” hysterics magnify into mountains of doom.
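
For the curious, here is a minimal sketch of the kind of multiple regression described above, with daily high temperature regressed on rainfall, a signed north-south wind component (standing in for wind speed and direction), and cloud cover. The data are synthetic and the coefficients are made up for illustration; the sketch shows the mechanics, not the actual estimates.

```python
# Sketch of a regression like the one described above. The data are synthetic
# (randomly generated), so the recovered coefficients are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # roughly 5.5 years of daily observations

rain = rng.exponential(0.05, n)    # inches
wind_ns = rng.normal(0, 8, n)      # mph, positive = from the north (assumed sign convention)
clouds = rng.uniform(0, 1, n)      # fraction of cloud cover
noise = rng.normal(0, 5, n)

# Made-up "true" relationship used to generate the synthetic temperatures.
temp = 85 - 1.4 * rain - 0.3 * wind_ns - 0.3 * clouds + noise

X = np.column_stack([np.ones(n), rain, wind_ns, clouds])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
for name, b in zip(["intercept", "rain", "wind_ns", "clouds"], coef):
    print(f"{name:10s} {b:+.2f}")
```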

Further, there is no systematic bias in the estimates, as shown by the following plot of regression residuals:


Meteorological seasons: tan = fall (September, October, November); blue = winter (December, January, February); green = spring (March, April, May); ochre = summer (June, July, August). Values greater than zero = underestimates; values less than zero = overestimates.

Summer is the most predictable of the seasons; winter, the least predictable; spring and fall are in between. However, the fall of 2019 (which included both the hot spell and cold snap discussed above) was dominated by overestimated (below-normal) temperatures, not above-normal ones, despite the weather nazi’s hysteria to the contrary. In fact, the below-normal temperatures were the most below-normal of those recorded during the five-and-a-half year period.

The winter of 2019-2020 was on the warm side, but not abnormally so (cf. the winter of 2016-2017). Further, the warming in the winter of 2019-2020 can be attributed in part to weak El Nino conditions.

Lurking behind all of this, and swamping all other causes of the (slightly upward) temperature trend is a pronounced urban-heat-island (UHI) effect (discussed here). What the weather nazi really sees (but doesn’t understand or won’t admit) is that Austin is getting warmer mainly because of rapid population growth (50 percent since 2000) and all that has ensued — more buildings, more roads, more vehicles on the move, and less green space.

The moral of the story: If you really want to do something about the weather, move to a climate that you find more congenial (hint, hint).

COVID-19 in the United States: Latest Projections

UPDATED 04/21/20

Relying on data collected through April 20, I project about 1.3 million cases and 90,000 deaths by the middle of August. Those numbers are 50,000 and 6,000 higher than the projections that I published three days ago. However, the new numbers are based on statistical relationships that, I believe, don’t fully reflect the declining numbers of new cases and deaths discussed below. If the numbers continue to decline rapidly, the estimates of total cases and deaths should decline, too.

Figure 1 plots total cases and deaths — actual and projected — by date.

Figure 1

Source and notes: Derived from statistics reported by States and the District of Columbia and compiled in Template:2019–20 coronavirus pandemic data/United States medical cases at Wikipedia. The statistics exclude cases and deaths occurring among repatriated persons (i.e., Americans returned from other countries or cruise ships).

But there is good news in the actual and projected numbers of new cases and new deaths (Figure 2).

Figure 2

As shown in Figure 3, the daily percentage changes in new cases and deaths have been declining generally since March 19.

Figure 3

But there is, of course, a lag between new cases and new deaths. The best fit is a 7-day lag (Figure 4).

Figure 4
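
For what it is worth, here is a minimal sketch of how a best-fitting lag can be found: compute the correlation between new cases and new deaths at each candidate lag and keep the lag with the highest correlation. The series below are synthetic stand-ins, not the actual data.

```python
# Find the lag (in days) at which new deaths correlate best with new cases.
# Synthetic series stand in for the actual data.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(60)
new_cases = 1000 * np.exp(-0.5 * ((days - 30) / 10) ** 2) + rng.normal(0, 20, 60)
true_lag = 7
new_deaths = 0.05 * np.roll(new_cases, true_lag) + rng.normal(0, 2, 60)
new_deaths[:true_lag] = 0  # no deaths before the outbreak starts

best = max(
    range(0, 15),
    key=lambda lag: np.corrcoef(new_cases[: len(days) - lag], new_deaths[lag:])[0, 1],
)
print("best-fitting lag (days):", best)
```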

Figure 5 shows the tight relationship between new cases and new deaths when Figure 3 is adjusted to introduce the 7-day lag.

Figure 5

Figure 6 shows the similarly tight relationship after removing 8 “hot spots” which have the highest incidence of cases per capita — Connecticut, District of Columbia, Louisiana, Massachusetts, Michigan, New Jersey, New York, and Rhode Island.

Figure 6

Figures 5 and 6 give me added confidence that the crisis has peaked.

100,000 to 240,000 COVID-19 Deaths in the U.S.?

LATEST VERSION HERE.

COVID-19 in the United States

LATEST VERSION HERE.

COVID-19 Update and Prediction

I have updated my statistical analysis here. Note especially the continued decline in the daily rate of new cases and the low rate of new deaths per new case.

Now for the prediction. Assuming that lockdowns, quarantines, and social distancing continue for at least two more weeks, and assuming that there isn’t a second wave of COVID-19 because of early relaxation or re-infection:

  • The total number of COVID-19 cases in the U.S. won’t exceed 250,000.
  • The total number of U.S. deaths attributed to COVID-19 won’t exceed 10,000.

In any event, the final numbers will be well below the totals for the swine-flu epidemic of 2009-10 (59 million and 12,000) but you won’t hear about it from the leftist media.

UPDATE 03/31/20: Some sources are reporting higher numbers of U.S. cases and deaths than the source that I am using for my analysis and predictions. It is therefore possible that the final numbers (according to some sources) will be higher than my predictions. But I will be in the ballpark.

UPDATE 04/10/20: See my revised estimate.

Avoid New York (and New Yorkers) Like the Plague

The current infection rate in New York is 100 times the rate in Texas. I live in Texas, I’m happy to say.

Putting COVID-19 in Perspective for Americans

GO HERE FOR THE MOST RECENT VERSION.

What Is Natural?

Back-to-nature types, worriers about what “humans are doing to the planet”, and neurotics (leftists) generally take a dim view of the artifacts of human existence. There’s a lot of hypocrisy in that view, of course, mixed with goodly doses of envy and virtue-signalling.

A lot of the complaints heard from back-to-nature types, etc., are really esthetic. They just don’t like to envision a pipeline running across some open country, far away and well out of sight; ditto a distant and relatively small cluster of oil rigs. Such objections would seem to conflict with their preference for ugly, bird-killing, highway-straddling, skyline-cluttering wind farms. Chalk it up to economically ignorant indoctrination in the “evils” of fossil fuels.

At any rate, what makes a pipeline, an oil rig, or even a wind farm any less natural than the artifacts constructed by lower animals to promote their survival? The main difference between the artifacts of the lower animals — bird’s nests, bee hives, beaver dams, underground burrows, etc. — and those of human beings is that human artifacts are far more ingenious and complex. Moreover, because humans are far more ingenious than the lower animals, the number of different human artifacts is far greater than the number arising from any other species, or even all of them taken together.

Granted, there are artifacts that aren’t necessary to the survival of human beings (e.g., movies, TV, and electric guitars), but those aren’t the ones that the back-to-nature crowd and its allies find objectionable. No, they object to the artifacts that enable the back-to-earthers, etc., to live in comfort.

In sum, a pipeline is just as natural as a bird’s nest. Remember that the next time you encounter an aging “flower child”. And ask her if a wind farm is more natural than a pipeline, and how she would like it if she had to forage for firewood to stay warm and cook her meals.

Lesson from the Diamond Princess: Panic Is Unwarranted

As of today there have been 696 reported cases of coronavirus among the 3,711 passengers and crew members who were aboard the Diamond Princess cruise ship. The ship was quarantined in early February; all passengers and crew had disembarked by March 1. As of March 1, there were 6 deaths among those infected, and the number hasn’t grown (as of today).

Given the ease with which the virus could be transmitted on a ship, the Diamond Princess may represent an upper limit on contagion and mortality (the arithmetic is checked in the sketch after this list):

  • an infection rate of 19 percent of those onboard the ship
  • a fatality rate of less than 1 percent among those known to have contracted the disease
  • a fatality rate of less than 2/10 of 1 percent of the population potentially exposed to the disease.
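
A quick check of the arithmetic behind those three rates, using the figures given above:

```python
# Check the Diamond Princess rates quoted above.
onboard = 3711
cases = 696
deaths = 6

print(f"infection rate: {cases / onboard:.1%}")                     # ~18.8%, i.e., about 19 percent
print(f"fatality rate among cases: {deaths / cases:.2%}")           # ~0.86%, i.e., less than 1 percent
print(f"fatality rate among all onboard: {deaths / onboard:.2%}")   # ~0.16%, less than 2/10 of 1 percent
```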

Conclusion: There is no question that coronavirus represents a significant threat to life, health, and economic activity. But the panic being fomented by the media and opportunistic politicians is unwarranted.

Preliminary Thoughts about “Theism, Atheism, and Big Bang Cosmology”

I am in the early sections of Theism, Atheism, and Big Bang Cosmology by William Lane Craig and Quentin Smith, but I am beginning to doubt that it will inform my views about cosmology. (These are spelled out with increasing refinement here, here, here, and here.) The book consists of alternating essays by Craig and Smith, in which Craig defends the classical argument for a creation (the Kalām cosmological argument) against Smith’s counter-arguments.

For one thing, Smith — who takes the position that the universe wasn’t created — seems to pin a lot on the belief prevalent at the time of the book’s publication (1993) that the universe was expanding but at a decreasing rate. It is now believed generally among physicists that the universe is expanding at an accelerating rate. I must therefore assess Smith’s argument in light of the current belief.

For another thing, Craig and Smith (in the early going, at least) seem to be bogged down in an arcane argument about the meaning of infinity. Craig takes the position, understandably, that an actual infinity is impossible in the physical world. Smith, of course, takes the opposite position. The problem here is that Craig and Smith argue about what is an empirical (if empirically undecidable) matter by resorting to philosophical and mathematical concepts. The observed and observable facts are on Craig’s side: Nothing is known to have happened in the material universe without an antecedent material cause. Philosophical and mathematical arguments about the nature of infinity seem beside the point.

For a third thing, Craig seems to pin a lot on the Big Bang, while Smith is at pains to deny its significance. Smith seems to claim that the Big Bang wasn’t the beginning of the universe; rather, the universe was present in the singularity from which the Big Bang arose. The singularity might therefore have existed all along.

Craig, on the other hand, sees the hand of God in the Big Bang. The presence of the singularity (the original clump of material “stuff”) had to have been created so that the Big Bang could follow. That’s all well and good, but what was God doing before the Big Bang, that is, in the infinite span of time before 15 billion years ago? (Is it presumptuous of me to ask?) And why should the Big Bang prove God’s existence any more than, say, a universe that came into being at an indeterminate time? The necessity of God (or some kind of creator) arises from the known character of the universe: material effects follow from material causes, which cannot cause themselves. In short, Craig pins too much on the Big Bang, and his argument would collapse if the Big Bang is found to be a figment of observational error.

There’s much more to come, I hope.

Fifty-Two Weeks on the Learning Curve

I first learned of the learning curve when I was a newly hired analyst at a defense think-tank. A learning curve

is a graphical representation of how an increase in learning (measured on the vertical axis) comes from greater experience (the horizontal axis); or how the more someone (or something) performs a task, the better they [sic] get at it.

In my line of work, the learning curve figured importantly in the estimation of aircraft procurement costs. There was a robust statistical relationship between the cost of making a particular model of aircraft and the cumulative number of such aircraft produced. Armed with the learning-curve equation and the initial production cost of an aircraft, it was easy to estimate the cost of producing any number of the same aircraft.
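
To make the mechanics concrete, here is a minimal sketch of the standard log-linear learning curve, in which unit cost falls by a fixed percentage every time cumulative output doubles. The first-unit cost and the 80-percent slope are hypothetical, not figures from any actual aircraft program.

```python
# Standard log-linear learning curve: cost of unit n = T1 * n**b,
# where b = log2(slope) and "slope" is the cost ratio per doubling of output.
# T1 and the 80% slope below are hypothetical.
import math

T1 = 100.0     # cost of the first unit (say, $100 million)
slope = 0.80   # each doubling of cumulative output cuts unit cost to 80%
b = math.log2(slope)

def unit_cost(n):
    return T1 * n ** b

def cumulative_cost(n):
    return sum(unit_cost(i) for i in range(1, n + 1))

print(f"unit 1: {unit_cost(1):.1f}, unit 2: {unit_cost(2):.1f}, unit 4: {unit_cost(4):.1f}")
print(f"total cost of 100 units: {cumulative_cost(100):,.0f}")
```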

The learning curve figures prominently in tests that purport to measure intelligence. Two factors that may explain the Flynn effect — a secular rise in average IQ scores — are aspects of learning: schooling and test familiarity, and a generally more stimulating environment in which one learns more. The Flynn effect doesn’t measure changes in intelligence; it measures changes in IQ scores resulting from learning. There is an essential difference between ignorance and stupidity. The Flynn effect is about the former, not the latter.

Here’s a personal example of the Flynn effect in action. I’ve been doing The New York Times crossword puzzle online since February 18, 2019. I have completed all 365 puzzles published by TNYT from that date through the puzzle for February 17, 2020, with generally increasing ease:

The fitted curve is a decaying exponential, which means that progress continues but at an ever-slower rate, as is typical of a learning curve.
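
To illustrate what a decaying exponential implies (improvement continues, but each week's gain is smaller than the last), here is a minimal sketch with made-up parameters; it is not fitted to the actual solving times described above.

```python
# Decaying-exponential learning curve with made-up parameters:
# time(week) = floor + amplitude * exp(-rate * week)
import math

floor, amplitude, rate = 15.0, 30.0, 0.06  # minutes; hypothetical values

def solve_time(week):
    return floor + amplitude * math.exp(-rate * week)

for week in (0, 10, 26, 52):
    print(f"week {week:2d}: {solve_time(week):5.1f} min "
          f"(gain over prior week: {solve_time(week - 1) - solve_time(week):.2f} min)")
```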

The difficulty of the puzzle varies from day to day, with Monday puzzles being the easiest and Sunday puzzles being the hardest (as measured by time to complete):

For each day of the week, my best time is more recent than my worst time, and the trend of time to complete is downward for every day of the week (as reflected in the first graph above). In fact:

  • My worst times were all recorded in March through June of last year.
  • Today I tied my best time for a Monday puzzle.
  • I set or tied my best time for the Wednesday, Friday, and Sunday puzzles in the last three weeks.
  • In the same three weeks, my times for the Tuesday puzzle have twice been only a minute higher than my best.

I know that I haven’t become more intelligent in the last 52 weeks. And being several decades past the peak of my intelligence, I am certain that it diminishes steadily, though in tiny increments (I hope). I have simply become more practiced at doing the crossword puzzle because I have learned a lot about it. For example, certain clues recur with some frequency, and they always have the same answers. Clues often have double meanings, which are hard to decipher at first, but which become easier to decipher with practice. There are other subtleties, all of which reflect the advantages of learning.

In a nutshell, I am no smarter than I was 52 weeks ago, but my ignorance of the TNYT crossword puzzle has diminished significantly.

(See also “More about Intelligence“, “Selected Writings about Intelligence“, and especially “Intelligence“, in which I quote experts about the Flynn Effect.)

Psychiatry Is a Disorder

I happened upon “Schizoid personality disorder” (SPD) at Wikipedia, and wondered why it is a disorder, that is, a “bad thing”. A footnote in the article leads to a summary of SPD. Here are some excerpts:

A person with schizoid personality disorder often:

  • Appears distant and detached
  • Avoids social activities that involve emotional closeness with other people
  • Does not want or enjoy close relationships, even with family members….

People with schizoid personality disorder often do well in relationships that don’t focus on emotional closeness. They tend to be better at handling relationships that focus on:

  • Work
  • Intellectual activities
  • Expectations

In other words, persons who “suffer” from SPD may in fact be highly productive in pursuits that demand (and reward) prowess in science, technology, engineering, and mathematics — a.k.a. STEM. But because they don’t conform strictly to a psychiatric definition of normality they are said to have a disorder.

What is the psychiatric definition of a normal personality? This is from a page at the website of the American Psychiatric Association (APA):

Personality is the way of thinking, feeling and behaving that makes a person different from other people. An individual’s personality is influenced by experiences, environment (surroundings, life situations) and inherited characteristics. A person’s personality typically stays the same over time. A personality disorder is a way of thinking, feeling and behaving that deviates from the expectations of the culture, causes distress or problems functioning, and lasts over time.

There are 10 specific types of personality disorders. Personality disorders are long-term patterns of behavior and inner experiences that differs significantly from what is expected. The pattern of experience and behavior begins by late adolescence or early adulthood and causes distress or problems in functioning. Without treatment, personality disorders can be long-lasting. Personality disorders affect at least two of these areas:

  • Way of thinking about oneself and others
  • Way of responding emotionally
  • Way of relating to other people
  • Way of controlling one’s behavior

Types of Personality Disorders

  • Antisocial personality disorder: a pattern of disregarding or violating the rights of others. A person with antisocial personality disorder may not conform to social norms, may repeatedly lie or deceive others, or may act impulsively.
  • Avoidant personality disorder: a pattern of extreme shyness, feelings of inadequacy and extreme sensitivity to criticism. People with avoidant personality disorder may be unwilling to get involved with people unless they are certain of being liked, be preoccupied with being criticized or rejected, or may view themselves as not being good enough or socially inept.
  • Borderline personality disorder: a pattern of instability in personal relationships, intense emotions, poor self-image and impulsivity. A person with borderline personality disorder may go to great lengths to avoid being abandoned, have repeated suicide attempts, display inappropriate intense anger or have ongoing feelings of emptiness.
  • Dependent personality disorder: a pattern of needing to be taken care of and submissive and clingy behavior. People with dependent personality disorder may have difficulty making daily decisions without reassurance from others or may feel uncomfortable or helpless when alone because of fear of inability to take care of themselves.
  • Histrionic personality disorder: a pattern of excessive emotion and attention seeking. People with histrionic personality disorder may be uncomfortable when they are not the center of attention, may use physical appearance to draw attention to themselves or have rapidly shifting or exaggerated emotions.
  • Narcissistic personality disorder: a pattern of need for admiration and lack of empathy for others. A person with narcissistic personality disorder may have a grandiose sense of self-importance, a sense of entitlement, take advantage of others or lack empathy.
  • Obsessive-compulsive personality disorder: a pattern of preoccupation with orderliness, perfection and control. A person with obsessive-compulsive personality disorder may be overly focused on details or schedules, may work excessively not allowing time for leisure or friends, or may be inflexible in their morality and values. (This is NOT the same as obsessive compulsive disorder.)
  • Paranoid personality disorder: a pattern of being suspicious of others and seeing them as mean or spiteful. People with paranoid personality disorder often assume people will harm or deceive them and don’t confide in others or become close to them.
  • Schizoid personality disorder: being detached from social relationships and expressing little emotion. A person with schizoid personality disorder typically does not seek close relationships, chooses to be alone and seems to not care about praise or criticism from others.
  • Schizotypal personality disorder: a pattern of being very uncomfortable in close relationships, having distorted thinking and eccentric behavior. A person with schizotypal personality disorder may have odd beliefs or odd or peculiar behavior or speech or may have excessive social anxiety.

Holy mackerel, Andy, there’s hardly a “normal” person alive. And certainly none of them is a psychiatrist. The very compilation of a list of personality traits that one considers “abnormal” is a manifestation of narcissistic personality disorder and obsessive-compulsive personality disorder, at the very least.

Other than an actual disease of the brain, there is only one kind of mental “disorder” that requires treatment — criminal behavior. And the proper treatment for it is the application of criminal justice, sans psychiatric intervention. (See the articles by Thomas Szasz at FEE.)


Related posts:

I’ll Never Understand the Insanity Defense
Does Capital Punishment Deter Homicide?
Libertarian Twaddle about the Death Penalty
Crime and Punishment
Saving the Innocent?
Saving the Innocent?: Part II
More Punishment Means Less Crime
More About Crime and Punishment
More Punishment Means Less Crime: A Footnote
Clear Thinking about the Death Penalty
Let the Punishment Fit the Crime
A Precedent for the Demise of the Insanity Defense?
Another Argument for the Death Penalty
Less Punishment Means More Crime
Clear Thinking about the Death Penalty
What Is Justice?
Why Stop at the Death Penalty?
In Defense of Capital Punishment
Lock ‘Em Up
Free Will, Crime, and Punishment
Stop, Frisk, and Save Lives
Poverty, Crime, and Big Government
Crime Revisited
Rush to Judgment?
Stop, Frisk, and Save Lives II

Intuition vs. Rationality

To quote myself:

[I]ntuition [is] a manifestation of intelligence, not a cause of it. To put it another way, intuition is not an emotion; it is the opposite of emotion.

Intuition is reasoning at high speed. For example, a skilled athlete knows where and when to make a move (e.g., whether and where to swing at a pitched ball) because he subconsciously makes the necessary calculations, which he could not make consciously in the split-second that is available to him once the pitcher releases the ball.

Intuition is an aspect of reasoning (rationality) that is missing from “reason” — the cornerstone of the Enlightenment. The Enlightenment’s proponents and defenders are always going on about the power of logic applied to facts, and how that power brought mankind (or mankind in the West, at least) out of the benighted Middle Ages (via the Renaissance) and into the light of Modernity.

But “reason” of the kind associated with the Enlightenment is of the plodding variety, whereby “truth” is revealed at the conclusion of deliberate, conscious processes (e.g., the scientific method). Yet those processes, as I point out below, are susceptible to error because they rest on errors and assumptions that are hidden from view — often wittingly, as in the case of “climate change“.

Science, for all of its value to mankind, requires abstraction from reality. That is to say, it is reductionist. A good example is the arbitrary division of continuous social and scientific processes into discrete eras (the Middle Ages, the Renaissance, the Enlightenment, etc.). This ought to be a warning that mere abstractions are often, and mistakenly, taken as “facts”.

Reductionism makes it possible to “prove” almost anything by hiding errors and assumptions (wittingly or not) behind labels. Thus: x + y = z only when x and y are strictly defined and commensurate. Otherwise, x and y cannot be summed, or their summation can result in many correct values other than z. Further, as in the notable case of “climate change”, it is easy to assume (from bias or error) that z is determined only by x and y, when there are good reasons to believe that it is also determined by other factors: known knowns, known unknowns, and unknown unknowns.

Such things happen because human beings are ineluctably emotional and biased creatures, and usually unaware of their emotions and biases. The Enlightenment’s proponents and defenders are no more immune from emotion and bias than the “lesser” beings whom they presume to lecture about rationality.

The plodding search for “answers” is, furthermore, inherently circumscribed because it dismisses or minimizes the vital role played by unconscious deliberation — to coin a phrase. How many times have you found the answer to a question, a problem, or a puzzle by putting aside your deliberate, conscious search for the answer, only to have it come to you in a “Eureka!” moment sometime later (perhaps after a nap or a good night’s sleep)? That’s your brain at work in ways that aren’t well understood.

This process (to put too fine a word on it) is known as combinatorial play. Its importance has been acknowledged by many creative persons. Combinatorial play can be thought of as slow-motion intuition, where the brain takes some time to assemble (unconsciously) existing knowledge into an answer to a question, a problem, or a puzzle.

There is also fast-motion intuition, an example of which I invoked in the quotation at the top of this post: the ability of a batter to calculate in a split-second where a pitch will be when it reaches him. Other examples abound, including such vital ones as the ability of drivers to maneuver lethal objects in infinitely varied and often treacherous conditions. Much is made of the number of fatal highway accidents; too little is made of their relative infrequency given the billions of daily opportunities for their occurrence.  Imagine the carnage if drivers relied on plodding “reason” instead of fast-motion intuition.

The plodding version of “reason” that has been celebrated since the Enlightenment is therefore just one leg of a triad: thinking quickly and unconsciously, thinking somewhat less quickly and unconsciously, and thinking slowly and consciously.

Wasn’t it ever thus? Of course it was. Which means that the Enlightenment and its sequel unto the present day have merely fetishized one mode of dealing with the world and its myriad uncertainties. I would have said arriving at the truth, but it is well known (except by ignorant science-idolaters) that scientific “knowledge” is provisional and ever-changing. (Just think of the many things that were supposed to be bad for you but are now supposed to be good for you, and conversely.)

I am not a science-denier by any means. But scientific “knowledge” must be taken with copious quantities of salt because it is usually inadequate in the face of messy reality. A theoretical bridge, for example, may hold up under theoretical conditions, but it is likely to collapse when built in the real world, where there is much uncertainty about present and future conditions (e.g., the integrity of materials, adherence to best construction practices, soil conditions, the cumulative effects of traffic). An over-built bridge — the best kind — is one that allows wide margins of error for such uncertainties. The same is true of planes, trains, automobiles, buildings, and much else that our lives depend on. All such things fail less frequently than in the past not only because of the advance of knowledge but also because greater material affluence enables the use of designs and materials that afford wider margins of error.

In any event, too little credit is given to the other legs of reason’s triad: fast-motion and slow-motion intuition. Any good athlete, musician, or warrior will attest to the value of the former. I leave it to Albert Einstein to attest to the value of the latter:

combinatory [sic] play seems to be the essential feature in productive thought — before there is any connection with logical construction in words or other kinds of signs which can be communicated to others….

[F]ull consciousness is a limit case which can never be fully accomplished. This seems to me connected with the fact called the narrowness of consciousness.


Related page and category:

Modeling and Science
Science and Understanding

“Climate Hysteria”, Updated

Here.

An Implication of Universal Connectedness

A couple of provocative pieces (here and here) led me to an observation that is so obvious that I had never articulated it: Everything is connected.

The essence of it is captured in the first verse of “Dem Bones“:

Toe bone connected to the foot bone
Foot bone connected to the heel bone
Heel bone connected to the ankle bone
Ankle bone connected to the shin bone
Shin bone connected to the knee bone
Knee bone connected to the thigh bone
Thigh bone connected to the hip bone
Hip bone connected to the back bone
Back bone connected to the shoulder bone
Shoulder bone connected to the neck bone
Neck bone connected to the head bone …

The final line gets to the bottom of it (if you are a deist or theist):

… Now hear the word of the Lord.

But belief in the connectedness of everything in the universe doesn’t depend on one’s cosmological views. In fact, a strict materialist who holds that the universe “just is” will be obliged to believe in universal connectedness because everything is merely physical (or electromagnetic), and one thing touches other things, which touch other things, ad infinitum. (The “touching” may be done by light.)

Connectedness isn’t necessarily causality; it can be mere observation, though observation — which involves the electromagnetic spectrum — is thought to be causal with respect to sub-atomic particles. As I put it here:

There’s no question that [a] particle exists independently of observation (knowledge of the particle’s existence), but its specific characteristic (quantum state) is determined by the act of observation. Does this mean that existence of a specific kind depends on knowledge? No. It means that observation determines the state of the particle, which can then be known.

I should have been clear about the meaning of “determine”, as used above. In my view it isn’t that observation causes the quantum state that is observed. Rather, observation measures (determines) the quantum state at the instant of measurement. Here’s an illustration of what I mean:

A die is rolled. Its “quantum state” is determined (measured) when it stops rolling and is readily observed. But the quantum state isn’t caused by the act of observation. In fact, the quantum state can be observed (determined, measured) — but not caused — at any point while the die is rolling by viewing it, sufficiently magnified, with the aid of a high-speed camera.

Connectedness can also involve causality, of course. The difficult problem — addressed at the two links in the opening paragraph — is sorting out causal relationships given so much connectedness. Another term for the problem is “causal density“, which leads to spurious findings:

When there are many factors that have an impact on a system, statistical analysis yields unreliable results. Computer simulations give you exquisitely precise unreliable results. Those who run such simulations and call what they do “science” are deceiving themselves.
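
A minimal simulation makes the point: correlate a purely random outcome with dozens of purely random candidate “causes” and a few of them will clear a conventional significance threshold by chance alone. Everything in the sketch is synthetic.

```python
# With many candidate causes and noisy data, some "significant" relationships
# appear by chance alone. All data below are random.
import numpy as np

rng = np.random.default_rng(42)
n_obs, n_factors = 100, 50
X = rng.normal(size=(n_obs, n_factors))
y = rng.normal(size=n_obs)  # outcome unrelated to every factor by construction

# |r| above this threshold corresponds roughly to p < 0.05 (two-sided) for n = 100.
threshold = 1.984 / np.sqrt(n_obs - 2 + 1.984**2)
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_factors)])
print(f"factors 'significant' at the 5% level: {(np.abs(r) > threshold).sum()} of {n_factors}")
```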

Is it any wonder that “scientists” tell us that one thing or another is bad for us, only to tell us at a later date that it isn’t bad for us and may even be good for us? This is a widely noted phenomenon (though insufficiently documented). But its implications for believing in, say, anthropogenic global warming seem to be widely ignored — most unfortunately.

(See also “Predicting ‘Global’ Temperatures — An Analogy with Baseball“.)

Existence and Knowledge

Philosophical musings by a non-philosopher which are meant to be accessible to other non-philosophers.

Ontology is the branch of philosophy that deals with existence. Epistemology is the branch of philosophy that deals with knowledge.

I submit (with no claim to originality) that existence (what really is) is independent of knowledge (proposition A), but knowledge is impossible without existence (proposition B).

In proposition A, I include in existence those things that exist in the present, those things that have existed in the past, and the processes (happenings) by which past existences either end (e.g., death of an organism, collapse of a star) or become present existences (e.g., an older version of a living person, the formation of a new star). That which exists is real; existence is reality.

In proposition B, I mean knowledge as knowledge of that which exists, and not the kind of “knowledge” that arises from misperception, hallucination, erroneous deduction, lying, and so on. Much of what is called scientific knowledge is “knowledge” of the latter kind because, as scientists know (when they aren’t advocates), scientific knowledge is provisional. Proposition B implies that knowledge is something that human beings and other living organisms possess, to widely varying degrees of complexity. (A flower may “know” that the Sun is in a certain direction, but not in the same way that a human being knows it.) In what follows, I assume the perspective of human beings, including various compilations of knowledge resulting from human endeavors. (Aside: Knowledge is self-referential, in that it exists and is known to exist.)

An example of proposition A is the claim that there is a falling tree (it exists), even if no one sees, hears, or otherwise detects the tree falling. An example of proposition B is the converse of Cogito, ergo sum, I think, therefore I am; namely, I am, therefore I (a sentient being) am able to know that I am (exist).

Here’s a simple illustration of proposition A. You have a coin in your pocket, though I can’t see it. The coin is, and its existence in your pocket doesn’t depend on my act of observing it. You may not even know that there is a coin in your pocket. But it exists — it is — as you will discover later when you empty your pocket.

Here’s another one. Earth spins on its axis, even though the “average” person perceives it only indirectly in the daytime (by the apparent movement of the Sun) and has no easy way of perceiving it (without the aid of a Foucault pendulum) when it is dark or when asleep. Sunrise (or at least a diminution of darkness) is a simple bit of evidence for the reality of Earth spinning on its axis without our having perceived it.

Now for a somewhat more sophisticated illustration of proposition A. One interpretation of quantum mechanics is that a sub-atomic particle (really an electromagnetic phenomenon) exists in an indeterminate state until an observer measures it, at which time its state is determinate. There’s no question that the particle exists independently of observation (knowledge of the particle’s existence), but its specific characteristic (quantum state) is determined by the act of observation. Does this mean that existence of a specific kind depends on knowledge? No. It means that observation determines the state of the particle, which can then be known. Observation precedes knowledge, even if the gap is only infinitesimal. (A clear-cut case is the autopsy of a dead person to determine his cause of death. The autopsy didn’t cause the person’s death, but came after it as an act of observation.)

Regarding proposition B, there are known knowns, known unknowns, unknown unknowns, and unknown “knowns”. Examples:

Known knowns (real knowledge = true statements about existence) — The experiences of a conscious, sane, and honest person: I exist; am eating; I had a dream last night; etc. (Recollections of details and events, however, are often mistaken, especially with the passage of time.)

Known unknowns (provisional statements of fact; things that must be or have been but which are not in evidence) — Scientific theories, hypotheses, data upon which these are based, and conclusions drawn from them. The immediate causes of the deaths of most persons who have died since the advent of homo sapiens. The material process by which the universe came to be (i.e., what happened to cause the Big Bang, if there was a Big Bang).

Unknown unknowns (things that exist but are unknown to anyone) — Almost everything about the universe.

Unknown “knowns” (delusions and outright falsehoods accepted by some persons as facts) — Frauds, scientific and other. The apparent reality of a dream.

Regarding unknown “knowns”, one might dream of conversing with a dead person, for example. The conversation isn’t real, only the dream is. And it is real only to the dreamer. But it is real, nevertheless. And the brain activity that causes a dream is real even if the person in whom the activity occurs has no perception or memory of a dream. A dream is analogous to a movie about fictional characters. The movie is real but the fictional characters exist only in the script of the movie and the movie itself. The actors who play the fictional characters are themselves, not the fictional characters.

There is a fine line between known unknowns (provisional statements of fact) and unknown “knowns” (delusions and outright falsehoods). The former are statements about existence that are made in good faith. The latter are self-delusions of some kind (e.g., the apparent reality of a dream as it occurs), falsehoods that acquire the status of “truth” (e.g., George Washington’s false teeth were made of wood), or statements of “fact” that are made in bad faith (e.g., adjusting the historic temperature record to make the recent past seem warmer relative to the more distant past).

The moral of the story is that a doubting Thomas is a wise person.

Wicked Problems: The Pretense of Rationality

Arnold Kling points to a paper by Horst W. J. Rittel and Melvin M. Webber, “Dilemmas in a General Theory of Planning” (Policy Sciences, June 1973). As Kling says, the paper is “notable for the way in which it describes — in 1973 — the fallibility of experts relative to technocratic expectations”.

Among the authors’ many insights are these about government planning:

The kinds of problems that planners deal with — societal problems — are inherently different from the problems that scientists and perhaps some classes of engineers deal with. Planning problems are inherently wicked.

As distinguished from problems in the natural sciences, which are definable and separable and may have solutions that are findable, the problems of governmental planning — and especially those of social or policy planning — are ill-defined; and they rely upon elusive political judgment for resolution. (Not “solution.” Social problems are never solved. At best they are only re-solved — over and over again.) Permit us to draw a cartoon that will help clarify the distinction we intend.

The problems that scientists and engineers have usually focused upon are mostly “tame” or “benign” ones. As an example, consider a problem of mathematics, such as solving an equation; or the task of an organic chemist in analyzing the structure of some unknown compound; or that of the chessplayer attempting to accomplish checkmate in five moves. For each the mission is clear. It is clear, in turn, whether or not the problems have been solved.

Wicked problems, in contrast, have neither of these clarifying traits; and they include nearly all public policy issues — whether the question concerns the location of a freeway, the adjustment of a tax rate, the modification of school curricula, or the confrontation of crime….

In the sciences and in fields like mathematics, chess, puzzle-solving or mechanical engineering design, the problem-solver can try various runs without penalty. Whatever his outcome on these individual experimental runs, it doesn’t matter much to the subject-system or to the course of societal affairs. A lost chess game is seldom consequential for other chess games or for non-chess-players.

With wicked planning problems, however, every implemented solution is consequential. It leaves “traces” that cannot be undone. One cannot build a freeway to see how it works, and then easily correct it after unsatisfactory performance. Large public-works are effectively irreversible, and the consequences they generate have long half-lives. Many people’s lives will have been irreversibly influenced, and large amounts of money will have been spent — another irreversible act. The same happens with most other large-scale public works and with virtually all public-service programs. The effects of an experimental curriculum will follow the pupils into their adult lives.

Rittel and Webber address a subject about which I know a lot, from first-hand experience — systems analysis. This is a loose discipline in which mathematical tools are applied to broad and seemingly intractable problems in an effort to arrive at “optimal” solutions to those problems. In fact, as Rittel and Webber say:

With arrogant confidence, the early systems analysts pronounced themselves ready to take on anyone’s perceived problem, diagnostically to discover its hidden character, and then, having exposed its true nature, skillfully to excise its root causes. Two decades of experience have worn the self-assurances thin. These analysts are coming to realize how valid their model really is, for they themselves have been caught by the very same diagnostic difficulties that troubled their clients.

Remember, that was written in 1973, a scant five years after Robert Strange McNamara — that supreme rationalist — left the Pentagon, having discovered that the Vietnam War wasn’t amenable to systems analysis. McNamara’s demise as secretary of defense also marked the demise of the power that had been wielded by his Systems Analysis Office (though it lives on under a different name, having long since been pushed down the departmental hierarchy).

My own disillusionment with systems analysis came to a head at about the same time as Rittel and Webber published their paper. A paper that I wrote in 1981 (much to the consternation of my colleagues in the defense-analysis business) was an outgrowth of a memorandum that I had written in 1975 to the head of the defense think-tank where I worked. Here is the crux of the 1981 paper:

Aside from a natural urge for certainty, faith in quantitative models of warfare springs from the experience of World War II, when they seemed to lead to more effective tactics and equipment. But the foundation of this success was not the quantitative methods themselves. Rather, it was the fact that the methods were applied in wartime. Morse and Kimball put it well [in Methods of Operations Research (1946)]:

Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. [p. 10]

Contrast this attitude with the attempts of analysts for the past twenty years to evaluate weapons, forces, and strategies with abstract models of combat. However elegant and internally consistent the models, they have remained as untested and untestable as the postulates of theology.

There is, of course, no valid test to apply to a warfare model. In peacetime, there is no enemy; in wartime, the enemy’s actions cannot be controlled….

Lacking pertinent data, an analyst is likely to resort to models of great complexity. Thus, if useful estimates of detection probabilities are unavailable, the detection process is modeled; if estimates of the outcomes of dogfights are unavailable, aerial combat is reduced to minutiae. Spurious accuracy replaces obvious inaccuracy; untestable hypotheses and unchecked calibrations multiply apace. Yet the analyst claims relative if not absolute accuracy, certifying that he has identified, measured, and properly linked, a priori, the parameters that differentiate weapons, forces, and strategies.

In the end, “reasonableness” is the only defense of warfare models of any stripe.

It is ironic that analysts must fall back upon the appeal to intuition that has been denied to military men — whose intuition at least flows from a life-or-death incentive to make good guesses when choosing weapons, forces, or strategies.

This generalizes to government planning of almost every kind, at every level, and certainly to the perpetually recurring — and badly mistaken — belief that an entire economy can be planned and its produce “equitably” distributed according to needs rather than abilities.

(For much more in this vein, see the posts listed at “Modeling, Science, and ‘Reason’“. See also “Why I Am Bunkered in My Half-Acre of Austin“.)

Not-So-Random Thoughts (XXV)

“Not-So-Random Thoughts” is an occasional series in which I highlight writings by other commentators on varied subjects that I have addressed in the past. Other entries in the series can be found at these links: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI, XXII, XXIII, and XXIV. For more in the same style, see “The Tenor of the Times” and “Roundup: Civil War, Solitude, Transgenderism, Academic Enemies, and Immigration“.

CONTENTS

The Real Unemployment Rate and Labor-Force Participation

Is Partition Possible?

Still More Evidence for Why I Don’t Believe in “Climate Change”

Transgenderism, Once More

Big, Bad Oligopoly?

Why I Am Bunkered in My Half-Acre of Austin

“Government Worker” Is (Usually) an Oxymoron


The Real Unemployment Rate and Labor-Force Participation

There was much celebration (on the right, at least) when it was announced that the official unemployment rate, as of November, is only 3.5 percent, and that 266,000 jobs were added to the employment rolls (see here, for example). The exultation is somewhat overdone. Yes, things would be much worse if Obama’s anti-business rhetoric and policies still prevailed, but Trump is pushing a big boulder of deregulation uphill.

In fact, the real unemployment rate is a lot higher than the official figure. I refer you to “Employment vs. Big Government and Disincentives to Work“. It begins with this:

The real unemployment rate is several percentage points above the nominal rate. Officially, the unemployment rate stood at 3.5 percent as of November 2019. Unofficially — but in reality — the unemployment rate was 9.4 percent.

The explanation is that the labor-force participation rate has declined drastically since peaking in January 2000. When the official unemployment rate is adjusted to account for that decline (and for a shift toward part-time employment), the result is a considerably higher real unemployment rate.
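
Here is a minimal sketch, in Python, of that kind of adjustment. The population, labor-force, and benchmark-participation figures below are illustrative assumptions of roughly late-2019 magnitude, not the actual inputs behind the 9.4-percent estimate (which also accounts for the shift toward part-time work); they are included only to show the arithmetic.

```python
# Rough sketch: adjust the unemployment rate for the decline in labor-force
# participation. All figures are illustrative assumptions.

def adjusted_unemployment_rate(population, labor_force, unemployed,
                               benchmark_participation):
    """Count the 'missing' workers -- the gap between the labor force implied
    by the benchmark participation rate and the actual labor force -- as
    unemployed, then recompute the rate against the larger labor force."""
    potential_labor_force = population * benchmark_participation
    missing_workers = max(potential_labor_force - labor_force, 0.0)
    return (unemployed + missing_workers) / potential_labor_force

# Illustrative figures, in millions (roughly late-2019 magnitudes):
population = 259.9    # civilian noninstitutional population
labor_force = 164.4   # actual labor force (participation rate of about 63.2%)
unemployed = 5.8      # officially unemployed (about 3.5% of the labor force)
benchmark = 0.673     # participation rate at its January 2000 peak

rate = adjusted_unemployment_rate(population, labor_force, unemployed, benchmark)
print(f"{rate:.1%}")  # a bit over 9%, in the neighborhood of the figure quoted above
```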

Arnold Kling recently discussed the labor-force participation rate:

[The] decline in male labor force participation among those without a college degree is a significant issue. Note that even though the unemployment rate has come down for those workers, their rate of labor force participation is still way down.

Economists on the left tend to assume that this is due to a drop in demand for workers at the low end of the skill distribution. Binder’s claim is that instead one factor in declining participation is an increase in the ability of women to participate in the labor market, which in turn lowers the advantage of marrying a man. The reduced interest in marriage on the part of women attenuates the incentive for men to work.

Could be. I await further analysis.


Is Partition Possible?

Angelo Codevilla peers into his crystal ball:

Since 2016, the ruling class has left no doubt that it is not merely enacting chosen policies: It is expressing its identity, an identity that has grown and solidified over more than a half century, and that it is not capable of changing.

That really does mean that restoring anything like the Founders’ United States of America is out of the question. Constitutional conservatism on behalf of a country a large part of which is absorbed in revolutionary identity; that rejects the dictionary definition of words; that rejects common citizenship, is impossible. Not even winning a bloody civil war against the ruling class could accomplish such a thing.

The logical recourse is to conserve what can be conserved, and for it to be done by, of, and for those who wish to conserve it. However much force of what kind may be required to accomplish that, the objective has to be conservation of the people and ways that wish to be conserved.

That means some kind of separation.

As I argued in “The Cold Civil War,” the natural, least stressful course of events is for all sides to tolerate the others going their own ways. The ruling class has not been shy about using the powers of the state and local governments it controls to do things at variance with national policy, effectively nullifying national laws. And they get away with it.

For example, the Trump Administration has not sent federal troops to enforce national marijuana laws in Colorado and California, nor has it punished persons and governments who have defied national laws on immigration. There is no reason why the conservative states, counties, and localities should not enforce their own view of the good.

Not even President Alexandria Ocasio-Cortez would order troops to shoot to re-open abortion clinics were Missouri or North Dakota, or any city, to shut them down. As Francis Buckley argues in American Secession: The Looming Breakup of the United States, some kind of separation is inevitable, and the options regarding it are many.

I would like to believe Mr. Codevilla, but I cannot. My money is on a national campaign of suppression, which will begin the instant that the left controls the White House and Congress. Shooting won’t be necessary, given the massive displays of force that will be ordered from the White House, ostensibly to enforce various laws, including but far from limited to “a woman’s right to an abortion”. Leftists must control everything because they cannot tolerate dissent.

As I say in “Leftism“,

Violence is a good thing if your heart is in the “left” place. And violence is in the hearts of leftists, along with hatred and the irresistible urge to suppress that which is hated because it challenges leftist orthodoxy — from climate skepticism and the negative effect of gun ownership on crime to the negative effect of the minimum wage and the causal relationship between Islam and terrorism.

There’s more in “The Subtle Authoritarianism of the ‘Liberal Order’“; for example:

[Quoting Sumantra Maitra] Domestically, liberalism divides a nation into good and bad people, and leads to a clash of cultures.

The clash of cultures was started and sustained by so-called liberals, the smug people described above. It is they who — firmly believing themselves to be smarter, on the side of science, and on the side of history — have chosen to be the aggressors in the culture war.

Hillary Clinton’s remark about Trump’s “deplorables” ripped the mask from the “liberal” pretension to tolerance and reason. Clinton’s remark was tantamount to a declaration of war against the self-appointed champion of the “deplorables”: Donald Trump. And war it has been, much of it waged by deep-state “liberals” who cannot entertain the possibility that they are on the wrong side of history, and who will do anything — anything — to make history conform to their smug expectations of it.


Still More Evidence for Why I Don’t Believe in “Climate Change”

This is a sequel to an item in the previous edition of this series: “More Evidence for Why I Don’t Believe in Climate Change“.

Dave Middleton debunks the claim that 50-year-old climate models correctly predicted the subsequent (but not steady) rise in the globe’s temperature (whatever that is). He then quotes a talk by Dr. John Christy of the University of Alabama-Huntsville Climate Research Center:

We have a change in temperature from the deep atmosphere over 37.5 years, we know how much forcing there was upon the atmosphere, so we can relate these two with this little ratio, and multiply it by the ratio of the 2x CO2 forcing. So the transient climate response is to say, what will the temperature be like if you double CO2– if you increase at 1% per year, which is roughly what the whole greenhouse effect is, and which is achieved in about 70 years. Our result is that the transient climate response in the troposphere is 1.1 °C. Not a very alarming number at all for a doubling of CO2. When we performed the same calculation using the climate models, the number was 2.31°C. Clearly, and significantly different. The models’ response to the forcing – their ∆t here, was over 2 times greater than what has happened in the real world….

There is one model that’s not too bad, it’s the Russian model. You don’t go to the White House today and say, “the Russian model works best”. You don’t say that at all! But the fact is they have a very low sensitivity to their climate model. When you look at the Russian model integrated out to 2100, you don’t see anything to get worried about. When you look at 120 years out from 1980, we already have 1/3 of the period done – if you’re looking out to 2100. These models are already falsified [emphasis added], you can’t trust them out to 2100, no way in the world would a legitimate scientist do that. If an engineer built an aeroplane and said it could fly 600 miles and the thing ran out of fuel at 200 and crashed, he might say: “I was only off by a factor of three”. No, we don’t do that in engineering and real science! A factor of three is huge in the energy balance system. Yet that’s what we see in the climate models….

Theoretical climate modelling is deficient for describing past variations. Climate models fail for past variations, where we already know the answer. They’ve failed hypothesis tests and that means they’re highly questionable for giving us accurate information about how the relatively tiny forcing … will affect the climate of the future.
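
The ratio Christy describes is simple enough to write down: scale the observed warming per unit of observed forcing up to the forcing from a doubling of CO2. Here is a hedged sketch in Python. The observed warming and forcing values are placeholders chosen only to reproduce his 1.1 °C result, not his actual data; the 3.7 W/m² doubling forcing is the conventional textbook value.

```python
# Sketch of the transient-climate-response ratio described above.
# The "observed" values used in the example are placeholders, not Christy's data.

F_2XCO2 = 3.7  # conventional forcing from a doubling of CO2, in W/m^2

def transient_climate_response(delta_t_obs, delta_f_obs, f_2xco2=F_2XCO2):
    """Observed warming per unit of observed forcing, scaled to the
    forcing of a CO2 doubling."""
    return (delta_t_obs / delta_f_obs) * f_2xco2

# e.g., 0.6 degrees C of tropospheric warming over 37.5 years under about
# 2.0 W/m^2 of added forcing (illustrative numbers only):
print(round(transient_climate_response(0.6, 2.0), 2))  # about 1.1 degrees C
```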

For a lot more in this vein, see my pages “Climate Change” and “Modeling and Science“.


Transgenderism, Once More

Theodore Dalrymple (Anthony Daniels, M.D.) is on the case:

The problem alluded to in [a paper in the Journal of Medical Ethics] is, of course, the consequence of a fiction, namely that a man who claims to have changed sex actually has changed sex, and is now what used to be called the opposite sex. But when a man who claims to have become a woman competes in women’s athletic competitions, he often retains an advantage derived from the sex of his birth. Women competitors complain that this is unfair, and it is difficult not to agree with them….

Man being both a problem-creating and solving creature, there is, of course, a very simple way to resolve this situation: namely that men who change to simulacra of women should compete, if they must, with others who have done the same. The demand that they should suffer no consequences that they neither like nor want from the choices they have made is an unreasonable one, as unreasonable as it would be for me to demand that people should listen to me playing the piano though I have no musical ability. Thomas Sowell has drawn attention to the intellectual absurdity and deleterious practical consequences of the modern search for what he calls “cosmic justice.”…

We increasingly think that we live in an existential supermarket in which we pick from the shelf of limitless possibilities whatever we want to be. We forget that limitation is not incompatible with infinity; for example, that our language has a grammar that excludes certain forms of words, without in any way limiting the infinite number of meanings that we can express. Indeed, such limitation is a precondition of our freedom, for otherwise nothing that we said would be comprehensible to anybody else.

That is a tour de force typical of the good doctor. In the span of three paragraphs, he addresses matters that I have treated at length in “The Transgender Fad and Its Consequences” (and later in the previous edition of this series), “Positive Rights and Cosmic Justice“, and “Writing: A Guide” (among other entries at this blog).


Big, Bad Oligopoly?

Big Tech is giving capitalism a bad name, as I discuss in “Why Is Capitalism Under Attack from the Right?“, but it’s still the best game in town. Even oligopoly and its big brother, monopoly, aren’t necessarily bad. See, for example, my posts, “Putting in Some Good Words for Monopoly” and “Monopoly: Private Is Better than Public“. Arnold Kling makes the essential point here:

Do indicators of consolidation show us that the economy is getting less competitive or more competitive? The answer depends on which explanation(s) you believe to be most important. For example, if network effects or weak resistance to mergers are the main factors, then the winners from consolidation are quasi-monopolists that may be overly insulated from competition. On the other hand, if the winners are firms that have figured out how to develop and deploy software more effectively than their rivals, then the growth of those firms at the expense of rivals just shows us that the force of competition is doing its work.


Why I Am Bunkered in My Half-Acre of Austin

Randal O’Toole takes aim at the planners of Austin, Texas, and hits the bullseye:

Austin is one of the fastest-growing cities in America, and the city of Austin and Austin’s transit agency, Capital Metro, have a plan for dealing with all of the traffic that will be generated by that growth: assume that a third of the people who now drive alone to work will switch to transit, bicycling, walking, or telecommuting by 2039. That’s right up there with planning for dinner by assuming that food will magically appear on the table the same way it does in Hogwarts….

[W]hile Austin planners are assuming they can reduce driving alone from 74 to 50 percent, it is actually moving in the other direction….

Planners also claim that 11 percent of Austin workers carpool to work, an amount they hope to maintain through 2039. They are going to have trouble doing that as carpooling, in fact, only accounted for 8.0 percent of Austin workers in 2018.

Planners hope to increase telecommuting from its current 8 percent (which is accurate) to 14 percent. That could be difficult as they have no policy tools that can influence telecommuting.

Planners also hope to increase walking and bicycling from their current 2 and 1 percent to 4 and 5 percent. Walking to work is almost always greater than cycling to work, so it’s difficult to see how they plan to magic cycling to be greater than walking. This is important because cycling trips are longer than walking trips and so have more of a potential impact on driving.

Finally, planners want to increase transit from 4 to 16 percent. In fact, transit carried just 3.24 percent of workers to their jobs in 2018, down from 3.62 percent in 2016. Changing from 4 to 16 percent is an almost impossible 300 percent increase; changing from 3.24 to 16 is an even more formidable 394 percent increase. Again, reality is moving in the opposite direction from planners’ goals….

Planners have developed two main approaches to transportation. One is to estimate how people will travel and then provide and maintain the infrastructure to allow them to do so as efficiently and safely as possible. The other is to imagine how you wish people would travel and then provide the infrastructure assuming that to happen. The latter method is likely to lead to misallocation of capital resources, increased congestion, and increased costs to travelers.

Austin’s plan is firmly based on this second approach. The city’s targets of reducing driving alone by a third, maintaining carpooling at an already too-high number, and increasing transit by 394 percent are completely unrealistic. No American city has achieved similar results in the past two decades and none are likely to come close in the next two decades.
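
O’Toole’s percentage arithmetic checks out; a trivial sketch, using only the figures from the passage above, makes the point:

```python
# Percentage increases implied by the Austin plan's transit target.
def percent_increase(old, new):
    return (new - old) / old * 100

print(round(percent_increase(4.0, 16.0)))   # 300 (planners' claimed share: 4% -> 16%)
print(round(percent_increase(3.24, 16.0)))  # 394 (actual 2018 share: 3.24% -> 16%)
```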

Well, that’s the prevailing mentality of Austin’s political leaders and various bureaucracies: magical thinking. Failure is piled upon failure (e.g., more bike lanes crowding out traffic lanes, a hugely wasteful curbside composting plan) because to admit failure would be to admit that the emperor has no clothes.

You want to learn more about Austin? You’ve got it:

Driving and Politics (1)
Life in Austin (1)
Life in Austin (2)
Life in Austin (3)
Driving and Politics (2)
AGW in Austin?
Democracy in Austin
AGW in Austin? (II)
The Hypocrisy of “Local Control”
Amazon and Austin


“Government Worker” Is (Usually) an Oxymoron

In “Good News from the Federal Government” I sarcastically endorse the move to grant all federal workers 12 weeks of paid parental leave:

The good news is that there will be a lot fewer civilian federal workers on the job, which means that the federal bureaucracy will grind a bit more slowly when it does the things that it does to screw up the economy.

The next day, Audacious Epigone put some rhetorical and statistical meat on the bones of my informed prejudice in “Join the Crooks and Liars: Get a Government Job!“:

That [the title of the post] used to be a frequent refrain on Radio Derb. Though the gag has been made emeritus, the advice is even better today than it was when the Derb introduced it. As he explains:

The percentage breakdown is private-sector 76 percent, government 16 percent, self-employed 8 percent.

So one in six of us works for a government, federal, state, or local.

Which group does best on salary? Go on: see if you can guess. It’s government workers, of course. Median earnings 52½ thousand. That’s six percent higher than the self-employed and fourteen percent higher than the poor shlubs toiling away in the private sector.

If you break down government workers into two further categories, state and local workers in category one, federal workers in category two, which does better?

Again, which did you think? Federal workers are way out ahead, median earnings 66 thousand. Even state and local government workers are ahead of us private-sector and self-employed losers, though.

Moral of the story: Get a government job! — federal for strong preference.

….

Though it is well known that a government gig is a gravy train, opinions of the people with said gigs are embarrassingly low, as the results from several additional survey questions show.

First, how frequently can the government be trusted “to do what’s right”? [“Just about always” and “most of the time” badly trail “some of the time”.]

….

Why can’t the government be trusted to do what’s right? Because the people who populate it are crooks and liars. Asked whether “hardly any”, “not many” or “quite a few” people in the federal government are crooked, the following percentages answered with “quite a few” (“not sure” responses, constituting 12% of the total, are excluded). [Responses of “quite a few” range from 59 percent to 77 percent across an array of demographic categories.]

….

Accompanying a strong sense of corruption is the perception of widespread incompetence. Presented with a binary choice between “the people running the government are smart” and “quite a few of them don’t seem to know what they are doing”, a solid majority chose the latter (“not sure”, at 21% of all responses, is again excluded). [The “don’t know what they’re doing” responses ranged from 55 percent to 78 percent across the same demographic categories.]

Are the skeptics right? Well, most citizens have had dealings with government employees of one kind and another. The “wisdom of crowds” certainly applies in this case.