Ad-Hoc Hypothesizing and Data Mining

An ad-hoc hypothesis is

a hypothesis added to a theory in order to save it from being falsified….

Scientists are often skeptical of theories that rely on frequent, unsupported adjustments to sustain them. This is because, if a theorist so chooses, there is no limit to the number of ad hoc hypotheses that he could add. The theory thus becomes more and more complex but is never falsified, usually at a cost to its predictive power. Ad hoc hypotheses are often characteristic of pseudoscientific subjects.

An ad-hoc hypothesis can also be formed from an existing hypothesis (a proposition that hasn’t yet risen to the level of a theory) when the existing hypothesis has been falsified or is in danger of falsification. The (intellectually dishonest) proponents of the existing hypothesis seek to protect it from falsification by putting the burden of proof on the doubters rather than where it belongs, namely, on the proponents.

Data mining is “the process of discovering patterns in large data sets”. It isn’t hard to imagine the abuses that are endemic to data mining; for example, running regressions on the data until the “correct” equation is found, and excluding or adjusting portions of the data because their use leads to “counterintuitive” results.
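The first abuse can be made concrete with a minimal sketch. Everything here is hypothetical — 100 observations and 50 candidate predictors of pure noise — but it shows how running enough regressions will reliably turn up a “significant” relationship where none exists:

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 observations of a response variable with NO real relationship
# to any predictor -- both are pure noise.
n_obs, n_predictors = 100, 50
y = rng.standard_normal(n_obs)
X = rng.standard_normal((n_obs, n_predictors))

# "Mine" the data: correlate y with each predictor in turn and keep
# whichever gives the strongest relationship.
correlations = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_predictors)]
best = int(np.argmax(correlations))

# With 50 tries, the best spurious correlation typically clears the
# conventional 5-percent significance threshold, despite the data
# being pure noise.
print(f"best predictor: #{best}, |r| = {correlations[best]:.3f}")
```

The “correct” equation found this way describes nothing but the noise in this particular sample.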

Ad-hoc hypothesizing and data mining are two sides of the same coin: intellectual dishonesty. The former is overt; the latter is covert. (At least, it is covert until someone gets hold of the data and the analysis, which is why many “scientists” and “scientific” journals have taken to hiding the data and obscuring the analysis.) Both methods are justified (wrongly) as being consistent with the scientific method. But the ad-hoc theorizer is just trying to rescue a falsified hypothesis, and the data miner is just trying to conceal information that would falsify his hypothesis.

From what I have seen, the proponents of the human activity>CO2>”global warming” hypothesis have been guilty of both kinds of quackery: ad-hoc hypothesizing and data mining (with a lot of data manipulation thrown in for good measure).

The Learning Curve and the Flynn Effect

I first learned of the learning curve when I was a newly hired analyst at a defense think-tank. A learning curve

is a graphical representation of how an increase in learning (measured on the vertical axis) comes from greater experience (the horizontal axis); or how the more someone (or something) performs a task, the better they [sic] get at it.

In my line of work, the learning curve figured importantly in the estimation of aircraft procurement costs. There was a robust statistical relationship between the cost of making a particular model of aircraft and the cumulative number of such aircraft produced. Armed with the learning-curve equation and the initial production cost of an aircraft, it was easy to estimate the cost of producing any number of the same aircraft.
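That relationship is conventionally modeled as a Wright-style learning curve, in which each doubling of cumulative output multiplies unit cost by a constant slope. A minimal sketch of the calculation; the 80 percent slope and the dollar figures are purely illustrative, not taken from any actual cost analysis:

```python
import math

def unit_cost(first_unit_cost: float, n: int, learning_rate: float = 0.80) -> float:
    """Wright's learning curve: each doubling of cumulative output
    multiplies the unit cost by `learning_rate` (0.80 for an
    '80 percent' curve)."""
    b = math.log(learning_rate) / math.log(2)  # negative exponent
    return first_unit_cost * n ** b

def program_cost(first_unit_cost: float, quantity: int, learning_rate: float = 0.80) -> float:
    """Total cost of producing `quantity` units, summing down the curve."""
    return sum(unit_cost(first_unit_cost, n, learning_rate) for n in range(1, quantity + 1))

# If the first aircraft costs $100M on an 80 percent curve,
# the second costs $80M and the fourth costs $64M.
print(round(unit_cost(100.0, 2), 1))  # 80.0
print(round(unit_cost(100.0, 4), 1))  # 64.0
```

Given the slope and the first-unit cost, the total cost of any production quantity falls out by summation, which is essentially what the cost analyst does.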

The learning curve figures prominently in tests that purport to measure intelligence. Two factors that may explain the Flynn effect — a secular rise in average IQ scores — are aspects of learning: schooling and test familiarity, and a generally more stimulating environment in which one learns more. The Flynn effect doesn’t measure changes in intelligence; it measures changes in IQ scores resulting from learning. There is an essential difference between ignorance and stupidity. The Flynn effect is about the former, not the latter.

Here’s a personal example of the Flynn effect in action. I’ve been doing The New York Times crossword puzzle online since February 18 of this year. I have completed all 170 puzzles published by TNYT from that date through today, with generally increasing ease:

The difficulty of the puzzle varies from day to day, with Monday puzzles being the easiest and Sunday puzzles being the hardest (as measured by time to complete). For each day of the week, my best time is more recent than my worst time, and the trend of time to complete is sharply downward for every day of the week (as reflected in the graph above).

I know that I haven’t become more intelligent in the last 24 weeks. And being several decades past the peak of my intelligence, I am certain that it diminishes daily, though only fractionally so (I hope). I have simply become more practiced at doing the crossword puzzle because I have learned a lot about it. For example, certain clues recur with some frequency, and they always have the same answers. Clues often have double meanings, which were hard to decipher at first, but which have become easier to decipher with practice. There are other subtleties, all of which reflect the advantages of learning.

In a nutshell, I am no smarter than I was 24 weeks ago, but my ignorance of the TNYT crossword puzzle has diminished significantly.

(See also “More about Intelligence“, “Selected Writings about Intelligence“, and especially “Intelligence“.)

Modeling Is Not Science: Another Demonstration

The title of this post is an allusion to an earlier one: “Modeling Is Not Science“. This post addresses a model that is the antithesis of science. It seems to have been extracted from the ether. It doesn’t prove what its authors claim for it. It proves nothing, in fact, but the ability of some people to dazzle other people with mathematics.

In this case, a writer for MIT Technology Review waxes enthusiastic about

the work of Alessandro Pluchino at the University of Catania in Italy and a couple of colleagues. These guys [sic] have created a computer model of human talent and the way people use it to exploit opportunities in life. The model allows the team to study the role of chance in this process.

The results are something of an eye-opener. Their simulations accurately reproduce the wealth distribution in the real world. But the wealthiest individuals are not the most talented (although they must have a certain level of talent). They are the luckiest. And this has significant implications for the way societies can optimize the returns they get for investments in everything from business to science.

Pluchino and co’s [sic] model is straightforward. It consists of N people, each with a certain level of talent (skill, intelligence, ability, and so on). This talent is distributed normally around some average level, with some standard deviation. So some people are more talented than average and some are less so, but nobody is orders of magnitude more talented than anybody else….

The computer model charts each individual through a working life of 40 years. During this time, the individuals experience lucky events that they can exploit to increase their wealth if they are talented enough.

However, they also experience unlucky events that reduce their wealth. These events occur at random.

At the end of the 40 years, Pluchino and co rank the individuals by wealth and study the characteristics of the most successful. They also calculate the wealth distribution. They then repeat the simulation many times to check the robustness of the outcome.

When the team rank individuals by wealth, the distribution is exactly like that seen in real-world societies. “The ‘80-20’ rule is respected, since 80 percent of the population owns only 20 percent of the total capital, while the remaining 20 percent owns 80 percent of the same capital,” report Pluchino and co.

That may not be surprising or unfair if the wealthiest 20 percent turn out to be the most talented. But that isn’t what happens. The wealthiest individuals are typically not the most talented or anywhere near it. “The maximum success never coincides with the maximum talent, and vice-versa,” say the researchers.

So if not talent, what other factor causes this skewed wealth distribution? “Our simulation clearly shows that such a factor is just pure luck,” say Pluchino and co.

The team shows this by ranking individuals according to the number of lucky and unlucky events they experience throughout their 40-year careers. “It is evident that the most successful individuals are also the luckiest ones,” they say. “And the less successful individuals are also the unluckiest ones.”

The writer, who is dazzled by pseudo-science, gives away his Obamanomic bias (“you didn’t build that“) by invoking fairness. Luck and fairness have nothing to do with each other. Luck is luck, and it doesn’t make the beneficiary any less deserving of the talent, or legally obtained income or wealth, that comes his way.

In any event, the model in question is junk. To call it junk science would be to imply that it’s just bad science. But it isn’t science; it’s a model pulled out of thin air. The modelers admit this in the article cited by the Technology Review writer, “Talent vs. Luck, the Role of Randomness in Success and Failure“:

In what follows we propose an agent-based model, called “Talent vs Luck” (TvL) model, which builds on a small set of very simple assumptions, aiming to describe the evolution of careers of a group of people influenced by lucky or unlucky random events.

We consider N individuals, with talent Ti (intelligence, skills, ability, etc.) normally distributed in the interval [0, 1] around a given mean mT with a standard deviation σT, randomly placed in fixed positions within a square world (see Figure 1) with periodic boundary conditions (i.e. with a toroidal topology) and surrounded by a certain number NE of “moving” events (indicated by dots), someone lucky, someone else unlucky (neutral events are not considered in the model, since they have not relevant effects on the individual life). In Figure 1 we report these events as colored points: lucky ones, in green and with relative percentage pL, and unlucky ones, in red and with percentage (100 − pL). The total number of event-points NE are uniformly distributed, but of course such a distribution would be perfectly uniform only for NE → ∞. In our simulations, typically will be NE ≈ N/2: thus, at the beginning of each simulation, there will be a greater random concentration of lucky or unlucky event-points in different areas of the world, while other areas will be more neutral. The further random movement of the points inside the square lattice, the world, does not change this fundamental feature of the model, which exposes different individuals to different amounts of lucky or unlucky events during their life, regardless of their own talent.
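For concreteness, the setup described above can be caricatured in a few lines of code. What follows is my own stripped-down sketch, not the authors’ program: the spatial “world” of moving event-points is replaced by simple random event draws, and the parameter values (number of agents, event probability, starting capital) are illustrative guesses:

```python
import numpy as np

rng = np.random.default_rng(42)

# A stripped-down caricature of the "Talent vs Luck" setup.
N, steps = 1000, 80                       # 1000 agents; 80 six-month steps = 40 years
talent = np.clip(rng.normal(0.6, 0.1, N), 0.0, 1.0)
capital = np.full(N, 10.0)                # everyone starts with the same capital

for _ in range(steps):
    hit = rng.random(N) < 0.20            # agent encounters an event this step
    lucky = rng.random(N) < 0.50          # ...which is lucky half the time
    exploited = rng.random(N) < talent    # lucky events pay off only if talent suffices
    capital = np.where(hit & lucky & exploited, capital * 2, capital)
    capital = np.where(hit & ~lucky, capital / 2, capital)

# The end-state distribution is highly skewed: a small share of agents
# holds most of the capital, and the richest agent is rarely the most talented.
top20 = np.sort(capital)[-N // 5:].sum() / capital.sum()
richest_talent_rank = int((talent > talent[np.argmax(capital)]).sum()) + 1
print(f"share of capital held by top 20%: {top20:.0%}")
print(f"talent rank of richest agent: {richest_talent_rank}")
```

Because capital is repeatedly doubled or halved at random, the end-state distribution is heavily skewed regardless of the talent distribution, which is essentially all the headline “result” amounts to.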

In other words, this is a simplistic, completely abstract model set in a simplistic, completely abstract world, using only the authors’ assumptions about the values of a small number of abstract variables and the effects of their interactions. Those variables are “talent” and two kinds of event: “lucky” and “unlucky”.

What could be further from science — actual knowledge — than that? The authors effectively admit the model’s complete lack of realism when they describe “talent”:

[B]y the term “talent” we broadly mean intelligence, skill, smartness, stubbornness, determination, hard work, risk taking and so on.

Think of all of the ways that those various — and critical — attributes vary from person to person. “Talent”, in other words, subsumes an array of mostly unmeasured and unmeasurable attributes, without distinguishing among them or attempting to weight them. The authors might as well have called the variable “sex appeal” or “body odor”. For that matter, given the complete abstractness of the model, they might as well have called its three variables “body mass index”, “elevation”, and “race”.

It’s obvious that the model doesn’t account for the actual means by which wealth is acquired. In the model, wealth is just the mathematical result of simulated interactions among an arbitrarily named set of variables. It’s not even a multiple regression model based on statistics. (Although no set of statistics could capture the authors’ broad conception of “talent”.)

The modelers seem surprised that wealth isn’t normally distributed. But that wouldn’t be a surprise if they were to consider that wealth represents a compounding effect, which naturally favors those with higher incomes over those with lower incomes. But they don’t even try to model income.
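A toy illustration of the compounding point: suppose incomes are normally distributed, households save only what they earn above a subsistence level, and savings compound at the same rate for everyone. All figures below are invented; the point is only that wealth comes out more concentrated than income even with no “luck” anywhere in the model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented parameters: clipped-normal incomes, saving only above a
# subsistence level, identical compound returns for everyone.
n, years, r, subsistence = 10_000, 40, 0.05, 30_000
income = np.clip(rng.normal(50_000, 15_000, n), 10_000, None)
annual_saving = np.maximum(income - subsistence, 0.0) * 0.25

wealth = np.zeros(n)
for _ in range(years):
    wealth = wealth * (1 + r) + annual_saving  # returns compound on the stock

def top_decile_share(x):
    """Fraction of the total held by the top 10 percent."""
    x = np.sort(x)
    return x[-len(x) // 10:].sum() / x.sum()

# Wealth is markedly more concentrated than income, with no luck involved.
print(f"top-decile share of income: {top_decile_share(income):.0%}")
print(f"top-decile share of wealth: {top_decile_share(wealth):.0%}")
```

Nothing here requires random windfalls: the subsistence floor makes saving more concentrated than income, and compounding magnifies the resulting gaps in accumulated wealth.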

So when wealth (as modeled) doesn’t align with “talent”, the discrepancy — according to the modelers — must be assigned to “luck”. But a model that lacks any nuance in its definition of variables, any empirical estimates of their values, and any explanation of the relationship between income and wealth cannot possibly tell us anything about the role of luck in the determination of wealth.

At any rate, it is meaningless to say that the model is valid because its results mimic the distribution of wealth in the real world. The model itself is meaningless, so any resemblance between its results and the real world is coincidental (“lucky”) or, more likely, contrived to resemble something like the distribution of wealth in the real world. On that score, the authors are suitably vague about the actual distribution, pointing instead to various estimates.

(See also “Modeling, Science, and Physics Envy” and “Modeling Revisited“.)


There is a post at Politico about the adventures of McKinsey & Company, a giant consulting firm, in the world of intelligence:

America’s vast spying apparatus was built around a Cold War world of dead drops and double agents. Today, that world has fractured and migrated online, with hackers and rogue terrorist cells, leaving intelligence operatives scrambling to keep up.

So intelligence agencies did what countless other government offices have done: They brought in a consultant. For the past four years, the powerhouse firm McKinsey and Co. has helped restructure the country’s spying bureaucracy, aiming to improve response time and smooth communication.

Instead, according to nearly a dozen current and former officials who either witnessed the restructuring firsthand or are familiar with the project, the multimillion dollar overhaul has left many within the country’s intelligence agencies demoralized and less effective.

These insiders said the efforts have hindered decision-making at key agencies — including the CIA, National Security Agency and the Office of the Director of National Intelligence.

They said McKinsey helped complicate a well-established linear chain of command, slowing down projects and turnaround time, and applied cookie-cutter solutions to agencies with unique cultures. In the process, numerous employees have become dismayed, saying the efforts have at best been a waste of money and, at worst, made their jobs more difficult. It’s unclear how much McKinsey was paid in that stretch, but according to news reports and people familiar with the effort, the total exceeded $10 million.

Consulting to U.S.-government agencies on a grand scale grew out of the perceived successes in World War II of civilian analysts who were embedded in military organizations. To the extent that the civilian analysts were actually helpful*, it was because they focused on specific operations, such as methods of searching for enemy submarines. In such cases, the government client can benefit from an outside look at the effectiveness of the operations, the identification of failure points, and suggestions for changes in weapons and tactics that are informed by first-hand observation of military operations.

Beyond that, however, outsiders are of little help, and may be a hindrance, as in the case cited above. Outsiders can’t really grasp the dynamics and unwritten rules of organizational cultures that embed decades of learning and adaptation.

The consulting game is now (and has been for decades) an invasive species. It is a perverse outgrowth of operations research as it was developed in World War II. Too much of a “good thing” is a bad thing — as I saw for myself many years ago.

* The success of the U.S. Navy’s antisubmarine warfare (ASW) operations had for decades been ascribed to the pioneering civilian organization known as the Antisubmarine Warfare Operations Research Group (ASWORG). However, with the publication of The Ultra Secret in 1974 (and subsequent revelations), it became known that code-breaking may have contributed greatly to the success of various operations against enemy forces, including ASW.

Beware of Outliers

An outlier, in the field of operations research, is an unusual event that can distract the observer from the normal run of events. Because an outlier is an unusual event, it is more memorable than events of the same kind that occur more frequently.

Take the case of the late Bill Buckner, who was a steady first baseman and good hitter for many years. What is Buckner remembered for? Not his many accomplishments in a long career. No, he is remembered for a fielding error that cost his team (the accursed Red Sox) game 6 of the 1986 World Series, a game that would have clinched the series for the Red Sox had they won it. But they lost it, and went on to lose the deciding 7th game.

Buckner’s bobble was an outlier that erased from the memories of most fans his prowess as a player and the many occasions on which he helped his team to victory. He is remembered, if at all, for the error — though he erred on less than 1/10 of 1 percent of more than 15,000 fielding plays during his career.

I am beginning to think of America’s decisive victory in World War II as an outlier.

To be continued.

The “Candle Problem” and Its Ilk

Among the many topics that I address in “The Balderdash Chronicles” is the management “science” fad; in particular, as described by Graham Morehead,

[t]he Candle Problem [which] was first presented by Karl Duncker. Published posthumously in 1945, “On problem solving” describes how Duncker provided subjects with a candle, some matches, and a box of tacks. He told each subject to affix the candle to a cork board wall in such a way that when lit, the candle won’t drip wax on the table below (see figure at right). Can you think of the answer?

The only answer that really works is this: 1. Dump the tacks out of the box; 2. Tack the box to the wall; 3. Light the candle and affix it atop the box as if it were a candle-holder. Incidentally, the problem was much easier to solve if the tacks weren’t in the box at the beginning. When the tacks were in the box the participant saw it only as a tack-box, not something they could use to solve the problem. This phenomenon is called “Functional fixedness.”

The implication of which, according to Morehead, is (supposedly) this:

When your employees have to do something straightforward, like pressing a button or manning one stage in an assembly line, financial incentives work. It’s a small effect, but they do work. Simple jobs are like the simple candle problem.

However, if your people must do something that requires any creative or critical thinking, financial incentives hurt. The In-Box Candle Problem is the stereotypical problem that requires you to think “Out of the Box,” (you knew that was coming, didn’t you?). Whenever people must think out of the box, offering them a monetary carrot will keep them in that box.

A monetary reward will help your employees focus. That’s the point. When you’re focused you are less able to think laterally. You become dumber. This is not the kind of thing we want if we expect to solve the problems that face us in the 21st century.

My take (in part):

[T]he Candle Problem is unlike any work situation that I can think of. Tasks requiring creativity are not performed under deadlines of a few minutes; tasks requiring creativity are (usually) assigned to persons who have demonstrated a creative flair, not to randomly picked subjects; most work, even in this day, involves the routine application of protocols and tools that were designed to produce a uniform result of acceptable quality; it is the design of protocols and tools that requires creativity, and that kind of work is not done under the kind of artificial constraints found in the Candle Problem.

Now comes James Thompson, with this general conclusion about such exercises:

One important conclusion I draw from this entire paper [by Gerd Gigerenzer, here] is that the logical puzzles enjoyed by Kahneman, Tversky, Stanovich and others are rightly rejected by psychometricians as usually being poor indicators of real ability. They fail because they are designed to lead people up the garden path, and depend on idiosyncratic interpretations.

Told you so.

Is Race a Social Construct?

Of course it is. Science, generally, is a social construct. Everything that human beings do and “know” is a social construct, in that human behavior and “knowledge” are products of acculturation and the irrepressible urge to name and classify things.

Whence that urge? You might say that it’s genetically based. But our genetic inheritance is inextricably twined with social constructs — preferences for, say, muscular men and curvaceous women, and so on. What we are depends not only on our genes but also on the learned preferences that shape the gene pool. There’s no way to sort them out, despite claims (from the left) that human beings are blank slates and claims (from loony libertarians) that genes count for everything.

All of that, however true it may be (and I believe it to be true), is a recipe for solipsism, nay, for Humean chaos. The only way out of this morass, as I see it, is to admit that human beings (or most of them) possess a life-urge that requires them to make distinctions: friend vs. enemy, workable from non-workable ways of building things, etc.

Race is among those useful distinctions for reasons that will be obvious to anyone who has actually observed the behaviors of groups that can be sorted along racial lines instead of condescending to “tolerate” or “celebrate” differences (a luxury that is easily indulged in the safety of ivory towers and gated communities). Those lines may be somewhat arbitrary, for, as many have noted, there are more genetic differences within a racial classification than between racial classifications. Which is a fatuous observation, in that there are more genetic differences among, say, the apes than there are between what are called apes and what are called human beings.

In other words, the usual “scientific” objection to the concept of race is based on a false premise, namely, that all genetic differences are equal. If one believes that, one should be just as willing to live among apes as among human beings. But human beings do not choose to live among apes (though a few human beings do choose to observe them at close quarters). Similarly, human beings — for the most part — do not choose to live among people from whom they are racially distinct, and therefore (usually) socially distinct.

Why? Because under the skin we are not all alike. Under the skin there are social (cultural) differences that are causally correlated with genetic differences.

Race may be a social construct, but — like engineering — it is a useful one.

“Science Is Real”

Yes, it is. But the real part of science is the never-ending search for truth about the “natural” world. Scientific “knowledge” is always provisional.

The flower children — young and old — who display “science is real” posters have it exactly backwards. They believe that science consists of provisional knowledge, and when that “knowledge” matches their prejudices the search for truth is at an end.

Provisional knowledge is valuable in some instances — building bridges and airplanes, for example. But bridges and airplanes are (or should be) built by allowing for error, and a lot of it.

The Pretense of Knowledge

Anyone with more than a passing knowledge of science and disciplines that pretend to be scientific (e.g., economics) will appreciate the shallowness and inaccuracy of humans’ “knowledge” of nature and human nature — from the farthest galaxies to our own psyches. Anyone, that is, but a pretentious “scientist” or an over-educated ignoramus.

Not with a Bang

This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.

T.S. Eliot, The Hollow Men

It’s also the way that America is ending. Yes, there are verbal fireworks aplenty, but there will not be a “hot” civil war. The country that my parents and grandparents knew and loved — the country of my youth in the 1940s and 1950s — is just fading away.

This would not necessarily be a bad thing if the remaking of America were a gradual, voluntary process, leading to time-tested changes for the better. But that isn’t the case. The very soul of America has been and is being ripped out by the government that was meant to protect that soul, and by movements that government not only tolerates but fosters.

Before I go further, I should explain what I mean by America, which is not the same thing as the geopolitical entity known as the United States, though the two were tightly linked for a long time.

America was a relatively homogeneous cultural order that fostered mutual respect, mutual trust, and mutual forbearance — or far more of those things than one might expect in a nation as populous and far-flung as the United States. Those things — conjoined with a Constitution that has been under assault since the New Deal — made America a land of liberty. That is to say, they fostered real liberty, which isn’t an unattainable state of bliss but an actual (and imperfect) condition of peaceful, willing coexistence and its concomitant: beneficially cooperative behavior.

The attainment of this condition depends on social comity, which depends in turn on (a) genetic kinship and (b) the inculcation and enforcement of social norms, especially the norms that define harm.

All of that is going by the boards because the emerging cultural order is almost diametrically opposite that which prevailed in America. The new dispensation includes:

  • casual sex
  • serial cohabitation
  • subsidized illegitimacy
  • abortion on demand
  • easy divorce
  • legions of non-mothering mothers
  • concerted (and deluded) efforts to defeminize females and to neuter or feminize males
  • gender-confusion as a burgeoning norm
  • “alternative lifestyles” that foster disease, promiscuity, and familial instability
  • normalization of drug abuse
  • forced association (with accompanying destruction of property and employment rights)
  • suppression of religion
  • rampant obscenity
  • identity politics on steroids
  • illegal immigration as a “right”
  • “free stuff” from government (Social Security was meant to be self-supporting)
  • America as the enemy
  • all of this (and more) as gospel to influential elites whose own lives are modeled mostly on old America.

As the culture has rotted, so have the ties that bound America.

The rot has occurred to the accompaniment of cacophony. Cultural coarsening begets loud and inconsiderate vulgarity. Worse than that is the cluttering of the ether with vehement and belligerent propaganda, most of it aimed at taking down America.

The advocates of the new dispensation haven’t quite finished the job of dismantling America. But that day isn’t far off. Complete victory for the enemies of America is only a few election cycles away. The squishy center of the electorate — as is its wont — will swing back toward the Democrat Party. With a Democrat in the White House, a Democrat-controlled Congress, and a few party switches in the Supreme Court (or the packing of it), the dogmas of the anti-American culture will become the law of the land; for example:

Billions and trillions of dollars will be wasted on various “green” projects, including but far from limited to the complete replacement of fossil fuels by “renewables”, with the resulting impoverishment of most Americans, except for the comfortable elites who press such policies.

It will be illegal to criticize, even by implication, such things as abortion, illegal immigration, same-sex marriage, transgenderism, anthropogenic global warming, or the confiscation of firearms. These cherished beliefs will be mandated for school and college curricula, and enforced by huge fines and draconian prison sentences (sometimes in the guise of “re-education”).

Any hint of Christianity and Judaism will be barred from public discourse, and similarly punished. Islam will be held up as a model of unity and tolerance.

Reverse discrimination in favor of females, blacks, Hispanics, gender-confused persons, and other “protected” groups will be required and enforced with a vengeance. But “protections” will not apply to members of such groups who are suspected of harboring libertarian or conservative impulses.

Sexual misconduct (as defined by the “victim”) will become a crime, and any male person may be found guilty of it on the uncorroborated testimony of any female who claims to have been the victim of an unwanted glance, touch (even if accidental), innuendo (as perceived by the victim), etc.

There will be parallel treatment of the “crimes” of racism, anti-Islamism, nativism, and genderism.

All health care in the United States will be subject to review by a national, single-payer agency of the central government. Private care will be forbidden, though ready access to doctors, treatments, and medications will be provided for high officials and other favored persons. The resulting health-care catastrophe that befalls most of the populace (like that of the UK) will be shrugged off as a residual effect of “capitalist” health care.

The regulatory regime will rebound with a vengeance, contaminating every corner of American life and regimenting all businesses except those daring to operate in an underground economy. The quality and variety of products and services will decline as their real prices rise as a fraction of incomes.

The dire economic effects of single-payer health care and regulation will be compounded by massive increases in other kinds of government spending (defense excepted). The real rate of economic growth will approach zero.

The United States will maintain token armed forces, mainly for the purpose of suppressing domestic uprisings. Given its economically destructive independence from foreign oil and its depressed economy, it will become a simulacrum of the USSR and Mao’s China — and not a rival to the new superpowers, Russia and China, which will largely ignore it as long as it doesn’t interfere in their pillaging of their respective spheres of influence. A policy of non-interference (i.e., tacit collusion) will be the order of the era in Washington.

Though it would hardly be necessary to rig elections in favor of Democrats, given the flood of illegal immigrants who will pour into the country and enjoy voting rights, a way will be found to do just that. The most likely method will be election laws requiring candidates to pass ideological purity tests by swearing fealty to the “law of the land” (i.e., abortion, unfettered immigration, same-sex marriage, freedom of gender choice for children, etc., etc., etc.). Those who fail such a test will be barred from holding any kind of public office, no matter how insignificant.

Are my fears exaggerated? I don’t think so, given what has happened in recent decades and the cultural revolutionaries’ tightening grip on the Democrat party. What I have sketched out can easily happen within a decade after Democrats seize total control of the central government.

Will the defenders of liberty rally to keep it from happening? Perhaps, but I fear that they will not have a lot of popular support, for three reasons:

First, there is the problem of asymmetrical ideological warfare, which favors the party that says “nice” things and promises “free” things.

Second, what has happened thus far — mainly since the 1960s — has happened slowly enough that it seems “natural” to too many Americans. They are like fish in water who cannot grasp the idea of life in a different medium.

Third, although change for the worse has accelerated in recent years, it has occurred mainly in forums that seem inconsequential to most Americans, for example, in academic fights about free speech, in the politically correct speeches of Hollywood stars, and in culture wars that are conducted mainly in the blogosphere. The unisex-bathroom issue seems to have faded as quickly as it arose, mainly because it really affects so few people. The latest gun-control mania may well subside — though it has reached new heights of hysteria — but it is only one battle in the broader war being waged by the left. And most Americans lack the political and historical knowledge to understand that there really is a civil war underway — just not a “hot” one.

Is a reversal possible? Possible, yes, but unlikely. The rot is too deeply entrenched. Public schools and universities are cesspools of anti-Americanism. The affluent elites of the information-entertainment-media-academic complex are in the saddle. Republican politicians, for the most part, are of no help because they are more interested in preserving their comfortable sinecures than in defending America or the Constitution.

On that note, I will take a break from blogging — perhaps forever. I urge you to read one of my early posts, “Reveries”, for a taste of what America means to me. As for my blogging legacy, please see “A Summing Up”, which links to dozens of posts and pages that amplify and support this post.

Il faut cultiver notre jardin.

Voltaire, Candide

Related reading:

Michael Anton, “What We Still Have to Lose”, American Greatness, February 10, 2019

Rod Dreher, “Benedict Option FAQ”, The American Conservative, October 6, 2015

Roger Kimball, “Shall We Defend Our Common History?”, Imprimis, February 2019

Joel Kotkin, “Today’s Cultural Engineers”, newgeography, January 26, 2019

Daniel Oliver, “Where Has All the Culture Gone?”, The Federalist, February 8, 2019

Malcolm Pollack, “On Civil War”, Motus Mentis, March 7, 2019

Fred Reed, “The White Man’s Burden: Reflections on the Custodial State”, Fred on Everything, January 17, 2019

Gilbert T. Sewall, “The Diminishing Authority of the Bourgeois Culture”, The American Conservative, February 4, 2019

Bob Unger, “Requiem for America”, The New American, January 24, 2019

A Summing Up

This post has been updated and moved to “Favorite Posts”.

Not-So-Random Thoughts (XXIII)


Government and Economic Growth

Reflections on Defense Economics

Abortion: How Much Jail Time?

Illegal Immigration and the Welfare State

Prosperity Isn’t Everything

Google et al. As State Actors

The Transgender Trap


Guy Sorman reviews Alan Greenspan and Adrian Wooldridge’s Capitalism in America: A History. Sorman notes that

the golden days of American capitalism are over—or so the authors opine. That conclusion may seem surprising, as the U.S. economy appears to be flourishing. But the current GDP growth rate of roughly 3 percent, after deducting a 1 percent demographic increase, is rather modest, the authors maintain, compared with the historic performance of the postwar years, when the economy grew at an annual average of 5 percent. Moreover, unemployment appears low only because a significant portion of the population is no longer looking for work.

Greenspan and Wooldridge reject the conventional wisdom on mature economies growing more slowly. They blame relatively slow growth in the U.S. on the increase in entitlement spending and the expansion of the welfare state—a classic free-market argument.

They are right to reject the conventional wisdom. Slow growth is due to the expansion of government spending (including entitlements) and the regulatory burden. See “The Rahn Curve in Action” for details, including an equation that accurately explains the declining rate of growth since the end of World War II.


Arnold Kling opines about defense economics. Cost-effectiveness analysis was the big thing in the 1960s. Analysts applied non-empirical models of warfare and cost estimates that were often WAGs (wild-ass guesses) to the comparison of competing weapon systems. The results were about as accurate as global climate models, which is to say wildly inaccurate. (See “Modeling Is Not Science”.) And the results were worthless unless they comported with the prejudices of the “whiz kids” who worked for Robert Strange McNamara. (See “The McNamara Legacy: A Personal Perspective”.)


Georgi Boorman says “Yes, It Would Be Just to Punish Women for Aborting Their Babies”. But, as she says,

mainstream pro-lifers vigorously resist this argument. At the same time they insist that “the unborn child is a human being, worthy of legal protection,” as Sarah St. Onge wrote in these pages recently, they loudly protest when so-called “fringe” pro-lifers state the obvious: of course women who willfully hire abortionists to kill their children should be prosecuted.

Anna Quindlen addressed the same issue more than eleven years ago, in Newsweek:

Buried among prairie dogs and amateur animation shorts on YouTube is a curious little mini-documentary shot in front of an abortion clinic in Libertyville, Ill. The man behind the camera is asking demonstrators who want abortion criminalized what the penalty should be for a woman who has one nonetheless. You have rarely seen people look more gobsmacked. It’s as though the guy has asked them to solve quadratic equations. Here are a range of responses: “I’ve never really thought about it.” “I don’t have an answer for that.” “I don’t know.” “Just pray for them.”

You have to hand it to the questioner; he struggles manfully. “Usually when things are illegal there’s a penalty attached,” he explains patiently. But he can’t get a single person to be decisive about the crux of a matter they have been approaching with absolute certainty.

… If the Supreme Court decides abortion is not protected by a constitutional guarantee of privacy, the issue will revert to the states. If it goes to the states, some, perhaps many, will ban abortion. If abortion is made a crime, then surely the woman who has one is a criminal. But, boy, do the doctrinaire suddenly turn squirrelly at the prospect of throwing women in jail.

“They never connect the dots,” says Jill June, president of Planned Parenthood of Greater Iowa.

I addressed Quindlen, and queasy pro-lifers, eleven years ago:

The aim of Quindlen’s column is to scorn the idea of jail time as punishment for a woman who procures an illegal abortion. In fact, Quindlen’s “logic” reminds me of the classic definition of chutzpah: “that quality enshrined in a man who, having killed his mother and father, throws himself on the mercy of the court because he is an orphan.” The chutzpah, in this case, belongs to Quindlen (and others of her ilk) who believe that a woman should not face punishment for an abortion because she has just “lost” a baby.

Balderdash! If a woman illegally aborts her child, why shouldn’t she be punished by a jail term (at least)? She would be punished by jail (or confinement in a psychiatric prison) if she were to kill her new-born infant, her toddler, her ten-year old, and so on. What’s the difference between an abortion and murder? None. (Read this, then follow the links in this post.)

Quindlen (who predictably opposes capital punishment) asks “How much jail time?” in a cynical effort to shore up the anti-life front. It ain’t gonna work, lady.

See also “Abortion Q &amp; A”.


Add this to what I say in “The High Cost of Untrammeled Immigration”:

In a new analysis of the latest numbers [by the Center for Immigration Studies], from 2014, 63 percent of non-citizens are using a welfare program, and it grows to 70 percent for those here 10 years or more, confirming another concern that once immigrants tap into welfare, they don’t get off it.

See also “Immigration and Crime” and “Immigration and Intelligence”.

Milton Friedman, thinking like an economist, favored open borders only if the welfare state were abolished. But there’s more to a country than GDP. (See “Genetic Kinship and Society”.) Which leads me to…


Patrick T. Brown writes about Oren Cass’s The Once and Future Worker:

Responding to what he cutely calls “economic piety”—the belief that GDP per capita defines a country’s well-being, and the role of society is to ensure the economic “pie” grows sufficiently to allow each individual to consume satisfactorily—Cass offers a competing hypothesis….

[A]s Cass argues, if well-being is measured by considerations in addition to economic ones, a GDP-based measurement of how our society is doing might not only be insufficient now, but also more costly over the long term. The definition of success in our public policy (and cultural) efforts should certainly include some economic measures, but not at the expense of the health of community and family life.

Consider this line, striking in the way it subverts the dominant paradigm: “If, historically, two-parent families could support themselves with only one parent working outside the home, then something is wrong with ‘growth’ that imposes a de facto need for two incomes.”…

People need to feel needed. The hollowness at the heart of American—Western?—society can’t be satiated with shinier toys and tastier brunches. An overemphasis on production could, of course, be as fatal as an overemphasis on consumption, and certainly the realm of the meritocrats gives enough cause to worry on this score. But as a matter of policy—as a means of not just sustaining our fellow citizen in times of want but of helping him feel needed and essential in his family and community life—Cass’s redefinition of “efficiency” to include not just its economic sense but some measure of social stability and human flourishing is welcome. Frankly, it’s past due as a tenet of mainstream conservatism.

Cass goes astray by offering governmental “solutions”; for example:

Cass suggests replacing the current Earned Income Tax Credit (along with some related safety net programs) with a direct wage subsidy, which would be paid to workers by the government to “top off” their current wage. In lieu of a minimum wage, the government would set a “target wage” of, say, $12 an hour. If an employee received $9 an hour from his employer, the government would step up to fill in that $3 an hour gap.
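Mechanically, the scheme in that passage reduces to a one-line top-off rule. Here is a minimal sketch, using the $12 target and $9 market wage from the quote (the function name is mine, not Cass’s):

```python
def wage_subsidy(hourly_wage: float, target_wage: float = 12.0) -> float:
    """Per-hour government top-off under the proposed wage subsidy: the
    difference between the target wage and the market wage, never negative."""
    return max(0.0, target_wage - hourly_wage)

# A worker paid $9/hour gets a $3/hour top-off; one paid above the target gets nothing.
print(wage_subsidy(9.0), wage_subsidy(13.0))
```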

That’s no solution at all, inasmuch as the cost of a subsidy must be borne by someone. That someone, ultimately, is the low-wage worker, whose wage is low because he is less productive than he otherwise would be. Why is he less productive? Because the high-income person who is taxed to pay for the subsidy has that much less money to invest in business capital that raises productivity.

The real problem is that America — and the West, generally — has turned into a spiritual and cultural wasteland. See, for example, “A Century of Progress?”, “Prosperity Isn’t Everything”, and “James Burnham’s Misplaced Optimism”.


In “Preemptive (Cold) Civil War” (03/18/18) I recommended treating Google et al. as state actors to enforce the free-speech guarantee of the First Amendment against them:

The Constitution is the supreme law of the land. (Article VI.)

Amendment I to the Constitution says that “Congress shall make no law … abridging the freedom of speech”.

Major entities in the telecommunications, news, entertainment, and education industries have exerted their power to suppress speech because of its content…. The collective actions of these entities — many of them government-licensed and government-funded — effectively constitute a governmental violation of the Constitution’s guarantee of freedom of speech. (See Smith v. Allwright, 321 U.S. 649 (1944), and Marsh v. Alabama, 326 U.S. 501 (1946).)

I recommended presidential action. But someone has moved the issue to the courts. Tucker Higgins has the story:

The Supreme Court has agreed to hear a case that could determine whether users can challenge social media companies on free speech grounds.

The case, Manhattan Community Access Corp. v. Halleck, No. 17-702, centers on whether a private operator of a public access television network is considered a state actor, which can be sued for First Amendment violations.

The case could have broader implications for social media and other media outlets. In particular, a broad ruling from the high court could open the country’s largest technology companies up to First Amendment lawsuits.

That could shape the ability of companies like Facebook, Twitter and Alphabet’s Google to control the content on their platforms as lawmakers clamor for more regulation and activists on the left and right spar over issues related to censorship and harassment.

The Supreme Court accepted the case on [October 12]….

the court of Chief Justice John Roberts has shown a distinct preference for speech cases that concern conservative ideology, according to an empirical analysis conducted by researchers affiliated with Washington University in St. Louis and the University of Michigan.

The analysis found that the justices on the court appointed by Republican presidents sided with conservative speech nearly 70 percent of the time.

“More than any other modern Court, the Roberts Court has trained its sights on speech promoting conservative values,” the authors found.

Here’s hoping.


Babette Francis and John Ballantine tell it like it is:

Dr. Paul McHugh, the University Distinguished Service Professor of Psychiatry at Johns Hopkins Medical School and the former psychiatrist-in-chief at Johns Hopkins Hospital, explains that “‘sex change’ is biologically impossible.” People who undergo sex-reassignment surgery do not change from men to women or vice versa.

In reality, gender dysphoria is more often than not a passing phase in the lives of certain children. The American Psychological Association’s Handbook of Sexuality and Psychology has revealed that, before the widespread promotion of transgender affirmation, 75 to 95 percent of pre-pubertal children who were uncomfortable or distressed with their biological sex eventually outgrew that distress. Dr. McHugh says: “At Johns Hopkins, after pioneering sex-change surgery, we demonstrated that the practice brought no important benefits. As a result, we stopped offering that form of treatment in the 1970s.”…

However, in today’s climate of political correctness, it is more than a health professional’s career is worth to offer a gender-confused patient an alternative to pursuing sex-reassignment. In some states, as Dr. McHugh has noted, “a doctor who would look into the psychological history of a transgendered boy or girl in search of a resolvable conflict could lose his or her license to practice medicine.”

In the space of a few years, these sorts of severe legal prohibitions—usually known as “anti-reparative” and “anti-conversion” laws—have spread to many more jurisdictions, not only across the United States, but also in Canada, Britain, and Australia. Transgender ideology, it appears, brooks no opposition from any quarter….

… Brown University succumbed to political pressure when it cancelled authorization of a news story of a recent study by one of its assistant professors of public health, Lisa Littman, on “rapid-onset gender dysphoria.” Science Daily reported:

Among the noteworthy patterns Littman found in the survey data: twenty-one percent of parents reported their child had one or more friends who become transgender-identified at around the same time; twenty percent reported an increase in their child’s social media use around the same time as experiencing gender dysphoria symptoms; and forty-five percent reported both.

A former dean of Harvard Medical School, Professor Jeffrey S. Flier, MD, defended Dr. Littman’s freedom to publish her research and criticized Brown University for censoring it. He said:

Increasingly, research on politically charged topics is subject to indiscriminate attack on social media, which in turn can pressure school administrators to subvert established norms regarding the protection of free academic inquiry. What’s needed is a campaign to mobilize the academic community to protect our ability to conduct and communicate such research, whether or not the methods and conclusions provoke controversy or even outrage.

The examples described above of the ongoing intimidation—sometimes, actual sackings—of doctors and academics who question transgender dogma represent only a small part of a very sinister assault on the independence of the medical profession from political interference. Dr. Whitehall recently reflected: “In fifty years of medicine, I have not witnessed such reluctance to express an opinion among my colleagues.”

For more about this outrage, see “The Transgender Fad and Its Consequences”.

Macroeconomic Modeling Revisited

Modeling is not science. Take Professor Ray Fair, for example. He teaches macroeconomic theory, econometrics, and macroeconometric models at Yale University. He has been plying his trade since 1968, first at Princeton, then at M.I.T., and (since 1974) at Yale. Those are big-name schools, so I assume that Prof. Fair is a big name in his field.

Well, since 1983 Professor Fair has been forecasting changes in real GDP four quarters ahead. He has made dozens of forecasts based on a model that he has tweaked many times over the years. The current model can be found here. His forecasting track record is here.

How has he done? Here’s how:

1. The mean absolute error of his forecasts is 70 percent; that is, on average his predictions vary by 70 percent from actual rates of growth.

2. The median absolute error of his forecasts is 33 percent.

3. His forecasts are systematically biased: too high when real, four-quarter GDP growth is less than 3 percent; too low when real, four-quarter GDP growth is greater than 3 percent. (See figure 1.)

4. His forecasts have grown generally worse — not better — with time. (See figure 2.)

5. In sum, the overall predictive value of the model is weak. (See figures 3 and 4.)
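For the record, error metrics of the kind reported in items 1 and 2 can be computed as follows. The forecast/actual pairs in this sketch are hypothetical stand-ins, not Fair’s numbers:

```python
import statistics

def absolute_pct_errors(predicted, actual):
    """Absolute forecast errors, each expressed as a percentage of the
    actual four-quarter rate of real GDP growth."""
    return [abs(p - a) / abs(a) * 100 for p, a in zip(predicted, actual)]

# Hypothetical forecast/actual pairs (percent growth), for illustration only.
predicted = [3.1, 2.0, 4.2, 1.5]
actual = [2.0, 3.0, 3.5, 3.0]
errors = absolute_pct_errors(predicted, actual)
print(statistics.mean(errors), statistics.median(errors))
```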


Figures 1-4 are derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Fair’s website.




Given the foregoing, you might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

Could I do better? Well, I’ve done better, with the simple model that I devised to estimate the Rahn Curve. It’s described in “The Rahn Curve in Action”, which is part III of “Economic Growth Since World War II”.

The theory behind the Rahn Curve is simple — but not simplistic. A relatively small government with powers limited mainly to the protection of citizens and their property is worth more than its cost to taxpayers because it fosters productive economic activity (not to mention liberty). But additional government spending hinders productive activity in many ways, which are discussed in Daniel Mitchell’s paper, “The Impact of Government Spending on Economic Growth.” (I would add to Mitchell’s list the burden of regulatory activity, which grows even when government does not.)

What does the Rahn Curve look like? Mitchell estimates this relationship between government spending and economic growth:

[Figure: Mitchell’s estimated Rahn curve]

The curve is dashed rather than solid at low values of government spending because it has been decades since the governments of developed nations have spent as little as 20 percent of GDP. But as Mitchell and others note, the combined spending of governments in the U.S. was 10 percent (and less) until the eve of the Great Depression. And it was in the low-spending, laissez-faire era from the end of the Civil War to the early 1900s that the U.S. enjoyed its highest sustained rate of economic growth.

Elsewhere, I estimated a Rahn curve that spans most of the history of the United States. I came up with this relationship (terms modified slightly for simplicity):

Yg = 0.054 - 0.066F

To be precise, it’s the annualized rate of growth over the most recent 10-year span (Yg), as a function of F (fraction of GDP spent by governments at all levels) in the preceding 10 years. The relationship is lagged because it takes time for government spending (and related regulatory activities) to wreak their counterproductive effects on economic activity. Also, I include transfer payments (e.g., Social Security) in my measure of F because there’s no essential difference between transfer payments and many other kinds of government spending. They all take money from those who produce and give it to those who don’t (e.g., government employees engaged in paper-shuffling, unproductive social-engineering schemes, and counterproductive regulatory activities).

When F is greater than the amount needed for national defense and domestic justice — no more than 0.1 (10 percent of GDP) — it discourages productive, growth-producing, job-creating activity. And because government spending weighs most heavily on taxpayers with above-average incomes, higher rates of F also discourage saving, which finances growth-producing investments in new businesses, business expansion, and capital (i.e., new and more productive business assets, both physical and intellectual).
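Applying the estimated relationship is trivial. A quick sketch, using the coefficients above and an illustrative value of F:

```python
def rahn_growth(f: float) -> float:
    """Predicted annualized 10-year rate of real GDP growth (Yg), given F,
    the fraction of GDP spent by all levels of government over the
    preceding 10 years: Yg = 0.054 - 0.066*F."""
    return 0.054 - 0.066 * f

# With government spending at 35 percent of GDP, predicted growth is about 3.1 percent.
print(round(rahn_growth(0.35), 4))
```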

I’ve taken a closer look at the post-World War II numbers because of the marked decline in the rate of growth since the end of the war (Figure 2).

Here’s the revised result, which accounts for more variables:

Yg = 0.0275 - 0.340F + 0.0773A - 0.000336R - 0.131P

where:

Yg = real rate of GDP growth in a 10-year span (annualized)

F = fraction of GDP spent by governments at all levels during the preceding 10 years

A = the constant-dollar value of private nonresidential assets (business assets) as a fraction of GDP, averaged over the preceding 10 years

R = average number of Federal Register pages, in thousands, for the preceding 10-year period

P = growth in the CPI-U during the preceding 10 years (annualized).

The r-squared of the equation is 0.74, and the p-value of the F-statistic is 1.60E-13. The p-values of the intercept and coefficients are 0.093, 3.98E-08, 4.83E-09, 6.05E-07, and 0.0071. The standard error of the estimate is 0.0049, that is, about half a percentage point.
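The revised equation is just as easy to apply. In this sketch the input values are hypothetical, chosen only to illustrate plausible magnitudes of F, A, R, and P:

```python
def rahn_growth_postwar(f: float, a: float, r: float, p: float) -> float:
    """Predicted annualized 10-year rate of real GDP growth, per the revised
    post-World War II equation:
    Yg = 0.0275 - 0.340*F + 0.0773*A - 0.000336*R - 0.131*P."""
    return 0.0275 - 0.340 * f + 0.0773 * a - 0.000336 * r - 0.131 * p

# Hypothetical inputs: F = 0.36, A = 2.0 (business assets twice GDP),
# R = 70 (thousands of Federal Register pages), P = 0.02 (2 percent inflation).
print(round(rahn_growth_postwar(0.36, 2.0, 70, 0.02), 4))
```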

Here’s how the equation stacks up against actual 10-year rates of real GDP growth:

What does the new equation portend for the next 10 years? Based on the values of F, A, R, and P for 2008-2017, the real rate of growth for the next 10 years will be about 2.0 percent.

There are signs of hope, however. The year-over-year rates of real growth in the four most recent quarters (2017Q4 – 2018Q3) were 2.4, 2.6, 2.9, and 3.0 percent, as against the dismal rates of 1.4, 1.2, 1.5, and 1.8 percent for the four quarters of 2016 — Obama’s final year in office. A possible explanation is the election of Donald Trump and the well-founded belief that his tax and regulatory policies would be more business-friendly.

I took the data set that I used to estimate the new equation and made a series of out-of-sample estimates of growth over the next 10 years. I began with the data for 1946-1964 to estimate the growth for 1965-1974. I continued by taking the data for 1946-1965 to estimate the growth for 1966-1975, and so on, until I had estimated the growth for every 10-year period from 1965-1974 through 2008-2017. In other words, like Professor Fair, I updated my model to reflect new data, and I estimated the rate of economic growth in the future. How did I do? Here’s a first look:


For ease of comparison, I made the scale of the vertical axis of figure 5 the same as the scale of the vertical axis of figure 2. It’s obvious that my estimate of the Rahn Curve does a much better job of predicting the real rate of GDP growth than does Fair’s model.
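The expanding-window procedure described above can be sketched in a few lines. This is a schematic version using synthetic data, not my actual data set:

```python
import numpy as np

def expanding_window_forecasts(X, y, first_train_size):
    """Re-estimate a linear regression on an expanding window of observations,
    forecasting the next (out-of-sample) observation each time -- the analogue
    of updating the model as each new span of data arrives."""
    forecasts = []
    for t in range(first_train_size, len(y)):
        A = np.column_stack([np.ones(t), X[:t]])          # intercept + regressors
        coef, *_ = np.linalg.lstsq(A, y[:t], rcond=None)  # OLS fit on data so far
        forecasts.append(coef[0] + X[t] @ coef[1:])       # one-step-ahead forecast
    return np.array(forecasts)

# Synthetic stand-in data: one regressor with a known linear relationship.
rng = np.random.default_rng(0)
X = rng.uniform(0.2, 0.4, size=(30, 1))
y = 0.05 - 0.07 * X[:, 0] + rng.normal(0.0, 0.001, 30)
preds = expanding_window_forecasts(X, y, first_train_size=19)
print(preds)
```

Each pass re-fits the model on everything observed so far and predicts only the next period, so every prediction is genuinely out of sample.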

Not only that, but my model is less biased:


The systematic bias reflected in figure 6 is far weaker than the systematic bias in Fair’s estimates (figure 1).

Finally, unlike Fair’s model (figure 4), my model captures the downward trend in the rate of real growth:


The moral of the story: It’s futile to build complex models of the economy. They can’t begin to capture the economy’s real complexity, and they’re likely to obscure the important variables — the ones that will determine the future course of economic growth.

A final note: Elsewhere (e.g., here) I’ve disparaged economic aggregates, of which GDP is the apotheosis. And yet I’ve built this post around estimates of GDP. Am I contradicting myself? Not really. There’s a rough consistency in measures of GDP across time, and I’m not pretending that GDP represents anything but an estimate of the monetary value of those products and services to which monetary values can be ascribed.

As a practical matter, then, if you want to know the likely future direction and value of GDP, stick with simple estimation techniques like the one I’ve demonstrated here. Don’t get bogged down in the inconclusive minutiae of a model like Professor Fair’s.

Wildfires and “Climate Change”, Again

In view of the current hysteria about the connection between wildfires and “climate change”, I must point readers to a three-month-old post on the subject. The connection is nil, just like the bogus connection between tropical cyclone activity and “climate change”.

Ford, Kavanaugh, and Probability

I must begin by quoting the ever-quotable Theodore Dalrymple. In closing a post in which he addresses (inter alia) the high-tech low-life lynching of Brett Kavanaugh, he writes:

The most significant effect of the whole sorry episode is the advance of the cause of what can be called Femaoism, an amalgam of feminism and Maoism. For some people, there is a lot of pleasure to be had in hatred, especially when it is made the meaning of life.

Kavanaugh’s most “credible” accuser — Christine Blasey Ford (CBF) — was incredible (in the literal meaning of the word) for many reasons, some of which are given in the items listed at the end of “Where I Stand on Kavanaugh”.

Arnold Kling gives what is perhaps the best reason for believing Kavanaugh’s denial of CBF’s accusation, a reason that occurred to me at the time:

[Kavanaugh] came out early and emphatically with his denial. This risked having someone corroborate the accusation, which would have irreparably ruined his career. If he did it, it was much safer to own it than to attempt to get away with lying about it. If he lied, chances are he would be caught–at some point, someone would corroborate her story. The fact that he took that risk, along with the fact that there was no corroboration, even from her friend, suggests to me that he is innocent.

What does any of this have to do with probability? Kling’s post is about the results of a survey conducted by Scott Alexander, the proprietor of Slate Star Codex. Kling opens with this:

Scott Alexander writes,

I asked readers to estimate their probability that Judge Kavanaugh was guilty of sexually assaulting Dr. Ford. I got 2,350 responses (thank you, you are great). Here was the overall distribution of probabilities.

… A classical statistician would have refused to answer this question. In classical statistics, he is either guilty or he is not. A probability statement is nonsense. For a Bayesian, it represents a “degree of belief” or something like that. Everyone who answered the poll … either is a Bayesian or consented to act like one.

As a staunch adherent of the classical position (though I am not a statistician), I agree with Kling.

But the real issue in the recent imbroglio surrounding Kavanaugh wasn’t the “probability” that he had committed or attempted some kind of assault on CBF. The real issue was the ideological direction of the Supreme Court:

  1. With the departure of Anthony Kennedy from the Court, there arose an opportunity to secure a reliably conservative (constitutionalist) majority. (Assuming that Chief Justice Roberts remains in the fold.)
  2. Kavanaugh is seen to be a reliable constitutionalist.
  3. With Kavanaugh in the conservative majority, the average age of that majority would be (and now is) 63; whereas, the average age of the “liberal” minority is 72, and the two oldest justices (at 85 and 80) are “liberals”.
  4. Though the health and fitness of individual justices isn’t well known, there are more opportunities in the coming years for the enlargement of the Court’s conservative wing than for the enlargement of its “liberal” wing.
  5. This is bad news for the left because it dims the prospects for social and economic revolution via judicial decree — a long-favored leftist strategy. In fact, it brightens the prospects for the rollback of some of the left’s legislative and judicial “accomplishments”.

Thus the transparently fraudulent attacks on Brett Kavanaugh by desperate leftists and “tools” like CBF. That is to say, except for those who hold a reasoned position (e.g., Arnold Kling and me), one’s stance on Kavanaugh is driven by one’s politics.

Scott Alexander’s post supports my view:

Here are the results broken down by party (blue is Democrats, red is Republicans):

And here are the results broken down by gender (blue is men, pink is women):

Given that women are disproportionately Democrat, relative to men, the second graph simply tells us the same thing as the first graph: The “probability” of Kavanaugh’s “guilt” is strongly linked to political persuasion. (I am heartened to see that a large chunk of the female population hasn’t succumbed to Femaoism.)

Probability, in the proper meaning of the word, has nothing to do with the question of Kavanaugh’s “guilt”. A feeling or inclination isn’t a probability; it’s just a feeling or inclination. Putting a number on it is false quantification. Scott Alexander should know better.

Why I Don’t Believe in “Climate Change”


There are lots of reasons to disbelieve in “climate change”, that is, a measurable and statistically significant influence of human activity on the “global” temperature. Many of the reasons can be found at my page on the subject — in the text, the list of related readings, and the list of related posts. Here’s the main one: Surface temperature data — the basis for the theory of anthropogenic global warming — simply do not support the theory.

As Dr. Tim Ball points out:

A fascinating 2006 paper by Essex, McKitrick, and Andresen asked, “Does a Global Temperature Exist?” Their introduction sets the scene:

It arises from projecting a sampling of the fluctuating temperature field of the Earth onto a single number (e.g. [3], [4]) at discrete monthly or annual intervals. Proponents claim that this statistic represents a measurement of the annual global temperature to an accuracy of ±0.05 °C (see [5]). Moreover, they presume that small changes in it, up or down, have direct and unequivocal physical meaning.

The word “sampling” is important because, statistically, a sample has to be representative of a population. There is no way that a sampling of the “fluctuating temperature field of the Earth,” is possible….

… The reality is we have fewer stations now than in 1960 as NASA GISS explain (Figure 1a, # of stations and 1b, Coverage)….

Not only that, but the accuracy is terrible. US stations are supposedly the best in the world, but as Anthony Watts’s project showed, only 7.9% of them achieve better than 1°C accuracy. Look at the quote above. It says the temperature statistic is accurate to ±0.05°C. In fact, for most of the 406 years that instrumental measures of temperature have been available (since 1612), they were incapable of yielding measurements better than 0.5°C.

The coverage numbers (1b) are meaningless because there are only weather stations for about 15% of the Earth’s surface. There are virtually no stations for

  • 70% of the world that is oceans,
  • 20% of the land surface that are mountains,
  • 20% of the land surface that is forest,
  • 19% of the land surface that is desert and,
  • 19% of the land surface that is grassland.

The result is we have inadequate measures in terms of the equipment and how it fits the historic record, combined with a wholly inadequate spatial sample. The inadequacies are acknowledged by the creation of the claim by NASA GISS and all promoters of anthropogenic global warming (AGW) that a station is representative of a 1200 km radius region.

I plotted an illustrative example on a map of North America (Figure 2).


Figure 2

Notice that the claim for the station in eastern North America includes the subarctic climate of southern James Bay and the subtropical climate of the Carolinas.

However, it doesn’t end there because this is only a meaningless temperature measured in a Stevenson Screen between 1.25 m and 2 m above the surface….

The Stevenson Screen data [are] inadequate for any meaningful analysis or as the basis of a mathematical computer model in this one sliver of the atmosphere, but there [are] even less [data] as you go down or up. The models create a surface grid that becomes cubes as you move up. The number of squares in the grid varies with the naïve belief that a smaller grid improves the models. It would if there [were] adequate data, but that doesn’t exist. The number of cubes is determined by the number of layers used. Again, theoretically, more layers would yield better results, but it doesn’t matter because there are virtually no spatial or temporal data….

So far, I have talked about the inadequacy of the temperature measurements in light of the two- and three-dimensional complexities of the atmosphere and oceans. However, one source identifies the most important variables for the models used as the basis for energy and environmental policies across the world.

“Sophisticated models, like Coupled General Circulation Models, combine many processes to portray the entire climate system. The most important components of these models are the atmosphere (including air temperature, moisture and precipitation levels, and storms); the oceans (measurements such as ocean temperature, salinity levels, and circulation patterns); terrestrial processes (including carbon absorption, forests, and storage of soil moisture); and the cryosphere (both sea ice and glaciers on land). A successful climate model must not only accurately represent all of these individual components, but also show how they interact with each other.”

The last line is critical and yet impossible. The temperature data [are] the best we have, and yet [they are] completely inadequate in every way. Pick any of the variables listed, and you find there [are] virtually no data. The answer to the question, “what are we really measuring,” is virtually nothing, and what we measure is not relevant to anything related to the dynamics of the atmosphere or oceans.

I am especially struck by Dr. Ball’s observation that the surface-temperature record applies to about 15 percent of Earth’s surface. Not only that, but as suggested by Dr. Ball’s figure 2, that 15 percent is poorly sampled.

Take the National Weather Service station for Austin, Texas, which is located 2.7 miles from my house. The station is on the grounds of Camp Mabry, a Texas National Guard base near the center of Austin, the fastest-growing large city in the U.S. The base is adjacent to a major highway (Texas Loop 1) that traverses Austin. The weather station is about 1/4 mile from the highway, 100 feet from a paved road on the base, and near a complex of buildings and parking areas.

Here’s a ground view of the weather station:

And here’s an aerial view; the weather station is the tan rectangle at the center of the photo:

As I have shown elsewhere, the general rise in temperatures recorded at the weather station over the past several decades is fully explained by the urban-heat-island effect due to the rise in Austin’s population during those decades.

Further, there is a consistent difference in temperature and rainfall between my house and Camp Mabry. My house is located farther from the center of Austin — northwest of Camp Mabry — in a topographically different area. The topography in my part of Austin is typical of the Texas Hill Country, which begins about a mile east of my house and covers a broad swath of land stretching as far as 250 miles from Austin.

The contrast is obvious in the next photo. Camp Mabry is at the “1” (for Texas Loop 1) near the lower edge of the image. Topographically, it belongs with the flat part of Austin that lies mostly east of Loop 1. It is unrepresentative of the huge chunk of Austin and environs that lies to its north and west.

Getting down to cases: in the past summer, daily highs recorded at Camp Mabry hit 100 degrees or more 52 times, but the daily high at my house reached 100 or more only on the handful of days when Camp Mabry reached 106-110. That’s consistent with another observation; namely, that the daily high at my house is generally 6 degrees lower than the daily high at Camp Mabry when the latter is above 90 degrees.

As for rainfall, my house seems to be in a different ecosystem than Camp Mabry’s. Take September and October of this year: 15.7 inches of rain fell at Camp Mabry, as against 21.0 inches at my house. The higher totals at my house are typical, and are due to a phenomenon called orographic lift. It affects areas to the north and west of Camp Mabry, but not Camp Mabry itself.

So the climate at Camp Mabry is not my climate. Nor is the climate at Camp Mabry typical of a vast area in and around Austin, despite the use of Camp Mabry’s climate to represent that area.

There is another official weather station at Austin-Bergstrom International Airport, which is in the flatland 9.5 miles to the southeast of Camp Mabry. Its rainfall total for September and October was 12.8 inches — almost 3 inches less than at Camp Mabry — but its average temperatures for the two months were within a degree of Camp Mabry’s. Suppose Camp Mabry’s weather station went offline. The weather station at ABIA would then record temperatures and precipitation even less representative of those at my house and similar areas to the north and west.

Speaking of precipitation — it is obviously related to cloud cover. The more it rains, the cloudier it will be. The cloudier it is, the lower the temperature, other things being the same (e.g., locale). This is true for Austin:

12-month avg temp vs. precip

The correlation coefficient is highly significant, given the huge sample size. Note that the relationship is between precipitation in a given month and temperature a month later. Although cloud cover (and thus precipitation) has an immediate effect on temperature, precipitation has a residual effect in that wet ground absorbs more solar radiation than dry ground, so that there is less heat reflected from the ground to the air. The lagged relationship is strongest at 1 month, and considerably stronger than any relationship in which temperature leads precipitation.

I bring up this aspect of Austin’s climate because of a post by Anthony Watts (“Data: Global Temperatures Fell As Cloud Cover Rose in the 1980s and 90s”, Watts Up With That?, November 1, 2018):

I was reminded about a study undertaken by Clive Best and Euan Mearns looking at the role of cloud cover four years ago:

Clouds have a net average cooling effect on the earth’s climate. Climate models assume that changes in cloud cover are a feedback response to CO2 warming. Is this assumption valid? Following a study with Euan Mearns showing a strong correlation in UK temperatures with clouds, we looked at the global effects of clouds by developing a combined cloud and CO2 forcing model to study how variations in both cloud cover [8] and CO2 [14] data affect global temperature anomalies between 1983 and 2008. The model as described below gives a good fit to HADCRUT4 data with a Transient Climate Response (TCR) = 1.6±0.3°C. The 17-year hiatus in warming can then be explained as resulting from a stabilization in global cloud cover since 1998. An Excel spreadsheet implementing the model as described below can be downloaded from

The full post containing all of the detailed statistical analysis is here.

But this is the key graph:


Figure 1a showing the ISCCP global averaged monthly cloud cover from July 1983 to Dec 2008 overlaid in blue with Hadcrut4 monthly anomaly data. The fall in cloud cover coincides with a rapid rise in temperatures from 1983-1999. Thereafter the temperature and cloud trends have both flattened. The CO2 forcing from 1998 to 2008 increases by a further ~0.3 W/m2 which is evidence that changes in clouds are not a direct feedback to CO2 forcing.

In conclusion, natural cyclic change in global cloud cover has a greater impact on global average temperatures than CO2. There is little evidence of a direct feedback relationship between clouds and CO2. Based on satellite measurements of cloud cover (ISCCP), net cloud forcing (CERES) and CO2 levels (KEELING) we developed a model for predicting global temperatures. This results in a best-fit value for TCR = 1.4 ± 0.3°C. Summer cloud forcing has a larger effect in the northern hemisphere resulting in a lower TCR = 1.0 ± 0.3°C. Natural phenomena must influence clouds, although the details remain unclear; the CLOUD experiment has given hints that increased fluxes of cosmic rays may increase cloud seeding [19]. In conclusion, the gradual reduction in net cloud cover explains over 50% of global warming observed during the 80s and 90s, and the hiatus in warming since 1998 coincides with a stabilization of cloud forcing.

Why there was a decrease in cloud cover is another question of course.
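A miniature version of the Best and Mearns approach, a two-predictor least-squares fit of temperature anomalies to cloud cover and logarithmic CO2 forcing, can be sketched as follows. The series are synthetic stand-ins (the real analysis uses ISCCP cloud data and HADCRUT4 anomalies), and the coefficients are illustrative, not theirs:

```python
# Hedged sketch of a two-predictor regression in the spirit of the
# Best-Mearns model: temperature anomaly explained jointly by cloud
# cover and CO2 forcing. All series below are synthetic stand-ins.
import math
import random

random.seed(1)
months = 300
# Cloud cover (%) drifting slowly downward; CO2 (ppm) rising steadily.
cloud = [66.0 - 0.01 * t + random.gauss(0, 0.5) for t in range(months)]
co2 = [343.0 + 0.15 * t for t in range(months)]
# Standard logarithmic CO2 forcing, F = 5.35 ln(C/C0), in W/m^2.
forcing = [5.35 * math.log(c / co2[0]) for c in co2]
# Synthetic anomaly: cooling with cloud cover, warming with forcing.
temp = [-0.1 * (cl - 66.0) + 0.4 * f + random.gauss(0, 0.05)
        for cl, f in zip(cloud, forcing)]

def ols2(x1, x2, y):
    """Least-squares fit y ~ a*x1 + b*x2 + c via the normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((u - m1) ** 2 for u in x1)
    s22 = sum((v - m2) ** 2 for v in x2)
    s12 = sum((u - m1) * (v - m2) for u, v in zip(x1, x2))
    s1y = sum((u - m1) * (w - my) for u, w in zip(x1, y))
    s2y = sum((v - m2) * (w - my) for v, w in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    a = (s22 * s1y - s12 * s2y) / det
    b = (s11 * s2y - s12 * s1y) / det
    return a, b, my - a * m1 - b * m2

a, b, c = ols2(cloud, forcing, temp)
print(f"cloud coefficient: {a:.3f}  forcing coefficient: {b:.3f}")
```

On the usual logarithmic forcing formula, a fitted CO2 coefficient converts to a transient climate response as TCR ≈ coefficient × 5.35 ln 2 ≈ coefficient × 3.7 W/m².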

In addition to Paul Homewood’s piece, we have this WUWT story from 2012:

Spencer’s posited 1-2% cloud cover variation found

A paper published last week finds that cloud cover over China significantly decreased during the period 1954-2005. This finding is in direct contradiction to the theory of man-made global warming which presumes that warming allegedly from CO2 ‘should’ cause an increase in water vapor and cloudiness. The authors also find the decrease in cloud cover was not related to man-made aerosols, and thus was likely a natural phenomenon, potentially a result of increased solar activity via the Svensmark theory or other mechanisms.

Case closed. (Not for the first time.)

Hurricane Hysteria, Updated

In view of Hurricane Michael, and the attendant claims about the role of “climate change”, I have updated “Hurricane Hysteria”. The bottom line remains the same: Global measures of accumulated cyclone energy (ACE) do not support the view that there is a correlation between “climate change” and tropical cyclone activity.

Atheistic Scientism Revisited

I recently had the great pleasure of reading The Devil’s Delusion: Atheism and Its Scientific Pretensions, by David Berlinski. (Many thanks to Roger Barnett for recommending the book to me.) Berlinski, who knows far more about science than I do, writes with flair and scathing logic. I can’t do justice to his book, but I will try to convey its gist.

Before I do that, I must tell you that I enjoyed Berlinski’s book not only because of the author’s acumen and biting wit, but also because he agrees with me. (I suppose I should say, in modesty, that I agree with him.) I have argued against atheistic scientism in many blog posts (see below).

Here is my version of the argument against atheism in its briefest form (June 15, 2011):

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

There is no reasonable basis — and certainly no empirical one — on which to prefer atheism to deism or theism. Strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.

As for scientism, I call upon Friedrich Hayek:

[W]e shall, wherever we are concerned … with slavish imitation of the method and language of Science, speak of “scientism” or the “scientistic” prejudice…. It should be noted that, in the sense in which we shall use these terms, they describe, of course, an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed. The scientistic as distinguished from the scientific view is not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it. [The Counter Revolution Of Science]

As Berlinski amply illustrates and forcibly argues, atheistic scientism is rampant in the so-called sciences. I have reproduced below some key passages from Berlinski’s book. They are representative, but far from exhaustive (though I did nearly exhaust the publisher’s copy limit on the Kindle edition). I have forgone the block-quotation style for ease of reading, and have inserted triple asterisks to indicate (sometimes subtle) changes of topic.

*   *   *

Richard Dawkins, the author of The God Delusion, … is not only an intellectually fulfilled atheist, he is determined that others should be as full as he. A great many scientists are satisfied that at last someone has said out loud what so many of them have said among themselves: Scientific and religious belief are in conflict. They cannot both be right. Let us get rid of the one that is wrong….

Because atheism is said to follow from various scientific doctrines, literary atheists, while they are eager to speak their minds, must often express themselves in other men’s voices. Christopher Hitchens is an example. With forthcoming modesty, he has affirmed his willingness to defer to the world’s “smart scientists” on any matter more exigent than finger-counting. Were smart scientists to report that a strain of yeast supported the invasion of Iraq, Hitchens would, no doubt, conceive an increased respect for yeast….

If nothing else, the attack on traditional religious thought marks the consolidation in our time of science as the single system of belief in which rational men and women might place their faith, and if not their faith, then certainly their devotion. From cosmology to biology, its narratives have become the narratives. They are, these narratives, immensely seductive, so much so that looking at them with innocent eyes requires a very deliberate act. And like any militant church, this one places a familiar demand before all others: Thou shalt have no other gods before me.

It is this that is new; it is this that is important….

For scientists persuaded that there is no God, there is no finer pleasure than recounting the history of religious brutality and persecution. Sam Harris is in this regard especially enthusiastic, The End of Faith recounting in lurid but lingering detail the methods of torture used in the Spanish Inquisition….

Nonetheless, there is this awkward fact: The twentieth century was not an age of faith, and it was awful. Lenin, Stalin, Hitler, Mao, and Pol Pot will never be counted among the religious leaders of mankind….

… Just who has imposed on the suffering human race poison gas, barbed wire, high explosives, experiments in eugenics, the formula for Zyklon B, heavy artillery, pseudo-scientific justifications for mass murder, cluster bombs, attack submarines, napalm, intercontinental ballistic missiles, military space platforms, and nuclear weapons?

If memory serves, it was not the Vatican….

What Hitler did not believe and what Stalin did not believe and what Mao did not believe and what the SS did not believe and what the Gestapo did not believe and what the NKVD did not believe and what the commissars, functionaries, swaggering executioners, Nazi doctors, Communist Party theoreticians, intellectuals, Brown Shirts, Black Shirts, gauleiters, and a thousand party hacks did not believe was that God was watching what they were doing.

And as far as we can tell, very few of those carrying out the horrors of the twentieth century worried overmuch that God was watching what they were doing either.

That is, after all, the meaning of a secular society….

Richard Weikart, … in his admirable treatise, From Darwin to Hitler: Evolutionary Ethics, Eugenics, and Racism in Germany, makes clear what anyone capable of reading the German sources already knew: A sinister current of influence ran from Darwin’s theory of evolution to Hitler’s policy of extermination.

*   *   *

It is wrong, the nineteenth-century British mathematician W. K. Clifford affirmed, “always, everywhere, and for anyone, to believe anything upon insufficient evidence.” I am guessing that Clifford believed what he wrote, but what evidence he had for his belief, he did not say.

Something like Clifford’s injunction functions as the premise in a popular argument for the inexistence of God. If God exists, then his existence is a scientific claim, no different in kind from the claim that there is tungsten to be found in Bermuda. We cannot have one set of standards for tungsten and another for the Deity….

There remains the obvious question: By what standards might we determine that faith in science is reasonable, but that faith in God is not? It may well be that “religious faith,” as the philosopher Robert Todd Carroll has written, “is contrary to the sum of evidence,” but if religious faith is found wanting, it is reasonable to ask for a restatement of the rules by which “the sum of evidence” is computed….

… The concept of sufficient evidence is infinitely elastic…. What a physicist counts as evidence is not what a mathematician generally accepts. Evidence in engineering has little to do with evidence in art, and while everyone can agree that it is wrong to go off half-baked, half-cocked, or half-right, what counts as being baked, cocked, or right is simply too variable to suggest a plausible general principle….

Neither the premises nor the conclusions of any scientific theory mention the existence of God. I have checked this carefully. The theories are by themselves unrevealing. If science is to champion atheism, the requisite demonstration must appeal to something in the sciences that is not quite a matter of what they say, what they imply, or what they reveal.

*   *   *

The universe in its largest aspect is the expression of curved space and time. Four fundamental forces hold sway. There are black holes and various infernal singularities. Popping out of quantum fields, the elementary particles appear as bosons or fermions. The fermions are divided into quarks and leptons. Quarks come in six varieties, but they are never seen, confined as they are within hadrons by a force that perversely grows weaker at short distances and stronger at distances that are long. There are six leptons in four varieties. Depending on just how things are counted, matter has as its fundamental constituents twenty-four elementary particles, together with a great many fields, symmetries, strange geometrical spaces, and forces that are disconnected at one level of energy and fused at another, together with at least a dozen different forms of energy, all of them active.

… It is remarkably baroque. And it is promiscuously catholic. For the atheist persuaded that materialism offers him a no-nonsense doctrinal affiliation, materialism in this sense comes to the declaration of a barroom drinker that he will have whatever he’s having, no matter who he is or what he is having. What he is having is what he always takes, and that is any concept, mathematical structure, or vagrant idea needed to get on with it. If tomorrow, physicists determine that particle physics requires access to the ubiquity of the body of Christ, that doctrine would at once be declared a physical principle and treated accordingly….

What remains of the ideology of the sciences? It is the thesis that the sciences are true— who would doubt it?— and that only the sciences are true. The philosopher Michael Devitt thus argues that “there is only one way of knowing, the empirical way that is the basis of science.” An argument against religious belief follows at once on the assumptions that theology is not science and belief is not knowledge. If by means of this argument it also follows that neither mathematics, the law, nor the greater part of ordinary human discourse have a claim on our epistemological allegiance, they must be accepted as casualties of war.

*   *   *

The claim that the existence of God should be treated as a scientific question stands on a destructive dilemma: If by science one means the great theories of mathematical physics, then the demand is unreasonable. We cannot treat any claim in this way. There is no other intellectual activity in which theory and evidence have reached this stage of development….

Is there a God who has among other things created the universe? “It is not by its conclusions,” C. F. von Weizsäcker has written in The Relevance of Science, “but by its methodological starting point that modern science excludes direct creation. Our methodology would not be honest if this fact were denied . . . such is the faith in the science of our time, and which we all share” (italics added).

In science, as in so many other areas of life, faith is its own reward….

The medieval Arabic argument known as the kalam is an example of the genre [cosmological argument].

Its first premise: Everything that begins to exist has a cause.

And its second: The universe began to exist.

And its conclusion: So the universe had a cause.

This is not by itself an argument for the existence of God. It is suggestive without being conclusive. Even so, it is an argument that in a rush covers a good deal of ground carelessly denied by atheists. It is one thing to deny that there is a God; it is quite another to deny that the universe has a cause….
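Whatever one makes of its premises, the kalam's logical form is valid: the conclusion follows by a single application of universal instantiation and modus ponens. A minimal sketch in Lean 4 (the names Entity, beginsToExist, hasCause, and u are my own labels, not anything from the source):

```lean
-- Sketch: the kalam syllogism's logical form, checked by Lean 4.
variable (Entity : Type)
variable (beginsToExist hasCause : Entity → Prop)

example (u : Entity)
    -- Premise 1: everything that begins to exist has a cause.
    (p1 : ∀ e : Entity, beginsToExist e → hasCause e)
    -- Premise 2: the universe began to exist.
    (p2 : beginsToExist u) :
    -- Conclusion: the universe has a cause.
    hasCause u :=
  p1 u p2
```

The formal validity is the uncontroversial part; as the text notes, all of the argumentative weight rests on the premises.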

The universe, orthodox cosmologists believe, came into existence as the expression of an explosion— what is now called the Big Bang. The word explosion is a sign that words have failed us, as they so often do, for it suggests a humanly comprehensible event— a gigantic explosion or a stupendous eruption. This is absurd. The Big Bang was not an event taking place at a time or in a place. Space and time were themselves created by the Big Bang, the measure along with the measured….

Whatever its name, as far as most physicists are concerned, the Big Bang is now a part of the established structure of modern physics….

… Many physicists have found the idea that the universe had a beginning alarming. “So long as the universe had a beginning,” Stephen Hawking has written, “we could suppose it had a creator.” God forbid!

… Big Bang cosmology has been confirmed by additional evidence, some of it astonishing. The physicists Arno Penzias and Robert Wilson observed what seemed to be the living remnants of the Big Bang— and after 14 billion years!— when, in 1965, they detected, by means of a hum in their equipment, a signal in the night sky they could only explain as the remnants of the microwave radiation background left over from the Big Bang itself.

More than anything else, this observation, and the inference it provoked, persuaded physicists that the structure of Big Bang cosmology was anchored into fact….

“Perhaps the best argument in favor of the thesis that the Big Bang supports theism,” the astrophysicist Christopher Isham has observed, “is the obvious unease with which it is greeted by some atheist physicists. At times this has led to scientific ideas, such as continuous creation or an oscillating universe, being advanced with a tenacity which so exceeds their intrinsic worth that one can only suspect the operation of psychological forces lying very much deeper than the usual academic desire of a theorist to support his or her theory.”…

… With the possibility of inexistence staring it in the face, why does the universe exist? To say that the universe just is, as Stephen Hawking has said, is to reject out of hand any further questions. We know that it is. It is right there in plain sight. What philosophers such as ourselves wish to know is why it is. It may be that at the end of these inquiries we will answer our own question by saying that the universe exists for no reason whatsoever. At the end of these inquiries, and not the beginning….

Among physicists, the question of how something emerged from nothing has one decisive effect: It loosens their tongues. “One thing [that] is clear,” a physicist writes, “in our framing of questions such as ‘How did the Universe get started?’ is that the Universe was self-creating. This is not a statement on a ‘cause’ behind the origin of the Universe, nor is it a statement on a lack of purpose or destiny. It is simply a statement that the Universe was emergent, that the actual Universe probably derived from an indeterminate sea of potentiality that we call the quantum vacuum, whose properties may always remain beyond our current understanding.”

It cannot be said that “an indeterminate sea of potentiality” has anything like the clarifying effect needed by the discussion, and indeed, except for sheer snobbishness, physicists have offered no reason to prefer this description of the Source of Being to the one offered by Abu al-Hasan al-Ashari in ninth-century Baghdad. The various Islamic versions of that indeterminate sea of being he rejected in a spasm of fierce disgust. “We confess,” he wrote, “that God is firmly seated on his throne. We confess that God has two hands, without asking how. We confess that God has two eyes, without asking how. We confess that God has a face.”…

Proposing to show how something might emerge from nothing, [the physicist Victor Stenger] introduces “another universe [that] existed prior to ours that tunneled through . . . to become our universe. Critics will argue that we have no way of observing such an earlier universe, and so this is not very scientific” (italics added). This is true. Critics will do just that. Before they do, they will certainly observe that Stenger has completely misunderstood the terms of the problem that he has set himself, and that far from showing how something can arise from nothing, he has shown only that something might arise from something else. This is not an observation that has ever evoked a firestorm of controversy….

… [A]ccording to the many-worlds interpretation [of quantum mechanics], at precisely the moment a measurement is made, the universe branches into two or more universes. The cat who was half dead and half alive gives rise to two separate universes, one containing a cat who is dead, the other containing a cat who is alive. The new universes cluttering up creation embody the quantum states that were previously in a state of quantum superposition.

The many-worlds interpretation of quantum mechanics is rather like the incarnation. It appeals to those who believe in it, and it rewards belief in proportion to which belief is sincere….

No less than the doctrines of religious belief, the doctrines of quantum cosmology are what they seem: biased, partial, inconclusive, and largely in the service of passionate but unexamined conviction.

*   *   *

The cosmological constant is a number controlling the expansion of the universe. If it were negative, the universe would appear doomed to contract in upon itself, and if positive, equally doomed to expand out from itself. Like the rest of us, the universe is apparently doomed no matter what it does. And here is the odd point: If the cosmological constant were larger than it is, the universe would have expanded too quickly, and if smaller, it would have collapsed too early, to permit the appearance of living systems….

“Scientists,” the physicist Paul Davies has observed, “are slowly waking up to an inconvenient truth— the universe looks suspiciously like a fix. The issue concerns the very laws of nature themselves. For 40 years, physicists and cosmologists have been quietly collecting examples of all too convenient ‘coincidences’ and special features in the underlying laws of the universe that seem to be necessary in order for life, and hence conscious beings, to exist. Change any one of them and the consequences would be lethal.”….

Why? Yes, why?

An appeal to still further physical laws is, of course, ruled out on the grounds that the fundamental laws of nature are fundamental. An appeal to logic is unavailing. The laws of nature do not seem to be logical truths. The laws of nature must be intrinsically rich enough to specify the panorama of the universe, and the universe is anything but simple. As Newton remarks, “Blind metaphysical necessity, which is certainly the same always and everywhere, could produce no variety of things.”

If the laws of nature are neither necessary nor simple, why, then, are they true?

Questions about the parameters and laws of physics form a single insistent question in thought: Why are things as they are when what they are seems anything but arbitrary?

One answer is obvious. It is the one that theologians have always offered: The universe looks like a put-up job because it is a put-up job.

*   *   *

Any conception of a contingent deity, Aquinas argues, is doomed to fail, and it is doomed to fail precisely because whatever He might do to explain the existence of the universe, His existence would again require an explanation. “Therefore, not all beings are merely possible, but there must exist something the existence of which is necessary.”…

… “We feel,” Wittgenstein wrote, “that even when all possible scientific questions have been answered, the problems of life remain completely untouched.” Those who do feel this way will see, following Aquinas, that the only inference calculated to overcome the way things are is one directed toward the way things must be….

“The key difference between the radically extravagant God hypothesis,” [Dawkins] writes, “and the apparently extravagant multiverse hypothesis, is one of statistical improbability.”

It is? I had no idea, the more so since Dawkins’s very next sentence would seem to undercut the sentence he has just written. “The multiverse, for all that it is extravagant, is simple,” because each of its constituent universes “is simple in its fundamental laws.”

If this is true for each of those constituent universes, then it is true for our universe as well. And if our universe is simple in its fundamental laws, what on earth is the relevance of Dawkins’s argument?

Simple things, simple explanations, simple laws, a simple God.

Bon appétit.

*   *   *

As a rhetorical contrivance, the God of the Gaps makes his effect contingent on a specific assumption: that whatever the gaps, they will in the course of scientific research be filled…. Western science has proceeded by filling gaps, but in filling them, it has created gaps all over again. The process is inexhaustible. Einstein created the special theory of relativity to accommodate certain anomalies in the interpretation of Clerk Maxwell’s theory of the electromagnetic field. Special relativity led directly to general relativity. But general relativity is inconsistent with quantum mechanics, the largest visions of the physical world alien to one another. Understanding has improved, but within the physical sciences, anomalies have grown great, and what is more, anomalies have grown great because understanding has improved….

… At the very beginning of his treatise Vertebrate Paleontology and Evolution, Robert Carroll observes quite correctly that “most of the fossil record does not support a strictly gradualistic account” of evolution. A “strictly gradualistic” account is precisely what Darwin’s theory demands: It is the heart and soul of the theory.

But by the same token, there are no laboratory demonstrations of speciation either, millions of fruit flies coming and going while never once suggesting that they were destined to appear as anything other than fruit flies. This is the conclusion suggested as well by more than six thousand years of artificial selection, the practice of barnyard and backyard alike. Nothing can induce a chicken to lay a square egg or to persuade a pig to develop wheels mounted on ball bearings….

… In a research survey published in 2001, and widely ignored thereafter, the evolutionary biologist Joel Kingsolver reported that in sample sizes of more than one thousand individuals, there was virtually no correlation between specific biological traits and either reproductive success or survival. “Important issues about selection,” he remarked with some understatement, “remain unresolved.”

Of those important issues, I would mention prominently the question whether natural selection exists at all.

Computer simulations of Darwinian evolution fail when they are honest and succeed only when they are not. Thomas Ray has for years been conducting computer experiments in an artificial environment that he has designated Tierra. Within this world, a shifting population of computer organisms meet, mate, mutate, and reproduce.

Sandra Blakeslee, writing for the New York Times, reported the results under the headline “Computer ‘Life Form’ Mutates in an Evolution Experiment: Natural Selection Is Found at Work in a Digital World.”

Natural selection found at work? I suppose so, for as Blakeslee observes with solemn incomprehension, “the creatures mutated but showed only modest increases in complexity.” Which is to say, they showed nothing of interest at all. This is natural selection at work, but it is hardly work that has worked to intended effect.

What these computer experiments do reveal is a principle far more penetrating than any that Darwin ever offered: There is a sucker born every minute….

… Daniel Dennett, like Mexican food, does not fail to come up long after he has gone down. “Contemporary biology,” he writes, “has demonstrated beyond all reasonable doubt that natural selection— the process in which reproducing entities must compete for finite resources and thereby engage in a tournament of blind trial and error from which improvements automatically emerge— has the power to generate breathtakingly ingenious designs” (italics added).

These remarks are typical in their self-enchanted self-confidence. Nothing in the physical sciences, it goes without saying— right?— has been demonstrated beyond all reasonable doubt. The phrase belongs to a court of law. The thesis that improvements in life appear automatically represents nothing more than Dennett’s conviction that living systems are like elevators: If their buttons are pushed, they go up. Or down, as the case may be. Although Darwin’s theory is very often compared favorably to the great theories of mathematical physics on the grounds that evolution is as well established as gravity, very few physicists have been heard observing that gravity is as well established as evolution. They know better and they are not stupid….

… The greater part of the debate over Darwin’s theory is not in service to the facts. Nor to the theory. The facts are what they have always been: They are unforthcoming. And the theory is what it always was: It is unpersuasive. Among evolutionary biologists, these matters are well known. In the privacy of the Susan B. Anthony faculty lounge, they often tell one another with relief that it is a very good thing the public has no idea what the research literature really suggests.

“Darwin?” a Nobel laureate in biology once remarked to me over his bifocals. “That’s just the party line.”

In the summer of 2007, Eugene Koonin, of the National Center for Biotechnology Information at the National Institutes of Health, published a paper entitled “The Biological Big Bang Model for the Major Transitions in Evolution.”

The paper is refreshing in its candor; it is alarming in its consequences. “Major transitions in biological evolution,” Koonin writes, “show the same pattern of sudden emergence of diverse forms at a new level of complexity” (italics added). Major transitions in biological evolution? These are precisely the transitions that Darwin’s theory was intended to explain. If those “major transitions” represent a “sudden emergence of new forms,” the obvious conclusion to draw is not that nature is perverse but that Darwin was wrong….

Koonin is hardly finished. He has just started to warm up. “In each of these pivotal nexuses in life’s history,” he goes on to say, “the principal ‘types’ seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate ‘grades’ or intermediate forms between different types are detectable.”…

… [H]is views are simply part of a much more serious pattern of intellectual discontent with Darwinian doctrine. Writing in the 1960s and 1970s, the Japanese mathematical biologist Motoo Kimura argued that on the genetic level— the place where mutations take place— most changes are selectively neutral. They do nothing to help an organism survive; they may even be deleterious…. Kimura was perfectly aware that he was advancing a powerful argument against Darwin’s theory of natural selection. “The neutral theory asserts,” he wrote in the introduction to his masterpiece, The Neutral Theory of Molecular Evolution, “that the great majority of evolutionary changes at the molecular level, as revealed by comparative studies of protein and DNA sequences, are caused not by Darwinian selection but by random drift of selectively neutral or nearly neutral mutations” (italics added)….

… Writing in the Proceedings of the National Academy of Sciences, the evolutionary biologist Michael Lynch observed that “Dawkins’s agenda has been to spread the word on the awesome power of natural selection.” The view that results, Lynch remarks, is incomplete and therefore “profoundly misleading.” Lest there be any question about Lynch’s critique, he makes the point explicitly: “What is in question is whether natural selection is a necessary or sufficient force to explain the emergence of the genomic and cellular features central to the building of complex organisms.”…

When asked what he was in awe of, Christopher Hitchens responded that his definition of an educated person is that you have some idea how ignorant you are. This seems very much as if Hitchens were in awe of his own ignorance, in which case he has surely found an object worthy of his veneration.

*   *   *

Do read the whole thing. It will take you only a few hours. And it will remind you — as we badly need reminding these days — that sanity reigns in some corners of the universe.

Related posts:

Same Old Story, Same Old Song and Dance
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Thing about Science
Evolution and Religion
Words of Caution for Scientific Dogmatists
The Legality of Teaching Intelligent Design
Science, Logic, and God
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
The Big Bang and Atheism
Atheism, Religion, and Science Redux
Religion as Beneficial Evolutionary Adaptation
A Non-Believer Defends Religion
The Greatest Mystery
Landsburg Is Half-Right
Evolution, Human Nature, and “Natural Rights”
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Religion on the Left
Scientism, Evolution, and the Meaning of Life
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Some Thoughts about Evolution
Rationalism, Empiricism, and Scientific Knowledge
Fine-Tuning in a Wacky Wrapper
Beating Religion with the Wrong End of the Stick
Quantum Mechanics and Free Will
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
The Fragility of Knowledge
Altruism, One More Time
Religion, Creation, and Morality
Evolution, Intelligence, and Race

New Pages

In case you haven’t noticed the list in the right sidebar, I have converted several classic posts to pages, for ease of access. Some have new names; many combine several posts on the same subject:

Abortion Q & A

Climate Change

Constitution: Myths and Realities

Economic Growth Since World War II

Keynesian Multiplier: Fiction vs. Fact

Wildfires and “Climate Change”

Regarding the claim that there are more wildfires because of “climate change”:

[Chart: estimated annual number of fires]

In case the relationship isn’t obvious, here it is:

[Chart: number of fires plotted against the global temperature anomaly]

Estimates of the number of fires are from the National Fire Protection Association, Number of Fires by Type of Fire. Specifically, the estimates are the sum of the columns for “Outside of Structures with Value Involved but no vehicle (outside storage crops, timber, etc.)” and “Brush, Grass, Wildland (excluding crops and timber), with no value or loss involved”.

Estimates of the global temperature anomalies are annual averages of monthly satellite readings for the lower troposphere, published by the Earth System Science Center of the University of Alabama in Huntsville.
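The comparison described above — annual fire estimates set against annual temperature anomalies — amounts to a simple correlation between two yearly series. A minimal sketch of that computation, using placeholder numbers rather than the actual NFPA and UAH figures:

```python
# Sketch: correlate annual fire estimates with annual temperature anomalies.
# The series below are hypothetical placeholders, NOT the published NFPA/UAH
# data; substitute the real figures to reproduce the comparison in the post.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# One value per year (hypothetical).
fires = [680_000, 720_000, 650_000, 700_000, 660_000]  # estimated fire counts
anomaly = [0.10, 0.25, 0.05, 0.30, 0.15]               # temperature anomaly, deg C

print(round(pearson_r(fires, anomaly), 3))
```

A coefficient near zero would indicate no linear relationship between the two series — which is the point the charts above are meant to make visually.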