CO2 Fail (Revisited)

ADDENDUM BELOW

I observed, in November 2020, that there is no connection between CO2 emissions and the amount of CO2 in the atmosphere. This suggests that emissions have little or no effect on the concentration of CO2. A recent post at WUWT notes that emissions hit a record high in 2021. What the post doesn’t address is the relationship between emissions and the concentration of CO2 in the atmosphere.

See for yourself. Here’s the WUWT graph of emissions from energy combustion and industrial processes:

Here’s the record of atmospheric CO2:

It’s obvious that CO2 has been rising monotonically, with regular seasonal variations, while emissions have been rising irregularly — even declining and holding steady at times. This relationship (or lack thereof) supports the thesis that the rise in atmospheric CO2 is the result of warming, not its cause.

ADDENDUM (04/09/22):

Dr. Roy Spencer, in a post at his eponymous blog, writes:

[T]he greatest correlations are found with global (or tropical) surface temperature changes and estimated yearly anthropogenic emissions. Curiously, reversing the direction of causation between surface temperature and CO2 (yearly changes in SST [dSST/dt] being caused by increasing CO2) yields a very low correlation.

That is to say, temperature changes seem to drive CO2 levels, not the other way around (which is the conventional view).


Sources for CO2 levels:

https://gml.noaa.gov/ccgg/trends/gl_data.html

https://gml.noaa.gov/ccgg/trends/data.html


Related reading: Clyde Spencer, “Anthropogenic CO2 and the Expected Results from Eliminating It” [zero, zilch, zip, nada], Watts Up With That?, March 22, 2022

The Iron Law of Change

Change upsets settled relationships. If change is mutually agreed, the parties to it are more likely than not to have anticipated and planned for its effects. If change is imposed, the imposing parties will have only a dim understanding of its effects and the parties imposed upon will have only scant knowledge of its likely effects; in neither case will the effects of change be well anticipated or planned for.

Opposition to change is a wise first-order response.

Old News Incites a Rant

A correspondent sent me a link to a video about Greenland ice core records. He called the video an eye-opener, which rather surprised me because the man is a trained scientist and an experienced analyst of quantitative data. The video wasn’t at all an eye-opener for me. Here is my reply to the correspondent:

I began to look seriously at global warming ~2005, and used to write extensively about it. The info provided in the video is consistent with other observations, including ice-core measurements taken at Vostok, Antarctica. Here’s a related post, which includes the Vostok readings and much more: http://libertycorner.blogspot.com/2007/08/more-climate-stuff.html. Some of the other evidence that I have accrued is summarized here: https://politicsandprosperity.com/climate-change/.

Findings like those presented in the video seem to have no effect on the politics of “climate change”. It is a chimera, concocted by “scientists” who manipulate complex models (which have almost no predictive power) and, on the basis of those models, constantly adjust historical temperature readings to comport with what should have happened according to the models. (Thus “proving” the correctness of the models.) This kind of manipulation is widely known and well documented, as is the predictive failure of the models. But there is a “climate change” industry — a government-academic-media complex if you will — that has a life of its own, and it has transformed what should be a scientific issue into a secular religion. Due in no small part to the leftist leanings of public-school and university educators, tens of millions of American children and young adults have been brainwashed into believing that Earth is headed for a fiery denouement if “evil” things like fossil fuels aren’t banned. Being impressionable — not to mention scientifically and economically illiterate — they don’t question the pseudo-science that underlies “climate change” or consider the economic consequences of drastic anti-warming measures, which would yield (at best) a lowering of Earth’s average temperature by ~0.1 degree by 2100 in exchange for a return to the horse-and-buggy age.

Here’s what I left unsaid:

The only possible way to defeat the “climate change” industry is to elect politicians who firmly reject its “intellectual” foundations and its draconian prescriptions. There was one such politician who managed to claw his way to the top in the U.S., but he was turned out of office, due in large part to the efforts of a powerful cabal (https://time.com/5936036/secret-2020-election-campaign/), which is heavily invested in an all-powerful central government that can shape the U.S. to its liking. It didn’t help that the politician was rude and crude, which turned off fastidious voters (like you) who didn’t think about or care about the consequences of a Democrat return to power.

End of rant.

The 96-Year Pause

Much has been written (pro and con) about the “pause” in global warming (a.k.a. “climate change”), that is, in the synthetic reconstruction of Earth’s “average” temperature, from 1997 to 2012. That pause was followed fairly quickly by a new one, which began in 2014 and is still in progress (if a pause can be said to exhibit progress).

Well, I have a better one for you, drawn from the official temperature records for Austin, Texas — the festering Blue wound in the otherwise healthy Red core of Texas. (Borrowing Winston Churchill’s formulation, Austin is the place up with which I have put for 18 years — and will soon quit, to my everlasting joy.)

There is a continuous record of temperatures in central Austin from January 1903 to the present. The following graph is derived from that record:

A brief inspection of the graph reveals the obvious fact that there was a pause in Austin’s average temperature from (at least) 1903 until sometime in 1999. Something happened in 1999 to break the pause. What was it? It couldn’t have been “global warming”, which its advocates trace back to the late 1800s (despite some prolonged cooling periods after that).

Austin’s weather station was relocated in 1999, which might have had something to do with it. More likely, the illusory jump in Austin’s temperature was caused by the urban heat-island effect induced by the growth of Austin’s population, which increased markedly from 1999 to 2000, and has been rising rapidly ever since.


Related reading:

Paul Homewood, “Washington’s New Climate ‘Normals’ Are Hotter“, Not a Lot of People Know That, May 6, 2021 (wherein the writer shows that the rise in D.C.’s new “normal” temperatures is due to the urban heat-island effect)

H. Sterling Burnett, “Sorry, CBS, NOAA’s ‘U.S. Climate Normals’ Report Misrepresents the Science“, Climate Realism, May 7, 2021 (just what the title says)

The “Pause” Redux: The View from Austin

Christopher Monckton of Brenchley — who, contrary to Wikipedia, is not a denier of “climate change” but a learned critic of its scale and relationship to CO2 — posits a new “pause” in global warming:

At long last, following the warming effect of the El Niño of 2016, there are signs of a reasonably significant La Niña, which may well usher in another Pause in global temperature, which may even prove similar to the Great Pause that endured for 224 months from January 1997 to August 2015, during which a third of our entire industrial-era influence on global temperature drove a zero trend in global warming:

As we come close to entering the la Niña, the trend in global mean surface temperature has already been zero for 5 years 4 months:

There is not only a global pause, but a local one in a place that I know well: Austin, Texas. I have compiled the National Weather Service’s monthly records for Austin, which go back to the 1890s. More to the point here, I have also compiled daily weather records since October 1, 2014, for the NWS station at Camp Mabry, in the middle of Austin’s urban heat island. Based on those records, I have derived a regression equation that adjusts the official high-temperature readings for three significant variables: precipitation (which strongly correlates with cloud cover), wind speed, and wind direction (wind from the south has a marked, positive effect on Austin’s temperature).

Taking October 1, 2014, as a starting point, I constructed cumulative plots of the average actual and adjusted deviations from normal:

Both averages have remained almost constant since April 2017, that is, almost four years ago. The adjusted deviation is especially significant because the hypothesized effect of CO2 on temperature doesn’t depend on other factors, such as precipitation, wind speed, or wind direction. Therefore, there has been no warming in Austin — despite some very hot spells — since April 2017.
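
For readers who want to replicate the adjustment, here is a minimal sketch of the kind of calculation described above. The file name and column names are placeholders of my own devising (not the NWS’s), and the actual regression may differ in its details; the sketch simply treats the “adjusted” deviation as the part of the daily deviation from normal that is not explained by precipitation, wind speed, and wind direction.

```python
# A minimal sketch of the adjustment described above, not the exact model.
# Assumes a hypothetical file "camp_mabry_daily.csv" with columns:
#   date, high_temp, normal_high, precip_in, wind_mph, wind_dir_deg
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("camp_mabry_daily.csv", parse_dates=["date"])
df["deviation"] = df["high_temp"] - df["normal_high"]  # deviation from normal high
# Crude indicator: 1 if the wind has a southerly component (90-270 degrees).
df["south_wind"] = ((df["wind_dir_deg"] > 90) & (df["wind_dir_deg"] < 270)).astype(int)

# Regress the daily deviation on precipitation, wind speed, and wind direction.
X = sm.add_constant(df[["precip_in", "wind_mph", "south_wind"]])
model = sm.OLS(df["deviation"], X).fit()

# "Adjusted" deviation = the part of the deviation not explained by those variables.
df["adjusted_deviation"] = model.resid

# Cumulative averages of the actual and adjusted deviations since the start date.
df = df.sort_values("date")
df["cum_avg_actual"] = df["deviation"].expanding().mean()
df["cum_avg_adjusted"] = df["adjusted_deviation"].expanding().mean()

print(model.params)
print(df[["date", "cum_avg_actual", "cum_avg_adjusted"]].tail())
```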

Moreover, Austin’s population grew by about 5 percent from 2017 to 2020. According to the relationship between population and temperature presented here, that increase would have induced a temperature increase of 0.1 degrees Fahrenheit. That’s an insignificant number in the context of this analysis — though one that would have climate alarmists crying doom — but it reinforces my contention that Austin’s “real” temperature hasn’t risen for the past 3.75 years.


Related page and posts:

Climate Change
AGW in Austin?
AGW in Austin? (II)
UHI in Austin Revisited

CO2 Fail

Anthony Watts of Watts Up With That? catches the U.N. in a moment of candor:

From a World Meteorological Organization (WMO) press release titled “Carbon dioxide levels continue at record levels, despite COVID-19 lockdown,” comes this statement about the effects of carbon dioxide (CO2) reductions during the COVID-19 lockdown:

“Preliminary estimates indicate a reduction in the annual global emission between 4.2% and 7.5%. At the global scale, an emissions reduction this scale will not cause atmospheric CO2 to go down. CO2 will continue to go up, though at a slightly reduced pace (0.08-0.23 ppm per year lower). This falls well within the 1 ppm natural inter-annual variability. This means that on the short-term the impact [of CO2 reduction] of the COVID-19 confinements cannot be distinguished from natural variability…”

Let this sink in: The WMO admits that reduced carbon dioxide emissions are having no effect on climate that is distinguishable from natural variability.

The WMO acknowledges that after our global economic lockdown, where CO2 emissions from travel, industry, and power generation were all curtailed, there wasn’t any measurable difference in global atmospheric CO2 levels. Zero, zilch, none, nada.

Of course, we already knew this and wrote about it on Climate at a Glance: Coronavirus Impact on CO2 Levels. An analysis by climate scientist Dr. Roy Spencer showed that despite crashing economies and large cutbacks in travel, industry, and energy generation, climate scientists have yet to find any hint of a drop in atmospheric CO2 levels.

The graph in Watts’s post depicts CO2 readings only for Mauna Loa, and only through April 2020. The following graph covers CO2 readings for Mauna Loa (through October 2020) and for a global average of marine surface sites (through August 2020):

The bottom line remains the same: There’s nothing to see here, folks, just an uninterrupted pattern of seasonal variations.
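
As a rough cross-check of the WMO’s numbers quoted above, here is a back-of-the-envelope calculation. The conversion factors are standard approximations (about 7.8 GtCO2 per ppm of atmospheric CO2 and an airborne fraction of roughly 45 percent), and the emissions figure is approximate; none of these are the WMO’s own inputs.

```python
# Back-of-the-envelope check of the WMO's ppm figures (illustrative assumptions,
# not the WMO's own calculation).
GT_CO2_PER_PPM = 7.8        # roughly 2.13 GtC (7.8 GtCO2) per ppm of atmospheric CO2
AIRBORNE_FRACTION = 0.45    # rough share of emissions that remains in the atmosphere
GLOBAL_EMISSIONS_GT = 36.0  # approximate annual fossil-fuel CO2 emissions, GtCO2

for cut in (0.042, 0.075):  # the WMO's 4.2% and 7.5% reduction estimates
    delta_ppm = GLOBAL_EMISSIONS_GT * cut * AIRBORNE_FRACTION / GT_CO2_PER_PPM
    print(f"{cut:.1%} cut -> about {delta_ppm:.2f} ppm/yr less CO2 growth")
# Prints roughly 0.09 and 0.16 ppm/yr -- inside the WMO's 0.08-0.23 ppm range,
# and well within the ~1 ppm natural inter-annual variability it cites.
```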

Climate-change fanatics will have to look elsewhere than human activity for the rise in atmospheric CO2.


Data definitions and source:

https://www.esrl.noaa.gov/gmd/ccgg/trends/mlo.html

ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt

https://www.esrl.noaa.gov/gmd/ccgg/trends/global.html

ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_gl.txt

UHI in Austin Revisited

See “Climate Hysteria: An Update” for the background of this post.

The average annual temperature in the city of Austin, Texas, rose by 3.7 degrees F between 1960 and 2019, that is, from 67.2 degrees to 70.9 degrees. The increase in Austin’s population from 187,000 in 1960 to 930,000 in 2019 accounts for all of the increase. (The population estimate for 2019 reflects a downward adjustment to compensate for an annexation in 1998 that significantly enlarged Austin’s territory and population.)

My estimate of the effect of Austin’s population increase on temperature is based on the equation for North American cities in T.R. Oke’s “City Size and the Urban Heat Island”. The equation (simplified for ease of reproduction) is

T’ = 2.96 log P – 6.41

Where,

T’ = change in temperature, degrees C

P = population, holding area constant

The author reports r-squared = 0.92 and SE = 0.7 degrees C (1.26 degrees F).

I plugged the values for Austin’s population in 1960 and 2019 into the equation, took the difference between the results, and converted that difference to degrees Fahrenheit, with this result: The effect of Austin’s population growth from 1960 to 2019 was to increase Austin’s temperature by 3.7 degrees F. What an amazing non-coincidence.
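
For anyone who wants to check the arithmetic, here is a minimal sketch. It assumes, as Oke’s equation does, a base-10 logarithm, which is what reproduces the 3.7-degree result:

```python
# Urban heat-island effect from Oke's equation for North American cities:
#   T' = 2.96 * log10(P) - 6.41   (T' in degrees C)
import math

def uhi_c(population: float) -> float:
    return 2.96 * math.log10(population) - 6.41

pop_1960, pop_2019 = 187_000, 930_000          # Austin, 2019 adjusted for annexation
delta_c = uhi_c(pop_2019) - uhi_c(pop_1960)    # change in UHI effect, degrees C
delta_f = delta_c * 9 / 5                      # convert to degrees Fahrenheit
print(round(delta_f, 1))                       # ~3.7 degrees F
```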

Austin’s soup weather nazi should now shut up about the purported effect of “climate change” on Austin’s temperature.

Climate Hysteria: An Update

I won’t repeat all of “Climate Hysteria“, which is long but worth a look if you haven’t read it. A key part of it is a bit out of date, specifically, the part about the weather in Austin, Texas.

Last fall’s heat wave in Austin threw our local soup weather nazi into a tizzy. Of course it did; he proclaims it “nice” when daytime high temperatures are in the 60s and 70s, and complains about anything above 80. I wonder why he stays in Austin.

The weather nazi is also a warmist. He was in “climate change” heaven when, on several days in September and October, the official weather station in Austin reported new record highs for the relevant dates. To top it off, tropical storm Imelda suddenly formed in mid-September near the gulf coast of Texas and inundated Houston. According to the weather nazi, both events were due to “climate change”. Or were they just weather? My money’s on the latter.

Let’s take Imelda, which the weather nazi proclaimed to be an example of the kind of “extreme” weather event that will occur more often as “climate change” takes us in the direction of catastrophe. Those “extreme” weather events, when viewed globally (which is the only correct way to view them) aren’t occurring more often, as I document in “Hurricane Hysteria“.

Here, I want to focus on Austin’s temperature record.

There are some problems with the weather nazi’s reaction to the heat wave. First, the global circulation models (GCMs) that forecast ever-rising temperatures have been falsified. (See the discussion of GCMs here.) Second, the heat wave and the dry spell should be viewed in perspective. Here, for example, are annualized temperature and rainfall averages for Austin, going back to the decade in which “global warming” began to register on the consciousnesses of climate hysterics:

What do you see? I see a recent decline in Austin’s average temperature from the El Niño effect of 2015-2016. I also see a decline in rainfall that doesn’t come close to being as severe as the dozen or so declines that have occurred since 1970.

Here’s a plot of the relationship between monthly average temperature and monthly rainfall during the same period. The 1-month lag in temperature gives the best fit. The equation is statistically significant, despite the low correlation coefficient (r = 0.24), because of the large number of observations.
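
To see why a small correlation can be significant with enough data, apply the usual t-test for a correlation coefficient. The sketch below assumes roughly 600 monthly observations (about 50 years of monthly data); the exact count doesn’t change the conclusion:

```python
# Significance of a small correlation with a large sample:
#   t = r * sqrt((n - 2) / (1 - r^2))
import math

r = 0.24
n = 600  # assumed number of monthly observations (roughly 1970 onward)
t = r * math.sqrt((n - 2) / (1 - r**2))
print(round(t, 1))  # ~6.0, i.e., p << 0.001 for a two-tailed test
```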

Abnormal heat is to be expected when there is little rain and a lot of sunshine. In other words, temperature data, standing by themselves, are of little use in explaining a region’s climate.

Drawing on daily weather reports for the past five-and-a-half years in Austin, I find that Austin’s daily high temperature is significantly affected by rainfall, wind speed, wind direction, and cloud cover. For example (everything else being the same):

  • An additional inch of rainfall induces a temperature drop of 1.4 degrees F.
  • A wind of 10 miles an hour from the north induces a temperature drop of about 5.9 degrees F relative to a 10-mph wind from the south.
  • Going from 100-percent sunshine to 100-percent cloud cover induces a temperature drop of 0.3 degrees F.
  • The combined effect of an inch of rain and complete loss of sunshine is therefore 1.7 degrees F, even before other factors come into play (e.g., rain accompanied by wind from the north or northwest, as is often the case in Austin).

The combined effects of variations in rainfall, wind speed, wind direction, and cloud cover are far more than enough to account for the molehill temperature anomalies that “climate change” hysterics magnify into mountains of doom.

Further, there is no systematic bias in the estimates, as shown by the following plot of regression residuals:


Meteorological seasons: tan = fall (September, October, November); blue = winter (December, January, February); green = spring (March, April, May); ochre = summer (June, July, August). Values greater than zero = underestimates; values less than zero = overestimates.

Summer is the most predictable of the seasons; winter, the least predictable; spring and fall are in between. However, the fall of 2019 (which included both the hot spell and cold snap discussed above) was dominated by overestimated (below-normal) temperatures, not above-normal ones, despite the weather nazi’s hysteria to the contrary. In fact, the below-normal temperatures were the most below-normal of those recorded during the five-and-a-half year period.

The winter of 2019-2020 was on the warm side, but not abnormally so (cf. the winter of 2016-2017). Further, the warming in the winter of 2019-2020 can be attributed in part to weak El Nino conditions.

Lurking behind all of this, and swamping all other causes of the (slightly upward) temperature trend is a pronounced urban-heat-island (UHI) effect (discussed here). What the weather nazi really sees (but doesn’t understand or won’t admit) is that Austin is getting warmer mainly because of rapid population growth (50 percent since 2000) and all that has ensued — more buildings, more roads, more vehicles on the move, and less green space.

The moral of the story: If you really want to do something about the weather, move to a climate that you find more congenial (hint, hint).

Not-So-Random Thoughts (XXV)

“Not-So-Random Thoughts” is an occasional series in which I highlight writings by other commentators on varied subjects that I have addressed in the past. Other entries in the series can be found at these links: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI, XXII, XXIII, and XXIV. For more in the same style, see “The Tenor of the Times” and “Roundup: Civil War, Solitude, Transgenderism, Academic Enemies, and Immigration“.

CONTENTS

The Real Unemployment Rate and Labor-Force Participation

Is Partition Possible?

Still More Evidence for Why I Don’t Believe in “Climate Change”

Transgenderism, Once More

Big, Bad Oligopoly?

Why I Am Bunkered in My Half-Acre of Austin

“Government Worker” Is (Usually) an Oxymoron


The Real Unemployment Rate and Labor-Force Participation

There was much celebration (on the right, at least) when it was announced that the official unemployment rate, as of November, is only 3.5 percent, and that 266,000 jobs were added to the employment rolls (see here, for example). The exultation is somewhat overdone. Yes, things would be much worse if Obama’s anti-business rhetoric and policies still prevailed, but Trump is pushing a big boulder of deregulation uphill.

In fact, the real unemployment rate is a lot higher than the official figure. I refer you to “Employment vs. Big Government and Disincentives to Work“. It begins with this:

The real unemployment rate is several percentage points above the nominal rate. Officially, the unemployment rate stood at 3.5 percent as of November 2019. Unofficially — but in reality — the unemployment rate was 9.4 percent.

The explanation is that the labor-force participation rate has declined drastically since peaking in January 2000. When the official unemployment rate is adjusted to account for that decline (and for a shift toward part-time employment), the result is a considerably higher real unemployment rate.
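
A minimal sketch of one way to make that adjustment, using approximate BLS participation rates (about 67.3 percent at the January 2000 peak and about 63.2 percent in November 2019) and ignoring the shift toward part-time work, treats the participation shortfall relative to the peak as hidden unemployment:

```python
# Rough adjustment of the official unemployment rate for the decline in
# labor-force participation. A simplification of the idea described above;
# the linked post's method also accounts for part-time work, which this ignores.
official_u = 0.035   # official unemployment rate, November 2019
lfpr_now = 0.632     # labor-force participation rate, November 2019 (approx.)
lfpr_peak = 0.673    # participation rate at the January 2000 peak (approx.)

employed_share = lfpr_now * (1 - official_u)   # employed as a share of population
adjusted_u = 1 - employed_share / lfpr_peak    # shortfall vs. peak-era labor force
print(f"{adjusted_u:.1%}")                     # about 9.4%
```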

Arnold Kling recently discussed the labor-force participation rate:

[The] decline in male labor force participation among those without a college degree is a significant issue. Note that even though the unemployment rate has come down for those workers, their rate of labor force participation is still way down.

Economists on the left tend to assume that this is due to a drop in demand for workers at the low end of the skill distribution. Binder’s claim is that instead one factor in declining participation is an increase in the ability of women to participate in the labor market, which in turn lowers the advantage of marrying a man. The reduced interest in marriage on the part of women attenuates the incentive for men to work.

Could be. I await further analysis.


Is Partition Possible?

Angelo Codevilla peers into his crystal ball:

Since 2016, the ruling class has left no doubt that it is not merely enacting chosen policies: It is expressing its identity, an identity that has grown and solidified over more than a half century, and that it is not capable of changing.

That really does mean that restoring anything like the Founders’ United States of America is out of the question. Constitutional conservatism on behalf of a country a large part of which is absorbed in revolutionary identity; that rejects the dictionary definition of words; that rejects common citizenship, is impossible. Not even winning a bloody civil war against the ruling class could accomplish such a thing.

The logical recourse is to conserve what can be conserved, and for it to be done by, of, and for those who wish to conserve it. However much force of what kind may be required to accomplish that, the objective has to be conservation of the people and ways that wish to be conserved.

That means some kind of separation.

As I argued in “The Cold Civil War,” the natural, least stressful course of events is for all sides to tolerate the others going their own ways. The ruling class has not been shy about using the powers of the state and local governments it controls to do things at variance with national policy, effectively nullifying national laws. And they get away with it.

For example, the Trump Administration has not sent federal troops to enforce national marijuana laws in Colorado and California, nor has it punished persons and governments who have defied national laws on immigration. There is no reason why the conservative states, counties, and localities should not enforce their own view of the good.

Not even President Alexandria Ocasio-Cortez would order troops to shoot to re-open abortion clinics were Missouri or North Dakota, or any city, to shut them down. As Francis Buckley argues in American Secession: The Looming Breakup of the United States, some kind of separation is inevitable, and the options regarding it are many.

I would like to believe Mr. Codevilla, but I cannot. My money is on a national campaign of suppression, which will begin the instant that the left controls the White House and Congress. Shooting won’t be necessary, given the massive displays of force that will be ordered from the White House, ostensibly to enforce various laws, including but far from limited to “a woman’s right to an abortion”. Leftists must control everything because they cannot tolerate dissent.

As I say in “Leftism“,

Violence is a good thing if your heart is in the “left” place. And violence is in the hearts of leftists, along with hatred and the irresistible urge to suppress that which is hated because it challenges leftist orthodoxy — from climate skepticism and the negative effect of gun ownership on crime to the negative effect of the minimum wage and the causal relationship between Islam and terrorism.

There’s more in “The Subtle Authoritarianism of the ‘Liberal Order’“; for example:

[Quoting Sumantra Maitra] Domestically, liberalism divides a nation into good and bad people, and leads to a clash of cultures.

The clash of cultures was started and sustained by so-called liberals, the smug people described above. It is they who — firmly believing themselves to be smarter, on the side of science, and on the side of history — have chosen to be the aggressors in the culture war.

Hillary Clinton’s remark about Trump’s “deplorables” ripped the mask from the “liberal” pretension to tolerance and reason. Clinton’s remark was tantamount to a declaration of war against the self-appointed champion of the “deplorables”: Donald Trump. And war it has been, much of it waged by deep-state “liberals” who cannot entertain the possibility that they are on the wrong side of history, and who will do anything — anything — to make history conform to their smug expectations of it.


Still More Evidence for Why I Don’t Believe in “Climate Change”

This is a sequel to an item in the previous edition of this series: “More Evidence for Why I Don’t Believe in Climate Change“.

Dave Middleton debunks the claim that 50-year-old climate models correctly predicted the subsequent (but not steady) rise in the globe’s temperature (whatever that is). He then quotes a talk by Dr. John Christy of the University of Alabama-Huntsville Climate Research Center:

We have a change in temperature from the deep atmosphere over 37.5 years, we know how much forcing there was upon the atmosphere, so we can relate these two with this little ratio, and multiply it by the ratio of the 2x CO2 forcing. So the transient climate response is to say, what will the temperature be like if you double CO2– if you increase at 1% per year, which is roughly what the whole greenhouse effect is, and which is achieved in about 70 years. Our result is that the transient climate response in the troposphere is 1.1 °C. Not a very alarming number at all for a doubling of CO2. When we performed the same calculation using the climate models, the number was 2.31°C. Clearly, and significantly different. The models’ response to the forcing – their ∆t here, was over 2 times greater than what has happened in the real world….

There is one model that’s not too bad, it’s the Russian model. You don’t go to the White House today and say, “the Russian model works best”. You don’t say that at all! But the fact is they have a very low sensitivity to their climate model. When you look at the Russian model integrated out to 2100, you don’t see anything to get worried about. When you look at 120 years out from 1980, we already have 1/3 of the period done – if you’re looking out to 2100. These models are already falsified [emphasis added], you can’t trust them out to 2100, no way in the world would a legitimate scientist do that. If an engineer built an aeroplane and said it could fly 600 miles and the thing ran out of fuel at 200 and crashed, he might say: “I was only off by a factor of three”. No, we don’t do that in engineering and real science! A factor of three is huge in the energy balance system. Yet that’s what we see in the climate models….

Theoretical climate modelling is deficient for describing past variations. Climate models fail for past variations, where we already know the answer. They’ve failed hypothesis tests and that means they’re highly questionable for giving us accurate information about how the relatively tiny forcing … will affect the climate of the future.
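
The arithmetic that Christy describes can be written compactly. The function below uses the standard value of about 3.7 watts per square meter for the forcing from doubled CO2; the observed-warming and observed-forcing inputs shown are illustrative placeholders only, not Christy’s data (he reports only the results: 1.1 °C from observations, 2.31 °C from the models):

```python
# Transient climate response as Christy describes it:
#   TCR = (observed warming / observed forcing) * forcing from doubled CO2
def tcr(delta_t_obs_c: float, forcing_obs_wm2: float,
        forcing_2xco2_wm2: float = 3.7) -> float:
    return delta_t_obs_c / forcing_obs_wm2 * forcing_2xco2_wm2

# Illustrative placeholder values only (not Christy's actual inputs):
print(round(tcr(delta_t_obs_c=0.5, forcing_obs_wm2=1.7), 2))  # ~1.09 C
```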

For a lot more in this vein, see my pages “Climate Change” and “Modeling and Science“.


Transgenderism, Once More

Theodore Dalrymple (Anthony Daniels, M.D.) is on the case:

The problem alluded to in [a paper in the Journal of Medical Ethics] is, of course, the consequence of a fiction, namely that a man who claims to have changed sex actually has changed sex, and is now what used to be called the opposite sex. But when a man who claims to have become a woman competes in women’s athletic competitions, he often retains an advantage derived from the sex of his birth. Women competitors complain that this is unfair, and it is difficult not to agree with them….

Man being both a problem-creating and solving creature, there is, of course, a very simple way to resolve this situation: namely that men who change to simulacra of women should compete, if they must, with others who have done the same. The demand that they should suffer no consequences that they neither like nor want from the choices they have made is an unreasonable one, as unreasonable as it would be for me to demand that people should listen to me playing the piano though I have no musical ability. Thomas Sowell has drawn attention to the intellectual absurdity and deleterious practical consequences of the modern search for what he calls “cosmic justice.”…

We increasingly think that we live in an existential supermarket in which we pick from the shelf of limitless possibilities whatever we want to be. We forget that limitation is not incompatible with infinity; for example, that our language has a grammar that excludes certain forms of words, without in any way limiting the infinite number of meanings that we can express. Indeed, such limitation is a precondition of our freedom, for otherwise nothing that we said would be comprehensible to anybody else.

That is a tour de force typical of the good doctor. In the span of three paragraphs, he addresses matters that I have treated at length in “The Transgender Fad and Its Consequences” (and later in the previous edition of this series), “Positive Rights and Cosmic Justice“, and “Writing: A Guide” (among other entries at this blog).


Big, Bad Oligopoly?

Big Tech is giving capitalism a bad name, as I discuss in “Why Is Capitalism Under Attack from the Right?“, but it’s still the best game in town. Even oligopoly and its big brother, monopoly, aren’t necessarily bad. See, for example, my posts, “Putting in Some Good Words for Monopoly” and “Monopoly: Private Is Better than Public“. Arnold Kling makes the essential point here:

Do indicators of consolidation show us that the economy is getting less competitive or more competitive? The answer depends on which explanation(s) you believe to be most important. For example, if network effects or weak resistance to mergers are the main factors, then the winners from consolidation are quasi-monopolists that may be overly insulated from competition. On the other hand, if the winners are firms that have figured out how to develop and deploy software more effectively than their rivals, then the growth of those firms at the expense of rivals just shows us that the force of competition is doing its work.


Why I Am Bunkered in My Half-Acre of Austin

Randal O’Toole takes aim at the planners of Austin, Texas, and hits the bullseye:

Austin is one of the fastest-growing cities in America, and the city of Austin and Austin’s transit agency, Capital Metro, have a plan for dealing with all of the traffic that will be generated by that growth: assume that a third of the people who now drive alone to work will switch to transit, bicycling, walking, or telecommuting by 2039. That’s right up there with planning for dinner by assuming that food will magically appear on the table the same way it does in Hogwarts….

[W]hile Austin planners are assuming they can reduce driving alone from 74 to 50 percent, it is actually moving in the other direction….

Planners also claim that 11 percent of Austin workers carpool to work, an amount they hope to maintain through 2039. They are going to have trouble doing that as carpooling, in fact, only accounted for 8.0 percent of Austin workers in 2018.

Planners hope to increase telecommuting from its current 8 percent (which is accurate) to 14 percent. That could be difficult as they have no policy tools that can influence telecommuting.

Planners also hope to increase walking and bicycling from their current 2 and 1 percent to 4 and 5 percent. Walking to work is almost always greater than cycling to work, so it’s difficult to see how they plan to magic cycling to be greater than walking. This is important because cycling trips are longer than walking trips and so have more of a potential impact on driving.

Finally, planners want to increase transit from 4 to 16 percent. In fact, transit carried just 3.24 percent of workers to their jobs in 2018, down from 3.62 percent in 2016. Changing from 4 to 16 percent is an almost impossible 300 percent increase; changing from 3.24 to 16 is an even more formidable 394 percent increase. Again, reality is moving in the opposite direction from planners’ goals….

Planners have developed two main approaches to transportation. One is to estimate how people will travel and then provide and maintain the infrastructure to allow them to do so as efficiently and safely as possible. The other is to imagine how you wish people would travel and then provide the infrastructure assuming that to happen. The latter method is likely to lead to misallocation of capital resources, increased congestion, and increased costs to travelers.

Austin’s plan is firmly based on this second approach. The city’s targets of reducing driving alone by a third, maintaining carpooling at an already too-high number, and increasing transit by 394 percent are completely unrealistic. No American city has achieved similar results in the past two decades and none are likely to come close in the next two decades.

Well, that’s the prevailing mentality of Austin’s political leaders and various bureaucracies: magical thinking. Failure is piled upon failure (e.g., more bike lanes crowding out traffic lanes, a hugely wasteful curbside composting plan) because to admit failure would be to admit that the emperor has no clothes.

You want to learn more about Austin? You’ve got it:

Driving and Politics (1)
Life in Austin (1)
Life in Austin (2)
Life in Austin (3)
Driving and Politics (2)
AGW in Austin?
Democracy in Austin
AGW in Austin? (II)
The Hypocrisy of “Local Control”
Amazon and Austin


“Government Worker” Is (Usually) an Oxymoron

In “Good News from the Federal Government” I sarcastically endorse the move to grant all federal workers 12 weeks of paid parental leave:

The good news is that there will be a lot fewer civilian federal workers on the job, which means that the federal bureaucracy will grind a bit more slowly when it does the things that it does to screw up the economy.

The next day, Audacious Epigone put some rhetorical and statistical meat on the bones of my informed prejudice in “Join the Crooks and Liars: Get a Government Job!“:

That [the title of the post] used to be a frequent refrain on Radio Derb. Though the gag has been made emeritus, the advice is even better today than it was when the Derb introduced it. As he explains:

The percentage breakdown is private-sector 76 percent, government 16 percent, self-employed 8 percent.

So one in six of us works for a government, federal, state, or local.

Which group does best on salary? Go on: see if you can guess. It’s government workers, of course. Median earnings 52½ thousand. That’s six percent higher than the self-employed and fourteen percent higher than the poor shlubs toiling away in the private sector.

If you break down government workers into two further categories, state and local workers in category one, federal workers in category two, which does better?

Again, which did you think? Federal workers are way out ahead, median earnings 66 thousand. Even state and local government workers are ahead of us private-sector and self-employed losers, though.

Moral of the story: Get a government job! — federal for strong preference.

….

Though it is well known that a government gig is a gravy train, opinions of the people with said gigs are embarrassingly low, as the results from several additional survey questions show.

First, how frequently the government can be trusted “to do what’s right”? [“Just about always” and “most of the time” badly trail “some of the time”.]

….

Why can’t the government be trusted to do what’s right? Because the people who populate it are crooks and liars. Asked whether “hardly any”, “not many” or “quite a few” people in the federal government are crooked, the following percentages answered with “quite a few” (“not sure” responses, constituting 12% of the total, are excluded). [Responses of “quite a few” range from 59 percent to 77 percent across an array of demographic categories.]

….

Accompanying a strong sense of corruption is the perception of widespread incompetence. Presented with a binary choice between “the people running the government are smart” and “quite a few of them don’t seem to know what they are doing”, a solid majority chose the latter (“not sure”, at 21% of all responses, is again excluded). [The “don’t know what they’re doing” responses ranged from 55 percent to 78 percent across the same demographic categories.]

Are the skeptics right? Well, most citizens have had dealings with government employees of one kind and another. The “wisdom of crowds” certainly applies in this case.

“Hurricane Hysteria” and “Climate Hysteria”, Updated

In view of the persistent claims about the role of “climate change” as the cause of tropical cyclone activity (i.e., tropical storms and hurricanes), I have updated “Hurricane Hysteria“. The bottom line remains the same: Global measures of accumulated cyclone energy (ACE) do not support the view that there is a correlation between “climate change” and tropical cyclone activity.
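
For readers unfamiliar with the metric: ACE is computed from six-hourly maximum sustained winds while a storm is at tropical-storm strength or greater. A minimal sketch, applied to a hypothetical storm track:

```python
# Accumulated cyclone energy (ACE): 1e-4 times the sum of the squares of
# six-hourly maximum sustained winds (in knots) while a system is at
# tropical-storm strength or greater (>= 34 kt).
def ace(six_hourly_max_winds_kt):
    return 1e-4 * sum(v ** 2 for v in six_hourly_max_winds_kt if v >= 34)

# Hypothetical storm: ten six-hourly wind observations (knots)
print(round(ace([30, 35, 45, 60, 80, 95, 90, 70, 50, 30]), 2))  # ~3.78
```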

I have also updated “Climate Hysteria“, which borrows from “Hurricane Hysteria” but also examines climate patterns in Austin, Texas, where our local weather nazi peddles his “climate change” balderdash.

Climate Hysteria

UPDATED 01/23/20

Recent weather events have served to reinforce climate hysteria. There are the (usual) wildfires in California, which have nothing to do with “climate change” (e.g., this, this, and this), but you wouldn’t know it if you watch the evening news (which I don’t, but impressionable millions do).

Closer to home, viewers have been treated to more of the same old propaganda from our local weather nazi, who proclaims it “nice” when daytime high temperatures are in the 60s and 70s, and who bemoans higher temperatures. (Why does he stay in Austin, then?) We watch him because when he isn’t proselytizing “climate change” he delivers the most detailed weather report available on Austin’s TV stations.

He was in “climate change” heaven when in September and part of October (2019) Austin endured a heat wave that saw many new high temperatures for the relevant dates. To top it off, tropical storm Imelda suddenly formed in mid-September near the gulf coast of Texas and inundated Houston. According to him, both events were due to “climate change”. Or were they just weather? My money’s on the latter.

Let’s take Imelda, which the weather nazi proclaimed to be an example of the kind of “extreme” weather event that will occur more often as “climate change” takes us in the direction of catastrophe. Those “extreme” weather events, when viewed globally (which is the only correct way to view them) aren’t occurring more often. This is from “Hurricane Hysteria“, which I have just updated to include statistics compiled as of today (11/19/19):

[T]he data sets for tropical cyclone activity that are maintained by the Tropical Meteorology Project at Colorado State University cover all six of the relevant ocean basins as far back as 1972. The coverage goes back to 1961 (and beyond) for all but the North Indian Ocean basin — which is by far the least active.

Here is NOAA’s reconstruction of ACE in the North Atlantic basin through November 19, 2019, which, if anything, probably understates ACE before the early 1960s:

The recent spikes in ACE are not unprecedented. And there are many prominent spikes that predate the late-20th-century temperature rise on which “warmism” is predicated. The trend from the late 1800s to the present is essentially flat. And, again, the numbers before the early 1960s must understate ACE.

Moreover, the metric of real interest is global cyclone activity; the North Atlantic basin is just a sideshow. Consider this graph of the annual values for each basin from 1972 through November 19, 2019:

Here’s a graph of stacked (cumulative) totals for the same period:

The red line is the sum of ACE for all six basins, including the Northwest Pacific basin; the yellow line is the sum of ACE for the next five basins, including the Northeast Pacific basin; etc.
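
For what it’s worth, a stacked plot of that kind can be built from per-basin annual ACE totals along the following lines. The file name and column layout are hypothetical (the Colorado State data come in their own formats):

```python
# A sketch of stacked (cumulative) annual ACE totals across basins, assuming a
# hypothetical CSV "ace_by_basin.csv" with columns: year, basin, ace.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ace_by_basin.csv")
wide = df.pivot(index="year", columns="basin", values="ace").fillna(0)

# Add basins from least to most active, so the topmost line (the running sum
# over all six basins) corresponds to the red line described above.
order = wide.sum().sort_values().index
stacked = wide[order].cumsum(axis=1)

stacked.plot()
plt.ylabel("ACE")
plt.title("Stacked annual ACE by basin")
plt.show()
```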

I have these observations about the numbers represented in the preceding graphs:

  • If one is a believer in CAGW (the G stands for global), it is a lie (by glaring omission) to focus on random, land-falling hurricanes hitting the U.S. or other parts of the Western Hemisphere.
  • The overall level of activity is practically flat between 1972 and 2019, with the exception of spikes that coincide with strong El Niño events.
  • There is nothing in the long-term record for the North Atlantic basin, which is probably understated before the early 1960s, to suggest that global activity in recent decades is unusually high.

Imelda was an outlier — an unusual event that shouldn’t be treated as a typical one. Imelda happened along in the middle of a heat wave and accompanying dry spell in central Texas. This random juxtaposition caused the weather nazi to drool in anticipation of climate catastrophe.

There are some problems with the weather nazi’s reaction to the heat wave. First, the global circulation models (GCMs) that forecast ever-rising temperatures have been falsified. (See the discussion of GCMs here.) Second, the heat wave and the dry spell should be viewed in perspective. Here, for example, are annualized temperature and rainfall averages for Austin, going back to the decade in which “global warming” began to register on the consciousnesses of climate hysterics:

 

What do you see? I see a recent decline in Austin’s average temperature from the El Niño effect of 2015-2016. I also see a decline in rainfall that doesn’t come close to being as severe as the dozen or so declines that have occurred since 1970.

In fact, abnormal heat is to be expected when there is little rain and a lot of sunshine. Temperature data, standing by themselves, are of little use because of the pronounced urban-heat-island (UHI) effect (discussed here). Drawing on daily weather reports for Austin for the past five years, I find that Austin’s daily high temperature is significantly affected by rainfall, wind speed, wind direction, and cloud cover. For example (everything else being the same):

  • An additional inch of rainfall induces a temperature drop of 1.4 degrees F.
  • A wind of 10 miles an hour from the north induces a temperature drop of about 5.8 degrees F relative to a 10-mph wind from the south.
  • Going from 100-percent sunshine to 100-percent cloud cover induces a temperature drop of 0.5 degrees F. (The combined effect of an inch of rain and complete loss of sunshine is therefore 1.9 degrees F, even before other factors come into play.)

The combined effects of variations in rainfall, wind speed, wind direction, and cloud cover are far more than enough to account for the molehill temperature anomalies that “climate change” hysterics magnify into mountains of doom.

Further, there is no systematic bias in the estimates, as shown by the following plot of regression residuals:

 

Summer is the most predictable of the seasons; winter, the least predictable. Over- and under-estimates seem to be evenly distributed across the seasons. In other words, the regression doesn’t mask changes in seasonal temperature patterns. Note, however, that this fall (which includes both the hot spell and cold snap discussed above) has been dominated by below-normal temperatures, not above-normal ones.

Anyway, during the spell of hot, dry weather in the first half of the meteorological fall of 2019, the maximum temperature went as high as 16 degrees F above the 30-year average for the relevant date. Two days later, the maximum temperature was 12 degrees F below the 30-year average for the relevant date. Those extremes tell us a lot about the variability of weather in central Texas and nothing about “climate change”.

However, the 16-degree deviation above the 30-year average was far from the greatest during the period under analysis; above-normal deviations have ranged as high as 26 degrees F above 30-year averages. By contrast, during the subsequent cold snap, deviations reached their lowest levels for the period under analysis. The down-side deviations (latter half of meteorological fall, 2019) are obvious in the preceding graph. The pattern suggests that, if anything, fall 2019 in Austin was abnormally cold rather than abnormally hot.

Winter 2019-2020 has started out on the warm side, but not abnormally so. Further, the warming can be attributed in part to weak El Niño conditions.

Not-So-Random Thoughts (XXIV)

“Not-So-Random Thoughts” is an occasional series in which I highlight writings by other commentators on varied subjects that I have addressed in the past. Other entries in the series can be found at these links: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI, XXII, and XXIII. For more in the same style, see “The Tenor of the Times” and “Roundup: Civil War, Solitude, Transgenderism, Academic Enemies, and Immigration“.

CONTENTS

The Transgender Trap: A Political Nightmare Becomes Reality

Spygate (a.k.a. Russiagate) Revisited

More Evidence for Why I Don’t Believe in “Climate Change”

Thoughts on Mortality

Assortative Mating, Income Inequality, and the Crocodile Tears of “Progressives”


The Transgender Trap: A Political Nightmare Becomes Reality

Begin here and here, then consider the latest outrage.

First, from Katy Faust (“Why It’s Probably Not A Coincidence That The Mother Transing Her 7-Year-Old Isn’t Biologically Related“, The Federalist, October 24, 2019):

The story of seven-year-old James, whom his mother has pressured to become “Luna,” has been all over my newsfeed. The messy custody battle deserves every second of our click-bait-prone attention: Jeffrey Younger, James’s father, wants to keep his son’s body intact, while Anne Georgulas, James’s mother, wants to allow for “treatment” that would physically and chemically castrate him.

The havoc that divorce wreaks in a child’s life is mainstage in this tragic case. Most of us children of divorce quickly learn to act one way with mom and another way with dad. We can switch to a different set of rules, diet, family members, bedtime, screen time limits, and political convictions in that 20-minute ride from mom’s house to dad’s.

Unfortunately for little James, the adaptation he had to make went far beyond meat-lover’s pizza at dad’s house and cauliflower crusts at mom’s: it meant losing one of the most sacred aspects of his identity—his maleness. His dad loved him as a boy, so he got to be himself when he was at dad’s house. But mom showered love on the version of James she preferred, the one with the imaginary vagina.

So, as kids are so apt to do, when James was at her house, he conformed to the person his mother loved. This week a jury ruled that James must live like he’s at mom’s permanently, where he can “transition” fully, regardless of the cost to his mental and physical health….

Beyond the “tale of two households” that set up this court battle, and the ideological madness on display in the proceedings, something else about this case deserves our attention: one of the two parents engaged in this custodial tug-of-war isn’t biologically related to little James. Care to guess which one? Do you think it’s the parent who wants to keep him physically whole? It’s not.

During her testimony Georgulas stated she is not the biological mother of James or his twin brother Jude. She purchased eggs from a biological stranger. This illuminates a well-known truth in the world of family and parenthood: biological parents are the most connected to, invested in, and protective of their children.

Despite the jury’s unfathomable decision to award custody of James to his demented mother, there is hope for James. Walt Heyer picks up the story (“Texas Court Gives 7-Year-Old Boy A Reprieve From Transgender Treatments“, The Federalist, October 25, 2019):

Judge Kim Cooks put aside the disappointing jury’s verdict of Monday against the father and ruled Thursday that Jeffrey Younger now has equal joint conservatorship with the mother, Dr. Anne Georgulas, of their twin boys.

The mother no longer has unfettered authority to manipulate her 7-year old boy into gender transition. Instead both mother and father will share equally in medical, psychological, and other decision-making for the boys. Additionally, the judge changed the custody terms to give Younger an equal amount of visitation time with his sons, something that had been severely limited….

For those who need a little background, here’s a recap. “Six-year-old James is caught in a gender identity nightmare. Under his mom’s care in Dallas, Texas, James obediently lives as a trans girl named ‘Luna.’ But given the choice when he’s with dad, he’s all boy—his sex from conception.

“In their divorce proceedings, the mother has charged the father with child abuse for not affirming James as transgender, has sought restraining orders against him, and is seeking to terminate his parental rights. She is also seeking to require him to pay for the child’s visits to a transgender-affirming therapist and transgender medical alterations, which may include hormonal sterilization starting at age eight.”

All the evidence points to a boy torn between pleasing two parents, not an overwhelming preference to be a girl….

Younger said at the trial he was painted as paranoid and in need of several years of psychotherapy because he doesn’t believe his young son wants to be a girl. But many experts agree that transgendering young children is hazardous.

At the trial, Younger’s expert witnesses testified about these dangers and provided supporting evidence. Dr. Stephen Levine, a psychiatrist renowned for his work on human sexuality, testified that social transition—treating them as the opposite sex—increases the chance that a child will remain gender dysphoric. Dr. Paul W. Hruz, a pediatric endocrinologist and professor of pediatrics and cellular biology at Washington University School of Medicine in Saint Louis, testified that the risks of social transition are so great that the “treatment” cannot be recommended at all.

Are these doctors paranoid, too? Disagreement based on scientific evidence is now considered paranoia requiring “thought reprogramming.” That’s scary stuff when enforced by the courts….

The jury’s 11-1 vote to keep sole managing conservatorship from the father shows how invasive and acceptable this idea of confusing children and transitioning them has become. It’s like we are watching a bad movie where scientific evidence is ignored and believing the natural truth of male and female biology is considered paranoia. I can testify from my life experience the trans-life movie ends in unhappiness, regret, detransitions, or sadly, suicide.

The moral of the story is that the brainwashing of the American public by the media may have advanced to the tipping point. The glory that was America may soon vanish with a whimper.


Spygate (a.k.a. Russiagate) Revisited

I posted my analysis of “Spygate” well over a year ago, and have continually updated the appended list of supporting references. The list continues to grow as evidence mounts to support the thesis that the Trump-Russia collusion story was part of a plot hatched at the highest levels of the Obama administration and executed within the White House, the CIA, and the Department of Justice (including especially the FBI).

Margot Cleveland addresses the case of Michael Flynn (“Sidney Powell Drops Bombshell Showing How The FBI Trapped Michael Flynn“, The Federalist, October 25, 2019):

Earlier this week, Michael Flynn’s star attorney, Sidney Powell, filed under seal a brief in reply to federal prosecutors’ claims that they have already given Flynn’s defense team all the evidence they are required by law to provide. A minimally redacted copy of the reply brief has just been made public, and with it shocking details of the deep state’s plot to destroy Flynn….

What is most striking, though, is the timeline Powell pieced together from publicly reported text messages withheld from the defense team and excerpts from documents still sealed from public view. The sequence Powell lays out shows that a team of “high-ranking FBI officials orchestrated an ambush-interview of the new president’s National Security Advisor, not for the purpose of discovering any evidence of criminal activity—they already had tapes of all the relevant conversations about which they questioned Mr. Flynn—but for the purpose of trapping him into making statements they could allege as false” [in an attempt to “flip” Flynn in the Spygate affair]….

The timeline continued to May 10 when McCabe opened an “obstruction” investigation into President Trump. That same day, Powell writes, “in an important but still wrongly redacted text, Strzok says: ‘We need to lock in [redacted]. In a formal chargeable way. Soon.’” Page replies: “I agree. I’ve been pushing and I’ll reemphasize with Bill [Priestap].”

Powell argues that “both from the space of the redaction, its timing, and other events, the defense strongly suspects the redacted name is Flynn.” That timing includes Robert Mueller’s appointment as special counsel on May 17, and then the reentering of Flynn’s 302 on May 31, 2017, “for Special Counsel Mueller to use.”

The only surprise (to me) is evidence cited by Cleveland that Comey was deeply embroiled in the plot. I have heretofore written off Comey as an opportunist who was out to get Trump for his own reasons.

In any event, Cleveland reinforces my expressed view of former CIA director John Brennan’s central role in the plot (“All The Russia Collusion Clues Are Beginning To Point Back To John Brennan“, The Federalist, October 25, 2019):

[I]f the media reports are true, and [Attorney General William] Barr and [U.S. attorney John] Durham have turned their focus to Brennan and the intelligence community, it is not a matter of vengeance; it is a matter of connecting the dots in congressional testimony and reports, leaks, and media spin, and facts exposed during the three years of panting about supposed Russia collusion. And it all started with Brennan.

That’s not how the story went, of course. The company story ran that the FBI launched its Crossfire Hurricane surveillance of the Trump campaign on July 31, 2016, after learning that a young Trump advisor, George Papadopoulos, had bragged to an Australian diplomat, Alexander Downer, that the Russians had dirt on Hillary Clinton….

But as the Special Counsel Robert Mueller report made clear, it wasn’t merely Papadopoulos’ bar-room boast at issue: It was “a series of contacts between Trump Campaign officials and individuals with ties to the Russian government,” that the DOJ and FBI, and later the Special Counsel’s office investigated.

And who put the FBI on to those supposedly suspicious contacts? Former CIA Director John Brennan….

The evidence suggests … that Brennan’s CIA and the intelligence community did much more than merely pass on details about “contacts and interactions between Russian officials and U.S. persons involved in the Trump campaign” to the FBI. The evidence suggests that the CIA and intelligence community—including potentially the intelligence communities of the UK, Italy, and Australia—created the contacts and interactions that they then reported to the FBI as suspicious.

The Deep State in action.


More Evidence for Why I Don’t Believe in “Climate Change”

I’ve already adduced a lot of evidence in “Why I Don’t Believe in Climate Change” and “Climate Change“. One of the scientists to whom I give credence is Dr. Roy Spencer of the Climate Research Center at the University of Alabama-Huntsville. Spencer agrees that CO2 emissions must have an effect on atmospheric temperatures, but is doubtful about the magnitude of the effect.

He revisits a point that he has made before, namely, that there is no “preferred” state of the climate (“Does the Climate System Have a Preferred Average State? Chaos and the Forcing-Feedback Paradigm“, Roy Spencer, Ph.D., October 25, 2019):

If there is … a preferred average state, then the forcing-feedback paradigm of climate change is valid. In that system of thought, any departure of the global average temperature from the Nature-preferred state is resisted by radiative “feedback”, that is, changes in the radiative energy balance of the Earth in response to the too-warm or too-cool conditions. Those radiative changes would constantly be pushing the system back to its preferred temperature state…

[W]hat if the climate system undergoes its own, substantial chaotic changes on long time scales, say 100 to 1,000 years? The IPCC assumes this does not happen. But the ocean has inherently long time scales — decades to millennia. An unusually large amount of cold bottom water formed at the surface in the Arctic in one century might take hundreds or even thousands of years before it re-emerges at the surface, say in the tropics. This time lag can introduce a wide range of complex behaviors in the climate system, and is capable of producing climate change all by itself.

Even the sun, which we view as a constantly burning ball of gas, produces an 11-year cycle in sunspot activity, and even that cycle changes in strength over hundreds of years. It would seem that every process in nature organizes itself on preferred time scales, with some amount of cyclic behavior.

This chaotic climate change behavior would impact the validity of the forcing-feedback paradigm as well as our ability to determine future climate states and the sensitivity of the climate system to increasing CO2. If the climate system has different, but stable and energy-balanced, states, it could mean that climate change is too complex to predict with any useful level of accuracy [emphasis added].

Which is exactly what I say in “Modeling and Science“.


Thoughts on Mortality

I ruminated about it in “The Unique ‘Me’“:

Children, at some age, will begin to understand that there is death, the end of a human life (in material form, at least). At about the same time, in my experience, they will begin to speculate about the possibility that they might have been someone else: a child born in China, for instance.

Death eventually loses its fascination, though it may come to mind from time to time as one grows old. (Will I wake up in the morning? Is this the day that my heart stops beating? Will I be able to break my fall when the heart attack happens, or will I just go down hard and die of a fractured skull?)

Bill Vallicella (Maverick Philosopher) has been ruminating about it in recent posts. This is from his “Six Types of Death Fear” (October 24, 2019):

1. There is the fear of nonbeing, of annihilation….

2. There is the fear of surviving one’s bodily death as a ghost, unable to cut earthly attachments and enter nonbeing and oblivion….

3. There is the fear of post-mortem horrors….

4. There is the fear of the unknown….

5. There is the fear of the Lord and his judgment….

6. Fear of one’s own judgment or the judgment of posterity.

There is also — if one is in good health and enjoying life — the fear of losing what seems to be a good thing, namely, the enjoyment of life itself.


Assortative Mating, Income Inequality, and the Crocodile Tears of “Progressives”

Mating among human beings has long been assortative in various ways, in that the selection of a mate has been circumscribed or determined by geographic proximity, religious affiliation, clan rivalries or alliances, social relationships or enmities, etc. The results have sometimes been propitious, as Gregory Cochran points out in “An American Dilemma” (West Hunter, October 24, 2019):

Today we’re seeing clear evidence of genetic differences between classes: causal differences. People with higher socioeconomic status have (on average) higher EA polygenic scores. Higher scores for cognitive ability, as well. This is of course what every IQ test has shown for many decades….

Let’s look at Ashkenazi Jews in the United States. They’re very successful, averaging upper-middle-class.   So you’d think that they must have high polygenic scores for EA  (and they do).

Were they a highly selected group?  No: most were from Eastern Europe. “Immigration of Eastern Yiddish-speaking Ashkenazi Jews, in 1880–1914, brought a large, poor, traditional element to New York City. They were Orthodox or Conservative in religion. They founded the Zionist movement in the United States, and were active supporters of the Socialist party and labor unions. Economically, they concentrated in the garment industry.”

And there were a lot of them: it’s harder for a sample to be very unrepresentative when it makes up a big fraction of the entire population.

But that can’t be: that would mean that European Jews were just smarter than average. And that would be racist.

Could it be result of some kind of favoritism?  Obviously not, because that would be anti-Semitic.

Cochran obviously intends sarcasm in the final two paragraphs. The evidence for the heritability of intelligence is, as he says, quite strong. (See, for example, my “Race and Reason: The Achievement Gap — Causes and Implications” and “Intelligence“.) Were it not for assortative mating among Ashkenazi Jews, they wouldn’t be the most intelligent ethnic-racial group.

Branko Milanovic specifically addresses the “hot” issue in “Rich Like Me: How Assortative Mating Is Driving Income Inequality” (Quillette, October 18, 2019):

Recent research has documented a clear increase in the prevalence of homogamy, or assortative mating (people of the same or similar education status and income level marrying each other). A study based on a literature review combined with decennial data from the American Community Survey showed that the association between partners’ level of education was close to zero in 1970; in every other decade through 2010, the coefficient was positive, and it kept on rising….

At the same time, the top decile of young male earners have been much less likely to marry young women who are in the bottom decile of female earners. The rate has declined steadily from 13.4 percent to under 11 percent. In other words, high-earning young American men who in the 1970s were just as likely to marry high-earning as low-earning young women now display an almost three-to-one preference in favor of high-earning women. An even more dramatic change happened for women: the percentage of young high-earning women marrying young high-earning men increased from just under 13 percent to 26.4 percent, while the percentage of rich young women marrying poor young men halved. From having no preference between rich and poor men in the 1970s, women currently prefer rich men by a ratio of almost five to one….

High income and wealth inequality in the United States used to be justified by the claim that everyone had the opportunity to climb up the ladder of success, regardless of family background. This idea became known as the American Dream. The emphasis was on equality of opportunity rather than equality of outcome….

The American Dream has remained powerful both in the popular imagination and among economists. But it has begun to be seriously questioned during the past ten years or so, when relevant data have become available for the first time. Looking at twenty-two countries around the world, Miles Corak showed in 2013 that there was a positive correlation between high inequality in any one year and a strong correlation between parents’ and children’s incomes (i.e., low income mobility). This result makes sense, because high inequality today implies that the children of the rich will have, compared to the children of the poor, much greater opportunities. Not only can they count on greater inheritance, but they will also benefit from better education, better social capital obtained through their parents, and many other intangible advantages of wealth. None of those things are available to the children of the poor. But while the American Dream thus was somewhat deflated by the realization that income mobility is greater in more egalitarian countries than in the United States, these results did not imply that intergenerational mobility had actually gotten any worse over time.

Yet recent research shows that intergenerational mobility has in fact been declining. Using a sample of parent-son and parent-daughter pairs, and comparing a cohort born between 1949 and 1953 to one born between 1961 and 1964, Jonathan Davis and Bhashkar Mazumder found significantly lower intergenerational mobility for the latter cohort.

Milanovic doesn’t mention the heritability of intelligence, which means that intelligence is bound to be generally higher among the children of high-IQ parents (like Ashkenazi Jews and East Asians), or the strong correlation between intelligence and income. Does this mean that assortative mating should be banned and “excess” wealth should be confiscated and redistributed? Elizabeth Warren and Bernie Sanders certainly favor the second prescription, which would have a disastrous effect on the incentive to become rich and therefore on economic growth.

I addressed these matters in “Intelligence, Assortative Mating, and Social Engineering“:

So intelligence is real; it’s not confined to “book learning”; it has a strong influence on one’s education, work, and income (i.e., class); and because of those things it leads to assortative mating, which (on balance) reinforces class differences. Or so the story goes.

But assortative mating is nothing new. What might be new, or more prevalent than in the past, is a greater tendency for intermarriage within the smart-educated-professional class instead of across class lines, and for the smart-educated-professional class to live in “enclaves” with their like, and to produce (generally) bright children who’ll (mostly) follow the lead of their parents.

How great are those tendencies? And in any event, so what? Is there a potential social problem that will have to be dealt with by government because it poses a severe threat to the nation’s political stability or economic well-being? Or is it just a step in the voluntary social evolution of the United States — perhaps even a beneficial one?…

[Lengthy quotations from statistical evidence and expert commentary.]

What does it all mean? For one thing, it means that the children of top-quintile parents reach the top quintile about 30 percent of the time. For another thing, it means that, unsurprisingly, the children of top-quintile parents reach the top quintile more often than children of second-quintile parents, who reach the top quintile more often than children of third-quintile parents, and so on.

There is nevertheless a growing, quasi-hereditary, smart-educated-professional-affluent class. It’s almost a sure thing, given the rise of the two-professional marriage, and given the correlation between the intelligence of parents and that of their children, which may be as high as 0.8. However, as a fraction of the total population, membership in the new class won’t grow as fast as membership in the “lower” classes because birth rates are inversely related to income.

And the new class probably will be isolated from the “lower” classes. Most members of the new class work and live where their interactions with persons of “lower” classes are restricted to boss-subordinate and employer-employee relationships. Professionals, for the most part, work in office buildings, isolated from the machinery and practitioners of “blue collar” trades.

But the segregation of housing on class lines is nothing new. People earn more, in part, so that they can live in nicer houses in nicer neighborhoods. And the general rise in the real incomes of Americans has made it possible for persons in the higher income brackets to afford more luxurious homes in more luxurious neighborhoods than were available to their parents and grandparents. (The mansions of yore, situated on “Mansion Row,” were occupied by the relatively small number of families whose income and wealth set them widely apart from the professional class of the day.) So economic segregation is, and should be, as unsurprising as a sunrise in the east.

None of this will assuage progressives, who like to claim that intelligence (like race) is a social construct (while also claiming that Republicans are stupid); who believe that incomes should be more equal (theirs excepted); who believe in “diversity,” except when it comes to where most of them choose to live and school their children; and who also believe that economic mobility should be greater than it is — just because. In their superior minds, there’s an optimum income distribution and an optimum degree of economic mobility — just as there is an optimum global temperature, which must be less than the ersatz one that’s estimated by combining temperatures measured under various conditions and with various degrees of error.

The irony of it is that the self-segregated, smart-educated-professional-affluent class is increasingly progressive….

So I ask progressives, given that you have met the new class and it is you, what do you want to do about it? Is there a social problem that might arise from greater segregation of socio-economic classes, and is it severe enough to warrant government action? Or is the real “problem” the possibility that some people — and their children and children’s children, etc. — might get ahead faster than other people — and their children and children’s children, etc.?

Do you want to apply the usual progressive remedies? Penalize success through progressive (pun intended) personal income-tax rates and the taxation of corporate income; force employers and universities to accept low-income candidates (whites included) ahead of better-qualified ones (e.g., your children) from higher-income brackets; push “diversity” in your neighborhood by expanding the kinds of low-income housing programs that helped to bring about the Great Recession; boost your local property and sales taxes by subsidizing “affordable housing,” mandating the payment of a “living wage” by the local government, and applying that mandate to contractors seeking to do business with the local government; and on and on down the list of progressive policies?

Of course you do, because you’re progressive. And you’ll support such things in the vain hope that they’ll make a difference. But not everyone shares your naive beliefs in blank slates, equal ability, and social homogenization (which you don’t believe either, but are too wedded to your progressive faith to admit). What will actually be accomplished — aside from tokenism — is social distrust and acrimony, which had a lot to do with the electoral victory of Donald J. Trump, and economic stagnation, which hurts the “little people” a lot more than it hurts the smart-educated-professional-affluent class….

The solution to the pseudo-problem of economic inequality is benign neglect, which isn’t a phrase that falls lightly from the lips of progressives. For more than 80 years, a lot of Americans — and too many pundits, professors, and politicians — have been led astray by that one-off phenomenon: the Great Depression. FDR and his sycophants and their successors created and perpetuated the myth that an activist government saved America from ruin and totalitarianism. The truth of the matter is that FDR’s policies prolonged the Great Depression by several years, and ushered in soft despotism, which is just “friendly” fascism. And all of that happened at the behest of people of above-average intelligence and above-average incomes.

Progressivism is the seed-bed of eugenics, and still promotes eugenics through abortion on demand (mainly to rid the world of black babies). My beneficial version of eugenics would be the sterilization of everyone with an IQ above 125 or top-40-percent income who claims to be progressive [emphasis added].

Enough said.

More about Modeling and Science

This post is based on a paper that I wrote 38 years ago. The subject then was the bankruptcy of warfare models, which shows through in parts of this post. I am trying here to generalize the message to encompass all complex, synthetic models (defined below). For ease of future reference, I have created a page that includes links to this post and the many that are listed at the bottom.

THE METAPHYSICS OF MODELING

Alfred North Whitehead said in Science and the Modern World (1925) that “the certainty of mathematics depends on its complete abstract generality” (p. 25). The attraction of mathematical models is their apparent certainty. But a model is only a representation of reality, and its fidelity to reality must be tested rather than assumed. And even if a model seems faithful to reality, its predictive power is another thing altogether. We are living in an era when models that purport to reflect reality are given credence despite their lack of predictive power. Ironically, those who dare point this out are called anti-scientific and science-deniers.

To begin at the beginning, I am concerned here with what I will call complex, synthetic models of abstract variables like GDP and “global” temperature. These are open-ended, mathematical models that estimate changes in the variable of interest by attempting to account for many contributing factors (parameters) and describing mathematically the interactions between those factors. I call such models complex because they have many “moving parts” — dozens or hundreds of sub-models — each of which is a model in itself. I call them synthetic because the estimated changes in the variables of interest depend greatly on the selection of sub-models, the depictions of their interactions, and the values assigned to the constituent parameters of the sub-models. That is to say, compared with a model of the human circulatory system or an internal combustion engine, a synthetic model of GDP or “global” temperature rests on incomplete knowledge of the components of the systems in question and the interactions among those components.

Modelers seem ignorant of or unwilling to acknowledge what should be a basic tenet of scientific inquiry: the complete dependence of logical systems (such as mathematical models) on the underlying axioms (assumptions) of those systems. Kurt Gödel addressed this dependence in his incompleteness theorems:

Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic….

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.

There is the view that Gödel’s theorems aren’t applicable in fields outside of mathematical logic. But any quest for certainty about the physical world necessarily uses mathematical logic (which includes statistics).

This doesn’t mean that the results of computational exercises are useless. It simply means that they are only as good as the assumptions that underlie them; for example, assumptions about relationships between parameters, assumptions about the values of the parameters, and assumptions as to whether the correct parameters have been chosen (and properly defined) in the first place.

There is nothing new in that, certainly nothing that requires Gödel’s theorems by way of proof. It has long been understood that a logical argument may be valid — the conclusion follows from the premises — but untrue if the premises (axioms) are untrue. But it bears repeating — and repeating.

REAL MODELERS AT WORK

There have been mathematical models of one kind and another for centuries, but formal models weren’t used much outside the “hard sciences” until the development of microeconomic theory in the 19th century. Then came F.W. Lanchester, who during World War I devised what became known as Lanchester’s laws (or Lanchester’s equations), which are

mathematical formulae for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two [opponents’] strengths A and B as a function of time, with the function depending only on A and B.

Lanchester’s equations are nothing more than abstractions that must be given a semblance of reality by the user, who is required to make myriad assumptions (explicit and implicit) about the factors that determine the “strengths” of A and B, including but not limited to the relative killing power of various weapons, the effectiveness of opponents’ defenses, the importance of the speed and range of movement of various weapons, intelligence about the location of enemy forces, and commanders’ decisions about when, where, and how to engage the enemy. It should be evident that the predictive value of the equations, when thus fleshed out, is limited to small, discrete engagements, such as brief bouts of aerial combat between two (or a few) opposing aircraft. Alternatively — and in practice — the values are selected so as to yield results that mirror what actually happened (in the “replication” of a historical battle) or what “should” happen (given the preferences of the analyst’s client).
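To make the abstraction concrete, here is a minimal sketch (in Python) of the simplest form of the equations, the so-called square law. Every value in it — initial strengths, attrition coefficients, time step — is a hypothetical chosen only for illustration; nothing corresponds to any real engagement.

```python
# A minimal sketch of Lanchester's square law: dA/dt = -beta*B, dB/dt = -alpha*A.
# All values (initial strengths, attrition coefficients, time step) are
# hypothetical and chosen only for illustration.

def lanchester_square(a0, b0, alpha, beta, dt=0.01, t_max=50.0):
    """Crude Euler integration until one side is annihilated or time runs out."""
    A, B, t = float(a0), float(b0), 0.0
    while A > 0 and B > 0 and t < t_max:
        # Each side's losses depend on the other side's current strength and lethality.
        A, B = max(A - beta * B * dt, 0.0), max(B - alpha * A * dt, 0.0)
        t += dt
    return A, B, t

# Example: equal per-unit lethality, but side A starts with a 2:1 numerical edge.
A_left, B_left, t_end = lanchester_square(a0=1000, b0=500, alpha=0.05, beta=0.05)
print(f"A remaining: {A_left:.0f}, B remaining: {B_left:.0f}, time: {t_end:.1f}")
```

Change any of those numbers and the “predicted” outcome changes with them, which is precisely the point: the model returns whatever the user’s assumptions put into it.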

More complex (and realistic) mathematical modeling (also known as operations research) had seen limited use in industry and government before World War II. Faith in the explanatory power of mathematical models was burnished by their use during the war, where such models seemed to be of aid in the design of more effective tactics and weapons.

But the foundation of that success wasn’t the mathematical character of the models. Rather, it was the fact that the models were tested against reality. Philip M. Morse and George E. Kimball put it well in Methods of Operations Research (1946):

Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. [Op cit., p. 10]

A mathematical model doesn’t represent scientific knowledge unless its predictions can be and have been tested. Even then, a valid model can represent only a narrow slice of reality. The expansion of a model beyond that narrow slice requires the addition of parameters whose interactions may not be well understood and whose values will be uncertain.

Morse and Kimball accordingly urged “hemibel thinking”:

Having obtained the constants of the operations under study … we compare the value of the constants obtained in actual operations with the optimum theoretical value, if this can be computed. If the actual value is within a hemibel ( … a factor of 3) of the theoretical value, then it is extremely unlikely that any improvement in the details of the operation will result in significant improvement. [When] there is a wide gap between the actual and theoretical results … a hint as to the possible means of improvement can usually be obtained by a crude sorting of the operational data to see whether changes in personnel, equipment, or tactics produce a significant change in the constants. [Op cit., p. 38]

Should we really attach little significance to differences of less than a hemibel? Consider a five-parameter model involving the conditional probabilities of detecting, shooting at, hitting, and killing an opponent — and surviving, in the first place, to do any of these things. Such a model can easily yield a cumulative error of a hemibel (or greater), given a twenty-five percent error in the value of each parameter. (Mathematically, 1.25^5 ≈ 3.05; alternatively, 0.75^5 ≈ 0.24, or about one-fourth.)
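The arithmetic can be checked in a couple of lines; this is only a restatement of the parenthetical above, not anyone’s model:

```python
# A 25 percent error in each of five multiplied parameters compounds to
# roughly a hemibel (a factor of about 3) in either direction.
per_parameter_error = 0.25
n_parameters = 5

all_high = (1 + per_parameter_error) ** n_parameters  # about 3.05
all_low = (1 - per_parameter_error) ** n_parameters   # about 0.24

print(f"All-high case: {all_high:.2f}x; all-low case: {all_low:.2f}x")
```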

ANTI-SCIENTIFIC MODELING

What does this say about complex, synthetic models such as those of economic activity or “climate change”? Any such model rests on the modeler’s assumptions as to the parameters that should be included, their values (and the degree of uncertainty surrounding them), and the interactions among them. The interactions must be modeled based on further assumptions. And so assumptions and uncertainties — and errors — multiply apace.

But the prideful modeler (I have yet to meet a humble one) will claim validity if his model has been fine-tuned to replicate the past (e.g., changes in GDP, “global” temperature anomalies). But the model is useless unless it predicts the future consistently and with great accuracy, where “great” means accurately enough to validly represent the effects of public-policy choices (e.g., setting the federal funds rate, investing in CO2 abatement technology).

Macroeconomic Modeling: A Case Study

In macroeconomics, for example, there is Professor Ray Fair, who teaches macroeconomic theory, econometrics, and macroeconometric modeling at Yale University. He has been plying his trade at prestigious universities since 1968, first at Princeton, then at MIT, and since 1974 at Yale. Professor Fair has since 1983 been forecasting changes in real GDP — not decades ahead, just four quarters (one year) ahead. He has made 141 such forecasts, the earliest of which covers the four quarters ending with the second quarter of 1984, and the most recent of which covers the four quarters ending with the second quarter of 2019. The forecasts are based on a model that Professor Fair has revised many times over the years. The current model is here. His forecasting track record is here. How has he done? Here’s how:

1. The median absolute error of his forecasts is 31 percent.

2. The mean absolute error of his forecasts is 69 percent.

3. His forecasts are rather systematically biased: too high when real, four-quarter GDP growth is less than 3 percent; too low when real, four-quarter GDP growth is greater than 3 percent.

4. His forecasts have grown generally worse — not better — with time. Recent forecasts are better, but still far from the mark.

Thus:


This and the next two graphs were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing, as noted in the caption.
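For readers who want to check such figures themselves, here is a minimal sketch of how the error statistics in points 1 and 2 can be computed. The numbers below are placeholders, not Prof. Fair’s data; substitute the predicted and actual four-quarter growth values from Table 4.

```python
import statistics

predicted = [2.5, 3.1, 1.8, 4.0, 2.2]  # hypothetical four-quarter forecasts (percent)
actual    = [1.9, 2.4, 2.6, 3.1, 0.9]  # hypothetical actual growth (percent)

# Absolute error of each forecast, expressed as a percentage of the actual value.
abs_pct_errors = [abs(p - a) / abs(a) * 100 for p, a in zip(predicted, actual)]

print(f"Median absolute error: {statistics.median(abs_pct_errors):.0f} percent")
print(f"Mean absolute error:   {statistics.mean(abs_pct_errors):.0f} percent")
```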

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

It fails a crucial test, in that it doesn’t reflect the downward trend in economic growth:

General Circulation Models (GCMs) and “Climate Change”

As for climate models, Dr. Tim Ball writes about a

fascinating 2006 paper by Essex, McKitrick, and Andresen asked, “Does a Global Temperature Exist?” Their introduction sets the scene:

It arises from projecting a sampling of the fluctuating temperature field of the Earth onto a single number (e.g. [3], [4]) at discrete monthly or annual intervals. Proponents claim that this statistic represents a measurement of the annual global temperature to an accuracy of ±0.05 °C (see [5]). Moreover, they presume that small changes in it, up or down, have direct and unequivocal physical meaning.

The word “sampling” is important because, statistically, a sample has to be representative of a population. There is no way that a sampling of the “fluctuating temperature field of the Earth,” is possible….

… The reality is we have fewer stations now than in 1960 as NASA GISS explain (Figure 1a, # of stations and 1b, Coverage)….

Not only that, but the accuracy is terrible. US stations are supposedly the best in the world but as Anthony Watts’ project showed, only 7.9% of them achieve better than a 1°C accuracy. Look at the quote above. It says the temperature statistic is accurate to ±0.05°C. In fact, for most of the 406 years during which instrumental measures of temperature have been available (since 1612), they were incapable of yielding measurements better than 0.5°C.

The coverage numbers (1b) are meaningless because there are only weather stations for about 15% of the Earth’s surface. There are virtually no stations for

  • 70% of the world that is oceans,
  • 20% of the land surface that are mountains,
  • 20% of the land surface that is forest,
  • 19% of the land surface that is desert and,
  • 19% of the land surface that is grassland.

The result is we have inadequate measures in terms of the equipment and how it fits the historic record, combined with a wholly inadequate spatial sample. The inadequacies are acknowledged by the creation of the claim by NASA GISS and all promoters of anthropogenic global warming (AGW) that a station is representative of a 1200 km radius region.

I plotted an illustrative example on a map of North America (Figure 2).


Figure 2

Notice that the claim for the station in eastern North America includes the subarctic climate of southern James Bay and the subtropical climate of the Carolinas.

However, it doesn’t end there because this is only a meaningless temperature measured in a Stevenson Screen between 1.25 m and 2 m above the surface….

The Stevenson Screen data [are] inadequate for any meaningful analysis or as the basis of a mathematical computer model in this one sliver of the atmosphere, but there [are] even less [data] as you go down or up. The models create a surface grid that becomes cubes as you move up. The number of squares in the grid varies with the naïve belief that a smaller grid improves the models. It would if there [were] adequate data, but that doesn’t exist. The number of cubes is determined by the number of layers used. Again, theoretically, more layers would yield better results, but it doesn’t matter because there are virtually no spatial or temporal data….

So far, I have talked about the inadequacy of the temperature measurements in light of the two- and three-dimensional complexities of the atmosphere and oceans. However, one source identifies the most important variables for the models used as the basis for energy and environmental policies across the world.

Sophisticated models, like Coupled General Circulation Models, combine many processes to portray the entire climate system. The most important components of these models are the atmosphere (including air temperature, moisture and precipitation levels, and storms); the oceans (measurements such as ocean temperature, salinity levels, and circulation patterns); terrestrial processes (including carbon absorption, forests, and storage of soil moisture); and the cryosphere (both sea ice and glaciers on land). A successful climate model must not only accurately represent all of these individual components, but also show how they interact with each other.

The last line is critical and yet impossible. The temperature data [are] the best we have, and yet [they are] completely inadequate in every way. Pick any of the variables listed, and you find there [are] virtually no data. The answer to the question, “what are we really measuring,” is virtually nothing, and what we measure is not relevant to anything related to the dynamics of the atmosphere or oceans.

I am especially struck by Dr. Ball’s observation that the surface-temperature record applies to about 15 percent of Earth’s surface. Not only that, but as suggested by Dr. Ball’s figure 2, that 15 percent is poorly sampled.

And yet the proponents of CO2-forced “climate change” rely heavily on that flawed temperature record because it is the only one that goes back far enough to “prove” the modelers’ underlying assumption, namely, that it is anthropogenic CO2 emissions which have caused the rise in “global” temperatures. See, for example, Dr. Roy Spencer’s “The Faith Component of Global Warming Predictions“, wherein Dr. Spencer points out that the modelers

have only demonstrated what they assumed from the outset. It is circular reasoning. A tautology. Evidence that nature also causes global energy imbalances is abundant: e.g., the strong warming before the 1940s; the Little Ice Age; the Medieval Warm Period. This is why many climate scientists try to purge these events from the historical record, to make it look like only humans can cause climate change.

In fact the models deal in temperature anomalies, that is, departures from a 30-year average. The anomalies — which range from -1.41 to +1.68 degrees C — are so small relative to the errors and uncertainties inherent in the compilation, estimation, and model-driven adjustments of the temperature record, that they must fail Morse and Kimball’s hemibel test. (The model-driven adjustments are, as Dr. Spencer suggests, downward adjustments of historical temperature data for consistency with the models which “prove” that CO2 emissions induce a certain rate of warming. More circular reasoning.)

They also fail, and fail miserably, the acid test of predicting future temperatures with accuracy. This failure has been pointed out many times. Dr. John Christy, for example, has testified to that effect before Congress (e.g., this briefing). Defenders of the “climate change” faith have attacked Dr. Christy’s methods and findings, but the rebuttals to one such attack merely underscore the validity of Dr. Christy’s work.

This is from “Manufacturing Alarm: Dana Nuccitelli’s Critique of John Christy’s Climate Science Testimony“, by Marlo Lewis Jr.:

Christy’s testimony argues that the state-of-the-art models informing agency analyses of climate change “have a strong tendency to over-warm the atmosphere relative to actual observations.” To illustrate the point, Christy provides a chart comparing 102 climate model simulations of temperature change in the global mid-troposphere to observations from two independent satellite datasets and four independent weather balloon data sets….

To sum up, Christy presents an honest, apples-to-apples comparison of modeled and observed temperatures in the bulk atmosphere (0-50,000 feet). Climate models significantly overshoot observations in the lower troposphere, not just in the layer above it. Christy is not “manufacturing doubt” about the accuracy of climate models. Rather, Nuccitelli is manufacturing alarm by denying the models’ growing inconsistency with the real world.

And this is from Christopher Monckton of Brenchley’s “The Guardian’s Dana Nuccitelli Uses Pseudo-Science to Libel Dr. John Christy“:

One Dana Nuccitelli, a co-author of the 2013 paper that found 0.5% consensus to the effect that recent global warming was mostly manmade and reported it as 97.1%, leading Queensland police to inform a Brisbane citizen who had complained to them that a “deception” had been perpetrated, has published an article in the British newspaper The Guardian making numerous inaccurate assertions calculated to libel Dr John Christy of the University of Alabama in connection with his now-famous chart showing the ever-growing discrepancy between models’ wild predictions and the slow, harmless, unexciting rise in global temperature since 1979….

… In fact, as Mr Nuccitelli knows full well (for his own data file of 11,944 climate science papers shows it), the “consensus” is only 0.5%. But that is by the bye: the main point here is that it is the trends on the predictions compared with those on the observational data that matter, and, on all 73 models, the trends are higher than those on the real-world data….

[T]he temperature profile [of the oceans] at different strata shows little or no warming at the surface and an increasing warming rate with depth, raising the possibility that, contrary to Mr Nuccitelli’s theory that the atmosphere is warming the ocean, the ocean is instead being warmed from below, perhaps by some increase in the largely unmonitored magmatic intrusions into the abyssal strata from the 3.5 million subsea volcanoes and vents most of which Man has never visited or studied, particularly at the mid-ocean tectonic divergence boundaries, notably the highly active boundary in the eastern equatorial Pacific. [That possibility is among many which aren’t considered by GCMs.]

How good a job are the models really doing in their attempts to predict global temperatures? Here are a few more examples:

Mr Nuccitelli’s scientifically illiterate attempts to challenge Dr Christy’s graph are accordingly misconceived, inaccurate and misleading.

I have omitted the bulk of both pieces because this post is already longer than needed to make my point. I urge you to follow the links and read the pieces for yourself.

Finally, I must quote a brief but telling passage from a post by Pat Frank, “Why Roy Spencer’s Criticism is Wrong“:

[H]ere’s NASA on clouds and resolution: “A doubling in atmospheric carbon dioxide (CO2), predicted to take place in the next 50 to 100 years, is expected to change the radiation balance at the surface by only about 2 percent. … If a 2 percent change is that important, then a climate model to be useful must be accurate to something like 0.25%. Thus today’s models must be improved by about a hundredfold in accuracy, a very challenging task.”

Frank’s very long post substantiates what I say here about the errors and uncertainties in GCMs — and the multiplicative effect of those errors and uncertainties. I urge you to read it. It is telling that “climate skeptics” like Spencer and Frank will argue openly, whereas “true believers” work clandestinely to present a united front to the public. It’s science vs. anti-science.

CONCLUSION

In the end, complex, synthetic models can be defended only by resorting to the claim that they are “scientific”, which is a farcical claim when models consistently fail to yield accurate predictions. It is a claim based on a need to believe in the models — or, rather, what they purport to prove. It is, in other words, false certainty, which is the enemy of truth.

Newton said it best:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

Just as Newton’s self-doubt was not an attack on science, neither have I essayed an attack on science or modeling — only on the abuses of both that are too often found in the company of complex, synthetic models. It is too easily forgotten that the practice of science (of which modeling is a tool) is in fact an art, not a science. With this art we may portray vividly the few pebbles and shells of truth that we have grasped; we can but vaguely sketch the ocean of truth whose horizons are beyond our reach.


Related pages and posts:

Climate Change
Modeling and Science

Modeling Is Not Science
Modeling, Science, and Physics Envy
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
The Limits of Science (II)
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Rationalism, Empiricism, and Scientific Knowledge
Ty Cobb and the State of Science
Is Science Self-Correcting?
Mathematical Economics
Words Fail Us
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Hurricane Hysteria
Deduction, Induction, and Knowledge
A (Long) Footnote about Science
The Balderdash Chronicles
Analytical and Scientific Arrogance
The Pretence of Knowledge
Wildfires and “Climate Change”
Why I Don’t Believe in “Climate Change”
Modeling Is Not Science: Another Demonstration
Ad-Hoc Hypothesizing and Data Mining
Analysis vs. Reality

Predicting “Global” Temperatures — An Analogy with Baseball

The following graph is a plot of the 12-month moving average of “global” mean temperature anomalies for 1979-2018 in the lower troposphere, as reported by the climate-research unit of the University of Alabama-Huntsville (UAH):

The UAH values, which are derived from satellite-borne sensors, are as close as one can come to an estimate of changes in “global” mean temperatures. The UAH values certainly are more complete and reliable than the values derived from the surface-thermometer record, which is biased toward observations over the land masses of the Northern Hemisphere (the U.S., in particular) — observations that are themselves notoriously fraught with siting problems, urban-heat-island biases, and “adjustments” that have been made to “homogenize” temperature data, that is, to make them agree with the warming predictions of global-climate models.
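The smoothing itself is nothing exotic. Here is a minimal sketch of a trailing 12-month moving average; the anomaly values are placeholders, not the UAH series.

```python
def trailing_moving_average(series, window=12):
    """Average of the most recent `window` values; None until a full window exists."""
    return [None if i + 1 < window else sum(series[i + 1 - window:i + 1]) / window
            for i in range(len(series))]

# Hypothetical monthly anomalies (degrees C), standing in for the published record.
monthly_anomalies = [0.10, -0.05, 0.02, 0.15, 0.08, -0.12, 0.20, 0.05,
                     0.11, -0.02, 0.09, 0.14, 0.18, 0.07]
print(trailing_moving_average(monthly_anomalies))
```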

The next graph roughly resembles the first one, but it’s easier to describe. It represents the fraction of games won by the Oakland Athletics baseball team in the 1979-2018 seasons:

Unlike the “global” temperature record, the A’s W-L record is known with certainty. Every game played by the team (indeed, by all teams in organized baseball) is diligently recorded, and in great detail. Those records yield a wealth of information not only about team records, but also about the accomplishments of the individual players whose combined performance determines whether and how often a team wins its games. And statistics are or could be compiled about much else: records of players in the years and games preceding a season or game; records of each team’s owners, general managers, and managers; orientations of the ballparks in which each team compiled its records; distances to the fences in those ballparks; time of day at which games were played; ambient temperatures; and on and on.

Despite all of that knowledge, there is much uncertainty about how to model the interactions among the quantifiable elements of the game, and how to give weight to the non-quantifiable elements (a manager’s leadership and tactical skills, team spirit, and on and on). Even the professional prognosticators at FiveThirtyEight, armed with a vast compilation of baseball statistics from which they have devised a complex predictive model of baseball outcomes, will admit that perfection (or anything close to it) eludes them. Like many other statisticians, they fall back on the excuse that “chance” or “luck” intrudes too often to allow their statistical methods to work their magic. What they won’t admit to themselves is that the results of simulations (such as those employed in the complex model devised by FiveThirtyEight),

reflect the assumptions underlying the authors’ model — not reality. A key assumption is that the model … accounts for all relevant variables….

As I have said, “luck” is mainly an excuse and rarely an explanation. Attributing outcomes to “luck” is an easy way of belittling success when it accrues to a rival.

It is also an easy way of dodging the fact that no model can accurately account for the outcomes of complex systems. “Luck” is the disappointed modeler’s excuse.

If the outcomes of baseball games and seasons could be modeled with great certainty, people wouldn’t bet on those outcomes. The existence of successful models would become general knowledge, and betting would cease, as the small gains that might accrue from betting on narrow odds would be wiped out by vigorish.

Returning now to “global” temperatures, I am unaware of any model that actually tries to account for the myriad factors that influence climate. The pseudo-science of “climate change” began with the assumption that “global” temperatures are driven by human activity, namely the burning of fossil fuels that releases CO2 into the atmosphere. CO2 became the centerpiece of global climate models (GCMs), and everything else became an afterthought, or a non-thought. It is widely acknowledged that cloud formation and cloud cover — obviously important determinants of near-surface temperatures — are treated inadequately (when treated at all). The mechanisms by which the oceans absorb heat and transmit it to the atmosphere also remain mysterious. The effect of solar activity on cosmic radiation reaching Earth (and thus on cloud formation) is often dismissed despite strong evidence of its importance. Other factors that seem to have little or no weight in GCMs (though they are sometimes estimated in isolation) include plate tectonics, magma flows, volcanic activity, and vegetation.

Despite all of that, builders of GCMs — and the doomsayers who worship them — believe that “global” temperatures will rise to catastrophic readings. The rising oceans will swamp coastal cities; the earth will be scorched, except where it is flooded by massive storms; crops will fail accordingly; tempers will flare and wars will break out more frequently.

There’s just one catch, and it’s a big one. Minute changes in the value of a dependent variable (“global” temperature, in this case) can’t be explained by a model in which key explanatory variables are unaccounted for, in which the values of the variables that are accounted for are highly uncertain, and in which the mechanisms by which the variables interact are also highly uncertain. Even an impossibly complete model would be wildly inaccurate, given the uncertainty surrounding the interactions among the variables and the values of the variables themselves (in the past as well as in the future).

I say “minute changes” because the first graph above is grossly misleading. An unbiased depiction of “global” temperatures looks like this:

There’s a much better chance of predicting the success or failure of the Oakland A’s, whose record looks like this on an absolute scale:

Just as no rational (unemotional) person should believe that predictions of “global” temperatures should dictate government spending and regulatory policies, no sane bettor is holding his breath in anticipation that the success or failure of the A’s (or any team) can be predicted with bankable certainty.
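The scale point can also be made numerically. In the sketch below, the anomaly values are placeholders, and the 264 K baseline is an assumed, round figure for the lower troposphere, used only for illustration.

```python
# The same anomalies that look dramatic on a one- or two-degree axis amount
# to a fraction of one percent of the absolute temperature being estimated.
anomalies_c = [-0.4, -0.1, 0.0, 0.2, 0.5, 0.3, 0.6]  # hypothetical annual anomalies (C)
baseline_k = 264.0                                    # assumed absolute baseline (K)

span = max(anomalies_c) - min(anomalies_c)
print(f"Anomaly span: {span:.1f} K, or {span / baseline_k * 100:.2f} percent of the baseline")
```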

All of this illustrates a concept known as causal density, which Arnold Kling explains:

When there are many factors that have an impact on a system, statistical analysis yields unreliable results. Computer simulations give you exquisitely precise unreliable results. Those who run such simulations and call what they do “science” are deceiving themselves.

The folks at FiveThirtyEight are no more (and no less) delusional than the creators of GCMs.

Recent Updates

In case you hadn’t noticed, I have in the past few days added new links to the following post and pages:

The Green New Deal Is a Plan for Economic Devastation

Climate Change

Favorite Posts

Intelligence

Spygate

I have also updated “Writing: A Guide“.

Wildfires and “Climate Change”, Again

In view of the current hysteria about the connection between wildfires and “climate change”, I must point readers to a three-month-old post: the connection is nil, just like the bogus connection between tropical cyclone activity and “climate change”.

Why I Don’t Believe in “Climate Change”

UPDATED AND EXTENDED, 11/01/18

There are lots of reasons to disbelieve in “climate change”, that is, a measurable and statistically significant influence of human activity on the “global” temperature. Many of the reasons can be found at my page on the subject — in the text, the list of related readings, and the list of related posts. Here’s the main one: Surface temperature data — the basis for the theory of anthropogenic global warming — simply do not support the theory.

As Dr. Tim Ball points out:

A fascinating 2006 paper by Essex, McKitrick, and Andresen asked, “Does a Global Temperature Exist?” Their introduction sets the scene:

It arises from projecting a sampling of the fluctuating temperature field of the Earth onto a single number (e.g. [3], [4]) at discrete monthly or annual intervals. Proponents claim that this statistic represents a measurement of the annual global temperature to an accuracy of ±0.05 °C (see [5]). Moreover, they presume that small changes in it, up or down, have direct and unequivocal physical meaning.

The word “sampling” is important because, statistically, a sample has to be representative of a population. There is no way that a sampling of the “fluctuating temperature field of the Earth,” is possible….

… The reality is we have fewer stations now than in 1960 as NASA GISS explain (Figure 1a, # of stations and 1b, Coverage)….

Not only that, but the accuracy is terrible. US stations are supposedly the best in the world but as Anthony Watts’ project showed, only 7.9% of them achieve better than a 1°C accuracy. Look at the quote above. It says the temperature statistic is accurate to ±0.05°C. In fact, for most of the 406 years during which instrumental measures of temperature have been available (since 1612), they were incapable of yielding measurements better than 0.5°C.

The coverage numbers (1b) are meaningless because there are only weather stations for about 15% of the Earth’s surface. There are virtually no stations for

  • 70% of the world that is oceans,
  • 20% of the land surface that are mountains,
  • 20% of the land surface that is forest,
  • 19% of the land surface that is desert and,
  • 19% of the land surface that is grassland.

The result is we have inadequate measures in terms of the equipment and how it fits the historic record, combined with a wholly inadequate spatial sample. The inadequacies are acknowledged by the creation of the claim by NASA GISS and all promoters of anthropogenic global warming (AGW) that a station is representative of a 1200 km radius region.

I plotted an illustrative example on a map of North America (Figure 2).


Figure 2

Notice that the claim for the station in eastern North America includes the subarctic climate of southern James Bay and the subtropical climate of the Carolinas.

However, it doesn’t end there because this is only a meaningless temperature measured in a Stevenson Screen between 1.25 m and 2 m above the surface….

The Stevenson Screen data [are] inadequate for any meaningful analysis or as the basis of a mathematical computer model in this one sliver of the atmosphere, but there [are] even less [data] as you go down or up. The models create a surface grid that becomes cubes as you move up. The number of squares in the grid varies with the naïve belief that a smaller grid improves the models. It would if there [were] adequate data, but that doesn’t exist. The number of cubes is determined by the number of layers used. Again, theoretically, more layers would yield better results, but it doesn’t matter because there are virtually no spatial or temporal data….

So far, I have talked about the inadequacy of the temperature measurements in light of the two- and three-dimensional complexities of the atmosphere and oceans. However, one source identifies the most important variables for the models used as the basis for energy and environmental policies across the world.

“Sophisticated models, like Coupled General Circulation Models, combine many processes to portray the entire climate system. The most important components of these models are the atmosphere (including air temperature, moisture and precipitation levels, and storms); the oceans (measurements such as ocean temperature, salinity levels, and circulation patterns); terrestrial processes (including carbon absorption, forests, and storage of soil moisture); and the cryosphere (both sea ice and glaciers on land). A successful climate model must not only accurately represent all of these individual components, but also show how they interact with each other.”

The last line is critical and yet impossible. The temperature data [are] the best we have, and yet [they are] completely inadequate in every way. Pick any of the variables listed, and you find there [are] virtually no data. The answer to the question, “what are we really measuring,” is virtually nothing, and what we measure is not relevant to anything related to the dynamics of the atmosphere or oceans.

I am especially struck by Dr. Ball’s observation that the surface-temperature record applies to about 15 percent of Earth’s surface. Not only that, but as suggested by Dr. Ball’s figure 2, that 15 percent is poorly sampled.

Take the National Weather Service station for Austin, Texas, which is located 2.7 miles from my house. The station is on the grounds of Camp Mabry, a Texas National Guard base near the center of Austin, the fastest-growing large city in the U.S. The base is adjacent to a major highway (Texas Loop 1) that traverses Austin. The weather station is about 1/4 mile from the highway, 100 feet from a paved road on the base, and near a complex of buildings and parking areas.

Here’s a ground view of the weather station:

And here’s an aerial view; the weather station is the tan rectangle at the center of the photo:

As I have shown elsewhere, the general rise in temperatures recorded at the weather station over the past several decades is fully explained by the urban-heat-island effect due to the rise in Austin’s population during those decades.
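One common way to frame such a check is to regress the station’s annual mean temperature on the logarithm of the city’s population, a functional form often used for the urban-heat-island effect. The sketch below shows only the mechanics, with placeholder numbers; it is not the data or the method used in the analysis linked above.

```python
import math

years       = [1970, 1980, 1990, 2000, 2010, 2018]
mean_temp_f = [68.1, 68.4, 68.9, 69.3, 69.8, 70.2]              # hypothetical annual means (F)
population  = [254000, 346000, 466000, 657000, 790000, 964000]  # hypothetical populations

x = [math.log(p) for p in population]
y = mean_temp_f
n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Ordinary least squares for temp = a + b * ln(population).
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
print(f"temp = {a:.1f} + {b:.2f} * ln(population)")
```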

Further, there is a consistent difference in temperature and rainfall between my house and Camp Mabry. My house is located farther from the center of Austin — northwest of Camp Mabry — in a topographically different area. The topography in my part of Austin is typical of the Texas Hill Country, which begins about a mile east of my house and covers a broad swath of land stretching as far as 250 miles from Austin.

The contrast is obvious in the next photo. Camp Mabry is at the “1” (for Texas Loop 1) near the lower edge of the image. Topographically, it belongs with the flat part of Austin that lies mostly east of Loop 1. It is unrepresentative of the huge chunk of Austin and environs that lies to its north and west.

Getting down to cases: in the past summer, daily highs recorded at Camp Mabry hit 100 degrees or more 52 times, but the daily high at my house reached 100 or more only on the handful of days when the Camp Mabry reading was 106-110. That’s consistent with another observation; namely, that the daily high at my house is generally 6 degrees lower than the daily high at Camp Mabry when the reading there is above 90 degrees.
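A comparison of that sort is easy to reproduce from paired daily readings. Here is a minimal sketch, with placeholder values rather than the actual records:

```python
mabry_highs = [102, 98, 95, 88, 107, 100, 93, 110]  # hypothetical daily highs (F)
house_highs = [96, 92, 90, 86, 101, 94, 88, 104]    # hypothetical daily highs (F)

# Average gap on days when the Camp Mabry high exceeds 90 degrees.
gaps = [m - h for m, h in zip(mabry_highs, house_highs) if m > 90]
print(f"Average gap on 90-plus-degree days: {sum(gaps) / len(gaps):.1f} degrees")
```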

As for rainfall, my house seems to be in a different ecosystem than Camp Mabry’s. Take September and October of this year: 15.7 inches of rain fell at Camp Mabry, as against 21.0 inches at my house. The higher totals at my house are typical, and are due to a phenomenon called orographic lift. It affects areas to the north and west of Camp Mabry, but not Camp Mabry itself.

So the climate at Camp Mabry is not my climate. Nor is the climate at Camp Mabry typical of a vast area in and around Austin, despite the use of Camp Mabry’s climate to represent that area.

There is another official weather station at Austin-Bergstrom International Airport, which is in the flatland 9.5 miles to the southeast of Camp Mabry. Its rainfall total for September and October was 12.8 inches — almost 3 inches less than at Camp Mabry — but its average temperatures for the two months were within a degree of Camp Mabry’s. Suppose Camp Mabry’s weather station went offline. The weather station at ABIA would then record temperatures and precipitation even less representative of those at my house and similar areas to the north and west.

Speaking of precipitation — it is obviously related to cloud cover. The more it rains, the cloudier it will be. The cloudier it is, the lower the temperature, other things being the same (e.g., locale). This is true for Austin:

12-month avg temp vs. precip

The correlation coefficient is highly significant, given the huge sample size. Note that the relationship is between precipitation in a given month and temperature a month later. Although cloud cover (and thus precipitation) has an immediate effect on temperature, precipitation has a residual effect in that wet ground absorbs more solar radiation than dry ground, so that there is less heat reflected from the ground to the air. The lagged relationship is strongest at 1 month, and considerably stronger than any relationship in which temperature leads precipitation.
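The lagged relationship is straightforward to compute. Here is a minimal sketch that correlates precipitation in month t with temperature in month t+1; the two series are placeholders for the Austin monthly data.

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

precip = [2.1, 0.4, 3.5, 5.0, 1.2, 0.0, 0.8, 4.4, 6.1, 2.7]  # hypothetical inches
temp   = [78, 84, 80, 72, 75, 88, 90, 85, 74, 70]            # hypothetical degrees F

# Precipitation leads temperature by one month: pair precip[t] with temp[t+1].
print(f"Lag-1 correlation (precip leads temp): {pearson(precip[:-1], temp[1:]):.2f}")
```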

I bring up this aspect of Austin’s climate because of a post by Anthony Watts (“Data: Global Temperatures Fell As Cloud Cover Rose in the 1980s and 90s“, Watts Up With That?, November 1, 2018):

I was reminded about a study undertaken by Clive Best and Euan Mearns looking at the role of cloud cover four years ago:

Clouds have a net average cooling effect on the earth’s climate. Climate models assume that changes in cloud cover are a feedback response to CO2 warming. Is this assumption valid? Following a study with Euan Mearns showing a strong correlation in UK temperatures with clouds, we looked at the global effects of clouds by developing a combined cloud and CO2 forcing model to study how variations in both cloud cover [8] and CO2 [14] data affect global temperature anomalies between 1983 and 2008. The model as described below gives a good fit to HADCRUT4 data with a Transient Climate Response (TCR) = 1.6±0.3°C. The 17-year hiatus in warming can then be explained as resulting from a stabilization in global cloud cover since 1998. An Excel spreadsheet implementing the model as described below can be downloaded from http://clivebest.com/GCC.

The full post containing all of the detailed statistical analysis is here.

But this is the key graph:


Figure 1a showing the ISCCP global averaged monthly cloud cover from July 1983 to Dec 2008 over-laid in blue with Hadcrut4 monthly anomaly data. The fall in cloud cover coincides with a rapid rise in temperatures from 1983-1999. Thereafter the temperature and cloud trends have both flattened. The CO2 forcing from 1998 to 2008 increases by a further ~0.3 W/m2 which is evidence that changes in clouds are not a direct feedback to CO2 forcing.

In conclusion, natural cyclic change in global cloud cover has a greater impact on global average temperatures than CO2. There is little evidence of a direct feedback relationship between clouds and CO2. Based on satellite measurements of cloud cover (ISCCP), net cloud forcing (CERES) and CO2 levels (KEELING) we developed a model for predicting global temperatures. This results in a best-fit value for TCR = 1.4 ± 0.3°C. Summer cloud forcing has a larger effect in the northern hemisphere resulting in a lower TCR = 1.0 ± 0.3°C. Natural phenomena must influence clouds, although the details remain unclear; the CLOUD experiment has given hints that increased fluxes of cosmic rays may increase cloud seeding [19]. In conclusion, the gradual reduction in net cloud cover explains over 50% of global warming observed during the 80s and 90s, and the hiatus in warming since 1998 coincides with a stabilization of cloud forcing.

Why there was a decrease in cloud cover is another question of course.

In addition to Paul Homewood’s piece, we have this WUWT story from 2012:

Spencer’s posited 1-2% cloud cover variation found

A paper published last week finds that cloud cover over China significantly decreased during the period 1954-2005. This finding is in direct contradiction to the theory of man-made global warming which presumes that warming allegedly from CO2 ‘should’ cause an increase in water vapor and cloudiness. The authors also find the decrease in cloud cover was not related to man-made aerosols, and thus was likely a natural phenomenon, potentially a result of increased solar activity via the Svensmark theory or other mechanisms.

Case closed. (Not for the first time.)
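For readers who want to see the arithmetic, here is a minimal sketch of the kind of two-term (CO2-plus-cloud) fit that Best and Mearns describe above. It is my own illustration, not their code or data: the series below are made-up placeholders, the CO2 term uses the standard 5.35·ln(C/C0) forcing approximation, and a real fit would estimate the coefficients by least squares against the Keeling CO2 record, the ISCCP cloud data, and HADCRUT4 anomalies.

import numpy as np

def modeled_anomaly(co2_ppm, cloud_frac, tcr, cloud_coeff):
    """Temperature anomaly (deg C) from a CO2 term plus a cloud-cover term."""
    f_co2 = 5.35 * np.log(co2_ppm / co2_ppm[0])  # CO2 forcing since the start, W/m^2
    f_2x = 5.35 * np.log(2.0)                    # forcing for a doubling of CO2
    d_cloud = cloud_frac - cloud_frac[0]         # change in cloud fraction since the start
    # Simple scaling: TCR degrees of warming per doubling-equivalent of CO2 forcing,
    # plus a linear response to the change in cloud fraction.
    return tcr * (f_co2 / f_2x) + cloud_coeff * d_cloud

# Made-up placeholder series, roughly spanning 1983-2008 (306 months):
co2 = np.linspace(343.0, 386.0, 306)     # ppm, rising
clouds = np.linspace(0.68, 0.64, 306)    # cloud fraction, gently falling (illustrative)

# A negative cloud coefficient means less cloud, more warming; both terms then
# contribute to the modeled trend.
print(modeled_anomaly(co2, clouds, tcr=1.6, cloud_coeff=-10.0)[-1])

The point of the exercise is only to show how the two terms enter such a fit; the TCR values quoted above come from Best and Mearns’s regression against the real data, not from anything here.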

New Pages

In case you haven’t noticed the list in the right sidebar, I have converted several classic posts to pages, for ease of access. Some have new names; many combine several posts on the same subject:

Abortion Q & A

Climate Change

Constitution: Myths and Realities

Economic Growth Since World War II

Intelligence

Keynesian Multiplier: Fiction vs. Fact

Leftism

Movies

Spygate