Wildfires and “Climate Change”, Again

In view of the current hysteria about the connection between wildfires and “climate change”, I must point readers to a three-month-old post, “Why I Don’t Believe in ‘Climate Change’” (below). The connection is nil, just like the bogus connection between tropical cyclone activity and “climate change”.

Why I Don’t Believe in “Climate Change”

UPDATED AND EXTENDED, 11/01/18

There are lots of reasons to disbelieve in “climate change”, that is, a measurable and statistically significant influence of human activity on the “global” temperature. Many of the reasons can be found at my page on the subject — in the text, the list of related readings, and the list of related posts. Here’s the main one: Surface temperature data — the basis for the theory of anthropogenic global warming — simply do not support the theory.

As Dr. Tim Ball points out:

A fascinating 2006 paper by Essex, McKitrick, and Andresen asked, “Does a Global Temperature Exist?” Their introduction sets the scene:

It arises from projecting a sampling of the fluctuating temperature field of the Earth onto a single number (e.g. [3], [4]) at discrete monthly or annual intervals. Proponents claim that this statistic represents a measurement of the annual global temperature to an accuracy of ±0.05°C (see [5]). Moreover, they presume that small changes in it, up or down, have direct and unequivocal physical meaning.

The word “sampling” is important because, statistically, a sample has to be representative of a population. There is no way that a sampling of the “fluctuating temperature field of the Earth” is possible….

… The reality is we have fewer stations now than in 1960, as NASA GISS explains (Figure 1a, # of stations, and 1b, coverage)….

Not only that, but the accuracy is terrible. US stations are supposedly the best in the world, but as Anthony Watts’s project showed, only 7.9% of them achieve better than 1°C accuracy. Look at the quote above. It says the temperature statistic is accurate to ±0.05°C. In fact, for most of the 406 years since instrumental measures of temperature became available (in 1612), the instruments were incapable of yielding measurements better than 0.5°C.

The coverage numbers (1b) are meaningless because there are only weather stations for about 15% of the Earth’s surface. There are virtually no stations for

  • 70% of the world that is oceans,
  • 20% of the land surface that is mountains,
  • 20% of the land surface that is forest,
  • 19% of the land surface that is desert, and
  • 19% of the land surface that is grassland.

The result is that we have inadequate measures in terms of the equipment and how it fits the historic record, combined with a wholly inadequate spatial sample. NASA GISS and all promoters of anthropogenic global warming (AGW) tacitly acknowledge these inadequacies by claiming that a station is representative of a region within a 1200-km radius.

I plotted an illustrative example on a map of North America (Figure 2).


Figure 2

Notice that the claim for the station in eastern North America includes the subarctic climate of southern James Bay and the subtropical climate of the Carolinas.

However, it doesn’t end there because this is only a meaningless temperature measured in a Stevenson Screen between 1.25 m and 2 m above the surface….

The Stevenson Screen data [are] inadequate for any meaningful analysis or as the basis of a mathematical computer model in this one sliver of the atmosphere, but there [are] even less [data] as you go down or up. The models create a surface grid that becomes cubes as you move up. The number of squares in the grid varies with the naïve belief that a smaller grid improves the models. It would if there [were] adequate data, but that doesn’t exist. The number of cubes is determined by the number of layers used. Again, theoretically, more layers would yield better results, but it doesn’t matter because there are virtually no spatial or temporal data….

So far, I have talked about the inadequacy of the temperature measurements in light of the two- and three-dimensional complexities of the atmosphere and oceans. However, one source identifies the most important variables for the models used as the basis for energy and environmental policies across the world.

“Sophisticated models, like Coupled General Circulation Models, combine many processes to portray the entire climate system. The most important components of these models are the atmosphere (including air temperature, moisture and precipitation levels, and storms); the oceans (measurements such as ocean temperature, salinity levels, and circulation patterns); terrestrial processes (including carbon absorption, forests, and storage of soil moisture); and the cryosphere (both sea ice and glaciers on land). A successful climate model must not only accurately represent all of these individual components, but also show how they interact with each other.”

The last line is critical and yet impossible. The temperature data [are] the best we have, and yet [they are] completely inadequate in every way. Pick any of the variables listed, and you find there [are] virtually no data. The answer to the question, “what are we really measuring,” is virtually nothing, and what we measure is not relevant to anything related to the dynamics of the atmosphere or oceans.

I am especially struck by Dr. Ball’s observation that the surface-temperature record applies to about 15 percent of Earth’s surface. Not only that, but as suggested by Dr. Ball’s figure 2, that 15 percent is poorly sampled.
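
To put Dr. Ball’s 1200-km convention in perspective, here is a back-of-the-envelope sketch (my own arithmetic, not Dr. Ball’s). It computes the surface area a single station is claimed to represent and how many such stations would, in principle, tile the globe:

```python
import math

EARTH_SURFACE_KM2 = 5.1e8  # approximate total surface area of Earth, km^2
RADIUS_KM = 1200           # radius a single station is claimed to represent

circle_km2 = math.pi * RADIUS_KM ** 2
print(f"one station's claimed region: {circle_km2:,.0f} km^2")
print(f"share of Earth's surface: {circle_km2 / EARTH_SURFACE_KM2:.1%}")
print(f"stations needed to tile the globe: {EARTH_SURFACE_KM2 / circle_km2:.0f}")
```

On that convention, a little over a hundred perfectly placed stations would “cover” the planet, which is exactly why the convention proves too much.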

Take the National Weather Service station for Austin, Texas, which is located 2.7 miles from my house. The station is on the grounds of Camp Mabry, a Texas National Guard base near the center of Austin, the fastest-growing large city in the U.S. The base is adjacent to a major highway (Texas Loop 1) that traverses Austin. The weather station is about 1/4 mile from the highway, 100 feet from a paved road on the base, and near a complex of buildings and parking areas.

Here’s a ground view of the weather station:

And here’s an aerial view; the weather station is the tan rectangle at the center of the photo:

As I have shown elsewhere, the general rise in temperatures recorded at the weather station over the past several decades is fully explained by the urban-heat-island effect due to the rise in Austin’s population during those decades.

Further, there is a consistent difference in temperature and rainfall between my house and Camp Mabry. My house is located farther from the center of Austin — northwest of Camp Mabry — in a topographically different area. The topography in my part of Austin is typical of the Texas Hill Country, which begins about a mile east of my house and covers a broad swath of land stretching as far as 250 miles from Austin.

The contrast is obvious in the next photo. Camp Mabry is at the “1” (for Texas Loop 1) near the lower edge of the image. Topographically, it belongs with the flat part of Austin that lies mostly east of Loop 1. It is unrepresentative of the huge chunk of Austin and environs that lies to its north and west.

Getting down to cases: In the past summer, the daily high recorded at Camp Mabry hit 100 degrees or more 52 times, but the daily high at my house reached 100 or more only on the handful of days when Camp Mabry reached 106-110. That’s consistent with another observation; namely, that the daily high at my house generally runs 6 degrees lower than Camp Mabry’s when the latter is above 90 degrees.

As for rainfall, my house seems to be in a different ecosystem than Camp Mabry’s. Take September and October of this year: 15.7 inches of rain fell at Camp Mabry, as against 21.0 inches at my house. The higher totals at my house are typical, and are due to a phenomenon called orographic lift. It affects areas to the north and west of Camp Mabry, but not Camp Mabry itself.

So the climate at Camp Mabry is not my climate. Nor is the climate at Camp Mabry typical of a vast area in and around Austin, despite the use of Camp Mabry’s climate to represent that area.

There is another official weather station at Austin-Bergstrom International Airport, which is in the flatland 9.5 miles to the southeast of Camp Mabry. Its rainfall total for September and October was 12.8 inches — almost 3 inches less than at Camp Mabry — but its average temperatures for the two months were within a degree of Camp Mabry’s. Suppose Camp Mabry’s weather station went offline. The weather station at ABIA would then record temperatures and precipitation even less representative of those at my house and similar areas to the north and west.

Speaking of precipitation — it is obviously related to cloud cover. The more it rains, the cloudier it will be. The cloudier it is, the lower the temperature, other things being the same (e.g., locale). This is true for Austin:

[Figure: 12-month average temperature vs. precipitation, Austin]

The correlation coefficient is highly significant, given the huge sample size. Note that the relationship is between precipitation in a given month and temperature a month later. Although cloud cover (and thus precipitation) has an immediate effect on temperature, precipitation has a residual effect in that wet ground absorbs more solar radiation than dry ground, so that there is less heat reflected from the ground to the air. The lagged relationship is strongest at 1 month, and considerably stronger than any relationship in which temperature leads precipitation.
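
For readers who want to check the lag structure themselves, here is a minimal sketch, assuming a hypothetical CSV of Austin’s monthly data (the file and column names are mine, not an official format):

```python
import pandas as pd

# Hypothetical monthly series for Austin.
df = pd.read_csv("austin_monthly.csv")  # columns: date, temp_f, precip_in

# Correlate this month's temperature with precipitation k months earlier;
# shift(k) moves the precipitation series forward by k months.
for k in range(7):
    r = df["temp_f"].corr(df["precip_in"].shift(k))
    print(f"lag {k} months: r = {r:.3f}")
```

If the relationship described above holds, the printed correlation should peak at k = 1.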

I bring up this aspect of Austin’s climate because of a post by Anthony Watts (“Data: Global Temperatures Fell As Cloud Cover Rose in the 1980s and 90s“, Watts Up With That?, November 1, 2018):

I was reminded about a study undertaken by Clive Best and Euan Mearns looking at the role of cloud cover four years ago:

Clouds have a net average cooling effect on the earth’s climate. Climate models assume that changes in cloud cover are a feedback response to CO2 warming. Is this assumption valid? Following a study with Euan Mearns showing a strong correlation in UK temperatures with clouds, we looked at the global effects of clouds by developing a combined cloud and CO2 forcing model to study how variations in both cloud cover [8] and CO2 [14] data affect global temperature anomalies between 1983 and 2008. The model as described below gives a good fit to HADCRUT4 data with a Transient Climate Response (TCR) = 1.6±0.3°C. The 17-year hiatus in warming can then be explained as resulting from a stabilization in global cloud cover since 1998. An Excel spreadsheet implementing the model as described below can be downloaded from http://clivebest.com/GCC.

The full post containing all of the detailed statistical analysis is here.

But this is the key graph:


Figure 1a, showing the ISCCP global averaged monthly cloud cover from July 1983 to December 2008, overlaid in blue with HADCRUT4 monthly anomaly data. The fall in cloud cover coincides with a rapid rise in temperatures from 1983-1999. Thereafter the temperature and cloud trends have both flattened. The CO2 forcing from 1998 to 2008 increases by a further ~0.3 W/m², which is evidence that changes in clouds are not a direct feedback to CO2 forcing.

In conclusion, natural cyclic change in global cloud cover has a greater impact on global average temperatures than CO2. There is little evidence of a direct feedback relationship between clouds and CO2. Based on satellite measurements of cloud cover (ISCCP), net cloud forcing (CERES), and CO2 levels (KEELING), we developed a model for predicting global temperatures. This results in a best-fit value for TCR = 1.4 ± 0.3°C. Summer cloud forcing has a larger effect in the northern hemisphere, resulting in a lower TCR = 1.0 ± 0.3°C. Natural phenomena must influence clouds, although the details remain unclear; the CLOUD experiment has given hints that increased fluxes of cosmic rays may increase cloud seeding [19]. In conclusion, the gradual reduction in net cloud cover explains over 50% of the global warming observed during the 80s and 90s, and the hiatus in warming since 1998 coincides with a stabilization of cloud forcing.

Why there was a decrease in cloud cover is another question of course.
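
Best’s combined model is, at bottom, a two-term regression: temperature anomaly as a linear function of cloud cover and of the logarithmic CO2 forcing. Here is a minimal sketch of that structure with synthetic stand-in data (the real model uses the ISCCP, Keeling, and HADCRUT4 series; my planted coefficients are chosen only so that the fit reproduces a TCR near the quoted 1.6°C):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # monthly samples, roughly 1983-2008

# Synthetic stand-ins: slowly falling cloud cover (%) and rising CO2 (ppm).
cloud = 66 - 0.01 * np.arange(n) + rng.normal(0, 0.5, n)
co2 = 343 + 0.15 * np.arange(n)

# Standard logarithmic CO2 forcing approximation: 5.35 * ln(C/C0) W/m^2.
f_co2 = 5.35 * np.log(co2 / co2[0])

# Build a synthetic anomaly with planted coefficients, then fit them back.
X = np.column_stack([cloud, f_co2, np.ones(n)])
temp = X @ np.array([-0.07, 0.43, 4.6]) + rng.normal(0, 0.05, n)

coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
a, b, c = coef
tcr = b * 5.35 * np.log(2)  # implied warming per doubling of CO2
print(f"cloud coefficient = {a:.3f} degC per % cover; TCR = {tcr:.2f} degC")
```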

In addition to Paul Homewood’s piece, we have this WUWT story from 2012:

Spencer’s posited 1-2% cloud cover variation found

A paper published last week finds that cloud cover over China significantly decreased during the period 1954-2005. This finding is in direct contradiction to the theory of man-made global warming which presumes that warming allegedly from CO2 ‘should’ cause an increase in water vapor and cloudiness. The authors also find the decrease in cloud cover was not related to man-made aerosols, and thus was likely a natural phenomenon, potentially a result of increased solar activity via the Svensmark theory or other mechanisms.

Case closed. (Not for the first time.)

Hurricane Hysteria, Updated

In view of Hurricane Michael, and the attendant claims about the role of “climate change”, I have updated “Hurricane Hysteria“. The bottom line remains the same: Global measures of accumulated cyclone energy (ACE) do not support the view that there is a correlation between “climate change” and tropical cyclone activity.

New Pages

In case you haven’t noticed the list in the right sidebar, I have converted several classic posts to pages, for ease of access. Some have new names; many combine several posts on the same subject:

Abortion Q & A

Climate Change

Constitution: Myths and Realities

Economic Growth Since World War II

Intelligence

Keynesian Multiplier: Fiction vs. Fact

Leftism

Movies

Spygate

Wildfires and “Climate Change”

Regarding the claim that there are more wildfires because of “climate change”:

In case the relationship isn’t obvious, here it is:

Estimates of the number of fires are from National Fire Protection Association, Number of Fires by Type of Fire. Specifically, the estimates are the sum of the columns for “Outside of Structures with Value Involved but no vehicle (outside storage crops, timber, etc)” and “Brush, Grass, Wildland (excluding crops and timber), with no value or loss involved”.

Estimates of the global temperature anomalies are annual averages of monthly satellite readings for the lower troposphere, published by the Earth System Science Center of the University of Alabama in Huntsville.

Hot Is Better Than Cold: A Small Case Study

I’ve been trying to find wandering classmates as the 60th anniversary of our graduation from high school looms. Not all are enthusiastic about returning to our home town in Michigan for a reunion next August. Nor am I, truth be told.

A sunny August day in Michigan is barely warm enough for me. I’m far from alone in holding that view, as anyone with a casual knowledge of inter-State migration knows.

Take my graduating class, for example. Of the 79 living graduates whose whereabouts are known, 45 are still in Michigan; 24 are in warmer States (Arizona, California, Florida, Georgia, Kentucky, Louisiana, Mississippi, Tennessee, and Texas — moi); and 10 (inexplicably) have opted for other States at about the same latitude. In sum: 30 percent have opted for warmer climes; only 13 percent have chosen to leave a cold State for another cold State.

It would be a good thing if the world were warming a tad, as it might be.

Hurricane Hysteria

UPDATED 09/15/17, 09/16/17, 09/12/18, AND 10/10/18. (Items are added occasionally to the list of related readings at the bottom of the post.)

Yes, hurricanes are bad things when they kill and injure people, destroy property, and saturate the soil with seawater. But hurricanes are in the category of “stuff happens”.

Contrary to the true believers in catastrophic anthropogenic global warming (CAGW), hurricanes are not the fault of human beings. Hurricanes are not nature’s “retribution” for mankind’s “sinful” ways, such as the use of fossil fuels.

How do I know? Because there are people who actually look at the numbers. See, for example, “Hate on Display: Climate Activists Go Bonkers Over #Irma and Nonexistent Climate Connection” by Anthony Watts (Watts Up With That?, September 11, 2017). See also Michel de Rougemont’s “Correlation of Accumulated Cyclone Energy and Atlantic Multidecadal Oscillations” (Watts Up With That?, September 4, 2017).

M. de Rougemont’s post addresses accumulated cyclone energy (ACE):

The total energy accumulated each year by tropical storms and hurricanes (ACE) is also showing such a cyclic pattern.

NOAA’s Hurricane Research Division explains ACE as follows: “the ACE is calculated by squaring the maximum sustained surface wind in the system every six hours (knots) and summing it up for the season. It is expressed in 10⁴ kt².” Direct instrumental observations are available as monthly series since 1848. A historic reconstruction since 1851 was done by NOAA (yearly means).


Figure 2: Yearly accumulated cyclone energy (ACE). ACE_7y: centered running average over 7 years.
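
To make NOAA’s definition concrete, here is a minimal sketch of the ACE arithmetic (my illustration, with made-up wind values; it is not part of M. de Rougemont’s post):

```python
def ace(six_hourly_winds_kt):
    """Sum of squared 6-hourly maximum sustained winds (knots), in 10^4 kt^2.

    By convention, only observations at tropical-storm strength
    (35 kt or more) are counted.
    """
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 35) / 1e4

storm = [30, 40, 55, 70, 90, 85, 60, 45, 30]  # one storm's 6-hourly winds
print(f"ACE contribution: {ace(storm):.2f} x 10^4 kt^2")
```

A season’s ACE is just this sum taken over every storm in the season.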

A correlation between ACE and AMO [Atlantic Multidecadal Oscillation] is confirmed by regression analysis.


Figure 3: Correlation ACE = f(AMO), using the running averages over 7 years. AMO: yearly means of the Atlantic Multidecadal Oscillation. ACE_7y: yearly observed accumulated cyclone energy. ACE_calc: calculated ACE using the indicated formula.

Regression formula: [formula image not reproduced]

Thus, a simple, linear relation ties ACE to AMO, in part directly, and in part with an 18-year delay. The correlation coefficient is astonishingly good.
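
The regression formula itself is in an image that does not reproduce here, but a relation of the kind described (ACE driven partly by the current AMO and partly by the AMO of 18 years earlier) can be fit by ordinary least squares. A hypothetical sketch with synthetic data, not M. de Rougemont’s actual code:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1880, 2017)

# Synthetic AMO-like oscillation (roughly 60-year period) and an ACE series
# built from it, partly directly and partly with an 18-year delay.
amo = 0.2 * np.sin((years - 1880) * 2 * np.pi / 60) + rng.normal(0, 0.02, years.size)
ace = 95 + 180 * amo + 120 * np.roll(amo, 18) + rng.normal(0, 5, years.size)

# Fit ACE = a*AMO(t) + b*AMO(t-18) + c, dropping the wrapped-around years.
y = ace[18:]
X = np.column_stack([amo[18:], amo[:-18], np.ones(y.size)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("ACE ~ {:.0f}*AMO(t) + {:.0f}*AMO(t-18) + {:.0f}".format(*coef))
```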

Anthony Watts adds fuel to this fire (or ice to this cocktail) in “Report: Ocean Cycles, Not Humans, May Be Behind Most Observed Climate Change” (Watts Up With That?, September 15, 2017). There, he discusses a report by Anastasios Tsonis, which I have added to the list of related readings, below:

… Anastasios Tsonis, emeritus distinguished professor of atmospheric sciences at the University of Wisconsin-Milwaukee, describes new and cutting-edge research into natural climatic cycles, including the well known El Nino cycle and the less familiar North Atlantic Oscillation and Pacific Decadal Oscillation.

He shows how interactions between these ocean cycles have been shown to drive changes in the global climate on timescales of several decades.

Professor Tsonis says:

We can show that at the start of the 20th century, the North Atlantic Oscillation pushed the global climate into a warming phase, and in 1940 it pushed it back into cooling mode. The famous “pause” in global warming at the start of the 21st century seems to have been instigated by the North Atlantic Oscillation too.

In fact, most of the changes in the global climate over the period of the instrumental record seem to have their origins in the North Atlantic.

Tsonis’ insights have profound implications for the way we view calls for climate alarm.

It may be that another shift in the North Atlantic could bring about another phase shift in the global climate, leading to renewed cooling or warming for several decades to come.

These climatic cycles are entirely natural, and can tell us nothing about the effect of carbon dioxide emissions. But they should inspire caution over the slowing trajectory of global warming we have seen in recent decades.

As Tsonis puts it:

While humans may play a role in climate change, other natural forces may play important roles too.

There are other reasons to be skeptical of CAGW, and even of AGW. For one thing, temperature records are notoriously unreliable, especially records from land-based thermometers. (See, for example, these two posts at Watts Up With That?: “Press Release – Watts at #AGU15 The Quality of Temperature Station Siting Matters for Temperature Trends” by Anthony Watts on December 17, 2015, and “Ooops! Australian BoM Climate Readings May Be Invalid Due To Lack of Calibration“, on September 11, 2017.) And when those records aren’t skewed by siting and lack-of-coverage problems, they’re skewed by fudging the numbers to “prove” CAGW. (See my post, “Global-Warming Hype“, August 22, 2017.) Moreover, the models that “prove” CAGW and AGW are terrible, to put it bluntly. (Again, see “Global-Warming Hype“, and also Dr. Tim Ball’s post of September 16, 2017, “Climate Models Can’t Even Approximate Reality Because Atmospheric Structure and Movements are Virtually Unknown”, at Watts Up With That?)

It’s certainly doubtful that NOAA’s reconstruction of ACE is accurate and consistent as far back as 1851. I hesitate to give credence to a data series that predates the confluence of satellite observations, ocean buoys, and specially equipped aircraft. The history of weather satellites casts doubt on the validity of aggregate estimates for any period preceding the early 1960s.

As it happens, the data sets for tropical cyclone activity that are maintained by the Tropical Meteorology Project at Colorado State University cover all six of the relevant ocean basins as far back as 1972. The coverage goes back to 1961 (and beyond) for all but the North Indian Ocean basin — which is by far the least active.

NOAA’s reconstruction of ACE in the North Atlantic basin — which, if anything, probably understates ACE before the early 1960s — is rather suggestive:

[Figure: accumulated cyclone energy, North Atlantic basin]

The recent spikes in ACE are not unprecedented. And there are many prominent spikes that predate the late-20th-century temperature rise on which “warmism” is predicated. The trend from the late 1800s to the present is essentially flat. And, again, the numbers before the early 1960s must understate ACE.

Moreover, if one is concerned about global warming, the North Atlantic basin is just a sideshow. Consider this graph of the annual values for each basin from 1972 through 2017:

[Figure: accumulated cyclone energy, individual basin totals, 1972-2017]

Here’s a graph of stacked (cumulative) totals for 1972-2017:

[Figure: accumulated cyclone energy, stacked totals, 1972-2017]

The red line is the sum of ACE for all six basins, including the Northwest Pacific basin; the yellow line is the sum of ACE for the next five basins, including the Northeast Pacific basin; and so on.

I have these observations about the numbers represented in the preceding graphs:

  • If one is a believer in CAGW (the G stands for global), it is a lie (by glaring omission) to focus on random, land-falling hurricanes hitting the U.S. or other parts of the Western Hemisphere.
  • The overall level of activity is practically flat between 1972 and 2017, with the exception of spikes that coincide with strong El Niño events.
  • There is nothing in the long-term record for the North Atlantic basin, which is probably understated before the early 1960s, to suggest that global activity in recent decades is unusually high.

I am very sorry for the victims of Michael, Florence, Harvey, Irma, and every weather-related disaster — and every disaster, whether man-made or not. But I am not about to reduce my carbon footprint because of the Luddite hysterics who dominate and cling to the quasi-science of climatology.


Related reading:

Ron Clutz, “Temperatures According to Climate Models“, Science Matters, March 24, 2015

Dr. Tim Ball, “Long-Term Climate Change: What Is a Reasonable Sample Size?“, Watts Up With That?, February 7, 2016

The Global Warming Policy Foundation, Climate Science: Assumptions, Policy Implications, and the Scientific Method, 2017

John Mauer, “Through the Looking Glass with NASA GISS“, Watts Up With That?, February 22, 2017

George White, “A Consensus of Convenience“, Watts Up With That?, August 20, 2017

Jennifer Marohasy, “Most of the Recent Warming Could be Natural“, Jennifer Marohasy, August 21, 2017

Anthony Watts, “What You Need to Know and Are Not Told about Hurricanes“, Watts Up With That?, September 15, 2017

Anastasios Tsonis, The Little Boy: El Niño and Natural Climate Change, Global Warming Policy Foundation, GWPF Report 26, 2017

Anthony Watts, “Pielke Jr. – U.S. Tornado Damage Continues to Fall, 2018 Activity Near Record Lows“, Watts Up With That?, July 25, 2018

Related page and posts:

Climate Change
AGW: The Death Knell (with many links to related reading and earlier posts)
Not-So-Random Thoughts (XIV) (second item)
AGW in Austin?
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II) (with more links to related reading)
Hot Is Better Than Cold: A Small Case Study

Oh, the Hysteria!

There’s little to add to the unfounded hysteria about Trump’s decision to pull out of the Paris climate agreement … but this:

If all of the hysterics truly believe that a failure to reduce CO2 emissions will result in catastrophic global warming, they have it within their power to reduce emissions drastically. They can start by getting rid of their cars in favor of bikes and horses, moving to smaller homes, doing without air conditioning, keeping their homes at 50 degrees in the winter, bathing and washing clothes in cold water, growing and raising their own foodstuffs (to eliminate transportation-based emissions), reading by candlelight, and throwing out all of their electrical appliances — even including their smart phones, which rely on electrically powered systems.

Given the number of hysterics out there, I’m sure that the (non) CO2 problem would be solved in no time. If their grandparents, great-grandparents, and all who came before them could live a CO2-minimal life, why can’t a few billion true-blue saviors of the world do the same?

Not-So-Random Thoughts (XX)

An occasional survey of web material that’s related to subjects about which I’ve posted. Links to the other posts in this series may be found at “Favorite Posts,” just below the list of topics.

In “The Capitalist Paradox Meets the Interest-Group Paradox,” I quote from Frédéric Bastiat’s “What Is Seen and What Is Not Seen“:

[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.

This might also be called the law of unintended consequences. It explains why so much “liberal” legislation is passed: the benefits are focused on a particular group and obvious (if overestimated); the costs are borne by taxpayers in general, many of whom fail to see that the sum of “liberal” legislation is a huge tax bill.

Ross Douthat understands:

[A] new paper, just released through the National Bureau of Economic Research, that tries to look at the Affordable Care Act in full. Its authors find, as you would expect, a substantial increase in insurance coverage across the country. What they don’t find is a clear relationship between that expansion and, again, public health. The paper shows no change in unhealthy behaviors (in terms of obesity, drinking and smoking) under Obamacare, and no statistically significant improvement in self-reported health since the law went into effect….

[T]he health and mortality data [are] still important information for policy makers, because [they] indicate that subsidies for health insurance are not a uniquely death-defying and therefore sacrosanct form of social spending. Instead, they’re more like other forms of redistribution, with costs and benefits that have to be weighed against one another, and against other ways to design a safety net. Subsidies for employer-provided coverage crowd out wages, Medicaid coverage creates benefit cliffs and work disincentives…. [“Is Obamacare a Lifesaver?,” The New York Times, March 29, 2017]

So does Roy Spencer:

In a theoretical sense, we can always work to make the environment “cleaner”, that is, reduce human pollution. So, any attempts to reduce the EPA’s efforts will be viewed by some as just cozying up to big, polluting corporate interests. As I heard one EPA official state at a conference years ago, “We can’t stop making the environment ever cleaner”.

The question no one is asking, though, is “But at what cost?”

It was relatively inexpensive to design and install scrubbers on smokestacks at coal-fired power plants to greatly reduce sulfur emissions. The cost was easily absorbed, and electricity rates were not increased that much.

The same is not true of carbon dioxide emissions. Efforts to remove CO2 from combustion byproducts have been extremely difficult, expensive, and with little hope of large-scale success.

There is a saying: don’t let perfect be the enemy of good enough.

In the case of reducing CO2 emissions to fight global warming, I could discuss the science which says it’s not the huge problem it’s portrayed to be — how warming is only progressing at half the rate forecast by those computerized climate models which are guiding our energy policy; how there have been no obvious long-term changes in severe weather; and how nature actually enjoys the extra CO2, with satellites now showing a “global greening” phenomenon with its contribution to increases in agricultural yields.

But it’s the economics which should kill the Clean Power Plan and the alleged Social “Cost” of Carbon. Not the science.

There is no reasonable pathway by which we can meet more than about 20% of global energy demand with renewable energy…the rest must come mostly from fossil fuels. Yes, renewable energy sources are increasing each year, usually because rate payers or taxpayers are forced to subsidize them by the government or by public service commissions. But global energy demand is rising much faster than renewable energy sources can supply. So, for decades to come, we are stuck with fossil fuels as our main energy source.

The fact is, the more we impose high-priced energy on the masses, the more it will hurt the poor. And poverty is arguably the biggest threat to human health and welfare on the planet. [“Trump’s Rollback of EPA Overreach: What No One Is Talking About,” Roy Spencer, Ph.D. [blog], March 29, 2017]

*     *     *

I mentioned the Benedict Option in “Independence Day 2016: The Way Ahead,” quoting Bruce Frohnen in tacit agreement:

[Rod] Dreher has been writing a good deal, of late, about what he calls the Benedict Option, by which he means a tactical withdrawal by people of faith from the mainstream culture into religious communities where they will seek to nurture and strengthen the faithful for reemergence and reengagement at a later date….

The problem with this view is that it underestimates the hostility of the new, non-Christian society [e.g., this and this]….

Leaders of this [new, non-Christian] society will not leave Christians alone if we simply surrender the public square to them. And they will deny they are persecuting anyone for simply applying the law to revoke tax exemptions, force the hiring of nonbelievers, and even jail those who fail to abide by laws they consider eminently reasonable, fair, and just.

Exactly. John Horvat II makes the same point:

For [Dreher], the only response that still remains is to form intentional communities amid the neo-barbarians to “provide an unintentional political witness to secular culture,” which will overwhelm the barbarian by the “sheer humanity of Christian compassion, and the image of human dignity it honors.” He believes that setting up parallel structures inside society will serve to protect and preserve Christian communities under the new neo-barbarian dispensation. We are told we should work with the political establishment to “secure and expand the space within which we can be ourselves and our own institutions” inside an umbrella of religious liberty.

However, barbarians don’t like parallel structures; they don’t like structures at all. They don’t co-exist well with anyone. They don’t keep their agreements or respect religious liberty. They are not impressed by the holy lives of the monks whose monastery they are plundering. You can trust barbarians to always be barbarians. [“Is the Benedict Option the Answer to Neo-Barbarianism?,” Crisis Magazine, March 29, 2017]

As I say in “The Authoritarianism of Modern Liberalism, and the Conservative Antidote,”

Modern liberalism attracts persons who wish to exert control over others. The stated reasons for exerting control amount to “because I know better” or “because it’s good for you (the person being controlled)” or “because ‘social justice’ demands it.”

Leftists will not countenance a political arrangement that allows anyone to escape the state’s grasp — unless, of course, the state is controlled by the “wrong” party, in which case leftists (or many of them) would like to exercise their own version of the Benedict Option. See “Polarization and De Facto Partition.”

*     *     *

Theodore Dalrymple understands the difference between terrorism and accidents:

Statistically speaking, I am much more at risk of being killed when I get into my car than when I walk in the streets of the capital cities that I visit. Yet this fact, no matter how often I repeat it, does not reassure me much; the truth is that one terrorist attack affects a society more deeply than a thousand road accidents….

Statistics tell me that I am still safe from it, as are all my fellow citizens, individually considered. But it is precisely the object of terrorism to create fear, dismay, and reaction out of all proportion to its volume and frequency, to change everyone’s way of thinking and behavior. Little by little, it is succeeding. [“How Serious Is the Terrorist Threat?,” City Journal, March 26, 2017]

Which reminds me of several things I’ve written, beginning with this entry from “Not-So-Random Thoughts (VI)“:

Cato’s loony libertarians (on matters of defense) once again trot out Herr Doktor Professor John Mueller. He writes:

We have calculated that, for the 12-year period from 1999 through 2010 (which includes 9/11, of course), there was one chance in 22 million that an airplane flight would be hijacked or otherwise attacked by terrorists. (“Serial Innumeracy on Homeland Security,” Cato@Liberty, July 24, 2012)

Mueller’s “calculation” consists of a recitation of known terrorist attacks pre-Benghazi and speculation about the status of Al-Qaeda. Note to Mueller: It is the unknown unknowns that kill you. I refer Herr Doktor Professor to “Riots, Culture, and the Final Showdown” and “Mission Not Accomplished.”

See also my posts “Getting It All Wrong about the Risk of Terrorism” and “A Skewed Perspective on Terrorism.”

*     *     *

This is from my post, “A Reflection on the Greatest Generation“:

The Greatest tried to compensate for their own privations by giving their children what they, the parents, had never had in the way of material possessions and “fun”. And that is where the Greatest Generation failed its children — especially the Baby Boomers — in large degree. A large proportion of Boomers grew up believing that they should have whatever they want, when they want it, with no strings attached. Thus many of them divorced, drank, and used drugs almost wantonly….

The Greatest Generation — having grown up believing that FDR was a secular messiah, and having learned comradeship in World War II — also bequeathed us governmental self-indulgence in the form of the welfare-regulatory state. Meddling in others’ affairs seems to be a predilection of the Greatest Generation, a predilection that the Millennials may be shrugging off.

We owe the Greatest Generation a great debt for its service during World War II. We also owe the Greatest Generation a reprimand for the way it raised its children and kowtowed to government. Respect forbids me from delivering the reprimand, but I record it here, for the benefit of anyone who has unduly romanticized the Greatest Generation.

There’s more in “The Spoiled Children of Capitalism“:

This is from Tim [of Angle’s] “The Spoiled Children of Capitalism“:

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it….

I have long shared Tim’s assessment of the Boomer generation. Among the corroborating data are my sister and my wife’s sister and brother — Boomers all….

Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs.”…

Now comes this:

According to writer and venture capitalist Bruce Gibney, baby boomers are a “generation of sociopaths.”

In his new book, he argues that their “reckless self-indulgence” is in fact what set the example for millennials.

Gibney describes boomers as “acting without empathy, prudence, or respect for facts – acting, in other words, as sociopaths.”

And he’s not the first person to suggest this.

Back in 1976, journalist Tom Wolfe dubbed the young adults then coming of age the “Me Generation” in the New York Times, which is a term now widely used to describe millennials.

But the baby boomers grew up in a very different climate to today’s young adults.

When the generation born after World War Two were starting to make their way in the world, it was a time of economic prosperity.

“For the first half of the boomers particularly, they came of age in a time of fairly effortless prosperity, and they were conditioned to think that everything gets better each year without any real effort,” Gibney explained to The Huffington Post.

“So they really just assume that things are going to work out, no matter what. That’s unhelpful conditioning.

“You have 25 years where everything just seems to be getting better, so you tend not to try as hard, and you have much greater expectations about what society can do for you, and what it owes you.”…

Gibney puts forward the argument that boomers – specifically white, middle-class ones – tend to have genuine sociopathic traits.

He backs up his argument with mental health data which appears to show that this generation have more anti-social characteristics than others – lack of empathy, disregard for others, egotism and impulsivity, for example. [Rachel Hosie, “Baby Boomers Are a Generation of Sociopaths,” Independent, March 23, 2017]

That’s what I said.

AGW in Austin? (II)

I said this in “AGW in Austin?“:

There’s a rise in temperatures [in Austin] between the 1850s and the early 1890s, consistent with the gradual warming that followed the Little Ice Age. The gap between the early 1890s and mid-19naughts seems to have been marked by lower temperatures. It’s possible to find several mini-trends between the mid-19naughts and 1977, but the most obvious “trend” is a flat line for the entire period….

Following the sudden jump between 1977 and 1980, the “trend” remains almost flat through 1997, albeit at a slightly higher level….

The sharpest upward trend really began after the very strong (and naturally warming) El Niño of 1997-1998….

Oh, wait! It turns out that Austin’s sort-of hot-spell from 1998 to the present coincides with the “pause” in global warming….

The rapid increase in Austin’s population since 2000 probably has caused an acceleration of the urban heat-island (UHI) effect. This is known to inflate city temperatures above those in the surrounding countryside by several degrees.

What about drought? In Austin, the drought of recent years is far less severe than the drought of the 1950s, but temperatures have risen more in recent years than they did in the 1950s….

Why? Because Austin’s population is now six times greater than it was in the 1950s. The UHI effect has magnified the drought effect.

Conclusion: Austin’s recent hot weather has nothing to do with AGW.

Now, I’ll quantify the relationship between temperature, precipitation, and population. Here are a few notes about the analysis:

  • I have annual population estimates for Austin from 1960 to the present. However, to tilt the scale in favor of AGW, I used values for 1968-2015, because the average temperature in 1968 was the lowest recorded since 1924.
  • I reduced the official population figures for 1998-2015 to reflect a major annexation in 1998 that significantly increased Austin’s population. The statistical effect of that adjustment is to reduce the apparent effect of population on temperature — thus further tilting the scale in favor of AGW.
  • The official National Weather Service station moved from Mueller Airport (near I-35) to Camp Mabry (near Texas Loop 1) in 1999. I ran the regression for 1968-2015 with a dummy variable for location, but that variable is statistically insignificant.

Here’s the regression equation for 1968-2015:

T = -0.049R + 5.57E-06P + 67.8

Where,

T = average annual temperature (degrees Fahrenheit)

R = annual precipitation (inches)

P = mid-year population (adjusted, as discussed above)

The r-squared of the equation is 0.538, which is considerably better than the r-squared for a simple time trend (see the first graph below). Also, the standard error is 1.01 degrees; the p-value of the F-statistic is 2.96E-08; and the p-values on the variables and intercept are highly significant at 0.00313, 2.19E-08, and 7.34E-55, respectively.
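
For anyone who wants to replicate the fit, it is ordinary least squares with two regressors. A sketch, assuming a hypothetical CSV holding the annual values described above (file and column names are mine):

```python
import numpy as np
import pandas as pd

# Hypothetical file layout: annual values for 1968-2015.
df = pd.read_csv("austin_annual.csv")  # columns: year, temp_f, precip_in, pop_adj

X = np.column_stack([df["precip_in"], df["pop_adj"], np.ones(len(df))])
coef, *_ = np.linalg.lstsq(X, df["temp_f"], rcond=None)
b_r, b_p, intercept = coef
print(f"T = {b_r:.3f}R + {b_p:.2e}P + {intercept:.1f}")

# Implied UHI contribution of population growth over the period:
print(f"population effect, 1968-2015: {b_p * (853_000 - 234_000):.1f} deg F")
```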

Here’s a graph of actual vs. predicted temperatures:

[Figure: actual vs. predicted average annual temperatures in Austin]

The residuals are randomly distributed with respect to time and the estimated values of T, so there’s no question (in my mind) about having omitted a significant variable:

[Figure: residuals vs. year]

[Figure: residuals vs. estimates of T]

Austin’s average annual temperature rose by 3.6 degrees F between 1968 and 2015, that is, from 66.2 degrees to 69.8 degrees. According to the regression equation, the rise in Austin’s population from 234,000 in 1968 to 853,000 (adjusted) in 2015 accounts for essentially all of the increase — 3.5 degrees of it, to be precise. That’s well within the range of urban heat-island effects for big cities, and it’s obvious that Austin became a big city between 1968 and 2015. It also agrees with the estimated effect of Austin’s population increase, as derived from the equation for North American cities in T.R. Oke’s “City Size and the Urban Heat Island.” The equation (simplified for ease of reproduction) is

T’ = 2.96 log P – 6.41

Where,

T’ = change in temperature, degrees C

P = population (log is base-10), holding area constant

The author reports r-squared = 0.92 and SE = 0.7 degrees C (1.26 degrees F).

The estimated UHI effect of Austin’s population growth from 1968 to 2015 is 2.99 degrees F. Given the standard error of the estimate, the estimate of 2.99 degrees isn’t significantly different from my estimate of 3.5 degrees or from the actual increase of 3.6 degrees.
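
That 2.99-degree figure can be checked directly from the simplified Oke equation (a quick sketch of the arithmetic; the log is base-10):

```python
import math

def oke_uhi_deg_c(population):
    # Simplified Oke relation for North American cities, as given above.
    return 2.96 * math.log10(population) - 6.41

delta_c = oke_uhi_deg_c(853_000) - oke_uhi_deg_c(234_000)
print(f"{delta_c:.2f} deg C = {delta_c * 9 / 5:.2f} deg F")  # 1.66 C = 2.99 F
```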

I therefore dismiss the possibility that population is a proxy for the effects of CO2 emissions, which — if they significantly affect temperature (a big “if”) — do so because of their prevalence in the atmosphere, not because of their concentration in particular areas. And Austin’s hottest years occurred during the “pause” in global warming after 1998. There was no “pause” in Austin because its population continued to grow rapidly; thus:

[Figure: 12-month average temperatures in Austin, 1903-2016]

Bottom line: Austin’s temperature can be accounted for by precipitation and population. AGW will have to find another place in which to work its evil magic.

*     *     *

Related reading:
U.S. climate page at WUWT
Articles about UHI at WUWT
David Evans, “There Is No Evidence,” Science Speak, June 16, 2009
Roy W. Spencer, “Global Urban Heat Island Effect Study – An Update,” WUWT, March 10, 2010
David M.W. Evans, “The Skeptic’s Case,” Science Speak, August 16, 2012
Anthony Watts, “UHI – Worse Than We Thought?,” WUWT, August 20, 2014
Christopher Monckton of Brenchley, “The Great Pause Lengthens Again,” WUWT, January 3, 2015
Anthony Watts, “Two New Papers Suggest Solar Activity Is a ‘Climate Pacemaker‘,” WUWT, January 9, 2015
John Hinderaker, “Was 2014 Really the Warmest Year Ever?,” PowerLine, January 16, 2015
Roy W. Spencer, John R. Christy, and William D. Braswell, “Version 6.0 of the UAH Temperature Dataset Released: New LT Trend = +0.11 C/decade,” DrRoySpencer.com, April 28, 2015
Bob Tisdale, “New UAH Lower Troposphere Temperature Data Show No Global Warming for More Than 18 Years,” WUWT, April 29, 2015
Patrick J. Michaels and Charles C. Knappenberger, “You Ought to Have a Look: Science Round Up—Less Warming, Little Ice Melt, Lack of Imagination,” Cato at Liberty, May 1, 2015
Mike Brakey, “151 Degrees Of Fudging… Energy Physicist Unveils NOAA’s ‘Massive Rewrite’ Of Maine Climate History,” NoTricksZone, May 2, 2015 (see also David Archibald, “A Prediction Coming True?,” WUWT, May 4, 2015)
Christopher Monckton of Brenchley, “El Niño Has Not Yet Paused the Pause,” WUWT, May 4, 2015
Anthony J. Sadar and JoAnn Truchan, “Saul Alinsky, Climate Scientist,” American Thinker, May 4, 2015
Clyde Spencer, “Anthropogenic Global Warming and Its Causes,” WUWT, May 5, 2015
Roy W. Spencer, “Nearly 3,500 Days since Major Hurricane Strike … Despite Record CO2,” DrRoySpencer.com, May 8, 2015

Related posts:
AGW: The Death Knell (with many links to related readings and earlier posts)
Not-So-Random Thoughts (XIV) (second item)
AGW in Austin?
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
The Precautionary Principle and Pascal’s Wager

AGW in Austin?

“Climate change” is religion refracted through the lens of paganism.

Melanie Phillips

There is a hypothesis that the purported rise in global temperatures since 1850 (or some shorter span if you’re embarrassed by periods of notable decline after 1850) was or is due mainly or solely to human activity, as manifested in emissions of CO2. Adherents of this hypothesis call the supposed phenomenon by various names: anthropogenic global warming (AGW), just plain global warming, climate change, and climate catastrophe, for example.

Those adherents loudly advocate measures that (they assert) would reduce CO2 emissions by enough to avoid climatic catastrophe. They have been advocating such measures for about 25 years, yet climate catastrophe remains elusive. (See “pause,” below.) But the true believers in AGW remain steadfast in their faith.

Actually, belief in catastrophic AGW requires three leaps of faith. The first leap is to assume the truth of the alternative hypothesis — a strong and persistent connection between CO2 emissions and global temperatures — without having found (or even looked for) scientific evidence which disproves the null hypothesis, namely, that there isn’t a strong and persistent connection between CO2 emissions and global temperatures. The search for such evidence shouldn’t be confined to the near-past, but should extend centuries, millennia, and eons into the past. The problem for advocates of AGW is that a diligent search of that kind works against the alternative hypothesis and supports the null hypothesis. As a result, the advocates of AGW confine their analysis to the recent past and substitute kludgy computer models, full of fudge-factors, for a disinterested examination of the actual causes of climate change. There is strong evidence that such causes include solar activity and its influence on cloud formation through cosmic radiation. That truth is too inconvenient for the AGW mob, as are many other truths about climate.

The second leap of faith is to assume that rising temperatures, whatever the cause, are a bad thing. This, despite the known advantages of warmer climates: longer growing seasons and lower death rates, to name but two. Believers in AGW, and in the policies that would (according to them) mitigate it, nevertheless like to depict worst-case scenarios about the extent of global warming and its negative effects.

The third leap of faith is related to the first two. It is the belief that policies meant to mitigate global warming — policies that mainly involve the curtailment of CO2 emissions — would be (a) effective and (b) worth the cost. There is more than ample doubt about both propositions, which seem to flow from the kind of anti-scientific mind that eagerly embraces the alternative hypothesis without first having disproved the null hypothesis. It is notable that “worth the cost” is a value judgment which springs readily from the tongues and keyboards of affluent Westerners like __________ who already have it made. (Insert “Al Gore”, “high-end Democrats,” “liberal pundits and politicians,” etc.)

Prominent among the leapers-of-faith in my neck of the woods is the “chief weathercaster” of an Austin TV station. We watch his weather forecasts because he spews out more information than his competitors, but I must resist the urge to throw a brick through my TV screen when his mask slips and he reveals himself as a true believer in AGW. What else should I expect from a weather nazi who proclaims it “nice” when daytime high temperatures are in the 60s and 70s, and who bemoans higher temperatures?

Like any nazi, he projects his preferences onto others — in this case his viewership. This undoubtedly includes a goodly number of persons (like me) who moved to Austin and stay in Austin for the sake of sunny days when the thermometer is in the 80-to-95-degree range. It is a bit much when temperatures are consistently in the high 90s and low 100s, as they are for much of Austin’s summer. But that’s the price of those sunny days in the 80s and low 90s, unless you can afford to live in San Diego or Hawaii instead of Austin.

Anyway, the weather nazi would make a great deal out of the following graph:

[Figure: 12-month average temperatures in Austin, 1977-2015]

The graph covers the period from April 1977 through April 2015. The jagged line represents 12-month averages of monthly averages for the official National Weather Service stations in Austin: Mueller Airport (until July 1999) and Camp Mabry (July 1999 to the present). (There’s a history of Austin’s weather stations in a NOAA document, “Austin Climate Summary.”) The upward trend is unmistakeable. Equally unmistakeable is the difference between the early and late years of the period — a difference that’s highlighted by the y-error bars, which represent a span of plus-and-minus one standard deviation from the mean for the period.
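
For readers who want to reproduce the jagged line and the error bars, the construction is a trailing 12-month mean of monthly averages. A minimal sketch (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical file of monthly average temperatures for the Austin stations.
monthly = pd.read_csv("austin_monthly.csv", index_col="date",
                      parse_dates=True)["temp_f"]

rolling12 = monthly.rolling(12).mean()  # the jagged line in the graph
mean, sd = rolling12.mean(), rolling12.std()
print(f"period mean = {mean:.1f} F; one standard deviation = {sd:.2f} F")
```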

Your first question should be “Why begin with April 1977?” Well, it’s a “good” starting point — if you want to sell AGW — because the 12-month average temperature as of April 1977 was the lowest in 64 years. After all, it was the seemingly steep increase in temperatures after 1970 that sparked the AGW business.

What about the “fact” that temperatures have been rising since about 1850? The “fact” is that temperatures have been recorded in a relatively small number of locales continuously since the 1850s, though the reliability of the temperature data and their relationship to any kind of “global” average is in serious doubt. The most reliable data come from weather satellites, and those have been in operation only since the late 1970s.

A recent post by Bob Tisdale, “New UAH Lower Troposphere Temperature Data Show No Global Warming for More Than 18 Years” (Watts Up With That?, April 29, 2015), summarizes the history of satellite readings, in the course of documenting the “pause” in global warming. The “pause,” if dated from 2001, has lasted 14 years; if dated from 1997, it has lasted 18 years. In either event, the “pause” has lasted about as long as the rise in late-20th century temperatures that led to the AGW hypothesis.

What about those observations since the 1850s? Riddled with holes, that’s what. And even if they were reliable and covered a good part of the globe (which they aren’t and don’t), they wouldn’t tell the story that AGW enthusiasts are trying to sell. Take Austin, for example, which has a (broken) temperature record dating back to 1856:

[Figure: 12-month average temperatures in Austin, 1856-2015, with trend line and error bars]

Looks just like the first graph? No, it doesn’t. The trend line and error bars suggest a trend that isn’t there. Strip away the trend line and the error bars, and you see this:

[Figure: 12-month average temperatures in Austin, 1856-2015, without trend line and error bars]

Which is what? There’s a rise in temperatures between the 1850s and the early 1890s, consistent with the gradual warming that followed the Little Ice Age. The gap between the early 1890s and mid-19naughts seems to have been marked by lower temperatures. It’s possible to find several mini-trends between the mid-19naughts and 1977, but the most obvious “trend” is a flat line for the entire period:

[Figure: 12-month average temperatures in Austin, 1903-1977]

Following the sudden jump between 1977 and 1980, the “trend” remains almost flat through 1997, albeit at a slightly higher level:

[Figure: 12-month average temperatures in Austin, 1980-1997]

The sharpest upward trend really began after the very strong (and naturally warming) El Niño of 1997-1998:

[Figure: 12-month average temperatures in Austin, 1997-2015]

Oh, wait! It turns out that Austin’s sort-of hot-spell from 1998 to the present coincides with the “pause” in global warming:

[Figure: the “pause” in global warming]
Source: Bob Tisdale, “New UAH Lower Troposphere Temperature Data Show No Global Warming for More Than 18 Years,” Watts Up With That?, April 29, 2015.

What a revolting development this would be for our local weather nazi, if he could be bothered to acknowledge it. And if he did, he’d have to look beyond the egregious AGW hypothesis for an explanation of the warmer temperatures that he abhors. Where should he look? Here: the rapid increase in Austin’s population, combined with a drought.

The rapid increase in Austin’s population since 2000 probably has caused an acceleration of the urban heat-island (UHI) effect. This is known to inflate city temperatures above those in the surrounding countryside by several degrees.

What about drought? In Austin, the drought of recent years is far less severe than the drought of the 1950s, but temperatures have risen more in recent years than they did in the 1950s:

[Figure: indices of 5-year average precipitation and temperature, Austin]

Why? Because Austin’s population is now six times greater than it was in the 1950s. The UHI effect has magnified the drought effect.

Conclusion: Austin’s recent hot weather has nothing to do with AGW. But don’t try to tell that to a weather nazi — or to the officials of the City of Austin, who lurch zombie-like onward in their pursuit of “solutions” to a non-problem.

BE SURE TO READ THE SEQUEL, IN WHICH I QUANTIFY THE EFFECTS OF PRECIPITATION AND POPULATION, LEAVING NOTHING ON THE TABLE FOR AGW.

*     *     *

Related reading:
U.S. climate page at WUWT
Articles about UHI at WUWT
Roy W. Spencer, “Global Urban Heat Island Effect Study – An Update,” WUWT, March 10, 2010
Anthony Watts, “UHI – Worse Than We Thought?,” WUWT, August 20, 2014
Christopher Monckton of Brenchley, “The Great Pause Lengthens Again,” WUWT, January 3, 2015
Anthony Watts, “Two New Papers Suggest Solar Activity Is a ‘Climate Pacemaker‘,” WUWT, January 9, 2015
John Hinderaker, “Was 2014 Really the Warmest Year Ever?,” PowerLine, January 16, 2015
Roy W. Spencer, John R. Christy, and William D. Braswell, “Version 6.0 of the UAH Temperature Dataset Released: New LT Trend = +0.11 C/decade,” DrRoySpencer.com, April 28, 2015
Bob Tisdale, “New UAH Lower Troposphere Temperature Data Show No Global Warming for More Than 18 Years,” WUWT, April 29, 2015
Patrick J. Michaels and Charles C. Knappenberger, “You Ought to Have a Look: Science Round Up—Less Warming, Little Ice Melt, Lack of Imagination,” Cato at Liberty, May 1, 2015
Mike Brakey, “151 Degrees Of Fudging… Energy Physicist Unveils NOAA’s ‘Massive Rewrite’ Of Maine Climate History,” NoTricksZone, May 2, 2015 (see also David Archibald, “A Prediction Coming True?,” WUWT, May 4, 2015)
Christopher Monckton of Brenchley, “El Niño Has Not Yet Paused the Pause,” WUWT, May 4, 2015
Anthony J. Sadar and JoAnn Truchan, “Saul Alinsky, Climate Scientist,” American Thinker, May 4, 2015
Clyde Spencer, “Anthropogenic Global Warming and Its Causes,” WUWT, May 5, 2015
Roy W. Spencer, “Nearly 3,500 Days since Major Hurricane Strike … Despite Record CO2,” DrRoySpencer.com, May 8, 2015

Related posts:
AGW: The Death Knell (with many links to related readings and earlier posts)
Not-So-Random Thoughts (XIV) (second item)


Not-So-Random Thoughts (X)

Links to the other posts in this occasional series may be found at “Favorite Posts,” just below the list of topics.

How Much Are Teachers Worth?

David Harsanyi writes:

“The bottom line,” says the Center for American Progress, “is that mid- and late-career teachers are not earning what they deserve, nor are they able to gain the salaries that support a middle-class existence.”

Alas, neither liberal think tanks nor explainer sites have the capacity to determine the worth of human capital. And contrasting the pay of a person who has a predetermined government salary with the pay earned by someone in a competitive marketplace tells us little. Public-school teachers’ compensation is determined by contracts negotiated long before many of them even decided to teach. These contracts hurt the earning potential of good teachers and undermine the education system. And it has nothing to do with what anyone “deserves.”

So if teachers believe they aren’t making what they’re worth — and they may well be right about that — let’s free them from union constraints and let them find out what the job market has to offer. Until then, we can’t really know. Because a bachelor’s degree isn’t a dispensation from the vagaries of economic reality. And teaching isn’t the first step toward sainthood. Regardless of what you’ve heard. (“Are Teachers Underpaid? Let’s Find Out,” Creators.com, July 25, 2014)

Harsanyi is right, but too kind. Here’s my take, from “The Public-School Swindle“:

[P]ublic “education” — at all levels — is not just a rip-off of taxpayers, it is also an employment scheme for incompetents (especially at the K-12 level) and a paternalistic redirection of resources to second- and third-best uses.

And, to top it off, public education has led to the creation of an army of left-wing zealots who, for many decades, have inculcated America’s children and young adults in the advantages of collective, non-market, anti-libertarian institutions, where paternalistic “empathy” supplants personal responsibility.

Utilitarianism, Once More

EconLog bloggers Bryan Caplan and Scott Sumner are enjoying an esoteric exchange about utilitarianism (samples here and here), which is a kind of cost-benefit calculus in which the calculator presumes to weigh the costs and benefits that accrue to other persons.  My take is that utilitarianism borders on psychopathy. In “Utilitarianism and Psychopathy,” I quote myself to this effect:

Here’s the problem with cost-benefit analysis — the problem it shares with utilitarianism: One person’s benefit can’t be compared with another person’s cost. Suppose, for example, the City of Los Angeles were to conduct a cost-benefit analysis that “proved” the wisdom of constructing yet another freeway through the city in order to reduce the commuting time of workers who drive into the city from the suburbs.

Before constructing the freeway, the city would have to take residential and commercial property. The occupants of those homes and owners of those businesses (who, in many cases, would be lessees and not landowners) would have to start anew elsewhere. The customers of the affected businesses would have to find alternative sources of goods and services. Compensation under eminent domain can never be adequate to the owners of taken property because the property is taken by force and not sold voluntarily at a true market price. Moreover, others who are also harmed by a taking (lessees and customers in this example) are never compensated for their losses. Now, how can all of this uncompensated cost and inconvenience be “justified” by, say, the greater productivity that might (emphasize might) accrue to those commuters who would benefit from the construction of yet another freeway?

Yet, that is how cost-benefit analysis works. It assumes that group A’s cost can be offset by group B’s benefit: “the greatest amount of happiness altogether.”

America’s Financial Crisis

Timothy Taylor tackles the looming debt crisis:

First, the current high level of government debt, and the projections for the next 25 years, mean that the U.S. government lacks fiscal flexibility….

Second, the current spending patterns of the U.S. government are starting to crowd out everything except health care, Social Security, and interest payments….

Third, large government borrowing means less funding is available for private investment….

…CBO calculates an “alternative fiscal scenario,” in which it sets aside some of these spending and tax changes that are scheduled to take effect in five years or ten years or never…. [T]he extended baseline scenario projected that the debt/GDP ratio would be 106% by 2039. In the alternative fiscal scenario, the debt-GDP ratio is projected to reach 183% of GDP by 2039. As the report notes: “CBO’s extended alternative fiscal scenario is based on the assumptions that certain policies that are now in place but are scheduled to change under current law will be continued and that some provisions of law that might be difficult to sustain for a long period will be modified. The scenario, therefore, captures what some analysts might consider to be current policies, as opposed to current laws.”…

My own judgement is that the path of future budget deficits in the next decade or so is likely to lean toward the alternative fiscal scenario. But long before we reach a debt/GDP ratio of 183%, something is going to give. I don’t know what will change. But as an old-school economist named Herb Stein used to say, “If something can’t go on, it won’t.” (“Long Term Budget Deficits,” Conversable Economist, July 24, 2014)

Professional economists are terribly low-key, aren’t they? Here’s the way I see it, in “America’s Financial Crisis Is Now“:

It will not do simply to put an end to the U.S. government’s spending spree; too many State and local governments stand ready to fill the void, and they will do so by raising taxes where they can. As a result, some jurisdictions will fall into California- and Michigan-like death-spirals while jobs and growth migrate to other jurisdictions…. Even if Congress resists the urge to give aid and comfort to profligate States and municipalities at the expense of the taxpayers of fiscally prudent jurisdictions, the high taxes and anti-business regimes of California- and Michigan-like jurisdictions impose deadweight losses on the whole economy….

So, the resistance to economically destructive policies cannot end with efforts to reverse the policies of the federal government. But given the vast destructiveness of those policies — “entitlements” in particular — the resistance must begin there. Every conservative and libertarian voice in the land must be raised in reasoned opposition to the perpetuation of the unsustainable “promises” currently embedded in Social Security, Medicare, and Medicaid — and their expansion through Obamacare. To those voices must be added the voices of “moderates” and “liberals” who see through the proclaimed good intentions of “entitlements” to the economic and libertarian disaster that looms if those “entitlements” are not pared down to their original purpose: providing a safety net for the truly needy.

The alternative to successful resistance is stark: more borrowing, higher interest payments, unsustainable debt, higher taxes, and economic stagnation (at best).

For the gory details about government spending and economic stagnation, see “Estimating the Rahn Curve: Or, How Government Spending Inhibits Economic Growth” and “The True Multiplier.”

Climate Change: More Evidence against the Myth of AGW

There are voices of reason, that is, real scientists doing real science:

Over the 55 years from 1958 to 2012, climate models not only significantly over-predict observed warming in the tropical troposphere, but they represent it in a fundamentally different way than is observed. (Ross McKitrick and Timothy Vogelsang, “Climate models not only significantly over-predict observed warming in the tropical troposphere, but they represent it in a fundamentally different way than is observed,” excerpted at Watts Up With That?, July 24, 2014)

Since the 1980s anthropogenic aerosols have been considerably reduced in Europe and the Mediterranean area. This decrease is often considered as the likely cause of the brightening effect observed over the same period. This phenomenon is however hardly reproduced by global and regional climate models. Here we use an original approach based on reanalysis-driven coupled regional climate system modelling, to show that aerosol changes explain 81 ± 16 per cent of the brightening and 23 ± 5 per cent of the surface warming simulated for the period 1980–2012 over Europe. The direct aerosol effect is found to dominate in the magnitude of the simulated brightening. The comparison between regional simulations and homogenized ground-based observations reveals that observed surface solar radiation, as well as land and sea surface temperature spatio-temporal variations over the Euro-Mediterranean region are only reproduced when simulations include the realistic aerosol variations. (“New paper finds 23% of warming in Europe since 1980 due to clean air laws reducing sulfur dioxide,” The Hockey Schtick, July 23, 2014)

My (somewhat out-of-date but still useful) roundup of related posts and articles is at “AGW: The Death Knell.”

Crime Explained…

…but not by this simplistic item:

Of all of the notions that have motivated the decades-long rise of incarceration in the United States, this is probably the most basic: When we put people behind bars, they can’t commit crime.

The implied corollary: If we let them out, they will….

Crime trends in a few states that have significantly reduced their prison populations, though, contradict this fear. (Emily Badger, “There’s little evidence that fewer prisoners means more crime,” Wonkblog, The Washington Post, July 21, 2014)

Staring at charts doesn’t yield answers to complex, multivariate questions, such as the causes of crime. Ms. Badger should have extended my work of seven years ago (“Crime, Explained“). Had she done so, I’m confident that she would have obtained the same result, namely:

VPC (violent + property crimes per 100,000 persons) =

-33,174.6

+ 346,837 × BLK (blacks as a decimal fraction of the population)

- 3,040.46 × GRO (previous year’s change in real GDP per capita, as a decimal fraction of the base)

- 1,474,741 × PRS (inmates in federal and State prisons in December of the previous year, as a decimal fraction of the previous year’s population)

The t-statistics on the intercept and coefficients are 19.017, 21.564, 1.210, and 17.253, respectively; the adjusted R-squared is 0.923; the standard error of the estimate/mean value of VPC = 0.076.

The coefficient and t-statistic for PRS mean that incarceration has a strong, statistically significant, negative effect on the violent-property crime rate. In other words, more prisoners = less crime against persons and their property.
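
For readers who want to see how such an equation is estimated, here is a minimal sketch in Python, assuming the annual observations have already been assembled; the file name and column names (VPC, BLK, GRO, PRS) are hypothetical stand-ins, not the original workbook:

    # Minimal OLS sketch of the VPC regression described above.
    # "crime_data.csv" and its column names are hypothetical stand-ins.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("crime_data.csv")  # one row per year

    X = sm.add_constant(df[["BLK", "GRO", "PRS"]])  # intercept + three regressors
    model = sm.OLS(df["VPC"], X).fit()

    print(model.params)        # intercept and coefficients
    print(model.tvalues)       # t-statistics
    print(model.rsquared_adj)  # adjusted R-squared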

The Heritability of Intelligence

Strip away the trappings of culture and what do you find? This:

If a chimpanzee appears unusually intelligent, it probably had bright parents. That’s the message from the first study to check if chimp brain power is heritable.

The discovery could help to tease apart the genes that affect chimp intelligence and to see whether those genes in humans also influence intelligence. It might also help to identify additional genetic factors that give humans the intellectual edge over their non-human-primate cousins.

The researchers estimate that, similar to humans, genetic differences account for about 54 per cent of the range seen in “general intelligence” – dubbed “g” – which is measured via a series of cognitive tests. “Our results in chimps are quite consistent with data from humans, and the human heritability in g,” says William Hopkins of the Yerkes National Primate Research Center in Atlanta, Georgia, who heads the team reporting its findings in Current Biology.

“The historical view is that non-genetic factors dominate animal intelligence, and our findings challenge that view,” says Hopkins. (Andy Coghlan, “Chimpanzee brain power is strongly heritable,” New Scientist, July 10, 2014)

Such findings are consistent with Nicholas Wade’s politically incorrect A Troublesome Inheritance: Genes, Race and Human History. For related readings, see “‘Wading’ into Race, Culture, and IQ.” For a summary of scholarly evidence about the heritability of intelligence — and its dire implications — see “Race and Reason — The Achievement Gap: Causes and Implications.” John Derbyshire offers an even darker view: “America in 2034” (American Renaissance, June 9, 2014).

The correlation of race and intelligence is, for me, an objective matter, not an emotional one. For evidence of my racial impartiality, see the final item in “My Moral Profile.”

Modeling Is Not Science

The title of this post applies, inter alia, to econometric models — especially those that purport to forecast macroeconomic activity — and climate models — especially those that purport to forecast global temperatures. I have elsewhere essayed my assessments of macroeconomic and climate models. (See this and this, for example.) My purpose here is to offer a general warning about models that claim to depict and forecast the behavior of connected sets of phenomena (systems) that are large, complex, and dynamic. I draw, in part, on a paper that I wrote 28 years ago. That paper is about warfare models, but it has general applicability.

HEMIBEL THINKING

Philip M. Morse and George E. Kimball, pioneers in the field of military operations research — the analysis and modeling of military operations — wrote that the

successful application of operations research usually results in improvements by factors of 3 or 10 or more. . . . In our first study of any operation we are looking for these large factors of possible improvement. . . .

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. (Methods of Operations Research, 1946, p. 38)

This is science-speak for the following proposition: In large, complex, and dynamic systems (e.g., war, economy, climate) there is much uncertainty about the relevant parameters, about how to characterize their interactions mathematically, and about their numerical values.

Hemibel thinking assumes great importance in light of the imprecision inherent in models of large, complex, and dynamic systems. Consider, for example, a simple model with only 10 parameters. Even if such a model doesn’t omit crucial parameters or mischaracterize their interactions, its results must be taken with large doses of salt. Simple mathematics tells the cautionary tale: if the parameters’ effects compound multiplicatively, an error of about 12 percent in the value of each parameter can produce a result that is off by a factor of 3 (a hemibel), and an error of about 25 percent in the value of each parameter can produce a result that is off by a factor of 10. (Remember, this is a model of a relatively small system.)
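
A back-of-the-envelope check of that arithmetic, assuming the simplest case in which the ten per-parameter errors compound multiplicatively through the model:

    # Compounding of per-parameter errors in a 10-parameter model,
    # assuming the errors multiply through to the result.
    n = 10
    for per_param_error in (0.12, 0.25):
        factor = (1 + per_param_error) ** n
        print(f"{per_param_error:.0%} per parameter -> result off by a factor of {factor:.1f}")
    # 12% -> ~3.1 (about a hemibel); 25% -> ~9.3 (about a bel)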

If you think that models and “data” about such things as macroeconomic activity and climatic conditions cannot be as inaccurate as that, you have no idea how such models are devised or how such data are collected and reported. It would be kind to say that such models are incomplete, inaccurate guesswork. It would be fair to say that all too many of them reflect their developers’ policy biases.

Of course, given a (miraculously) complete model, data errors might (miraculously) be offsetting, but don’t bet on it. It’s not that simple: Some errors will be large and some errors will be small (but which are which?), and the errors may lie in either direction (but in which direction?). In any event, no amount of luck can prevent a modeler from constructing a model whose estimates advance a favored agenda (e.g., massive, indiscriminate government spending; massive, futile, and costly efforts to cool the planet).

NO MODEL IS EVER PROVEN

The construction of a model is only one part of the scientific method. A model means nothing unless it can be tested repeatedly against facts (facts not already employed in the development of the model) and, through such tests, is found to be more accurate than alternative explanations of the same facts. As Morse and Kimball put it,

[t]o be valuable [operations research] must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. (Op. cit., p. 10)

Even after rigorous testing, a model is never proven. It is, at best, a plausible working hypothesis about relations between the phenomena that it encompasses.

A model is never proven for two reasons. First, new facts may be discovered that do not comport with the model. Second, the facts upon which a model is based may be open to a different interpretation, that is, they may support a new model that yields better predictions than its predecessor.

The fact that a model cannot be proven can be taken as an excuse for action: “We must act on the best information we have.” That excuse — which justifies an entire industry, namely, government-funded analysis — does not fly, as I discuss below.

MODELS LIE WHEN LIARS MODEL

Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and magnitudes of their interaction.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data; for example (the second of these is demonstrated in the sketch below):

  • omitting certain variables in favor of others;
  • focusing on data for a selected period of time (and not testing the results against all the data);
  • adjusting data without fully explaining or justifying the basis for the adjustment;
  • using proxies for missing data without examining the biases that result from the use of particular proxies.
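
That second trap, period selection, is easy to demonstrate. The following sketch uses synthetic data, not anyone’s actual temperature or economic series, to show how the choice of start year alone changes the estimated trend:

    # Synthetic illustration of period selection ("cherry-picking"):
    # the same noisy series yields different trends from different start years.
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1980, 2015)
    series = 0.01 * (years - 1980) + rng.normal(0.0, 0.15, years.size)  # mild trend + noise

    for start in (1980, 1998, 2005):
        window = years >= start
        slope = np.polyfit(years[window], series[window], 1)[0]
        print(f"trend estimated from {start} on: {slope:+.4f} units per year")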

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes) are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed? (A simple version of that sensitivity check is sketched below.)
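
Here is a minimal sketch of that last robustness check, reusing the hypothetical dataset and column names from the crime-regression sketch above; it refits the model with each year dropped in turn and reports how much a key coefficient moves:

    # Leave-one-year-out sensitivity check; file and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("crime_data.csv")  # one row per year, with a "year" column

    def key_coefficient(data):
        X = sm.add_constant(data[["BLK", "GRO", "PRS"]])
        return sm.OLS(data["VPC"], X).fit().params["PRS"]

    baseline = key_coefficient(df)
    for year in df["year"]:
        shifted = key_coefficient(df[df["year"] != year])
        print(f"drop {year}: key coefficient moves {100 * (shifted - baseline) / baseline:+.1f}%")

A model whose coefficients swing wildly under such a perturbation is telling you that its “findings” are artifacts of the particular sample, not of the world.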

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a masterful demonstration of a model that proves what it was designed to prove by the assumptions built into it, see this.

IMPLICATIONS

Government policies can be dangerous and impoverishing things. Despite that, it is hard (if not impossible) to modify and reverse government policies. Consider, for example, the establishment of public schools more than a century ago, the establishment of Social Security more than 70 years ago, and the establishment of Medicare and Medicaid more than 40 years ago. There is plenty of evidence that all four institutions are monumentally expensive failures. But all four institutions have become so entrenched that to call for their abolition is to be thought of as an eccentric, if not an uncaring anti-government zealot. (For the latest about public schools, see this.)

The principal lesson to be drawn from the history of massive government programs is that those who were skeptical of those programs were entirely justified in their skepticism. Informed, articulate skepticism of the kind I counsel here is the best weapon — perhaps the only effective one — in the fight to defend what remains of liberty and property against the depredations of massive government programs.

Skepticism often is met with the claim that such-and-such a model is the “best available” on a subject. But the “best available” model — even if it is the best available one — may be terrible indeed. Relying on the “best available” model for the sake of government action is like sending an army into battle — and likely to defeat — on the basis of rumors about the enemy’s position and strength.

With respect to the economy and the climate, there are too many rumor-mongers (“scientists” with an agenda), too many gullible and compliant generals (politicians), and far too many soldiers available as cannon-fodder (the paying public).

CLOSING THOUGHTS

The average person is so mystified and awed by “science” that he has little if any understanding of its limitations and pitfalls, some of which I have addressed here in the context of modeling. The average person’s mystification and awe are unjustified, given that many so-called scientists exploit the public’s mystification and awe in order to advance personal biases, gain the approval of other scientists (whence “consensus”), and garner funding for research that yields results congenial to its sponsors (e.g., global warming is an artifact of human activity).

Isaac Newton, who must be numbered among the greatest scientists in human history, was not a flawless scientist. (Has there ever been one?) But scientists and non-scientists alike should heed Newton on the subject of scientific humility:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me. (Quoted in Horace Freeland Judson, The Search for Solutions, 1980, p. 5.)


Related reading: Willis Eschenbach, “How Not to Model the Historical Temperature,” Watts Up With That?, March 25, 2018