Modeling Is Not Science: Another Demonstration

The title of this post is an allusion to an earlier one: “Modeling Is Not Science“. This post addresses a model that is the antithesis of science. It seems to have been extracted from the ether. It doesn’t prove what its authors claim for it. It proves nothing, in fact, but the ability of some people to dazzle other people with mathematics.

In this case, a writer for MIT Technology Review waxes enthusiastic about

the work of Alessandro Pluchino at the University of Catania in Italy and a couple of colleagues. These guys [sic] have created a computer model of human talent and the way people use it to exploit opportunities in life. The model allows the team to study the role of chance in this process.

The results are something of an eye-opener. Their simulations accurately reproduce the wealth distribution in the real world. But the wealthiest individuals are not the most talented (although they must have a certain level of talent). They are the luckiest. And this has significant implications for the way societies can optimize the returns they get for investments in everything from business to science.

Pluchino and co’s [sic] model is straightforward. It consists of N people, each with a certain level of talent (skill, intelligence, ability, and so on). This talent is distributed normally around some average level, with some standard deviation. So some people are more talented than average and some are less so, but nobody is orders of magnitude more talented than anybody else….

The computer model charts each individual through a working life of 40 years. During this time, the individuals experience lucky events that they can exploit to increase their wealth if they are talented enough.

However, they also experience unlucky events that reduce their wealth. These events occur at random.

At the end of the 40 years, Pluchino and co rank the individuals by wealth and study the characteristics of the most successful. They also calculate the wealth distribution. They then repeat the simulation many times to check the robustness of the outcome.

When the team rank individuals by wealth, the distribution is exactly like that seen in real-world societies. “The ‘80-20’ rule is respected, since 80 percent of the population owns only 20 percent of the total capital, while the remaining 20 percent owns 80 percent of the same capital,” report Pluchino and co.

That may not be surprising or unfair if the wealthiest 20 percent turn out to be the most talented. But that isn’t what happens. The wealthiest individuals are typically not the most talented or anywhere near it. “The maximum success never coincides with the maximum talent, and vice-versa,” say the researchers.

So if not talent, what other factor causes this skewed wealth distribution? “Our simulation clearly shows that such a factor is just pure luck,” say Pluchino and co.

The team shows this by ranking individuals according to the number of lucky and unlucky events they experience throughout their 40-year careers. “It is evident that the most successful individuals are also the luckiest ones,” they say. “And the less successful individuals are also the unluckiest ones.”

The writer, who is dazzled by pseudo-science, gives away his Obamanomic bias (“you didn’t build that“) by invoking fairness. Luck and fairness have nothing to do with each other. Luck is luck, and it doesn’t make the beneficiary any less deserving of the talent, or legally obtained income or wealth, that comes his way.

In any event, the model in question is junk. To call it junk science would be to imply that it’s just bad science. But it isn’t science; it’s a model pulled out of thin air. The modelers admit this in the article cited by the Technology Review writer, “Talent vs. Luck, the Role of Randomness in Success and Failure“:

In what follows we propose an agent-based model, called “Talent vs Luck” (TvL) model, which builds on a small set of very simple assumptions, aiming to describe the evolution of careers of a group of people influenced by lucky or unlucky random events.

We consider N individuals, with talent Ti (intelligence, skills, ability, etc.) normally distributed in the interval [0; 1] around a given mean mT with a standard deviation σT, randomly placed in fixed positions within a square world (see Figure 1) with periodic boundary conditions (i.e. with a toroidal topology) and surrounded by a certain number NE of “moving” events (indicated by dots), someone lucky, someone else unlucky (neutral events are not considered in the model, since they have not relevant effects on the individual life). In Figure 1 we report these events as colored points: lucky ones, in green and with relative percentage pL, and unlucky ones, in red and with percentage (100 − pL). The total number of event-points NE are uniformly distributed, but of course such a distribution would be perfectly uniform only for NE → ∞. In our simulations, typically will be NE ≈ N/2: thus, at the beginning of each simulation, there will be a greater random concentration of lucky or unlucky event-points in different areas of the world, while other areas will be more neutral. The further random movement of the points inside the square lattice, the world, does not change this fundamental features of the model, which exposes different individuals to different amount of lucky or unlucky events during their life, regardless of their own talent.

In other words, this is a simplistic, completely abstract model set in a simplistic, completely abstract world, using only the authors’ assumptions about the values of a small number of abstract variables and the effects of their interactions. Those variables are “talent” and two kinds of event: “lucky” and “unlucky”.
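Indeed, the entire apparatus can be reproduced in a few lines of code. What follows is a minimal sketch of the kind of model the authors describe, not their actual code; the parameter values (event rates, the double-or-halve capital rule, the talent check) are my assumptions for illustration:

```python
import random

# Hypothetical sketch of a "talent vs. luck" agent-based model.
# All parameter values below are illustrative assumptions.
def simulate(n_agents=1000, years=40, ticks_per_year=2, p_lucky=0.5, seed=0):
    rng = random.Random(seed)
    # Talent: normally distributed around 0.6, clipped to [0, 1].
    talent = [min(1.0, max(0.0, rng.gauss(0.6, 0.1))) for _ in range(n_agents)]
    capital = [10.0] * n_agents          # everyone starts with equal capital
    for _ in range(years * ticks_per_year):
        for i in range(n_agents):
            if rng.random() < 0.05:      # agent randomly encounters an event
                if rng.random() < p_lucky:
                    # Lucky event: exploited only if a talent check passes.
                    if rng.random() < talent[i]:
                        capital[i] *= 2.0
                else:
                    capital[i] /= 2.0    # unlucky event always halves capital
    return talent, capital

talent, capital = simulate()
top20 = sum(sorted(capital, reverse=True)[: len(capital) // 5])
print(f"share of wealth held by top 20%: {top20 / sum(capital):.2f}")
```

Even this toy version produces a heavily skewed wealth distribution from normally distributed "talent", which is precisely why the result is unremarkable: the skew is built into the multiplicative double-or-halve rule, not discovered by the model.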

What could be further from science — actual knowledge — than that? The authors effectively admit the model’s complete lack of realism when they describe “talent”:

[B]y the term “talent” we broadly mean intelligence, skill, smartness, stubbornness, determination, hard work, risk taking and so on.

Think of all of the ways that those various — and critical — attributes vary from person to person. “Talent”, in other words, subsumes an array of mostly unmeasured and unmeasurable attributes, without distinguishing among them or attempting to weight them. The authors might as well have called the variable “sex appeal” or “body odor”. For that matter, given the complete abstractness of the model, they might as well have called its three variables “body mass index”, “elevation”, and “race”.

It’s obvious that the model doesn’t account for the actual means by which wealth is acquired. In the model, wealth is just the mathematical result of simulated interactions among an arbitrarily named set of variables. It’s not even a multiple regression model based on statistics. (Although no set of statistics could capture the authors’ broad conception of “talent”.)

The modelers seem surprised that wealth isn’t normally distributed. But that wouldn’t be a surprise if they were to consider that wealth represents a compounding effect, which naturally favors those with higher incomes over those with lower incomes. But they don’t even try to model income.
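The compounding point is easy to demonstrate. In the purely illustrative sketch below (the return distribution and the 40-year horizon are my assumptions), even symmetric, normally distributed differences in annual growth rates yield a right-skewed wealth distribution after compounding, with the mean pulled well above the median:

```python
import random

# Illustrative only: symmetric inputs, skewed outputs under compounding.
rng = random.Random(42)
growth = [rng.gauss(0.05, 0.02) for _ in range(10_000)]  # symmetric annual rates
wealth = [(1 + g) ** 40 for g in growth]                 # 40 years of compounding

mean_w = sum(wealth) / len(wealth)
median_w = sorted(wealth)[len(wealth) // 2]
print(f"mean wealth {mean_w:.2f} vs median wealth {median_w:.2f}")
```

Multiplicative processes of this kind naturally generate skewed (roughly lognormal) distributions, so a non-normal distribution of wealth needs no appeal to "luck" at all.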

So when wealth (as modeled) doesn’t align with “talent”, the discrepancy — according to the modelers — must be assigned to “luck”. But a model that lacks any nuance in its definition of variables, any empirical estimates of their values, and any explanation of the relationship between income and wealth cannot possibly tell us anything about the role of luck in the determination of wealth.

At any rate, it is meaningless to say that the model is valid because its results mimic the distribution of wealth in the real world. The model itself is meaningless, so any resemblance between its results and the real world is coincidental (“lucky”) or, more likely, contrived to resemble something like the distribution of wealth in the real world. On that score, the authors are suitably vague about the actual distribution, pointing instead to various estimates.

(See also “Modeling, Science, and Physics Envy” and “Modeling Revisited“.)

Consulting

There is a post at Politico about the adventures of McKinsey & Company, a giant consulting firm, in the world of intelligence:

America’s vast spying apparatus was built around a Cold War world of dead drops and double agents. Today, that world has fractured and migrated online, with hackers and rogue terrorist cells, leaving intelligence operatives scrambling to keep up.

So intelligence agencies did what countless other government offices have done: They brought in a consultant. For the past four years, the powerhouse firm McKinsey and Co., has helped restructure the country’s spying bureaucracy, aiming to improve response time and smooth communication.

Instead, according to nearly a dozen current and former officials who either witnessed the restructuring firsthand or are familiar with the project, the multimillion dollar overhaul has left many within the country’s intelligence agencies demoralized and less effective.

These insiders said the efforts have hindered decision-making at key agencies — including the CIA, National Security Agency and the Office of the Director of National Intelligence.

They said McKinsey helped complicate a well-established linear chain of command, slowing down projects and turnaround time, and applied cookie-cutter solutions to agencies with unique cultures. In the process, numerous employees have become dismayed, saying the efforts have at best been a waste of money and, at worst, made their jobs more difficult. It’s unclear how much McKinsey was paid in that stretch, but according to news reports and people familiar with the effort, the total exceeded $10 million.

Consulting to U.S.-government agencies on a grand scale grew out of the perceived successes in World War II of civilian analysts who were embedded in military organizations. To the extent that the civilian analysts were actually helpful*, it was because they focused on specific operations, such as methods of searching for enemy submarines. In such cases, the government client can benefit from an outside look at the effectiveness of the operations, the identification of failure points, and suggestions for changes in weapons and tactics that are informed by first-hand observation of military operations.

Beyond that, however, outsiders are of little help, and may be a hindrance, as in the case cited above. Outsiders can’t really grasp the dynamics and unwritten rules of organizational cultures that embed decades of learning and adaptation.

The consulting game is now (and has been for decades) an invasive species. It is a perverse outgrowth of operations research as it was developed in World War II. Too much of a “good thing” is a bad thing — as I saw for myself many years ago.
__________
* The success of the U.S. Navy’s antisubmarine warfare (ASW) operations had been for decades ascribed to the pioneering civilian organization known as the Antisubmarine Warfare Operations Research Group (ASWORG). However, with the publication of The Ultra Secret in 1974 (and subsequent revelations), it became known that code-breaking may have contributed greatly to the success of various operations against enemy forces, including ASW.

Beware of Outliers

An outlier, in the field of operations research, is an unusual event that can distract the observer from the normal run of events. Because an outlier is an unusual event, it is more memorable than events of the same kind that occur more frequently.

Take the case of the late Bill Buckner, who was a steady first baseman and good hitter for many years. What is Buckner remembered for? Not his many accomplishments in a long career. No, he is remembered for a fielding error that cost his team (the accursed Red Sox) game 6 of the 1986 World Series, a game that would have clinched the series for the Red Sox had they won it. But they lost it, and went on to lose the deciding 7th game.

Buckner’s bobble was an outlier that erased from the memories of most fans his prowess as a player and the many occasions on which he helped his team to victory. He is remembered, if at all, for the error — though he erred on less than 1/10 of 1 percent of more than 15,000 fielding plays during his career.

I am beginning to think of America’s decisive victory in World War II as an outlier.

To be continued.

The “Candle Problem” and Its Ilk

Among the many topics that I address in “The Balderdash Chronicles” is the management “science” fad; in particular, as described by Graham Morehead,

[t]he Candle Problem [which] was first presented by Karl Duncker. Published posthumously in 1945, “On problem solving” describes how Duncker provided subjects with a candle, some matches, and a box of tacks. He told each subject to affix the candle to a cork board wall in such a way that when lit, the candle won’t drip wax on the table below (see figure at right). Can you think of the answer?

The only answer that really works is this: 1. Dump the tacks out of the box, 2. Tack the box to the wall, 3. Light the candle and affix it atop the box as if it were a candle-holder. Incidentally, the problem was much easier to solve if the tacks weren’t in the box at the beginning. When the tacks were in the box the participant saw it only as a tack-box, not something they could use to solve the problem. This phenomenon is called “Functional fixedness.”

The implication of which, according to Morehead, is (supposedly) this:

When your employees have to do something straightforward, like pressing a button or manning one stage in an assembly line, financial incentives work. It’s a small effect, but they do work. Simple jobs are like the simple candle problem.

However, if your people must do something that requires any creative or critical thinking, financial incentives hurt. The In-Box Candle Problem is the stereotypical problem that requires you to think “Out of the Box,” (you knew that was coming, didn’t you?). Whenever people must think out of the box, offering them a monetary carrot will keep them in that box.

A monetary reward will help your employees focus. That’s the point. When you’re focused you are less able to think laterally. You become dumber. This is not the kind of thing we want if we expect to solve the problems that face us in the 21st century.

My take (in part):

[T]he Candle Problem is unlike any work situation that I can think of. Tasks requiring creativity are not performed under deadlines of a few minutes; tasks requiring creativity are (usually) assigned to persons who have demonstrated a creative flair, not to randomly picked subjects; most work, even in this day, involves the routine application of protocols and tools that were designed to produce a uniform result of acceptable quality; it is the design of protocols and tools that requires creativity, and that kind of work is not done under the kind of artificial constraints found in the Candle Problem.

Now comes James Thompson, with this general conclusion about such exercises:

One important conclusion I draw from this entire paper [by Gerd Gigerenzer, here] is that the logical puzzles enjoyed by Kahneman, Tversky, Stanovich and others are rightly rejected by psychometricians as usually being poor indicators of real ability. They fail because they are designed to lead people up the garden path, and depend on idiosyncratic interpretations.

Told you so.

Is Race a Social Construct?

Of course it is. Science, generally, is a social construct. Everything that human beings do and “know” is a social construct, in that human behavior and “knowledge” are products of acculturation and the irrepressible urge to name and classify things.

Whence that urge? You might say that it’s genetically based. But our genetic inheritance is inextricably twined with social constructs — preferences for, say, muscular men and curvaceous women, and so on. What we are depends not only on our genes but also on the learned preferences that shape the gene pool. There’s no way to sort them out, despite claims (from the left) that human beings are blank slates and claims (from loony libertarians) that genes count for everything.

All of that, however true it may be (and I believe it to be true), is a recipe for solipsism, nay, for Humean chaos. The only way out of this morass, as I see it, is to admit that human beings (or most of them) possess a life-urge that requires them to make distinctions: friend vs. enemy, workable from non-workable ways of building things, etc.

Race is among those useful distinctions for reasons that will be obvious to anyone who has actually observed the behaviors of groups that can be sorted along racial lines instead of condescending to “tolerate” or “celebrate” differences (a luxury that is easily indulged in the safety of ivory towers and gated communities). Those lines may be somewhat arbitrary, for, as many have noted, there are more genetic differences within a racial classification than between racial classifications. Which is a fatuous observation, in that there are more genetic differences among, say, the apes than there are between what are called apes and what are called human beings.

In other words, the usual “scientific” objection to the concept of race is based on a false premise, namely, that all genetic differences are equal. If one believes that, one should be just as willing to live among apes as among human beings. But human beings do not choose to live among apes (though a few human beings do choose to observe them at close quarters). Similarly, human beings — for the most part — do not choose to live among people from whom they are racially distinct, and therefore (usually) socially distinct.

Why? Because under the skin we are not all alike. Under the skin there are social (cultural) differences that are causally correlated with genetic differences.

Race may be a social construct, but — like engineering — it is a useful one.

“Science Is Real”

Yes, it is. But the real part of science is the never-ending search for truth about the “natural” world. Scientific “knowledge” is always provisional.

The flower children — young and old — who display “science is real” posters have it exactly backwards. They believe that science consists of provisional knowledge, and when that “knowledge” matches their prejudices the search for truth is at an end.

Provisional knowledge is valuable in some instances — building bridges and airplanes, for example. But bridges and airplanes are (or should be) built by allowing for error, and a lot of it.

The Pretense of Knowledge

Anyone with more than a passing knowledge of science and disciplines that pretend to be scientific (e.g., economics) will appreciate the shallowness and inaccuracy of humans’ “knowledge” of nature and human nature — from the farthest galaxies to our own psyches. Anyone, that is, but a pretentious “scientist” or an over-educated ignoramus.

Not with a Bang

This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.

T.S. Eliot, The Hollow Men

It’s also the way that America is ending. Yes, there are verbal fireworks aplenty, but there will not be a “hot” civil war. The country that my parents and grandparents knew and loved — the country of my youth in the 1940s and 1950s — is just fading away.

This would not necessarily be a bad thing if the remaking of America were a gradual, voluntary process, leading to time-tested changes for the better. But that isn’t the case. The very soul of America has been and is being ripped out by the government that was meant to protect that soul, and by movements that government not only tolerates but fosters.

Before I go further, I should explain what I mean by America, which is not the same thing as the geopolitical entity known as the United States, though the two were tightly linked for a long time.

America was a relatively homogeneous cultural order that fostered mutual respect, mutual trust, and mutual forbearance — or far more of those things than one might expect in a nation as populous and far-flung as the United States. Those things — conjoined with a Constitution that has been under assault since the New Deal — made America a land of liberty. That is to say, they fostered real liberty, which isn’t an unattainable state of bliss but an actual (and imperfect) condition of peaceful, willing coexistence and its concomitant: beneficially cooperative behavior.

The attainment of this condition depends on social comity, which depends in turn on (a) genetic kinship and (b) the inculcation and enforcement of social norms, especially the norms that define harm.

All of that is going by the boards because the emerging cultural order is almost diametrically opposite that which prevailed in America. The new dispensation includes:

  • casual sex
  • serial cohabitation
  • subsidized illegitimacy
  • abortion on demand
  • easy divorce
  • legions of non-mothering mothers
  • concerted (and deluded) efforts to defeminize females and to neuter or feminize males
  • gender-confusion as a burgeoning norm
  • “alternative lifestyles” that foster disease, promiscuity, and familial instability
  • normalization of drug abuse
  • forced association (with accompanying destruction of property and employment rights)
  • suppression of religion
  • rampant obscenity
  • identity politics on steroids
  • illegal immigration as a “right”
  • “free stuff” from government (Social Security was meant to be self-supporting)
  • America as the enemy
  • all of this (and more) as gospel to influential elites whose own lives are modeled mostly on old America.

As the culture has rotted, so have the ties that bound America.

The rot has occurred to the accompaniment of cacophony. Cultural coarsening begets loud and inconsiderate vulgarity. Worse than that is the cluttering of the ether with vehement and belligerent propaganda, most of it aimed at taking down America.

The advocates of the new dispensation haven’t quite finished the job of dismantling America. But that day isn’t far off. Complete victory for the enemies of America is only a few election cycles away. The squishy center of the electorate — as is its wont — will swing back toward the Democrat Party. With a Democrat in the White House, a Democrat-controlled Congress, and a few party switches in the Supreme Court (or the packing of it), the dogmas of the anti-American culture will become the law of the land; for example:

Billions and trillions of dollars will be wasted on various “green” projects, including but far from limited to the complete replacement of fossil fuels by “renewables”, with the resulting impoverishment of most Americans, except for the comfortable elites who press such policies.

It will be illegal to criticize, even by implication, such things as abortion, illegal immigration, same-sex marriage, transgenderism, anthropogenic global warming, or the confiscation of firearms. These cherished beliefs will be mandated for school and college curricula, and enforced by huge fines and draconian prison sentences (sometimes in the guise of “re-education”).

Any hint of Christianity and Judaism will be barred from public discourse, and similarly punished. Islam will be held up as a model of unity and tolerance.

Reverse discrimination in favor of females, blacks, Hispanics, gender-confused persons, and other “protected” groups will be required and enforced with a vengeance. But “protections” will not apply to members of such groups who are suspected of harboring libertarian or conservative impulses.

Sexual misconduct (as defined by the “victim”) will become a crime, and any male person may be found guilty of it on the uncorroborated testimony of any female who claims to have been the victim of an unwanted glance, touch (even if accidental), innuendo (as perceived by the victim), etc.

There will be parallel treatment of the “crimes” of racism, anti-Islamism, nativism, and genderism.

All health care in the United States will be subject to review by a national, single-payer agency of the central government. Private care will be forbidden, though ready access to doctors, treatments, and medications will be provided for high officials and other favored persons. The resulting health-care catastrophe that befalls most of the populace (like that of the UK) will be shrugged off as a residual effect of “capitalist” health care.

The regulatory regime will rebound with a vengeance, contaminating every corner of American life and regimenting all businesses except those daring to operate in an underground economy. The quality and variety of products and services will decline as their real prices rise as a fraction of incomes.

The dire economic effects of single-payer health care and regulation will be compounded by massive increases in other kinds of government spending (defense excepted). The real rate of economic growth will approach zero.

The United States will maintain token armed forces, mainly for the purpose of suppressing domestic uprisings. Given its economically destructive independence from foreign oil and its depressed economy, it will become a simulacrum of the USSR and Mao’s China — and not a rival to the new superpowers, Russia and China, which will largely ignore it as long as it doesn’t interfere in the pillaging of their respective spheres of influence. A policy of non-interference (i.e., tacit collusion) will be the order of the era in Washington.

Though it would hardly be necessary to rig elections in favor of Democrats, given the flood of illegal immigrants who will pour into the country and enjoy voting rights, a way will be found to do just that. The most likely method will be election laws requiring candidates to pass ideological purity tests by swearing fealty to the “law of the land” (i.e., abortion, unfettered immigration, same-sex marriage, freedom of gender choice for children, etc., etc., etc.). Those who fail such a test will be barred from holding any kind of public office, no matter how insignificant.

Are my fears exaggerated? I don’t think so, given what has happened in recent decades and the cultural revolutionaries’ tightening grip on the Democrat party. What I have sketched out can easily happen within a decade after Democrats seize total control of the central government.

Will the defenders of liberty rally to keep it from happening? Perhaps, but I fear that they will not have a lot of popular support, for three reasons:

First, there is the problem of asymmetrical ideological warfare, which favors the party that says “nice” things and promises “free” things.

Second, what has happened thus far — mainly since the 1960s — has happened slowly enough that it seems “natural” to too many Americans. They are like fish in water who cannot grasp the idea of life in a different medium.

Third, although change for the worse has accelerated in recent years, it has occurred mainly in forums that seem inconsequential to most Americans, for example, in academic fights about free speech, in the politically correct speeches of Hollywood stars, and in culture wars that are conducted mainly in the blogosphere. The unisex-bathroom issue seems to have faded as quickly as it arose, mainly because it really affects so few people. The latest gun-control mania may well subside — though it has reached new heights of hysteria — but it is only one battle in the broader war being waged by the left. And most Americans lack the political and historical knowledge to understand that there really is a civil war underway — just not a “hot” one.

Is a reversal possible? Possible, yes, but unlikely. The rot is too deeply entrenched. Public schools and universities are cesspools of anti-Americanism. The affluent elites of the information-entertainment-media-academic complex are in the saddle. Republican politicians, for the most part, are of no help because they are more interested in preserving their comfortable sinecures than in defending America or the Constitution.

On that note, I will take a break from blogging — perhaps forever. I urge you to read one of my early posts, “Reveries“, for a taste of what America means to me. As for my blogging legacy, please see “A Summing Up“, which links to dozens of posts and pages that amplify and support this post.

Il faut cultiver notre jardin.

Voltaire, Candide


Related reading:

Michael Anton, “What We Still Have to Lose“, American Greatness, February 10, 2019

Rod Dreher, “Benedict Option FAQ“, The American Conservative, October 6, 2015

Roger Kimball, “Shall We Defend Our Common History?“, Imprimis, February 2019

Joel Kotkin, “Today’s Cultural Engineers“, newgeography, January 26, 2019

Daniel Oliver, “Where Has All the Culture Gone?“, The Federalist, February 8, 2019

Malcolm Pollack, “On Civil War“, Motus Mentis, March 7, 2019

Fred Reed, “The White Man’s Burden: Reflections on the Custodial State“, Fred on Everything, January 17, 2019

Gilbert T. Sewall, “The Diminishing Authority of the Bourgeois Culture“, The American Conservative, February 4, 2019

Bob Unger, “Requiem for America“, The New American, January 24, 2019

A Summing Up

This post has been updated and moved to “Favorite Posts“.

Not-So-Random Thoughts (XXIII)

CONTENTS

Government and Economic Growth

Reflections on Defense Economics

Abortion: How Much Jail Time?

Illegal Immigration and the Welfare State

Prosperity Isn’t Everything

Google et al. As State Actors

The Transgender Trap


GOVERNMENT AND ECONOMIC GROWTH

Guy Sorman reviews Alan Greenspan and Adrian Wooldridge’s Capitalism in America: A History. Sorman notes that

the golden days of American capitalism are over—or so the authors opine. That conclusion may seem surprising, as the U.S. economy appears to be flourishing. But the current GDP growth rate of roughly 3 percent, after deducting a 1 percent demographic increase, is rather modest, the authors maintain, compared with the historic performance of the postwar years, when the economy grew at an annual average of 5 percent. Moreover, unemployment appears low only because a significant portion of the population is no longer looking for work.

Greenspan and Wooldridge reject the conventional wisdom on mature economies growing more slowly. They blame relatively slow growth in the U.S. on the increase in entitlement spending and the expansion of the welfare state—a classic free-market argument.

They are right to reject the conventional wisdom.  Slow growth is due to the expansion of government spending (including entitlements) and the regulatory burden. See “The Rahn Curve in Action” for details, including an equation that accurately explains the declining rate of growth since the end of World War II.


REFLECTIONS ON DEFENSE ECONOMICS

Arnold Kling opines about defense economics. Cost-effectiveness analysis was the big thing in the 1960s. Analysts applied non-empirical models of warfare and cost estimates that were often WAGs (wild-ass guesses) to the comparison of competing weapon systems. The results were about as accurate as global climate models, which is to say wildly inaccurate. (See “Modeling Is Not Science“.) And the results were worthless unless they comported with the prejudices of the “whiz kids” who worked for Robert Strange McNamara. (See “The McNamara Legacy: A Personal Perspective“.)


ABORTION: HOW MUCH JAIL TIME?

Georgi Boorman says “Yes, It Would Be Just to Punish Women for Aborting Their Babies“. But, as she says,

mainstream pro-lifers vigorously resist this argument. At the same time they insist that “the unborn child is a human being, worthy of legal protection,” as Sarah St. Onge wrote in these pages recently, they loudly protest when so-called “fringe” pro-lifers state the obvious: of course women who willfully hire abortionists to kill their children should be prosecuted.

Anna Quindlen addressed the same issue more than eleven years ago, in Newsweek:

Buried among prairie dogs and amateur animation shorts on YouTube is a curious little mini-documentary shot in front of an abortion clinic in Libertyville, Ill. The man behind the camera is asking demonstrators who want abortion criminalized what the penalty should be for a woman who has one nonetheless. You have rarely seen people look more gobsmacked. It’s as though the guy has asked them to solve quadratic equations. Here are a range of responses: “I’ve never really thought about it.” “I don’t have an answer for that.” “I don’t know.” “Just pray for them.”

You have to hand it to the questioner; he struggles manfully. “Usually when things are illegal there’s a penalty attached,” he explains patiently. But he can’t get a single person to be decisive about the crux of a matter they have been approaching with absolute certainty.

… If the Supreme Court decides abortion is not protected by a constitutional guarantee of privacy, the issue will revert to the states. If it goes to the states, some, perhaps many, will ban abortion. If abortion is made a crime, then surely the woman who has one is a criminal. But, boy, do the doctrinaire suddenly turn squirrelly at the prospect of throwing women in jail.

“They never connect the dots,” says Jill June, president of Planned Parenthood of Greater Iowa.

I addressed Quindlen, and queasy pro-lifers, eleven years ago:

The aim of Quindlen’s column is to scorn the idea of jail time as punishment for a woman who procures an illegal abortion. In fact, Quindlen’s “logic” reminds me of the classic definition of chutzpah: “that quality enshrined in a man who, having killed his mother and father, throws himself on the mercy of the court because he is an orphan.” The chutzpah, in this case, belongs to Quindlen (and others of her ilk) who believe that a woman should not face punishment for an abortion because she has just “lost” a baby.

Balderdash! If a woman illegally aborts her child, why shouldn’t she be punished by a jail term (at least)? She would be punished by jail (or confinement in a psychiatric prison) if she were to kill her new-born infant, her toddler, her ten-year old, and so on. What’s the difference between an abortion and murder? None. (Read this, then follow the links in this post.)

Quindlen (who predictably opposes capital punishment) asks “How much jail time?” in a cynical effort to shore up the anti-life front. It ain’t gonna work, lady.

See also “Abortion Q & A“.


ILLEGAL IMMIGRATION AND THE WELFARE STATE

Add this to what I say in “The High Cost of Untrammeled Immigration“:

In a new analysis of the latest numbers [by the Center for Immigration Studies], from 2014, 63 percent of non-citizens are using a welfare program, and it grows to 70 percent for those here 10 years or more, confirming another concern that once immigrants tap into welfare, they don’t get off it.

See also “Immigration and Crime” and “Immigration and Intelligence“.

Milton Friedman, thinking like an economist, favored open borders only if the welfare state were abolished. But there’s more to a country than GDP. (See “Genetic Kinship and Society“.) Which leads me to…


PROSPERITY ISN’T EVERYTHING

Patrick T. Brown writes about Oren Cass’s The Once and Future Worker:

Responding to what he cutely calls “economic piety”—the belief that GDP per capita defines a country’s well-being, and the role of society is to ensure the economic “pie” grows sufficiently to allow each individual to consume satisfactorily—Cass offers a competing hypothesis….

[A]s Cass argues, if well-being is measured by considerations in addition to economic ones, a GDP-based measurement of how our society is doing might not only be insufficient now, but also more costly over the long term. The definition of success in our public policy (and cultural) efforts should certainly include some economic measures, but not at the expense of the health of community and family life.

Consider this line, striking in the way it subverts the dominant paradigm: “If, historically, two-parent families could support themselves with only one parent working outside the home, then something is wrong with ‘growth’ that imposes a de facto need for two incomes.”…

People need to feel needed. The hollowness at the heart of American—Western?—society can’t be satiated with shinier toys and tastier brunches. An overemphasis on production could, of course, be as fatal as an overemphasis on consumption, and certainly the realm of the meritocrats gives enough cause to worry on this score. But as a matter of policy—as a means of not just sustaining our fellow citizen in times of want but of helping him feel needed and essential in his family and community life—Cass’s redefinition of “efficiency” to include not just its economic sense but some measure of social stability and human flourishing is welcome. Frankly, it’s past due as a tenet of mainstream conservatism.

Cass goes astray by offering governmental “solutions”; for example:

Cass suggests replacing the current Earned Income Tax Credit (along with some related safety net programs) with a direct wage subsidy, which would be paid to workers by the government to “top off” their current wage. In lieu of a minimum wage, the government would set a “target wage” of, say, $12 an hour. If an employee received $9 an hour from his employer, the government would step up to fill in that $3 an hour gap.

That’s no solution at all, inasmuch as the cost of a subsidy must be borne by someone. The someone, ultimately, is the low-wage worker, whose wage is low because he is less productive than he otherwise would be. Why is he less productive? Because the high-income person who is taxed to pay for the subsidy has that much less money to invest in the business capital that raises productivity.

The real problem is that America — and the West, generally — has turned into a spiritual and cultural wasteland. See, for example, “A Century of Progress?“, “Prosperity Isn’t Everything“, and “James Burnham’s Misplaced Optimism“.


GOOGLE ET AL. AS STATE ACTORS

In “Preemptive (Cold) Civil War” (03/18/18) I recommended treating Google et al. as state actors to enforce the free-speech guarantee of the First Amendment against them:

The Constitution is the supreme law of the land. (Article VI.)

Amendment I to the Constitution says that “Congress shall make no law … abridging the freedom of speech”.

Major entities in the telecommunications, news, entertainment, and education industries have exerted their power to suppress speech because of its content…. The collective actions of these entities — many of them government-licensed and government-funded — effectively constitute a governmental violation of the Constitution’s guarantee of freedom of speech. (See Smith v. Allwright, 321 U.S. 649 (1944), and Marsh v. Alabama, 326 U.S. 501 (1946).)

I recommended presidential action. But someone has moved the issue to the courts. Tucker Higgins has the story:

The Supreme Court has agreed to hear a case that could determine whether users can challenge social media companies on free speech grounds.

The case, Manhattan Community Access Corp. v. Halleck, No. 17-702, centers on whether a private operator of a public access television network is considered a state actor, which can be sued for First Amendment violations.

The case could have broader implications for social media and other media outlets. In particular, a broad ruling from the high court could open the country’s largest technology companies up to First Amendment lawsuits.

That could shape the ability of companies like Facebook, Twitter and Alphabet’s Google to control the content on their platforms as lawmakers clamor for more regulation and activists on the left and right spar over issues related to censorship and harassment.

The Supreme Court accepted the case on [October 12]….

the court of Chief Justice John Roberts has shown a distinct preference for speech cases that concern conservative ideology, according to an empirical analysis conducted by researchers affiliated with Washington University in St. Louis and the University of Michigan.

The analysis found that the justices on the court appointed by Republican presidents sided with conservative speech nearly 70 percent of the time.

“More than any other modern Court, the Roberts Court has trained its sights on speech promoting conservative values,” the authors found.

Here’s hoping.


THE TRANSGENDER TRAP

Babette Francis and John Ballantine tell it like it is:

Dr. Paul McHugh, the University Distinguished Service Professor of Psychiatry at Johns Hopkins Medical School and the former psychiatrist-in-chief at Johns Hopkins Hospital, explains that “‘sex change’ is biologically impossible.” People who undergo sex-reassignment surgery do not change from men to women or vice versa.

In reality, gender dysphoria is more often than not a passing phase in the lives of certain children. The American Psychological Association’s Handbook of Sexuality and Psychology has revealed that, before the widespread promotion of transgender affirmation, 75 to 95 percent of pre-pubertal children who were uncomfortable or distressed with their biological sex eventually outgrew that distress. Dr. McHugh says: “At Johns Hopkins, after pioneering sex-change surgery, we demonstrated that the practice brought no important benefits. As a result, we stopped offering that form of treatment in the 1970s.”…

However, in today’s climate of political correctness, it is more than a health professional’s career is worth to offer a gender-confused patient an alternative to pursuing sex-reassignment. In some states, as Dr. McHugh has noted, “a doctor who would look into the psychological history of a transgendered boy or girl in search of a resolvable conflict could lose his or her license to practice medicine.”

In the space of a few years, these sorts of severe legal prohibitions—usually known as “anti-reparative” and “anti-conversion” laws—have spread to many more jurisdictions, not only across the United States, but also in Canada, Britain, and Australia. Transgender ideology, it appears, brooks no opposition from any quarter….

… Brown University succumbed to political pressure when it withdrew a news story about a recent study by one of its assistant professors of public health, Lisa Littman, on “rapid-onset gender dysphoria.” Science Daily reported:

Among the noteworthy patterns Littman found in the survey data: twenty-one percent of parents reported their child had one or more friends who become transgender-identified at around the same time; twenty percent reported an increase in their child’s social media use around the same time as experiencing gender dysphoria symptoms; and forty-five percent reported both.

A former dean of Harvard Medical School, Professor Jeffrey S. Flier, MD, defended Dr. Littman’s freedom to publish her research and criticized Brown University for censoring it. He said:

Increasingly, research on politically charged topics is subject to indiscriminate attack on social media, which in turn can pressure school administrators to subvert established norms regarding the protection of free academic inquiry. What’s needed is a campaign to mobilize the academic community to protect our ability to conduct and communicate such research, whether or not the methods and conclusions provoke controversy or even outrage.

The examples described above of the ongoing intimidation—sometimes, actual sackings—of doctors and academics who question transgender dogma represent only a small part of a very sinister assault on the independence of the medical profession from political interference. Dr. Whitehall recently reflected: “In fifty years of medicine, I have not witnessed such reluctance to express an opinion among my colleagues.”

For more about this outrage see “The Transgender Fad and Its Consequences“.

Macroeconomic Modeling Revisited

Modeling is not science. Take Professor Ray Fair, for example. He teaches macroeconomic theory, econometrics, and macroeconometric models at Yale University. He has been plying his trade since 1968, first at Princeton, then at M.I.T., and (since 1974) at Yale. Those are big-name schools, so I assume that Prof. Fair is a big name in his field.

Well, since 1983 Professor Fair has been forecasting changes in real GDP four quarters ahead. He has made dozens of forecasts based on a model that he has tweaked many times over the years. The current model can be found here. His forecasting track record is here.

How has he done? Here’s how:

1. The mean absolute error of his forecasts is 70 percent; that is, his predictions deviate from actual growth rates by an average of 70 percent of those rates.

2. The median absolute error of his forecasts is 33 percent.

3. His forecasts are systematically biased: too high when real, four-quarter GDP growth is less than 3 percent; too low when real, four-quarter GDP growth is greater than 3 percent. (See figure 1.)

4. His forecasts have grown generally worse — not better — with time. (See figure 2.)

5. In sum, the overall predictive value of the model is weak. (See figures 3 and 4.)
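Error metrics of this kind are easy to compute from a table of predicted and actual values. Here is a minimal sketch; the (forecast, actual) pairs are hypothetical placeholders, not Fair’s numbers from Table 4:

```python
# Hypothetical (forecast, actual) pairs of four-quarter real GDP growth,
# in percent. NOT Professor Fair's actual data.
pairs = [(3.1, 1.8), (2.5, 3.9), (4.0, 2.2), (1.9, 2.8), (3.3, 3.0)]

# Absolute error of each forecast, as a percentage of the actual growth
# rate (the metric used in points 1 and 2 above).
abs_pct_errors = sorted(abs(f - a) / abs(a) * 100 for f, a in pairs)

mean_abs_error = sum(abs_pct_errors) / len(abs_pct_errors)
n = len(abs_pct_errors)
median_abs_error = (abs_pct_errors[n // 2] if n % 2
                    else (abs_pct_errors[n // 2 - 1] + abs_pct_errors[n // 2]) / 2)

# Systematic bias (point 3): average signed error when actual growth is
# below 3 percent versus at or above 3 percent.
low = [f - a for f, a in pairs if a < 3]
high = [f - a for f, a in pairs if a >= 3]
bias_low = sum(low) / len(low)     # positive means forecasts run too high
bias_high = sum(high) / len(high)  # negative means forecasts run too low
```

A few large misses pull the mean well above the median, which is why points 1 and 2 report such different numbers.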

FIGURE 1

Figures 1-4 are derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Fair’s website.

FIGURE 2

FIGURE 3

FIGURE 4

Given the foregoing, you might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

Could I do better? Well, I’ve done better, with the simple model that I devised to estimate the Rahn Curve. It’s described in “The Rahn Curve in Action“, which is part III of “Economic Growth Since World War II“.

The theory behind the Rahn Curve is simple — but not simplistic. A relatively small government with powers limited mainly to the protection of citizens and their property is worth more than its cost to taxpayers because it fosters productive economic activity (not to mention liberty). But additional government spending hinders productive activity in many ways, which are discussed in Daniel Mitchell’s paper, “The Impact of Government Spending on Economic Growth.” (I would add to Mitchell’s list the burden of regulatory activity, which grows even when government does not.)

What does the Rahn Curve look like? Mitchell estimates this relationship between government spending and economic growth:

Rahn curve_Mitchell

The curve is dashed rather than solid at low values of government spending because it has been decades since the governments of developed nations have spent as little as 20 percent of GDP. But as Mitchell and others note, the combined spending of governments in the U.S. was 10 percent (and less) until the eve of the Great Depression. And it was in the low-spending, laissez-faire era from the end of the Civil War to the early 1900s that the U.S. enjoyed its highest sustained rate of economic growth.

Elsewhere, I estimated the Rahn curve that spans most of the history of the United States. I came up with this relationship (terms modified for simplicity, with a slight cosmetic change in terminology):

Yg = 0.054 - 0.066F

To be precise, it’s the annualized rate of growth over the most recent 10-year span (Yg), as a function of F (fraction of GDP spent by governments at all levels) in the preceding 10 years. The relationship is lagged because it takes time for government spending (and related regulatory activities) to wreak their counterproductive effects on economic activity. Also, I include transfer payments (e.g., Social Security) in my measure of F because there’s no essential difference between transfer payments and many other kinds of government spending. They all take money from those who produce and give it to those who don’t (e.g., government employees engaged in paper-shuffling, unproductive social-engineering schemes, and counterproductive regulatory activities).

When F is greater than the amount needed for national defense and domestic justice — no more than 0.1 (10 percent of GDP) — it discourages productive, growth-producing, job-creating activity. And because government spending weighs most heavily on taxpayers with above-average incomes, higher rates of F also discourage saving, which finances growth-producing investments in new businesses, business expansion, and capital (i.e., new and more productive business assets, both physical and intellectual).
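As a minimal sketch of how the equation is applied (the function name and the spending fractions are mine, for illustration only):

```python
# The simple Rahn-curve equation estimated above: Yg = 0.054 - 0.066*F,
# where F is the fraction of GDP spent by governments at all levels
# during the preceding 10 years.
def rahn_growth(F):
    """Annualized 10-year real GDP growth implied by the equation."""
    return 0.054 - 0.066 * F

# At F = 0.1 (the limited-government level of 10 percent of GDP), the
# implied growth rate is 4.74 percent a year; at F = 0.4, it falls to
# 2.76 percent.
growth_small_gov = rahn_growth(0.1)
growth_big_gov = rahn_growth(0.4)
```

The arithmetic illustrates the point of the preceding paragraph: every additional 10 percentage points of government spending shaves about two-thirds of a percentage point off the long-run growth rate.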

I’ve taken a closer look at the post-World War II numbers because of the marked decline in the rate of growth since the end of the war (Figure 2).

Here’s the revised result, which accounts for more variables:

Yg = 0.0275 - 0.340F + 0.0773A - 0.000336R - 0.131P

Where,

Yg = real rate of GDP growth in a 10-year span (annualized)

F = fraction of GDP spent by governments at all levels during the preceding 10 years

A = the constant-dollar value of private nonresidential assets (business assets) as a fraction of GDP, averaged over the preceding 10 years

R = average number of Federal Register pages, in thousands, for the preceding 10-year period

P = growth in the CPI-U during the preceding 10 years (annualized).

The r-squared of the equation is 0.74, and the p-value of the F-statistic is 1.60E-13. The p-values of the intercept and coefficients are 0.093, 3.98E-08, 4.83E-09, 6.05E-07, and 0.0071. The standard error of the estimate is 0.0049; that is, about half a percentage point.
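For concreteness, here is how an equation of this form is applied and its fit assessed. The coefficients are those estimated above; the data rows are hypothetical stand-ins for the actual 10-year observations:

```python
# The estimated equation, with F, A, R, P defined as in the text.
def predicted_growth(F, A, R, P):
    return 0.0275 - 0.340 * F + 0.0773 * A - 0.000336 * R - 0.131 * P

# Each row: (F, A, R, P, actual 10-year annualized growth).
# Hypothetical values chosen for illustration, not historical data.
rows = [
    (0.32, 2.00, 30.0, 0.030, 0.057),
    (0.35, 2.00, 50.0, 0.040, 0.043),
    (0.38, 1.95, 70.0, 0.050, 0.021),
    (0.36, 2.05, 60.0, 0.025, 0.038),
]

actual = [r[4] for r in rows]
fitted = [predicted_growth(*r[:4]) for r in rows]

# R-squared: share of the variance in actual growth explained by the fit.
mean_actual = sum(actual) / len(actual)
ss_tot = sum((y - mean_actual) ** 2 for y in actual)
ss_res = sum((y - yhat) ** 2 for y, yhat in zip(actual, fitted))
r_squared = 1 - ss_res / ss_tot
```

With real data the same computation yields the r-squared of 0.74 reported above; the made-up rows here merely show the mechanics.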

Here’s how the equation stacks up against actual 10-year rates of real GDP growth:

What does the new equation portend for the next 10 years? Based on the values of F, A, R, and P for 2008-2017, the real rate of growth for the next 10 years will be about 2.0 percent.

There are signs of hope, however. The year-over-year rates of real growth in the four most recent quarters (2017Q4 – 2018Q3) were 2.4, 2.6, 2.9, and 3.0 percent, as against the dismal rates of 1.4, 1.2, 1.5, and 1.8 percent for the four quarters of 2016 — Obama’s final year in office. A possible explanation is the election of Donald Trump and the well-founded belief that his tax and regulatory policies would be more business-friendly.

I took the data set that I used to estimate the new equation and made a series of out-of-sample estimates of growth over the next 10 years. I began with the data for 1946-1964 to estimate the growth for 1965-1974. I continued by taking the data for 1946-1965 to estimate the growth for 1966-1975, and so on, until I had estimated the growth for every 10-year period from 1965-1974 through 2008-2017. In other words, like Professor Fair, I updated my model to reflect new data, and I estimated the rate of economic growth in the future. How did I do? Here’s a first look:

FIGURE 5

For ease of comparison, I made the scale of the vertical axis of figure 5 the same as the scale of the vertical axis of figure 2. It’s obvious that my estimate of the Rahn Curve does a much better job of predicting the real rate of GDP growth than does Fair’s model.
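The expanding-window, out-of-sample procedure described above can be sketched in a few lines. A one-regressor model and noiseless synthetic data stand in for the full four-variable equation:

```python
# Expanding-window out-of-sample estimation: re-fit the model on all
# data available through period t-1, then predict period t.
def fit_simple_ols(xs, ys):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Synthetic history: F rising over time, growth falling accordingly
# (noiseless, generated from the simple Rahn equation for clarity).
F = [0.20 + 0.01 * i for i in range(20)]
Yg = [0.054 - 0.066 * f for f in F]

# Use observations 0..t-1 to predict observation t, then extend the
# window by one period and repeat.
out_of_sample = []
for t in range(10, len(F)):
    a, b = fit_simple_ols(F[:t], Yg[:t])
    out_of_sample.append(a + b * F[t])

errors = [abs(p - y) for p, y in zip(out_of_sample, Yg[10:])]
```

With noiseless data the fit recovers the coefficients exactly, so the out-of-sample errors are essentially zero; real data would, of course, leave residual error, which is what figures 5 through 7 display.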

Not only that, but my model is less biased:

FIGURE 6

The systematic bias reflected in figure 6 is far weaker than the systematic bias in Fair’s estimates (figure 1).

Finally, unlike Fair’s model (figure 4), my model captures the downward trend in the rate of real growth:

FIGURE 7

The moral of the story: It’s futile to build complex models of the economy. They can’t begin to capture the economy’s real complexity, and they’re likely to obscure the important variables — the ones that will determine the future course of economic growth.

A final note: Elsewhere (e.g., here) I’ve disparaged economic aggregates, of which GDP is the apotheosis. And yet I’ve built this post around estimates of GDP. Am I contradicting myself? Not really. There’s a rough consistency in measures of GDP across time, and I’m not pretending that GDP represents anything but an estimate of the monetary value of those products and services to which monetary values can be ascribed.

As a practical matter, then, if you want to know the likely future direction and value of GDP, stick with simple estimation techniques like the one I’ve demonstrated here. Don’t get bogged down in the inconclusive minutiae of a model like Professor Fair’s.

Wildfires and “Climate Change”, Again

In view of the current hysteria about the connection between wildfires and “climate change”, I must point readers to a three-month-old post on the subject. The connection is nil, just like the bogus connection between tropical-cyclone activity and “climate change”.

Ford, Kavanaugh, and Probability

I must begin by quoting the ever-quotable Theodore Dalrymple. In closing a post in which he addresses (inter alia) the high-tech low-life lynching of Brett Kavanaugh, he writes:

The most significant effect of the whole sorry episode is the advance of the cause of what can be called Femaoism, an amalgam of feminism and Maoism. For some people, there is a lot of pleasure to be had in hatred, especially when it is made the meaning of life.

Kavanaugh’s most “credible” accuser — Christine Blasey Ford (CBF) — was incredible (in the literal meaning of the word) for many reasons, some of which are given in the items listed at the end of “Where I Stand on Kavanaugh“.

Arnold Kling gives what is perhaps the best reason for believing Kavanaugh’s denial of CBF’s accusation, a reason that occurred to me at the time:

[Kavanaugh] came out early and emphatically with his denial. This risked having someone corroborate the accusation, which would have irreparably ruined his career. If he did it, it was much safer to own it than to attempt to get away with lying about it. If he lied, chances are he would be caught–at some point, someone would corroborate her story. The fact that he took that risk, along with the fact that there was no corroboration, even from her friend, suggests to me that he is innocent.

What does any of this have to do with probability? Kling’s post is about the results of a survey conducted by Scott Alexander, the proprietor of Slate Star Codex. Kling opens with this:

Scott Alexander writes,

I asked readers to estimate their probability that Judge Kavanaugh was guilty of sexually assaulting Dr. Ford. I got 2,350 responses (thank you, you are great). Here was the overall distribution of probabilities.

… A classical statistician would have refused to answer this question. In classical statistics, he is either guilty or he is not. A probability statement is nonsense. For a Bayesian, it represents a “degree of belief” or something like that. Everyone who answered the poll … either is a Bayesian or consented to act like one.

As a staunch adherent of the classical position (though I am not a statistician), I agree with Kling.

But the real issue in the recent imbroglio surrounding Kavanaugh wasn’t the “probability” that he had committed or attempted some kind of assault on CBF. The real issue was the ideological direction of the Supreme Court:

  1. With the departure of Anthony Kennedy from the Court, there arose an opportunity to secure a reliably conservative (constitutionalist) majority. (Assuming that Chief Justice Roberts remains in the fold.)
  2. Kavanaugh is seen to be a reliable constitutionalist.
  3. With Kavanaugh in the conservative majority, the average age of that majority would be (and now is) 63; whereas, the average age of the “liberal” minority is 72, and the two oldest justices (at 85 and 80) are “liberals”.
  4. Though the health and fitness of individual justices isn’t well known, there are more opportunities in the coming years for the enlargement of the Court’s conservative wing than for the enlargement of its “liberal” wing.
  5. This is bad news for the left because it dims the prospects for social and economic revolution via judicial decree — a long-favored leftist strategy. In fact, it brightens the prospects for the rollback of some of the left’s legislative and judicial “accomplishments”.

Thus the transparently fraudulent attacks on Brett Kavanaugh by desperate leftists and “tools” like CBF. That is to say, except for those who hold a reasoned position (e.g., Arnold Kling and me), one’s stance on Kavanaugh is driven by one’s politics.

Scott Alexander’s post supports my view:

Here are the results broken down by party (blue is Democrats, red is Republicans):

And here are the results broken down by gender (blue is men, pink is women):

Given that women are disproportionately Democrat, relative to men, the second graph simply tells us the same thing as the first graph: The “probability” of Kavanaugh’s “guilt” is strongly linked to political persuasion. (I am heartened to see that a large chunk of the female population hasn’t succumbed to Femaoism.)

Probability, in the proper meaning of the word, has nothing to do with the question of Kavanaugh’s “guilt”. A feeling or inclination isn’t a probability; it’s just a feeling or inclination. Putting a number on it is false quantification. Scott Alexander should know better.

Why I Don’t Believe in “Climate Change”

UPDATED AND EXTENDED, 11/01/18

There are lots of reasons to disbelieve in “climate change”, that is, a measurable and statistically significant influence of human activity on the “global” temperature. Many of the reasons can be found at my page on the subject — in the text, the list of related readings, and the list of related posts. Here’s the main one: Surface temperature data — the basis for the theory of anthropogenic global warming — simply do not support the theory.

As Dr. Tim Ball points out:

A fascinating 2006 paper by Essex, McKitrick, and Andresen asked, “Does a Global Temperature Exist?” Their introduction sets the scene:

It arises from projecting a sampling of the fluctuating temperature field of the Earth onto a single number (e.g. [3], [4]) at discrete monthly or annual intervals. Proponents claim that this statistic represents a measurement of the annual global temperature to an accuracy of ±0.05 ◦C (see [5]). Moreover, they presume that small changes in it, up or down, have direct and unequivocal physical meaning.

The word “sampling” is important because, statistically, a sample has to be representative of a population. There is no way that a sampling of the “fluctuating temperature field of the Earth,” is possible….

… The reality is we have fewer stations now than in 1960 as NASA GISS explain (Figure 1a, # of stations and 1b, Coverage)….

Not only that, but the accuracy is terrible. US stations are supposedly the best in the world but as Anthony Watts’s project showed, only 7.9% of them achieve better than a 1°C accuracy. Look at the quote above. It says the temperature statistic is accurate to ±0.05°C. In fact, for most of the 406 years in which instrumental measures of temperature have been available (since 1612), they were incapable of yielding measurements better than 0.5°C.

The coverage numbers (1b) are meaningless because there are only weather stations for about 15% of the Earth’s surface. There are virtually no stations for

  • 70% of the world that is oceans,
  • 20% of the land surface that are mountains,
  • 20% of the land surface that is forest,
  • 19% of the land surface that is desert and,
  • 19% of the land surface that is grassland.

The result is we have inadequate measures in terms of the equipment and how it fits the historic record, combined with a wholly inadequate spatial sample. The inadequacies are acknowledged by the creation of the claim by NASA GISS and all promoters of anthropogenic global warming (AGW) that a station is representative of a 1200 km radius region.

I plotted an illustrative example on a map of North America (Figure 2).

clip_image006

Figure 2

Notice that the claim for the station in eastern North America includes the subarctic climate of southern James Bay and the subtropical climate of the Carolinas.

However, it doesn’t end there because this is only a meaningless temperature measured in a Stevenson Screen between 1.25 m and 2 m above the surface….

The Stevenson Screen data [are] inadequate for any meaningful analysis or as the basis of a mathematical computer model in this one sliver of the atmosphere, but there [are] even less [data] as you go down or up. The models create a surface grid that becomes cubes as you move up. The number of squares in the grid varies with the naïve belief that a smaller grid improves the models. It would if there [were] adequate data, but that doesn’t exist. The number of cubes is determined by the number of layers used. Again, theoretically, more layers would yield better results, but it doesn’t matter because there are virtually no spatial or temporal data….

So far, I have talked about the inadequacy of the temperature measurements in light of the two- and three-dimensional complexities of the atmosphere and oceans. However, one source identifies the most important variables for the models used as the basis for energy and environmental policies across the world.

“Sophisticated models, like Coupled General Circulation Models, combine many processes to portray the entire climate system. The most important components of these models are the atmosphere (including air temperature, moisture and precipitation levels, and storms); the oceans (measurements such as ocean temperature, salinity levels, and circulation patterns); terrestrial processes (including carbon absorption, forests, and storage of soil moisture); and the cryosphere (both sea ice and glaciers on land). A successful climate model must not only accurately represent all of these individual components, but also show how they interact with each other.”

The last line is critical and yet impossible. The temperature data [are] the best we have, and yet [they are] completely inadequate in every way. Pick any of the variables listed, and you find there [are] virtually no data. The answer to the question, “what are we really measuring,” is virtually nothing, and what we measure is not relevant to anything related to the dynamics of the atmosphere or oceans.

I am especially struck by Dr. Ball’s observation that the surface-temperature record applies to about 15 percent of Earth’s surface. Not only that, but as suggested by Dr. Ball’s figure 2, that 15 percent is poorly sampled.
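The 1200-km claim invites a back-of-the-envelope check (my arithmetic, not Dr. Ball’s): how much of Earth’s surface does one such station nominally “represent”?

```python
import math

# Earth's surface area, roughly 510 million square kilometers.
EARTH_SURFACE_KM2 = 5.1e8

# Area of the circle that one station supposedly represents.
station_radius_km = 1200.0
station_area_km2 = math.pi * station_radius_km ** 2

# One station's nominal share of the planet's surface (~0.9 percent),
# and the number of such circles needed to tile the surface once
# (ignoring overlap and spherical geometry).
share_per_station = station_area_km2 / EARTH_SURFACE_KM2
stations_for_full_cover = EARTH_SURFACE_KM2 / station_area_km2
```

On this arithmetic, a single station stands in for nearly one percent of the planet, an area spanning wildly different climates, as the James Bay-to-Carolinas example in figure 2 illustrates.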

Take the National Weather Service station for Austin, Texas, which is located 2.7 miles from my house. The station is on the grounds of Camp Mabry, a Texas National Guard base near the center of Austin, the fastest-growing large city in the U.S. The base is adjacent to a major highway (Texas Loop 1) that traverses Austin. The weather station is about 1/4 mile from the highway, 100 feet from a paved road on the base, and near a complex of buildings and parking areas.

Here’s a ground view of the weather station:

And here’s an aerial view; the weather station is the tan rectangle at the center of the photo:

As I have shown elsewhere, the general rise in temperatures recorded at the weather station over the past several decades is fully explained by the urban-heat-island effect due to the rise in Austin’s population during those decades.

Further, there is a consistent difference in temperature and rainfall between my house and Camp Mabry. My house is located farther from the center of Austin — northwest of Camp Mabry — in a topographically different area. The topography in my part of Austin is typical of the Texas Hill Country, which begins about a mile east of my house and covers a broad swath of land stretching as far as 250 miles from Austin.

The contrast is obvious in the next photo. Camp Mabry is at the “1” (for Texas Loop 1) near the lower edge of the image. Topographically, it belongs with the flat part of Austin that lies mostly east of Loop 1. It is unrepresentative of the huge chunk of Austin and environs that lies to its north and west.

Getting down to cases: this past summer, daily highs recorded at Camp Mabry hit 100 degrees or more on 52 days, but the daily high at my house reached 100 or more only on the handful of days when it reached 106-110 at Camp Mabry. That's consistent with another observation: when the daily high at Camp Mabry is above 90 degrees, the daily high at my house is generally 6 degrees lower.

As for rainfall, my house seems to be in a different ecosystem than Camp Mabry’s. Take September and October of this year: 15.7 inches of rain fell at Camp Mabry, as against 21.0 inches at my house. The higher totals at my house are typical, and are due to a phenomenon called orographic lift. It affects areas to the north and west of Camp Mabry, but not Camp Mabry itself.

So the climate at Camp Mabry is not my climate. Nor is the climate at Camp Mabry typical of a vast area in and around Austin, despite the use of Camp Mabry’s climate to represent that area.

There is another official weather station at Austin-Bergstrom International Airport, which is in the flatland 9.5 miles to the southeast of Camp Mabry. Its rainfall total for September and October was 12.8 inches — almost 3 inches less than at Camp Mabry — but its average temperatures for the two months were within a degree of Camp Mabry’s. Suppose Camp Mabry’s weather station went offline. The weather station at ABIA would then record temperatures and precipitation even less representative of those at my house and similar areas to the north and west.

Speaking of precipitation — it is obviously related to cloud cover. The more it rains, the cloudier it will be. The cloudier it is, the lower the temperature, other things being the same (e.g., locale). This is true for Austin:

[Chart: 12-month average temperature vs. 12-month average precipitation, Austin]

The correlation coefficient is highly significant, given the huge sample size. Note that the relationship is between precipitation in a given month and temperature a month later. Although cloud cover (and thus precipitation) has an immediate effect on temperature, precipitation has a residual effect in that wet ground absorbs more solar radiation than dry ground, so that there is less heat reflected from the ground to the air. The lagged relationship is strongest at 1 month, and considerably stronger than any relationship in which temperature leads precipitation.
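The lag test described above — correlating precipitation in one month with temperature in a later month, and comparing lags — can be sketched as follows. This is a minimal illustration, not the author's actual computation: the monthly series below are synthetic stand-ins constructed so that temperature tracks the previous month's precipitation, and the point is the lag-selection logic, not the numbers.

```python
# Sketch of a lagged-correlation check: correlate precip[t] with
# temp[t + lag] for several lags and see which lag is strongest.
# The series here are synthetic, not the Austin station data.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_corr(precip, temp, lag):
    """Correlation of precip[t] with temp[t + lag]."""
    if lag == 0:
        return pearson(precip, temp)
    return pearson(precip[:-lag], temp[lag:])

# Synthetic monthly series: temperature responds (negatively) to the
# previous month's precipitation, by construction.
precip = [(i * 7) % 10 + 1 for i in range(120)]        # inches, arbitrary
temp = [85.0 - 0.8 * precip[i - 1] if i > 0 else 80.0  # deg F, arbitrary
        for i in range(120)]

# By construction, lag 1 should dominate (a perfect negative correlation
# here); a real analysis would compare noisy station data the same way.
for lag in range(4):
    print(lag, round(lagged_corr(precip, temp, lag), 3))
```

On real station data the lag-1 correlation would of course be far from perfect; the exercise is simply to confirm that precipitation leading temperature by one month beats both the contemporaneous relationship and temperature leading precipitation.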

I bring up this aspect of Austin’s climate because of a post by Anthony Watts (“Data: Global Temperatures Fell As Cloud Cover Rose in the 1980s and 90s“, Watts Up With That?, November 1, 2018):

I was reminded about a study undertaken by Clive Best and Euan Mearns looking at the role of cloud cover four years ago:

Clouds have a net average cooling effect on the earth’s climate. Climate models assume that changes in cloud cover are a feedback response to CO2 warming. Is this assumption valid? Following a study with Euan Mearns showing a strong correlation in UK temperatures with clouds, we looked at the global effects of clouds by developing a combined cloud and CO2 forcing model to study how variations in both cloud cover [8] and CO2 [14] data affect global temperature anomalies between 1983 and 2008. The model as described below gives a good fit to HADCRUT4 data with a Transient Climate Response (TCR) = 1.6±0.3°C. The 17-year hiatus in warming can then be explained as resulting from a stabilization in global cloud cover since 1998. An Excel spreadsheet implementing the model as described below can be downloaded from http://clivebest.com/GCC.

The full post containing all of the detailed statistical analysis is here.

But this is the key graph:

CC-HC4

Figure 1a showing the ISCCP global averaged monthly cloud cover from July 1983 to Dec 2008 overlaid in blue with Hadcrut4 monthly anomaly data. The fall in cloud cover coincides with a rapid rise in temperatures from 1983 to 1999. Thereafter the temperature and cloud trends have both flattened. The CO2 forcing from 1998 to 2008 increases by a further ~0.3 W/m2, which is evidence that changes in clouds are not a direct feedback to CO2 forcing.

In conclusion, natural cyclic change in global cloud cover has a greater impact on global average temperatures than CO2. There is little evidence of a direct feedback relationship between clouds and CO2. Based on satellite measurements of cloud cover (ISCCP), net cloud forcing (CERES) and CO2 levels (KEELING) we developed a model for predicting global temperatures. This results in a best-fit value for TCR = 1.4 ± 0.3°C. Summer cloud forcing has a larger effect in the northern hemisphere resulting in a lower TCR = 1.0 ± 0.3°C. Natural phenomena must influence clouds although the details remain unclear, though the CLOUD experiment has given hints that increased fluxes of cosmic rays may increase cloud seeding [19]. In conclusion, the gradual reduction in net cloud cover explains over 50% of global warming observed during the 80s and 90s, and the hiatus in warming since 1998 coincides with a stabilization of cloud forcing.

Why there was a decrease in cloud cover is another question of course.

In addition to the piece quoted above, we have this WUWT story from 2012:

Spencer’s posited 1-2% cloud cover variation found

A paper published last week finds that cloud cover over China significantly decreased during the period 1954-2005. This finding is in direct contradiction to the theory of man-made global warming which presumes that warming allegedly from CO2 ‘should’ cause an increase in water vapor and cloudiness. The authors also find the decrease in cloud cover was not related to man-made aerosols, and thus was likely a natural phenomenon, potentially a result of increased solar activity via the Svensmark theory or other mechanisms.

Case closed. (Not for the first time.)

Atheistic Scientism Revisited

I recently had the great pleasure of reading The Devil’s Delusion: Atheism and Its Scientific Pretensions, by David Berlinski. (Many thanks to Roger Barnett for recommending the book to me.) Berlinski, who knows far more about science than I do, writes with flair and scathing logic. I can’t do justice to his book, but I will try to convey its gist.

Before I do that, I must tell you that I enjoyed Berlinski’s book not only because of the author’s acumen and biting wit, but also because he agrees with me. (I suppose I should say, in modesty, that I agree with him.) I have argued against atheistic scientism in many blog posts (see below).

Here is my version of the argument against atheism in its briefest form (June 15, 2011):

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

There is no reasonable basis — and certainly no empirical one — on which to prefer atheism to deism or theism. Strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.

As for scientism, I call upon Friedrich Hayek:

[W]e shall, wherever we are concerned … with slavish imitation of the method and language of Science, speak of “scientism” or the “scientistic” prejudice…. It should be noted that, in the sense in which we shall use these terms, they describe, of course, an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed. The scientistic as distinguished from the scientific view is not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it. [The Counter Revolution Of Science]

As Berlinski amply illustrates and forcibly argues, atheistic scientism is rampant in the so-called sciences. I have reproduced below some key passages from Berlinski’s book. They are representative, but far from exhaustive (though I did nearly exhaust the publisher’s copy limit on the Kindle edition). I have forgone the block-quotation style for ease of reading, and have inserted triple asterisks to indicate (sometimes subtle) changes of topic.

*   *   *

Richard Dawkins, the author of The God Delusion, … is not only an intellectually fulfilled atheist, he is determined that others should be as fulfilled as he. A great many scientists are satisfied that at last someone has said out loud what so many of them have said among themselves: Scientific and religious belief are in conflict. They cannot both be right. Let us get rid of the one that is wrong….

Because atheism is said to follow from various scientific doctrines, literary atheists, while they are eager to speak their minds, must often express themselves in other men’s voices. Christopher Hitchens is an example. With forthcoming modesty, he has affirmed his willingness to defer to the world’s “smart scientists” on any matter more exigent than finger-counting. Were smart scientists to report that a strain of yeast supported the invasion of Iraq, Hitchens would, no doubt, conceive an increased respect for yeast….

If nothing else, the attack on traditional religious thought marks the consolidation in our time of science as the single system of belief in which rational men and women might place their faith, and if not their faith, then certainly their devotion. From cosmology to biology, its narratives have become the narratives. They are, these narratives, immensely seductive, so much so that looking at them with innocent eyes requires a very deliberate act. And like any militant church, this one places a familiar demand before all others: Thou shalt have no other gods before me.

It is this that is new; it is this that is important….

For scientists persuaded that there is no God, there is no finer pleasure than recounting the history of religious brutality and persecution. Sam Harris is in this regard especially enthusiastic, The End of Faith recounting in lurid but lingering detail the methods of torture used in the Spanish Inquisition….

Nonetheless, there is this awkward fact: The twentieth century was not an age of faith, and it was awful. Lenin, Stalin, Hitler, Mao, and Pol Pot will never be counted among the religious leaders of mankind….

… Just who has imposed on the suffering human race poison gas, barbed wire, high explosives, experiments in eugenics, the formula for Zyklon B, heavy artillery, pseudo-scientific justifications for mass murder, cluster bombs, attack submarines, napalm, intercontinental ballistic missiles, military space platforms, and nuclear weapons?

If memory serves, it was not the Vatican….

What Hitler did not believe and what Stalin did not believe and what Mao did not believe and what the SS did not believe and what the Gestapo did not believe and what the NKVD did not believe and what the commissars, functionaries, swaggering executioners, Nazi doctors, Communist Party theoreticians, intellectuals, Brown Shirts, Black Shirts, gauleiters, and a thousand party hacks did not believe was that God was watching what they were doing.

And as far as we can tell, very few of those carrying out the horrors of the twentieth century worried overmuch that God was watching what they were doing either.

That is, after all, the meaning of a secular society….

Richard Weikart, … in his admirable treatise, From Darwin to Hitler: Evolutionary Ethics, Eugenics, and Racism in Germany, makes clear what anyone capable of reading the German sources already knew: A sinister current of influence ran from Darwin’s theory of evolution to Hitler’s policy of extermination.

*   *   *

It is wrong, the nineteenth-century British mathematician W. K. Clifford affirmed, “always, everywhere, and for anyone, to believe anything upon insufficient evidence.” I am guessing that Clifford believed what he wrote, but what evidence he had for his belief, he did not say.

Something like Clifford’s injunction functions as the premise in a popular argument for the inexistence of God. If God exists, then his existence is a scientific claim, no different in kind from the claim that there is tungsten to be found in Bermuda. We cannot have one set of standards for tungsten and another for the Deity….

There remains the obvious question: By what standards might we determine that faith in science is reasonable, but that faith in God is not? It may well be that “religious faith,” as the philosopher Robert Todd Carroll has written, “is contrary to the sum of evidence,” but if religious faith is found wanting, it is reasonable to ask for a restatement of the rules by which “the sum of evidence” is computed….

… The concept of sufficient evidence is infinitely elastic…. What a physicist counts as evidence is not what a mathematician generally accepts. Evidence in engineering has little to do with evidence in art, and while everyone can agree that it is wrong to go off half-baked, half-cocked, or half-right, what counts as being baked, cocked, or right is simply too variable to suggest a plausible general principle….

Neither the premises nor the conclusions of any scientific theory mention the existence of God. I have checked this carefully. The theories are by themselves unrevealing. If science is to champion atheism, the requisite demonstration must appeal to something in the sciences that is not quite a matter of what they say, what they imply, or what they reveal.

*   *   *

The universe in its largest aspect is the expression of curved space and time. Four fundamental forces hold sway. There are black holes and various infernal singularities. Popping out of quantum fields, the elementary particles appear as bosons or fermions. The fermions are divided into quarks and leptons. Quarks come in six varieties, but they are never seen, confined as they are within hadrons by a force that perversely grows weaker at short distances and stronger at distances that are long. There are six leptons in four varieties. Depending on just how things are counted, matter has as its fundamental constituents twenty-four elementary particles, together with a great many fields, symmetries, strange geometrical spaces, and forces that are disconnected at one level of energy and fused at another, together with at least a dozen different forms of energy, all of them active.

… It is remarkably baroque. And it is promiscuously catholic. For the atheist persuaded that materialism offers him a no-nonsense doctrinal affiliation, materialism in this sense comes to the declaration of a barroom drinker that he will have whatever he’s having, no matter who he is or what he is having. What he is having is what he always takes, and that is any concept, mathematical structure, or vagrant idea needed to get on with it. If tomorrow, physicists determine that particle physics requires access to the ubiquity of the body of Christ, that doctrine would at once be declared a physical principle and treated accordingly….

What remains of the ideology of the sciences? It is the thesis that the sciences are true— who would doubt it?— and that only the sciences are true. The philosopher Michael Devitt thus argues that “there is only one way of knowing, the empirical way that is the basis of science.” An argument against religious belief follows at once on the assumptions that theology is not science and belief is not knowledge. If by means of this argument it also follows that neither mathematics, the law, nor the greater part of ordinary human discourse have a claim on our epistemological allegiance, they must be accepted as casualties of war.

*   *   *

The claim that the existence of God should be treated as a scientific question stands on a destructive dilemma: If by science one means the great theories of mathematical physics, then the demand is unreasonable. We cannot treat any claim in this way. There is no other intellectual activity in which theory and evidence have reached this stage of development….

Is there a God who has among other things created the universe? “It is not by its conclusions,” C. F. von Weizsäcker has written in The Relevance of Science, “but by its methodological starting point that modern science excludes direct creation. Our methodology would not be honest if this fact were denied . . . such is the faith in the science of our time, and which we all share” (italics added).

In science, as in so many other areas of life, faith is its own reward….

The medieval Arabic argument known as the kalam is an example of the genre [cosmological argument].

Its first premise: Everything that begins to exist has a cause.

And its second: The universe began to exist.

And its conclusion: So the universe had a cause.

This is not by itself an argument for the existence of God. It is suggestive without being conclusive. Even so, it is an argument that in a rush covers a good deal of ground carelessly denied by atheists. It is one thing to deny that there is a God; it is quite another to deny that the universe has a cause….

The universe, orthodox cosmologists believe, came into existence as the expression of an explosion— what is now called the Big Bang. The word explosion is a sign that words have failed us, as they so often do, for it suggests a humanly comprehensible event— a gigantic explosion or a stupendous eruption. This is absurd. The Big Bang was not an event taking place at a time or in a place. Space and time were themselves created by the Big Bang, the measure along with the measured….

Whatever its name, as far as most physicists are concerned, the Big Bang is now a part of the established structure of modern physics….

… Many physicists have found the idea that the universe had a beginning alarming. “So long as the universe had a beginning,” Stephen Hawking has written, “we could suppose it had a creator.” God forbid!

… Big Bang cosmology has been confirmed by additional evidence, some of it astonishing. In 1964, the physicists Arno Penzias and Robert Wilson observed what seemed to be the living remnants of the Big Bang— and after 14 billion years!— when they detected, by means of a hum in their equipment, a signal in the night sky they could only explain as the remnants of the microwave radiation background left over from the Big Bang itself.

More than anything else, this observation, and the inference it provoked, persuaded physicists that the structure of Big Bang cosmology was anchored into fact….

“Perhaps the best argument in favor of the thesis that the Big Bang supports theism,” the astrophysicist Christopher Isham has observed, “is the obvious unease with which it is greeted by some atheist physicists. At times this has led to scientific ideas, such as continuous creation or an oscillating universe, being advanced with a tenacity which so exceeds their intrinsic worth that one can only suspect the operation of psychological forces lying very much deeper than the usual academic desire of a theorist to support his or her theory.”…

… With the possibility of inexistence staring it in the face, why does the universe exist? To say that universe just is, as Stephen Hawking has said, is to reject out of hand any further questions. We know that it is. It is right there in plain sight. What philosophers such as ourselves wish to know is why it is. It may be that at the end of these inquiries we will answer our own question by saying that the universe exists for no reason whatsoever. At the end of these inquiries, and not the beginning….

Among physicists, the question of how something emerged from nothing has one decisive effect: It loosens their tongues. “One thing [that] is clear,” a physicist writes, “in our framing of questions such as ‘How did the Universe get started?’ is that the Universe was self-creating. This is not a statement on a ‘cause’ behind the origin of the Universe, nor is it a statement on a lack of purpose or destiny. It is simply a statement that the Universe was emergent, that the actual Universe probably derived from an indeterminate sea of potentiality that we call the quantum vacuum, whose properties may always remain beyond our current understanding.”

It cannot be said that “an indeterminate sea of potentiality” has anything like the clarifying effect needed by the discussion, and indeed, except for sheer snobbishness, physicists have offered no reason to prefer this description of the Source of Being to the one offered by Abu al-Hassan al Hashari in ninth-century Baghdad. The various Islamic versions of that indeterminate sea of being he rejected in a spasm of fierce disgust. “We confess,” he wrote, “that God is firmly seated on his throne. We confess that God has two hands, without asking how. We confess that God has two eyes, without asking how. We confess that God has a face.”…

Proposing to show how something might emerge from nothing, [the physicist Victor Stenger] introduces “another universe [that] existed prior to ours that tunneled through . . . to become our universe. Critics will argue that we have no way of observing such an earlier universe, and so this is not very scientific” (italics added). This is true. Critics will do just that. Before they do, they will certainly observe that Stenger has completely misunderstood the terms of the problem that he has set himself, and that far from showing how something can arise from nothing, he has shown only that something might arise from something else. This is not an observation that has ever evoked a firestorm of controversy….

… [A]ccording to the many-worlds interpretation [of quantum mechanics], at precisely the moment a measurement is made, the universe branches into two or more universes. The cat who was half dead and half alive gives rise to two separate universes, one containing a cat who is dead, the other containing a cat who is alive. The new universes cluttering up creation embody the quantum states that were previously in a state of quantum superposition.

The many-worlds interpretation of quantum mechanics is rather like the incarnation. It appeals to those who believe in it, and it rewards belief in proportion to which belief is sincere….

No less than the doctrines of religious belief, the doctrines of quantum cosmology are what they seem: biased, partial, inconclusive, and largely in the service of passionate but unexamined conviction.

*   *   *

The cosmological constant is a number controlling the expansion of the universe. If it were negative, the universe would appear doomed to contract in upon itself, and if positive, equally doomed to expand out from itself. Like the rest of us, the universe is apparently doomed no matter what it does. And here is the odd point: If the cosmological constant were larger than it is, the universe would have expanded too quickly, and if smaller, it would have collapsed too early, to permit the appearance of living systems….

“Scientists,” the physicist Paul Davies has observed, “are slowly waking up to an inconvenient truth— the universe looks suspiciously like a fix. The issue concerns the very laws of nature themselves. For 40 years, physicists and cosmologists have been quietly collecting examples of all too convenient ‘coincidences’ and special features in the underlying laws of the universe that seem to be necessary in order for life, and hence conscious beings, to exist. Change any one of them and the consequences would be lethal.”….

Why? Yes, why?

An appeal to still further physical laws is, of course, ruled out on the grounds that the fundamental laws of nature are fundamental. An appeal to logic is unavailing. The laws of nature do not seem to be logical truths. The laws of nature must be intrinsically rich enough to specify the panorama of the universe, and the universe is anything but simple. As Newton remarks, “Blind metaphysical necessity, which is certainly the same always and everywhere, could produce no variety of things.”

If the laws of nature are neither necessary nor simple, why, then, are they true?

Questions about the parameters and laws of physics form a single insistent question in thought: Why are things as they are when what they are seems anything but arbitrary?

One answer is obvious. It is the one that theologians have always offered: The universe looks like a put-up job because it is a put-up job.

*   *   *

Any conception of a contingent deity, Aquinas argues, is doomed to fail, and it is doomed to fail precisely because whatever He might do to explain the existence of the universe, His existence would again require an explanation. “Therefore, not all beings are merely possible, but there must exist something the existence of which is necessary.”…

… “We feel,” Wittgenstein wrote, “that even when all possible scientific questions have been answered, the problems of life remain completely untouched.” Those who do feel this way will see, following Aquinas, that the only inference calculated to overcome the way things are is one directed toward the way things must be….

“The key difference between the radically extravagant God hypothesis,” [Dawkins] writes, “and the apparently extravagant multiverse hypothesis, is one of statistical improbability.”

It is? I had no idea, the more so since Dawkins’s very next sentence would seem to undercut the sentence he has just written. “The multiverse, for all that it is extravagant, is simple,” because each of its constituent universes “is simple in its fundamental laws.”

If this is true for each of those constituent universes, then it is true for our universe as well. And if our universe is simple in its fundamental laws, what on earth is the relevance of Dawkins’s argument?

Simple things, simple explanations, simple laws, a simple God.

Bon appétit.

*   *   *

As a rhetorical contrivance, the God of the Gaps makes his effect contingent on a specific assumption: that whatever the gaps, they will in the course of scientific research be filled…. Western science has proceeded by filling gaps, but in filling them, it has created gaps all over again. The process is inexhaustible. Einstein created the special theory of relativity to accommodate certain anomalies in the interpretation of Clerk Maxwell’s theory of the electromagnetic field. Special relativity led directly to general relativity. But general relativity is inconsistent with quantum mechanics, the largest visions of the physical world alien to one another. Understanding has improved, but within the physical sciences, anomalies have grown great, and what is more, anomalies have grown great because understanding has improved….

… At the very beginning of his treatise Vertebrate Paleontology and Evolution, Robert Carroll observes quite correctly that “most of the fossil record does not support a strictly gradualistic account” of evolution. A “strictly gradualistic” account is precisely what Darwin’s theory demands: It is the heart and soul of the theory.

But by the same token, there are no laboratory demonstrations of speciation either, millions of fruit flies coming and going while never once suggesting that they were destined to appear as anything other than fruit flies. This is the conclusion suggested as well by more than six thousand years of artificial selection, the practice of barnyard and backyard alike. Nothing can induce a chicken to lay a square egg or to persuade a pig to develop wheels mounted on ball bearings….

… In a research survey published in 2001, and widely ignored thereafter, the evolutionary biologist Joel Kingsolver reported that in sample sizes of more than one thousand individuals, there was virtually no correlation between specific biological traits and either reproductive success or survival. “Important issues about selection,” he remarked with some understatement, “remain unresolved.”

Of those important issues, I would mention prominently the question whether natural selection exists at all.

Computer simulations of Darwinian evolution fail when they are honest and succeed only when they are not. Thomas Ray has for years been conducting computer experiments in an artificial environment that he has designated Tierra. Within this world, a shifting population of computer organisms meet, mate, mutate, and reproduce.

Sandra Blakeslee, writing for the New York Times, reported the results under the headline “Computer ‘Life Form’ Mutates in an Evolution Experiment: Natural Selection Is Found at Work in a Digital World.”

Natural selection found at work? I suppose so, for as Blakeslee observes with solemn incomprehension, “the creatures mutated but showed only modest increases in complexity.” Which is to say, they showed nothing of interest at all. This is natural selection at work, but it is hardly work that has worked to intended effect.

What these computer experiments do reveal is a principle far more penetrating than any that Darwin ever offered: There is a sucker born every minute….

… Daniel Dennett, like Mexican food, does not fail to come up long after he has gone down. “Contemporary biology,” he writes, “has demonstrated beyond all reasonable doubt that natural selection— the process in which reproducing entities must compete for finite resources and thereby engage in a tournament of blind trial and error from which improvements automatically emerge— has the power to generate breathtakingly ingenious designs” (italics added).

These remarks are typical in their self-enchanted self-confidence. Nothing in the physical sciences, it goes without saying— right?— has been demonstrated beyond all reasonable doubt. The phrase belongs to a court of law. The thesis that improvements in life appear automatically represents nothing more than Dennett’s conviction that living systems are like elevators: If their buttons are pushed, they go up. Or down, as the case may be. Although Darwin’s theory is very often compared favorably to the great theories of mathematical physics on the grounds that evolution is as well established as gravity, very few physicists have been heard observing that gravity is as well established as evolution. They know better and they are not stupid….

… The greater part of the debate over Darwin’s theory is not in service to the facts. Nor to the theory. The facts are what they have always been: They are unforthcoming. And the theory is what it always was: It is unpersuasive. Among evolutionary biologists, these matters are well known. In the privacy of the Susan B. Anthony faculty lounge, they often tell one another with relief that it is a very good thing the public has no idea what the research literature really suggests.

“Darwin?” a Nobel laureate in biology once remarked to me over his bifocals. “That’s just the party line.”

In the summer of 2007, Eugene Koonin, of the National Center for Biotechnology Information at the National Institutes of Health, published a paper entitled “The Biological Big Bang Model for the Major Transitions in Evolution.”

The paper is refreshing in its candor; it is alarming in its consequences. “Major transitions in biological evolution,” Koonin writes, “show the same pattern of sudden emergence of diverse forms at a new level of complexity” (italics added). Major transitions in biological evolution? These are precisely the transitions that Darwin’s theory was intended to explain. If those “major transitions” represent a “sudden emergence of new forms,” the obvious conclusion to draw is not that nature is perverse but that Darwin was wrong….

Koonin is hardly finished. He has just started to warm up. “In each of these pivotal nexuses in life’s history,” he goes on to say, “the principal ‘types’ seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate ‘grades’ or intermediate forms between different types are detectable.”…

… [H]is views are simply part of a much more serious pattern of intellectual discontent with Darwinian doctrine. Writing in the 1960s and 1970s, the Japanese mathematical biologist Motoo Kimura argued that on the genetic level— the place where mutations take place— most changes are selectively neutral. They do nothing to help an organism survive; they may even be deleterious…. Kimura was perfectly aware that he was advancing a powerful argument against Darwin’s theory of natural selection. “The neutral theory asserts,” he wrote in the introduction to his masterpiece, The Neutral Theory of Molecular Evolution, “that the great majority of evolutionary changes at the molecular level, as revealed by comparative studies of protein and DNA sequences, are caused not by Darwinian selection but by random drift of selectively neutral or nearly neutral mutations” (italics added)….

… Writing in the Proceedings of the National Academy of Sciences, the evolutionary biologist Michael Lynch observed that “Dawkins’s agenda has been to spread the word on the awesome power of natural selection.” The view that results, Lynch remarks, is incomplete and therefore “profoundly misleading.” Lest there be any question about Lynch’s critique, he makes the point explicitly: “What is in question is whether natural selection is a necessary or sufficient force to explain the emergence of the genomic and cellular features central to the building of complex organisms.”…

When asked what he was in awe of, Christopher Hitchens responded that his definition of an educated person is that you have some idea how ignorant you are. This seems very much as if Hitchens were in awe of his own ignorance, in which case he has surely found an object worthy of his veneration.

*   *   *

Do read the whole thing. It will take you only a few hours. And it will remind you — as we badly need reminding these days — that sanity reigns in some corners of the universe.


Related posts:

Same Old Story, Same Old Song and Dance
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Thing about Science
Evolution and Religion
Words of Caution for Scientific Dogmatists
The Legality of Teaching Intelligent Design
Science, Logic, and God
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
The Big Bang and Atheism
Atheism, Religion, and Science Redux
Religion as Beneficial Evolutionary Adaptation
A Non-Believer Defends Religion
The Greatest Mystery
Landsburg Is Half-Right
Evolution, Human Nature, and “Natural Rights”
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Religion on the Left
Scientism, Evolution, and the Meaning of Life
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Some Thoughts about Evolution
Rationalism, Empiricism, and Scientific Knowledge
Fine-Tuning in a Wacky Wrapper
Beating Religion with the Wrong End of the Stick
Quantum Mechanics and Free Will
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
The Fragility of Knowledge
Altruism, One More Time
Religion, Creation, and Morality
Evolution, Intelligence, and Race

New Pages

In case you haven’t noticed the list in the right sidebar, I have converted several classic posts to pages, for ease of access. Some have new names; many combine several posts on the same subject:

Abortion Q & A

Climate Change

Constitution: Myths and Realities

Economic Growth Since World War II

Intelligence

Keynesian Multiplier: Fiction vs. Fact

Leftism

Movies

Spygate

Wildfires and “Climate Change”

Regarding the claim that there are more wildfires because of “climate change”:

In case the relationship isn’t obvious, here it is:

Estimates of the number of fires are from National Fire Protection Association, Number of Fires by Type of Fire. Specifically, the estimates are the sum of the columns for “Outside of Structures with Value Involved but no vehicle (outside storage crops, timber, etc)” and “Brush, Grass, Wildland (excluding crops and timber), with no value or loss involved”.

Estimates of the global temperature anomalies are annual averages of monthly satellite readings for the lower troposphere, published by the Earth System Science Center of the University of Alabama in Huntsville.
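The two-step recipe just described (sum the two NFPA columns for each year, then average the twelve monthly UAH readings into an annual anomaly) can be sketched in a few lines of Python. All numeric values below are placeholders, not actual NFPA or UAH figures:

```python
import statistics

# Placeholder annual values for the two NFPA columns (not the real data)
outside_fires = {2015: 610_000, 2016: 580_000, 2017: 630_000}
brush_fires   = {2015: 590_000, 2016: 620_000, 2017: 570_000}

# Step 1: the fire estimate for a year is the sum of the two columns
fires = {y: outside_fires[y] + brush_fires[y] for y in outside_fires}

# Placeholder monthly UAH lower-troposphere anomalies (degrees C)
uah_monthly = {2015: [0.18] * 12, 2016: [0.39] * 12, 2017: [0.31] * 12}

# Step 2: the annual anomaly is the mean of the twelve monthly readings
anomaly = {y: statistics.mean(months) for y, months in uah_monthly.items()}

# Step 3: pair the two series by year, ready for charting
paired = [(y, fires[y], anomaly[y]) for y in sorted(fires)]
```

With the real columns substituted in, `paired` is the year-by-year series that the chart plots.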

Not-So-Random Thoughts (XXII)

This is a long-overdue entry; the previous one was posted on October 4, 2017. Accordingly, it is a long entry, consisting of these parts:

Censorship and Left-Wing Bias on the Web

The Real Collusion Story

“Suicide” of the West

Evolution, Intelligence, and Race

Will the Real Fascists Please Stand Up?

Consciousness

Empathy Is Over-Rated

“Nudging”



CENSORSHIP AND LEFT-WING BIAS ON THE WEB

It’s a hot topic these days. See, for example, this, this, this, this, and this. Also, this, which addresses Google’s slanting of search results about climate research. YouTube is at it, too.

A lot of libertarian and conservative commentators are loath to demand governmental intervention because the censorship is being committed by private companies: Apple, Facebook, Google, Twitter, YouTube, et al. Some libertarians and conservatives are hopeful that libertarian-conservative alternatives will succeed (e.g., George Gilder). I am skeptical. I have seen and tried some of those alternatives, and they aren’t in the same league as the left-wing incumbents, which have pretty well locked up users and advertisers. (It’s called path-dependence.) And even if the alternatives finally snap up a respectable share of the information market, the damage will have been done; libertarians and conservatives will have been marginalized, criminalized, and suppressed.

The time to roll out the big guns is now, as I explain here:

Given the influence that Google and the other members of the left-wing information-technology oligarchy exert in this country, that oligarchy is tantamount to a state apparatus….

These information-entertainment-media-academic institutions are important components of what I call the vast left-wing conspiracy in America. Their purpose and effect is the subversion of the traditional norms that made America a uniquely free, prosperous, and vibrant nation….

What will happen in America if that conspiracy succeeds in completely overthrowing “bourgeois culture”? The left will frog-march America in whatever utopian direction captures its “feelings” (but not its reason) at the moment…

Complete victory for the enemies of liberty is only a few election cycles away. The squishy center of the American electorate — as is its wont — will swing back toward the Democrat Party. With a Democrat in the White House, a Democrat-controlled Congress, and a few party switches in the Supreme Court, the dogmas of the information-entertainment-media-academic complex will become the law of the land….

[It is therefore necessary to] enforce the First Amendment against the information-entertainment-media-academic complex. This would begin with action against high-profile targets (e.g., Google and a few large universities that accept federal money). That should be enough to bring the others into line. If it isn’t, keep working down the list until the miscreants cry uncle.

What kind of action do I have in mind?…

Executive action against state actors to enforce the First Amendment:

Amendment I to the Constitution says that “Congress shall make no law … abridging the freedom of speech”.

Major entities in the telecommunications, news, entertainment, and education industries have exerted their power to suppress speech because of its content. (See appended documentation.) The collective actions of these entities — many of them government-licensed and government-funded — effectively constitute a governmental violation of the Constitution’s guarantee of freedom of speech. (See Smith v. Allwright, 321 U.S. 649 (1944), and Marsh v. Alabama, 326 U.S. 501 (1946).)

And so on. Read all about it here.



THE REAL COLLUSION STORY

Not quite as hot, but still in the news, is Spygate: collusion among the White House, CIA, and FBI (a) to use the Trump-Russia collusion story to swing the 2016 election to Clinton and (b), failing that, to cripple Trump’s presidency and provide grounds for removing him from office. The latest twist in the story is offered by Byron York:

Emails in 2016 between former British spy Christopher Steele and Justice Department official Bruce Ohr suggest Steele was deeply concerned about the legal status of a Putin-linked Russian oligarch, and at times seemed to be advocating on the oligarch’s behalf, in the same time period Steele worked on collecting the Russia-related allegations against Donald Trump that came to be known as the Trump dossier. The emails show Steele and Ohr were in frequent contact, that they intermingled talk about Steele’s research and the oligarch’s affairs, and that Glenn Simpson, head of the dirt-digging group Fusion GPS that hired Steele to compile the dossier, was also part of the ongoing conversation….

The newly-released Ohr-Steele-Simpson emails are just one part of the dossier story. But if nothing else, they show that there is still much for the public to learn about the complex and far-reaching effort behind it.

My take is here. The post includes a long list of related — and enlightening — reading, to which I’ve just added York’s piece.



“SUICIDE” OF THE WEST

Less “newsy”, but a hot topic on the web a few weeks back, is Jonah Goldberg’s Suicide of the West. It received mixed reviews. It is also the subject of an excellent non-review by Hubert Collins.

Here’s my take:

The Framers held a misplaced faith in the Constitution’s checks and balances (see Madison’s Federalist No. 51 and Hamilton’s Federalist No. 81). The Constitution’s wonderful design — containment of a strictly limited central government through horizontal and vertical separation of powers — worked rather well until the Progressive Era. The design then cracked under the strain of greed and the will to power, as the central government began to impose national economic regulation at the behest of muckrakers and do-gooders. The design then broke during the New Deal, which opened the floodgates to violations of constitutional restraint (e.g., Medicare, Medicaid, Obamacare, the vast expansion of economic regulation, and the destruction of civilizing social norms), as the Supreme Court has enabled the national government to impose its will in matters far beyond its constitutional remit.

In sum, the “poison pill” baked into the nation at the time of the Founding is human nature, against which no libertarian constitution is proof unless it is enforced resolutely by a benign power.

See also my review essay on James Burnham’s Suicide of the West: An Essay on the Meaning and Destiny of Liberalism.



EVOLUTION, INTELLIGENCE, AND RACE

Evolution is closely related to and intertwined with intelligence and race. Two posts and a page of mine (here, here, and here) delve into some of the complexities. The latter of the two posts draws on David Stove’s critique of evolutionary theory, “So You Think You Are a Darwinian?”.

Fred Reed is far more entertaining than Stove, and no less convincing. His most recent columns on evolution are here and here. In the first of the two, he writes this:

What are some of the problems with official Darwinism? First, the spontaneous generation of life has not been replicated…. Nor has anyone assembled in the laboratory a chemical structure able to metabolize, reproduce, and thus to evolve. It has not been shown to be mathematically possible….

Sooner or later, a hypothesis must be either confirmed or abandoned. Which? When? Doesn’t science require evidence, reproducibility, demonstrated theoretical possibility? These do not exist….

Other serious problems with the official story: Missing intermediate fossils — “missing links” — stubbornly remain missing. “Punctuated equilibrium,” a theory of sudden rapid evolution invented to explain the lack of fossil evidence, seems unable to generate genetic information fast enough. Many proteins bear no resemblance to any others and therefore cannot have evolved from them. On and on.

Finally, the more complex an event, the less likely it is to occur by chance. Over the years, cellular mechanisms have been found to be ever more complex…. Recently, with the discovery of epigenetics, complexity has taken a great leap upward. (For anyone wanting to subject himself to such things, there is The Epigenetics Revolution. It is not light reading.)

Worth noting is that the mantra of evolutionists — that “in millions and millions and billions of years something must have evolved” — does not necessarily hold water. We have all heard of Sir James Jeans’s assertion that a monkey, typing randomly, would eventually produce all the books in the British Museum. (Actually he would not produce a single chapter in the accepted age of the universe, but never mind.) A strong case can be made that spontaneous generation is similarly of mathematically vanishing probability. If evolutionists could prove the contrary, they would immensely strengthen their case. They haven’t….

Suppose that you saw an actual monkey pecking at a keyboard and, on examining his output, saw that he was typing, page after page, The Adventures of Tom Sawyer, with no errors.

You would suspect fraud, for instance that the typewriter was really a computer programmed with Tom. But no, on inspection you find that it is a genuine typewriter. Well then, you think, the monkey must be a robot, with Tom in RAM. But this too turns out to be wrong: The monkey in fact is one. After exhaustive examination, you are forced to conclude that Bonzo really is typing at random.

Yet he is producing Tom Sawyer. This being impossible, you would have to conclude that something was going on that you did not understand.

Much of biology is similar. For a zygote, barely visible, to turn into a baby is astronomically improbable, a suicidal assault on Murphy’s Law. Reading embryology makes this apparent. (Texts are prohibitively expensive, but Life Unfolding serves.) Yet every step in the process is in accord with chemical principles.

This doesn’t make sense. Not, anyway, unless one concludes that something deeper is going on that we do not understand. This brings to mind several adages that might serve to ameliorate our considerable arrogance. As Haldane said, “The world is not only queerer than we think, but queerer than we can think.” Or Fred’s Principle, “The smartest of a large number of hamsters is still a hamster.”

We may be too full of ourselves.
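Reed’s aside about Jeans’s monkey is easy to check with back-of-the-envelope arithmetic. A minimal sketch in Python; the 27-key alphabet, chapter length, typing speed, and age of the universe are all assumed round numbers:

```python
import math

ALPHABET = 27               # assumed: 26 letters plus a space bar
CHAPTER_CHARS = 5_000       # assumed length of one short chapter
UNIVERSE_SECONDS = 4.3e17   # roughly 13.8 billion years
STROKES_PER_SECOND = 10     # a generously fast monkey

# log10 of the probability that one random attempt matches the chapter
log_p = -CHAPTER_CHARS * math.log10(ALPHABET)            # about -7157

# log10 of the number of attempts available since the Big Bang
log_attempts = math.log10(UNIVERSE_SECONDS * STROKES_PER_SECOND)  # about 18.6

# Expected successes = attempts * p; its log10 is hopelessly negative
log_expected = log_attempts + log_p
print(round(log_expected))
```

The expected number of successes comes out on the order of 10^-7138, which is why a single chapter, let alone the British Museum, is out of reach.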

On the subject of race, Fred is no racist, but he is a realist; for example:

We have black football players refusing to stand for the national anthem.  They think that young black males are being hunted down by cops. Actually of  course black males are hunting each other down in droves but black football players apparently have no objection to this. They do not themselves convincingly suffer discrimination. Where else can you get paid six million green ones a year for grabbing something and running? Maybe in a district of jewelers.

The non-standing is racial hostility to whites. The large drop in attendance of games, of television viewership, is racial blowback by whites. Millions of whites are thinking that, if America doesn’t suit them, football players can afford a ticket to Kenya. While this line of reasoning is tempting, it doesn’t really address the problem and so would be a waste of time.

But what, really, is the problem?

It is one that dare not raise its head: that blacks cannot compete with whites, Asians, or Latin-Americans. Is there counter-evidence? This leaves them in an incurable state of resentment and thus hostility. I think we all know this: Blacks know it, whites know it, liberals know it, and conservatives know it. If any doubt this, the truth would be easy enough to determine with carefully done tests. [Which have been done.] The furious resistance to the very idea of measuring intelligence suggests awareness of the likely outcome. You don’t avoid a test if you expect good results.

So we do nothing while things worsen and the world looks on astounded. We have mob attacks by Black Lives Matter, the never-ending Knockout Game, flash mobs looting stores and subway trains, occasional burning cities, and we do nothing. Which makes sense, because there is nothing to be done short of restructuring the country.

Absolute, obvious, unacknowledged disaster.

Regarding which: Do we really want, any of us, what we are doing? In particular, has anyone asked ordinary blacks, not black pols and race hustlers: “Do you really want to live among whites, or would you prefer a safe middle-class black neighborhood? Do your kids want to go to school with whites? If so, why? Do you want them to? Why? Would you prefer black schools to decide what and how to teach your children? Keeping whites out of it? Would you prefer having only black police in your neighborhood?”

And the big one: “Do you, and the people you actually know in your neighborhood, really want integration? Or is it something imposed on you by oreo pols and white ideologues?”

But these are things we must never think, never ask.

Which brings me to my most recent post about blacks and crime, which is here. As for restructuring the country, Lincoln saw what was needed.

The touchy matter of intelligence — its heritability and therefore its racial component — is never far from my thoughts. I commend to you Gregory Hood’s excellent piece, “Forbidden Research: How the Study of Intelligence is Crippled by Ideology“. Hood mentions some of the scientists whose work I have cited in my writings about intelligence and its racial component. See this page, for example, which gives links to several related posts and excerpts of relevant research about intelligence. (See also the first part of Fred Reed’s post “Darwin’s Vigilantes, Richard Sternberg, and Conventional Pseudoscience“.)

As for the racial component, my most recent post on the subject (which provides links to related posts) addresses the question “Why study race and intelligence?”. Here’s why:

Affirmative action and similar race-based preferences are harmful to blacks. But those preferences persist because most Americans do not understand that there are inherent racial differences that prevent blacks, on the whole, from doing as well as whites (and Asians) in school and in jobs that require above-average intelligence. But magical thinkers (like [Professor John] McWhorter) want to deny reality. He admits to being driven by hope: “I have always hoped the black–white IQ gap was due to environmental causes.”…

Magical thinking — which is rife on the left — plays into the hands of politicians, most of whom couldn’t care less about the truth. They just want the votes of those blacks who relish being told, time and again, that they are “down” because they are “victims”, and Big Daddy government will come to their rescue. But unless you are the unusual black of above-average intelligence, or the more usual black who has exceptional athletic skills, dependence on Big Daddy is self-defeating because (like a drug addiction) it only leads to more of the same. The destructive cycle of dependency can be broken only by willful resistance to the junk being peddled by cynical politicians.

It is for the sake of blacks that the truth about race and intelligence ought to be pursued — and widely publicized. If they read and hear the truth often enough, perhaps they will begin to realize that the best way to better themselves is to make the best of available opportunities instead of moaning about racism and relying on preferences and handouts.



WILL THE REAL FASCISTS PLEASE STAND UP?

I may puke if I hear Trump called a fascist one more time. As I observe here,

[t]he idea … that Trump is the new Hitler and WaPo [The Washington Post] and its brethren will keep us out of the gas chambers by daring to utter the truth (not) … is complete balderdash, inasmuch as WaPo and its ilk are enthusiastic hand-maidens of “liberal” fascism.

“Liberals” who call conservatives “fascists” are simply engaging in psychological projection. This is a point that I address at length here.

As for Mr. Trump, I call on Shawn Mitchell:

A lot of public intellectuals and writers are pushing an alarming thesis: President Trump is a menace to the American Republic and a threat to American liberties. The criticism is not exclusively partisan; it’s shared by prominent conservatives, liberals, and libertarians….

Because so many elites believe Trump should be impeached, or at least shunned and rendered impotent, it’s important to agree on terms for serious discussion. Authoritarian means demanding absolute obedience to a designated authority. It means that somewhere, someone, has unlimited power. Turning the focus to Trump, after 15 months in office, it’s impossible to assign him any of those descriptions….

…[T]here are no concentration camps or political arrests. Rather, the #Resistance ranges from fervent to rabid. Hollywood and media’s brightest stars regularly gather at galas to crudely declare their contempt for Trump and his deplorable supporters. Academics and reporters lodged in elite faculty lounges and ivory towers regularly malign his brains, judgment, and temperament. Activists gather in thousands on the streets to denounce Trump and his voters. None of these people believe Trump is an autocrat, or, if they do they are ignorant of the word’s meaning. None fear for their lives, liberty, or property.

Still, other elites pile on. Federal judges provide legal backup, contriving frivolous theories to block administration moves. Some rule Trump lacks even the authority to undo by executive order things Obama himself introduced by executive order. Governors from states like California, Oregon and New York announce they will not cooperate with administration policy (current law, really) on immigration, the environment, and other issues.

Amidst such widespread rebellion, waged with impunity against the constitutionally elected president, the critics’ dark warnings that America faces a dictator are more than wrong; they are surreal and damnable. They are what amounts to the howl of that half the nation still refusing to accept election results it dislikes.

Conceding Trump lacks an inmate or body count, critics still offer theories to categorize him in genus monsterus. The main arguments cite Trump’s patented belligerent personality and undisciplined tweets, his use of executive orders, his alleged obstruction in firing James Comey and criticizing Robert Mueller, his blasts at the media, and his immigration policies. These attacks weigh less than the paper they might be printed on.

Trump’s personality doubtless is sui generis for national office. If he doesn’t occasionally offend listeners they probably aren’t listening. But so what? Personality is not policy. A sensibility is not a platform, and bluster and spittle are not coercive state action. The Human Jerk-o-meter could measure Trump in the 99th percentile, and the effect would not change one law, eliminate one right, or jail one critic.

Executive Orders are misunderstood. All modern presidents used them. There is nothing wrong in concept with executive orders. Some are constitutional, some are not. What matters is whether they direct executive priorities within U.S. statutes or try to push authority beyond the law to change the rights and duties of citizens. For example, a president might order the EPA to focus on the Clean Air Act more than the Clean Water Act, or vice versa. That is fine. But, if a president orders the EPA to regulate how much people can water their lawns or what kind of lawns to plant, the president is trying to legislate and create new controls. That is unconstitutional.

Many of Obama’s executive orders were transgressive and unconstitutional. Most of Trump’s executive orders are within the law, and constitutional. However that debate turns out, though, it is silly to argue the issue implicates authoritarianism.

The partisan arguments over Trump’s response to the special counsel also miss key points. Presidents have authority to fire subordinates. The recommendation authored by Deputy Attorney General Rod Rosenstein provides abundant reason for Trump to have fired James Comey, who increasingly is seen as a bitter anti-Trump campaigner. As for Robert Mueller, criticizing is not usurping. Mueller’s investigation continues, but now readily is perceived as a target shoot, unmoored from the original accusations about Russia, in search of any reason to draw blood from Trump. Criticizing that is not dictatorial, it is reasonable.

No doubt Trump criticizes the media more than many modern presidents. But criticism is not oppression. It attacks not freedom of the press but the credibility of the press. That is civically uncomfortable, but the fact is, the war of words between Trump and the media is mutual. The media attacks Trump constantly, ferociously and very often inaccurately as Mollie Hemingway and Glenn Greenwald document from different political perspectives. Trump fighting back is not asserting government control. It is just challenging media assumptions and narratives in a way no president ever has. Reporters don’t like it, so they call it oppression. They are crybabies.

Finally, the accusation that Trump wants to enforce the border under current U.S. laws, as well as better vet immigration from a handful of failed states in the Middle East with significant militant activity hardly makes him a tyrant. Voters elected Trump to step up border enforcement. Scrutinizing immigrants from a handful of countries with known terrorist networks is not a “Muslim ban.” The idea insults the intelligence since there are about 65 majority Muslim countries the order does not touch.

Trump is not Hitler. Critics’ attacks are policy disputes, not examples of authoritarianism. The debate is driven by sore losers who are willing to erode norms that have preserved the republic for 240 years.

Amen.



CONSCIOUSNESS

For a complete change of pace I turn to a post by Bill Vallicella about consciousness:

This is an addendum to Thomas Nagel on the Mind-Body Problem. In that entry I set forth a problem in the philosophy of mind, pouring it into the mold of an aporetic triad:

1) Conscious experience is not an illusion.

2) Conscious experience has an essentially subjective character that purely physical processes do not share.

3) The only acceptable explanation of conscious experience is in terms of physical properties alone.

Note first that the three propositions are collectively inconsistent: they cannot all be true.  Any two limbs entail the negation of the remaining one. Note second that each limb exerts a strong pull on our acceptance. But we cannot accept them all because they are logically incompatible.

This is one hard nut to crack.  So hard that many, following David Chalmers, call it, or something very much like it, the Hard Problem in the philosophy of mind.  It is so hard that it drives some into the loony bin. I am thinking of Daniel Dennett and those who have the chutzpah to deny (1)….

Sophistry aside, we either reject (2) or we reject (3).  Nagel and I accept (1) and (2) and reject (3). Those of a  scientistic stripe accept (1) and (3) and reject (2)….

I conclude that if our aporetic triad has a solution, the solution is by rejecting (3).

Vallicella reaches his conclusion by subtle argumentation, which I will not attempt to parse in this space.

My view is that (2) is false because the subjective character of conscious experience is an illusion that arises from the physical properties of the central nervous system. Consciousness itself is not an illusion. I accept (1) and (3). For more, see this and this.



EMPATHY IS OVER-RATED

Andrew Scull addresses empathy:

The basic sense in which most of us use “empathy” is analogous to what Adam Smith called “sympathy”: the capacity we possess (or can develop) to see the world through the eyes of another, to “place ourselves in his situation . . . and become in some measure the same person with him, and thence from some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them”….

In making moral choices, many would claim that empathy in this sense makes us more likely to care about others and to consider their interests when choosing our own course of action….

Conversely, understanding others’ feelings doesn’t necessarily lead one to treat them better. On the contrary: the best torturers are those who can anticipate and intuit what their victims most fear, and tailor their actions accordingly. Here, Bloom effectively invokes the case of Winston Smith’s torturer O’Brien in Orwell’s Nineteen Eighty-four, who is able to divine the former’s greatest dread, his fear of rats, and then use it to destroy him.

Guest blogger L.P. addressed empathy in several posts: here, here, here, here, here, and here. This is from the fourth of those posts:

Pro-empathy people think less empathetic people are “monsters.” However, as discussed in part 2 of this series, Baron-Cohen, Kevin Dutton in The Wisdom of Psychopaths, and other researchers establish that empathetic people, particularly psychopaths who have both affective and cognitive empathy, can be “monsters” too.

In fact, Kevin Dutton’s point about psychopaths generally being able to blend in and take on the appearance of the average person makes it obvious that they must have substantial emotional intelligence (linked to cognitive empathy) and experience of others’ feelings in order to mirror others so well….

Another point to consider however, as mentioned in part 1, is that those who try to empathize with others by imagining how they would experience another’s situation aren’t truly empathetic. They’re just projecting their own feelings onto others. This brings to mind Jonathan Haidt’s study on morality and political orientation. On the “Identification with All of Humanity Scale,” liberals most strongly endorsed the dimension regarding identification with “everyone around the world.” (See page 25 of “Understanding Libertarian Morality: The psychological roots of an individualist ideology.”) How can anyone empathize with billions of persons about whom one knows nothing, and a great number of whom are anything but liberal?

Haidt’s finding is a terrific example of problems with self-evaluation and self-reported data – liberals overestimating themselves in this case. I’m not judgmental about not understanding everyone in the world. There are plenty of people I don’t understand either. However, I don’t think people who overestimate their ability to understand people should be in a position that allows them to tamper with, or try to “improve,” the lives of people they don’t understand….

I conclude by quoting C. Daniel Batson who acknowledges the prevailing bias when it comes to evaluating altruism as a virtue. This is from his paper, “Empathy-Induced Altruistic Motivation,” written for the Inaugural Herzliya Symposium on Prosocial Motives, Emotions, and Behavior:

[W]hereas there are clear social sanctions against unbridled self-interest, there are not clear sanctions against altruism. As a result, altruism can at times pose a greater threat to the common good than does egoism.



“NUDGING”

I have addressed Richard Thaler and Cass Sunstein’s “libertarian” paternalism and “nudging” in many posts. (See this post, the list at the bottom of it, and this post.) Nothing that I have written — clever and incisive as it may be — rivals Deirdre McCloskey’s take on Thaler’s non-Nobel prize, “The Applied Theory of Bossing”:

Thaler is distinguished but not brilliant, which is par for the course. He works on “behavioral finance,” the study of mistakes people make when they talk to their stock broker. He can be counted as the second winner for “behavioral economics,” after the psychologist Daniel Kahneman. His prize was for the study of mistakes people make when they buy milk….

Once Thaler has established that you are in myriad ways irrational it’s much easier to argue, as he has, vigorously—in his academic research, in popular books, and now in a column for The New York Times—that you are too stupid to be treated as a free adult. You need, in the coinage of Thaler’s book, co-authored with the law professor and Obama adviser Cass Sunstein, to be “nudged.” Thaler and Sunstein call it “libertarian paternalism.”*…

Wikipedia lists fully 257 cognitive biases. In the category of decision-making biases alone there are anchoring, the availability heuristic, the bandwagon effect, the baseline fallacy, choice-supportive bias, confirmation bias, belief-revision conservatism, courtesy bias, and on and on. According to the psychologists, it’s a miracle you can get across the street.

For Thaler, every one of the biases is a reason not to trust people to make their own choices about money. It’s an old routine in economics. Since 1848, one expert after another has set up shop finding “imperfections” in the market economy that Smith and Mill and Bastiat had come to understand as a pretty good system for supporting human flourishing….

How to convince people to stand still for being bossed around like children? Answer: Persuade them that they are idiots compared with the great and good in charge. That was the conservative yet socialist program of Kahneman, who won the 2002 Nobel as part of a duo that included an actual economist named Vernon Smith…. It is Thaler’s program, too.

Like with the psychologist’s list of biases, though, nowhere has anyone shown that the imperfections in the market amount to much in damaging the economy overall. People do get across the street. Income per head since 1848 has increased by a factor of 20 or 30….

The amiable Joe Stiglitz says that whenever there is a “spillover” — my ugly dress offending your delicate eyes, say — the government should step in. A Federal Bureau of Dresses, rather like the one Saudi Arabia has. In common with Thaler and Krugman and most other economists since 1848, Stiglitz does not know how much his imagined spillovers reduce national income overall, or whether the government is good at preventing the spill. I reckon it’s about as good as the Army Corps of Engineers was in Katrina.

Thaler, in short, melds the list of psychological biases with the list of economic imperfections. It is his worthy scientific accomplishment. His conclusion, unsupported by evidence?

It’s bad for us to be free.

CORRECTION: Due to an editing error, an earlier version of this article referred to Thaler’s philosophy as “paternalistic libertarianism.” The correct term is “libertarian paternalism.”

No, the correct term is paternalism.

I will end on that note.

The Pretence of Knowledge

Updated, with links to a related article and additional posts, and republished.

Friedrich Hayek, in his Nobel Prize lecture of 1974, “The Pretence of Knowledge,” observes that

the great and rapid advance of the physical sciences took place in fields where it proved that explanation and prediction could be based on laws which accounted for the observed phenomena as functions of comparatively few variables.

Hayek’s particular target was the scientism then (and still) rampant in economics. In particular, there was (and is) a quasi-religious belief in the power of central planning (e.g., regulation, “stimulus” spending, control of the money supply) to attain outcomes superior to those that free markets would yield.

But, as Hayek says in closing,

There is danger in the exuberant feeling of ever growing power which the advance of the physical sciences has engendered and which tempts man to try, “dizzy with success” … to subject not only our natural but also our human environment to the control of a human will. The recognition of the insuperable limits to his knowledge ought indeed to teach the student of society a lesson of humility which should guard him against becoming an accomplice in men’s fatal striving to control society – a striving which makes him not only a tyrant over his fellows, but which may well make him the destroyer of a civilization which no brain has designed but which has grown from the free efforts of millions of individuals.

I was reminded of Hayek’s observations by John Cochrane’s post, “Groundhog Day” (The Grumpy Economist, May 11, 2014), wherein Cochrane presents this graph:

The Fed’s forecasting models are broken

Cochrane adds:

Every serious forecast looked like this — Fed, yes, but also CBO, private forecasters, and the term structure of forward rates. Everyone has expected bounce-back growth and rise in interest rates to start next year, for the last 6 years. And every year it has not happened. Welcome to the slump. Every year, Sonny and Cher wake us up, and it’s still cold, and it’s still grey. But we keep expecting spring tomorrow.

Whether the corrosive effects of government microeconomic and regulatory policy, or a failure of those (unprintable adjectives) Republicans to just vote enough wasted-spending Keynesian stimulus, or a failure of the Fed to buy another $3 trillion of bonds, the question of the day really should be why we have this slump — which, let us be honest, no serious forecaster expected.

(I add the “serious forecaster” qualification on purpose. I don’t want to hear randomly mined quotes from bloviating prognosticators who got lucky once, and don’t offer a methodology or a track record for their forecasts.)

The Fed’s forecasting models are nothing more than sophisticated charlatanism — a term that Hayek applied to pseudo-scientific endeavors like macroeconomic modeling. Nor is charlatanism confined to economics and the other social “sciences.” It’s rampant in climate “science,” as Roy Spencer has shown. Consider, for example, this graph from Spencer’s post, “95% of Climate Models Agree: The Observations Must Be Wrong” (Roy Spencer, Ph.D., February 7, 2014):

95% of climate models agree: the observations must be wrong

Spencer has a lot more to say about the pseudo-scientific aspects of climate “science.” This example is from “Top Ten Good Skeptical Arguments” (May 1, 2014):

1) No Recent Warming. If global warming science is so “settled”, why did global warming stop over 15 years ago (in most temperature datasets), contrary to all “consensus” predictions?

2) Natural or Manmade? If we don’t know how much of the warming in the longer term (say last 50 years) is natural, then how can we know how much is manmade?

3) IPCC Politics and Beliefs. Why does it take a political body (the IPCC) to tell us what scientists “believe”? And when did scientists’ “beliefs” translate into proof? And when was scientific truth determined by a vote…especially when those allowed to vote are from the Global Warming Believers Party?

4) Climate Models Can’t Even Hindcast. How did climate modelers, who already knew the answer, still fail to explain the lack of a significant temperature rise over the last 30+ years? In other words, how do you botch a hindcast?

5) …But We Should Believe Model Forecasts? Why should we believe model predictions of the future, when they can’t even explain the past?

6) Modelers Lie About Their “Physics”. Why do modelers insist their models are based upon established physics, but then hide the fact that the strong warming their models produce is actually based upon very uncertain “fudge factor” tuning?

7) Is Warming Even Bad? Who decided that a small amount of warming is necessarily a bad thing?

8) Is CO2 Bad? How did carbon dioxide, necessary for life on Earth and only 4 parts in 10,000 of our atmosphere, get rebranded as some sort of dangerous gas?

9) Do We Look that Stupid? How do scientists expect to be taken seriously when their “theory” is supported by both floods AND droughts? Too much snow AND too little snow?

10) Selective Pseudo-Explanations. How can scientists claim that the Medieval Warm Period (which lasted hundreds of years), was just a regional fluke…yet claim the single-summer (2003) heat wave in Europe had global significance?

11) (Spinal Tap bonus) Just How Warm is it, Really? Why is it that every subsequent modification/adjustment to the global thermometer data leads to even more warming? What are the chances of that? Either a warmer-still present, or cooling down the past, both of which produce a greater warming trend over time. And none of the adjustments take out a gradual urban heat island (UHI) warming around thermometer sites, which likely exists at virtually all of them — because no one yet knows a good way to do that.

It is no coincidence that leftists believe in the efficacy of central planning and cling tenaciously to a belief in catastrophic anthropogenic global warming. The latter justifies the former, of course. And both beliefs exemplify the left’s penchant for magical thinking, about which I’ve written several times (e.g., here, here, here, here, and here).

Magical thinking is the pretense of knowledge in the nth degree. It conjures “knowledge” from ignorance and hope. And no one better exemplifies magical thinking than our hopey-changey president.


Related reading: Walter E. Williams, “The Experts Have Been Wrong About a Lot of Things, Here’s a Sample“, The Daily Signal, July 25, 2018

Related posts:
Modeling Is Not Science
The Left and Its Delusions
Economics: A Survey
AGW: The Death Knell
The Keynesian Multiplier: Phony Math
Modern Liberalism as Wishful Thinking
“The Science Is Settled”
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
Bayesian Irrationality
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Babe Ruth and the Hot-Hand Hypothesis
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable
A (Long) Footnote about Science
The Balderdash Chronicles
The Probability That Something Will Happen
Analytical and Scientific Arrogance

Analytical and Scientific Arrogance

It is customary in democratic countries to deplore expenditures on armaments as conflicting with the requirements of the social services. There is a tendency to forget that the most important social service that a government can do for its people is to keep them alive and free.

Marshal of the Royal Air Force Sir John Slessor, Strategy for the West

I’m returning to the past to make a timeless point: Analysis is a tool of decision-making, not a substitute for it.

That’s a point to which every analyst will subscribe, just as every judicial candidate will claim to revere the Constitution. But analysts past and present have tended to read their policy preferences into their analytical work, just as too many judges read their political preferences into the Constitution.

What is an analyst? Someone whose occupation requires him to gather facts bearing on an issue, discern robust relationships among the facts, and draw conclusions from those relationships.

Many professionals — from economists to physicists to so-called climate scientists — are more or less analytical in the practice of their professions. That is, they are not just seeking knowledge, but seeking to influence policies which depend on that knowledge.

There is also in this country (and in the West, generally) a kind of person who is an analyst first and a disciplinary specialist second (if at all). Such a person brings his pattern-seeking skills to the problems facing decision-makers in government and industry. Depending on the kinds of issues he addresses or the kinds of techniques that he deploys, he may be called a policy analyst, operations research analyst, management consultant, or something of that kind.

It is one thing to say, as a scientist or analyst, that a certain option (a policy, a system, a tactic) is probably better than the alternatives, when judged against a specific criterion (most effective for a given cost, most effective against a certain kind of enemy force). It is quite another thing to say that the option is the one that the decision-maker should adopt. The scientist or analyst is looking a small slice of the world; the decision-maker has to take into account things that the scientist or analyst did not (and often could not) take into account (economic consequences, political feasibility, compatibility with other existing systems and policies).

It is (or should be) unconscionable for a scientist or analyst to state or imply that he has the “right” answer. But the clever arguer avoids coming straight out with the “right” answer; instead, he slants his presentation in a way that makes the “right” answer seem right.

A classic case in point is the hysteria surrounding the increase in “global” temperature in the latter part of the 20th century, and the coincidence of that increase with the rise in CO2. I have had much to say about the hysteria and the pseudo-science upon which it is based. (See links at the end of this post.) Here, I will take as a case study an event to which I was somewhat close: the treatment of the Navy’s proposal, made in the early 1980s, for an expansion to what was conveniently characterized as the 600-ship Navy. (The expansion would have involved personnel, logistics systems, ancillary war-fighting systems, stockpiles of parts and ammunition, and aircraft of many kinds — all in addition to a 25-percent increase in the number of ships in active service.)

The usual suspects, of an ilk I profiled here, wasted no time in making the 600-ship Navy seem like a bad idea. Of the many studies and memos on the subject, two by the Congressional Budget Office stand out as exemplars of slanted analysis by innuendo: “Building a 600-Ship Navy: Costs, Timing, and Alternative Approaches” (March 1982), and “Future Budget Requirements for the 600-Ship Navy: Preliminary Analysis” (April 1985). What did the “whiz kids” at CBO have to say about the 600-ship Navy? Here are excerpts of the concluding sections:

The Administration’s five-year shipbuilding plan, containing 133 new construction ships and estimated to cost over $80 billion in fiscal year 1983 dollars, is more ambitious than previous programs submitted to the Congress in the past few years. It does not, however, contain enough ships to realize the Navy’s announced force level goals for an expanded Navy. In addition, this plan—as has been the case with so many previous plans—has most of its ships programmed in the later out-years. Over half of the 133 new construction ships are programmed for the last two years of the five-year plan. Achievement of the Navy’s expanded force level goals would require adhering to the out-year building plans and continued high levels of construction in the years beyond fiscal year 1987. [1982 report, pp. 71-72]

Even the budget increases estimated here would be difficult to achieve if history is a guide. Since the end of World War II, the Navy has never sustained real increases in its budget for more than five consecutive years. The sustained 15-year expansion required to achieve and sustain the Navy’s present plans would result in a historic change in budget trends. [1985 report, p. 26]

The bias against the 600-ship Navy drips from the pages. The “argument” goes like this: If it hasn’t been done, it can’t be done and, therefore, shouldn’t be attempted. Why not? Because the analysts at CBO were a breed of cat that emerged in the 1960s, when Robert Strange McNamara and his minions used simplistic analysis (“tablesmanship”) to play “gotcha” with the military services:

We [I was one of the minions] did it because we were encouraged to do it, though not in so many words. And we got away with it, not because we were better analysts — most of our work was simplistic stuff — but because we usually had the last word. (Only an impassioned personal intercession by a service chief might persuade McNamara to go against SA [the Systems Analysis office run by Alain Enthoven] — and the key word is “might.”) The irony of the whole process was that McNamara, in effect, substituted “civilian judgment” for oft-scorned “military judgment.” McNamara revealed his preference for “civilian judgment” by elevating Enthoven and SA a level in the hierarchy in 1965, even though (or perhaps because) the services and JCS had been open in their disdain of SA and its snotty young civilians.

In the case of the 600-ship Navy, civilian analysts did their best to derail it by sending the barely disguised message that it was “unaffordable”. I was reminded of this “insight” by a colleague of long-standing who recently proclaimed that “any half-decent cost model would show a 600-ship Navy was unsustainable into this century.” How could a cost model show such a thing when the sustainability (affordability) of defense is a matter of political will, not arithmetic?

Defense spending fluctuates as a function of perceived necessity. Consider, for example, this graph (misleadingly labeled “Recent Defense Spending”) from usgovernmentspending.com, which shows defense spending as a percentage of GDP for fiscal year (FY) 1792 to FY 2017:

What was “unaffordable” before World War II suddenly became affordable. And so it has gone throughout the history of the republic. Affordability (or sustainability) is a political issue, not a line drawn in the sand by a smart-ass analyst who gives no thought to the consequences of spending too little on defense.

I will now zoom in on the era of interest.

CBO’s “Building a 600-Ship Navy: Costs, Timing, and Alternative Approaches”, which crystallized opposition to the 600-ship Navy, estimates the long-run, annual obligational authority required to sustain a 600-ship Navy (of the Navy’s design) to be about 20-percent higher in constant dollars than the FY 1982 Navy budget. (See Options I and II in Figure 2, p. 50.) The long run would have begun around FY 1994, following several years of higher spending associated with the buildup of forces. I don’t have a historical breakdown of the Department of Defense (DoD) budget by service, but I found values for all-DoD spending on military programs at Office of Management and Budget Historical Tables. Drawing on Tables 5.2 and 10.1, I constructed a constant-dollar index of DoD’s obligational authority (FY 1982 = 1):

FY Index
1983 1.08
1984 1.13
1985 1.21
1986 1.17
1987 1.13
1988 1.11
1989 1.10
1990 1.07
1991 0.97
1992 0.97
1993 0.90
1994 0.82
1995 0.82
1996 0.80
1997 0.80
1998 0.79
1999 0.84
2000 0.86
2001 0.92
2002 0.98
2003 1.23
2004 1.29
2005 1.28
2006 1.36
2007 1.50
2008 1.65
2009 1.61
2010 1.66
2011 1.62
2012 1.51
2013 1.32
2014 1.32
2015 1.25
2016 1.29
2017 1.34
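The arithmetic behind an index like this is simple: deflate each year’s nominal obligational authority to constant dollars, then divide by the base-year (FY 1982) constant-dollar value. A minimal sketch follows; the numbers in it are hypothetical placeholders for illustration only, not the actual figures from OMB Tables 5.2 and 10.1.

```python
# Sketch: constant-dollar index of budget authority, normalized to a base
# fiscal year (FY 1982 = 1). Inputs are nominal budget authority and a
# price deflator by fiscal year. The sample values below are hypothetical.

def constant_dollar_index(nominal: dict[int, float],
                          deflator: dict[int, float],
                          base_year: int = 1982) -> dict[int, float]:
    """Real (deflated) budget authority for each year, indexed to base_year = 1."""
    base_real = nominal[base_year] / deflator[base_year]
    return {fy: (nominal[fy] / deflator[fy]) / base_real for fy in nominal}

# Hypothetical inputs for illustration only (not OMB's actual values):
nominal = {1982: 213.0, 1983: 245.0, 1984: 265.0}   # $ billions, nominal
deflator = {1982: 1.00, 1983: 1.05, 1984: 1.09}     # price index, FY 1982 = 1
index = constant_dollar_index(nominal, deflator)
print({fy: round(v, 2) for fy, v in index.items()})
```

Nothing more than division is involved, which is the point: the index measures what was spent, and says nothing about what could have been spent had the political will existed.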

There was no inherent reason that defense spending couldn’t have remained on the trajectory of the middle 1980s. The slowdown of the late 1980s was a reflection of improved relations between the U.S. and USSR. Those improved relations had much to do with the Reagan defense buildup, of which the goal of attaining a 600-ship Navy was an integral part.

The Reagan buildup helped to convince Soviet leaders (Gorbachev in particular) that trying to keep pace with the U.S. was futile and (actually) unaffordable. The rest — the end of the Cold War and the dissolution of the USSR — is history. The buildup, in other words, sowed the seeds of its own demise. But that couldn’t have been predicted with certainty in the early-to-middle 1980s, when CBO and others were doing their best to undermine political support for more defense spending. Had CBO and the other nay-sayers succeeded in their aims, the Cold War and the USSR might still be with us.

The defense drawdown of the mid-1990s was a deliberate response to the end of the Cold War and lack of other serious threats, not a historical necessity. It was certainly not on the table in the early 1980s, when the 600-ship Navy was being pushed. Had the Cold War not thawed and ended, there is no reason that U.S. defense spending couldn’t have continued at the pace of the middle 1980s, or higher. As is evident in the index values for recent years, even after drastic force reductions in Iraq, defense spending is now about one-third higher than it was in FY 1982.

John Lehman, Secretary of the Navy from 1981 to 1987, was rightly incensed that analysts — some of them on his payroll as civilian employees and contractors — were, in effect, undermining a deliberate strategy of pressing against a key Soviet weakness — the unsustainability of its defense strategy. There was much lamentation at the time about Lehman’s “war” on the offending parties, one of which was the think-tank for which I then worked. I can now admit openly that I was sympathetic to Lehman and offended by the arrogance of analysts who believed that it was their job to suggest that spending more on defense was “unaffordable”.

When I was a young analyst I was handed a pile of required reading material. One of the items was Methods of Operations Research, by Philip M. Morse and George E. Kimball. Morse, in the early months of America’s involvement in World War II, founded the civilian operations-research organization from which my think-tank evolved. Kimball was a leading member of that organization. Their book is notable not just as a compendium of analytical methods that were applied, with much success, to the war effort. It is also introspective — and properly humble — about the power and role of analysis.

Two passages, in particular, have stuck with me for the more than 50 years since I first read the book. Here is one of them:

[S]uccessful application of operations research usually results in improvements by factors of 3 or 10 or more…. In our first study of any operation we are looking for these large factors of possible improvement…. They can be discovered if the [variables] are given only one significant figure,…any greater accuracy simply adds unessential detail.

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. [p. 38]
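The arithmetic of “hemibel thinking” is worth making explicit: a bel is a factor of 10 on a logarithmic scale, so a hemibel (half a bel) is a factor of the square root of 10, roughly 3.16, and an improvement ratio converts to hemibels as twice its base-10 logarithm. A minimal sketch (the function name is mine, not Morse and Kimball’s):

```python
import math

# "Hemibel thinking" (Morse & Kimball): a bel is a factor of 10 on a
# logarithmic scale; a hemibel is half a bel, i.e. a factor of
# sqrt(10) ~= 3.16. An improvement ratio expressed in hemibels:
def hemibels(ratio: float) -> float:
    """Number of hemibels corresponding to a multiplicative ratio."""
    return 2 * math.log10(ratio)

# A factor-of-3 improvement is about one hemibel; a factor of 10 is two.
print(round(hemibels(3), 2))             # roughly 1 hemibel
print(round(hemibels(10), 2))            # exactly 2 hemibels
print(round(hemibels(math.sqrt(10)), 2)) # exactly 1 hemibel
```

The coarseness is deliberate: if you are hunting for factor-of-3-or-10 improvements, a second significant figure in the inputs adds nothing but false precision.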

Morse and Kimball — two brilliant scientists and analysts, who had worked with actual data (pardon the redundancy) about combat operations — counseled against making too much of quantitative estimates given the uncertainties inherent in combat. But, as I have seen over the years, analysts eager to “prove” something nevertheless make a huge deal out of minuscule differences in quantitative estimates — estimates based not on actual combat operations but on theoretical values derived from models of systems and operations yet to see the light of day. (I also saw, and still see, too much “analysis” about soft subjects, such as domestic politics and international relations. The amount of snake oil emitted by “analysts” — sometimes called scholars, journalists, pundits, and commentators — would fill the Great Lakes. Their perceptions of reality have an uncanny way of supporting their unabashed decrees about policy.)

The second memorable passage from Methods of Operations Research goes directly to the point of this post:

Operations research done separately from an administrator in charge of operations becomes an empty exercise. [p. 10].

In the case of CBO and other opponents of the 600-ship Navy, substitute “cost estimate” for “operations research”, “responsible defense official” for “administrator in charge”, and “strategy” for “operations”. The principle is the same: The CBO and its ilk knew the price of the 600-ship Navy, but had no inkling of its value.

Too many scientists and analysts want to make policy. On the evidence of my close association with scientists and analysts over the years — including a stint as an unsparing reviewer of their products — I would say that they should learn to think clearly before they inflict their views on others. But too many of them — even those with Ph.D.s in STEM disciplines — are incapable of thinking clearly, and more than capable of slanting their work to support their biases. Exhibit A: Michael Mann, James Hansen (more), and their co-conspirators in the catastrophic-anthropogenic-global-warming scam.


Related posts:
The Limits of Science
How to View Defense Spending
Modeling Is Not Science
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
The McNamara Legacy: A Personal Perspective
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
Verbal Regression Analysis, the “End of History,” and Think-Tanks
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
AGW in Austin? (II)
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Words Fail Us
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Babe Ruth and the Hot-Hand Hypothesis
Hurricane Hysteria
Deduction, Induction, and Knowledge
Much Ado about the Unknown and Unknowable
A (Long) Footnote about Science
Further Thoughts about Probability
Climate Scare Tactics: The Mythical Ever-Rising 30-Year Average
A Grand Strategy for the United States