Another Thought about “Darkest Hour”

I said recently about Darkest Hour that

the film, despite Gary Oldman’s deservedly acclaimed, Oscar-winning impersonation of Winston Churchill, earned a rating of 7 from me. It was an entertaining film, but a rather trite offering of Hollywoodized history.

There was a subtle aspect of the film which led me to believe that Churchill’s firm stance against a negotiated peace with Hitler had more support from the Labour Party than from Churchill’s Conservative colleagues. So I went to Wikipedia, which says this (among many things) in a discussion of the film’s historical accuracy:

In The New Yorker, Adam Gopnik wrote: “…in late May of 1940, when the Conservative grandee Lord Halifax challenged Churchill, insisting that it was still possible to negotiate a deal with Hitler, through the good offices of Mussolini, it was the steadfast anti-Nazism of Attlee and his Labour colleagues that saved the day – a vital truth badly underdramatized in the current Churchill-centric film, Darkest Hour“. This criticism was echoed by Adrian Smith, emeritus professor of modern history at the University of Southampton, who wrote in the New Statesman that the film was “yet again overlooking Labour’s key role at the most dangerous moment in this country’s history … in May 1940 its leaders gave Churchill the unequivocal support he needed when refusing to surrender. Ignoring Attlee’s vital role is just one more failing in a deeply flawed film”.

I thought that, if anything, the film did portray Labour as more steadfast than the Tories. First, the Conservatives (especially Halifax and Neville Chamberlain) were made to seem derisive of Churchill and all-too-willing to compromise with Hitler. Second — and here’s the subtlety — at the end of Churchill’s speech to the House of Commons on June 4, 1940, which is made the climactic scene in Darkest Hour, the Labour side of the House erupts in enthusiastic applause, while the Conservative side is subdued until it follows Labour’s lead.

The final lines of Churchill’s speech are always worth repeating:

Even though large tracts of Europe and many old and famous States have fallen or may fall into the grip of the Gestapo and all the odious apparatus of Nazi rule, we shall not flag or fail. We shall go on to the end, we shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our Island, whatever the cost may be, we shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender, and even if, which I do not for a moment believe, this Island or a large part of it were subjugated and starving, then our Empire beyond the seas, armed and guarded by the British Fleet, would carry on the struggle, until, in God’s good time, the New World, with all its power and might, steps forth to the rescue and the liberation of the old.

If G.W. Bush could have been as adamant in his opposition to the enemy (instead of pandering to the “religion of peace”), and as eloquent in his speech to Congress after 9/11 and at subsequent points in the ill-executed “war on terror”, there might now be a Pax Americana in the Middle East.

(See also “September 20, 2001: Hillary Clinton Signals the End of ‘Unity’“, “The War on Terror As It Should Have Been Fought“, and “A Rearview Look at the Invasion of Iraq and the War on Terror“.)

Analysis vs. Reality

In my days as a defense analyst I often encountered military officers who were skeptical about the ability of civilian analysts to draw valid conclusions from mathematical models about the merits of systems and tactics. It took me several years to understand and agree with their position. My growing doubts about the power of quantitative analysis of military matters culminated in a paper where I wrote that

combat is not a mathematical process…. One may describe the outcome of combat mathematically, but it is difficult, even after the fact, to determine the variables that made a difference in the outcome.

Much as we would like to fold the many different parameters of a weapon, a force, or a strategy into a single number, we cannot. An analyst’s notion of which variables matter and how they interact is no substitute for data. Such data as exist, of course, represent observations of discrete events — usually peacetime events. It remains for the analyst to calibrate the observations, but without a benchmark to go by. Calibration by past battles is a method of reconstruction — of cutting one of several coats to fit a single form — but not a method of validation.

Lacking pertinent data, an analyst is likely to resort to models of great complexity. Thus, if useful estimates of detection probabilities are unavailable, the detection process is modeled; if estimates of the outcomes of dogfights are unavailable, aerial combat is reduced to minutiae. Spurious accuracy replaces obvious inaccuracy; untestable hypotheses and unchecked calibrations multiply apace. Yet the analyst claims relative if not absolute accuracy, certifying that he has identified, measured, and properly linked, a priori, the parameters that differentiate weapons, forces, and strategies.

In the end, “reasonableness” is the only defense of warfare models of any stripe.

It is ironic that analysts must fall back upon the appeal to intuition that has been denied to military men — whose intuition at least flows from a life-or-death incentive to make good guesses when choosing weapons, forces, or strategies.

My colleagues were not amused, to say the least.

I was reminded of all this by a recent exchange with a high-school classmate who had enlisted my help in tracking down a woman who, according to a genealogy website, is her first cousin, twice removed. The success of the venture is as yet uncertain. But if it does succeed it will be because of the classmate’s intimate knowledge of her family, not my command of research tools. As I said to my classmate,

You know a lot more about your family than I know about mine. I have all of the names and dates in my genealogy data bank, but I really don’t know much about their lives. After I moved to Virginia … I was out of the loop on family gossip, and my parents didn’t relate it to me. For example, when I visited my parents … for their 50th anniversary I happened to see a newspaper clipping about the death of my father’s sister a year earlier. It was news to me. And I didn’t learn of the death of my mother’s youngest brother (leaving her as the last of 10 children) until my sister happened to mention it to me a few years after he had died. And she didn’t know that I didn’t know.

All of which means that there’s a lot more to life than bare facts — dates of birth, death, etc. That’s why military people (with good reason) don’t trust analysts who draw conclusions about military weapons and tactics based on mathematical models. Those analysts don’t have a “feel” for how weapons and tactics actually work in the heat of battle, which is what matters.

Climate modelers are even more in the dark than military analysts: military analysts at least answer to officers with relevant experience, whereas there is no “climate officer” who can set climate modelers straight — or (more wisely) ignore them.

(See also “Modeling Is Not Science“, “The McNamara Legacy: A Personal Perspective“, “Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry“, “Analytical and Scientific Arrogance“, “Why I Don’t Believe in ‘Climate Change’“, and “Predicting ‘Global’ Temperatures — An Analogy with Baseball“.)

Predicting “Global” Temperatures — An Analogy with Baseball

The following graph is a plot of the 12-month moving average of “global” mean temperature anomalies for 1979-2018 in the lower troposphere, as reported by the climate-research unit of the University of Alabama-Huntsville (UAH):

The UAH values, which are derived from satellite-borne sensors, are as close as one can come to an estimate of changes in “global” mean temperatures. The UAH values certainly are more complete and reliable than the values derived from the surface-thermometer record, which is biased toward observations over the land masses of the Northern Hemisphere (the U.S., in particular) — observations that are themselves notoriously fraught with siting problems, urban-heat-island biases, and “adjustments” that have been made to “homogenize” temperature data, that is, to make the data agree with the warming predictions of global-climate models.
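For readers who want to replicate the smoothing, a 12-month moving average is trivial to compute. The sketch below is mine, not UAH’s: the function name and the toy series are assumptions, standing in for the published monthly anomalies.

```python
import numpy as np

def trailing_mean(series, window=12):
    """Trailing moving average; the first window-1 points are undefined (NaN)."""
    series = np.asarray(series, dtype=float)
    out = np.full(series.shape, np.nan)
    if series.size >= window:
        kernel = np.ones(window) / window
        out[window - 1:] = np.convolve(series, kernel, mode="valid")
    return out

# Toy anomalies (degrees C), standing in for the UAH monthly series:
anomalies = np.linspace(-0.2, 0.4, 24)
smoothed = trailing_mean(anomalies, window=12)
```

Each point of the smoothed series averages the preceding twelve months, which is why a moving-average plot starts a year into the record and visually exaggerates small sustained drifts.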

The next graph roughly resembles the first one, but it’s easier to describe. It represents the fraction of games won by the Oakland Athletics baseball team in the 1979-2018 seasons:

Unlike the “global” temperature record, the A’s W-L record is known with certainty. Every game played by the team (indeed, by all teams in organized baseball) is diligently recorded, and in great detail. Those records yield a wealth of information not only about team records, but also about the accomplishments of the individual players whose combined performance determines whether and how often a team wins its games. Add to that much else about which statistics are or could be compiled: records of players in the years and games preceding a season or game; records of each team’s owner, general managers, and managers; orientations of the ballparks in which each team compiled its records; distances to the fences in those ballparks; times of day at which games were played; ambient temperatures; and on and on.

Despite all of that knowledge, there is much uncertainty about how to model the interactions among the quantifiable elements of the game, and how to give weight to the non-quantifiable elements (a manager’s leadership and tactical skills, team spirit, and on and on). Even the professional prognosticators at FiveThirtyEight, armed with a vast compilation of baseball statistics from which they have devised a complex predictive model of baseball outcomes, will admit that perfection (or anything close to it) eludes them. Like many other statisticians, they fall back on the excuse that “chance” or “luck” intrudes too often to allow their statistical methods to work their magic. What they won’t admit to themselves is that the results of simulations (such as those employed in the complex model devised by FiveThirtyEight)

reflect the assumptions underlying the authors’ model — not reality. A key assumption is that the model … accounts for all relevant variables….

As I have said, “luck” is mainly an excuse and rarely an explanation. Attributing outcomes to “luck” is an easy way of belittling success when it accrues to a rival.

It is also an easy way of dodging the fact that no model can accurately account for the outcomes of complex systems. “Luck” is the disappointed modeler’s excuse.

If the outcomes of baseball games and seasons could be modeled with great certainty, people wouldn’t bet on those outcomes. The existence of successful models would become general knowledge, and betting would cease, as the small gains that might accrue from betting on narrow odds would be wiped out by vigorish.
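The vigorish point can be made concrete. At the common U.S. betting line of -110 (risk $110 to win $100), a bettor must win more than about 52.4 percent of his bets just to break even, so a model that beats chance by a hair earns nothing. A minimal sketch (the function name is mine):

```python
def breakeven_win_rate(american_odds):
    """Win rate needed to break even on a negative American moneyline.

    At odds of -110 you risk 110 to win 100, so the breakeven
    rate is 110 / (110 + 100), about 0.524 rather than 0.500.
    """
    risk = abs(american_odds)
    return risk / (risk + 100.0)

breakeven_win_rate(-110)  # ≈ 0.5238
```

The gap between 0.500 and 0.524 is the bookmaker’s cut; any predictive edge smaller than that gap is worthless to a bettor.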

Returning now to “global” temperatures, I am unaware of any model that actually tries to account for the myriad factors that influence climate. The pseudo-science of “climate change” began with the assumption that “global” temperatures are driven by human activity, namely the burning of fossil fuels that releases CO2 into the atmosphere. CO2 became the centerpiece of global climate models (GCMs), and everything else became an afterthought, or a non-thought. It is widely acknowledged that cloud formation and cloud cover — obviously important determinants of near-surface temperatures — are treated inadequately (when treated at all). The mechanism by which the oceans absorb heat and transmit it to the atmosphere also remains mysterious. The effect of solar activity on cosmic radiation reaching Earth (and thus on cloud formation) is often dismissed despite strong evidence of its importance. Other factors that seem to have little or no weight in GCMs (though they are sometimes estimated in isolation) include plate tectonics, magma flows, volcanic activity, and vegetation.

Despite all of that, builders of GCMs — and the doomsayers who worship them — believe that “global” temperatures will rise to catastrophic readings. The rising oceans will swamp coastal cities; the earth will be scorched, except where it is flooded by massive storms; crops will fail accordingly; tempers will flare and wars will break out more frequently.

There’s just one catch, and it’s a big one. Minute changes in the value of a dependent variable (“global” temperature, in this case) can’t be explained by a model in which key explanatory variables are unaccounted for, in which the values of the variables that are accounted for are highly uncertain, and in which the mechanisms by which the variables interact are just as uncertain. Even an impossibly complete model would be wildly inaccurate, given the uncertainty of the interactions among variables and of the values of those variables (in the past as well as in the future).

I say “minute changes” because the first graph above is grossly misleading. An unbiased depiction of “global” temperatures looks like this:

There’s a much better chance of predicting the success or failure of the Oakland A’s, whose record looks like this on an absolute scale:

Just as no rational (unemotional) person should believe that predictions of “global” temperatures should dictate government spending and regulatory policies, no sane bettor is holding his breath in anticipation that the success or failure of the A’s (or any team) can be predicted with bankable certainty.

All of this illustrates a concept known as causal density, which Arnold Kling explains:

When there are many factors that have an impact on a system, statistical analysis yields unreliable results. Computer simulations give you exquisitely precise unreliable results. Those who run such simulations and call what they do “science” are deceiving themselves.

The folks at FiveThirtyEight are no more (and no less) delusional than the creators of GCMs.

A Footnote to “Movies”

I noted here that I’ve updated my “Movies” page. There’s a further update. I’ve added a list of my very favorite films — the 69 that I’ve rated a 10 or 9 (out of 10). The list is reproduced below, complete with links to IMDb pages so that you can look up a film with which you may be unfamiliar.

Many of the films on my list are slanted to the left (e.g., Inherit the Wind), but they’re on my list because of their merit as entertainment. Borrowing from the criteria posted at the bottom of “Movies”, a rating of 9 means that I found a film to be superior on several of the following dimensions: mood, plot, dialogue, music (if applicable), dancing (if applicable), quality of performances, production values, and historical or topical interest; worth seeing twice but not a slam-dunk great film. A “10” is an exemplar of its type; it can be enjoyed many times.

My Very Favorite Films: Releases from 1920 through 2018
(listed roughly in descending order of my ratings)
Title (year of release)  IMDb rating  My rating
1. The Wizard of Oz (1939)  8 10
2. Alice in Wonderland (1951)  7.4 10
3. A Man for All Seasons (1966)  7.7 10
4. Amadeus (1984)  8.3 10
5. The Harmonists (1997)  7.1 10
6. Dr. Jack (1922)  7.1 9
7. The General (1926)  8.1 9
8. City Lights (1931)  8.5 9
9. March of the Wooden Soldiers (1934)  7.3 9
10. The Gay Divorcee (1934)  7.5 9
11. David Copperfield (1935)  7.4 9
12. Captains Courageous (1937)  8 9
13. The Adventures of Robin Hood (1938)  7.9 9
14. Alexander Nevsky (1938)  7.6 9
15. Bringing Up Baby (1938)  7.9 9
16. A Christmas Carol (1938)  7.5 9
17. Destry Rides Again (1939)  7.7 9
18. Gunga Din (1939)  7.4 9
19. The Hunchback of Notre Dame (1939)  7.8 9
20. Mr. Smith Goes to Washington (1939)  8.1 9
21. The Women (1939)  7.8 9
22. The Grapes of Wrath (1940)  8 9
23. The Philadelphia Story (1940)  7.9 9
24. Pride and Prejudice (1940)  7.4 9
25. Rebecca (1940)  8.1 9
26. Sullivan’s Travels (1941)  8 9
27. Woman of the Year (1942)  7.2 9
28. The African Queen (1951)  7.8 9
29. The Browning Version (1951)  8.2 9
30. The Bad Seed (1956)  7.5 9
31. The Bridge on the River Kwai (1957)  8.1 9
32. Inherit the Wind (1960)  8.1 9
33. Psycho (1960)  8.5 9
34. The Hustler (1961)  8 9
35. Billy Budd (1962)  7.8 9
36. Lawrence of Arabia (1962)  8.3 9
37. Zorba the Greek (1964)  7.7 9
38. Doctor Zhivago (1965)  8 9
39. The Graduate (1967)  8 9
40. The Lion in Winter (1968)  8 9
41. Butch Cassidy and the Sundance Kid (1969)  8 9
42. Five Easy Pieces (1970)  7.5 9
43. The Godfather (1972)  9.2 9
44. Papillon (1973)  8 9
45. Chinatown (1974)  8.2 9
46. The Godfather: Part II (1974)  9 9
47. One Flew Over the Cuckoo’s Nest (1975)  8.7 9
48. Star Wars: Episode IV – A New Hope (1977)  8.6 9
49. Breaker Morant (1980)  7.8 9
50. Star Wars: Episode V – The Empire Strikes Back (1980)  8.7 9
51. Das Boot (1981)  8.3 9
52. Chariots of Fire (1981)  7.2 9
53. Raiders of the Lost Ark (1981)  8.4 9
54. Blade Runner (1982)  8.1 9
55. Gandhi (1982)  8 9
56. The Last Emperor (1987)  7.7 9
57. Dangerous Liaisons (1988)  7.6 9
58. Henry V (1989)  7.5 9
59. Chaplin (1992)  7.6 9
60. Noises Off… (1992)  7.6 9
61. Three Colors: Blue (1993)  7.9 9
62. Pulp Fiction (1994)  8.9 9
63. Richard III (1995)  7.4 9
64. The English Patient (1996)  7.4 9
65. Fargo (1996)  8.1 9
66. Chicago (2002)  7.1 9
67. Master and Commander: The Far Side of the World (2003)  7.4 9
68. The Chronicles of Narnia: The Lion, the Witch and the Wardrobe (2005)  6.9 9
69. The Kite Runner (2007)  7.6 9

What? Where?

The small city of Marysville, Michigan (population ca. 10,000), is in the news because Jean Cramer, a candidate for a seat on the city council, is reported by The New York Times (and others of our “moral guardians”) to have said “Keep Marysville a white community as much as possible” during a forum at which she and the other candidates spoke.

The Times adds that

Kathy Hayman, the city’s mayor pro tempore, said during the forum that she took Ms. Cramer’s comments personally….

“My son-in-law is a black man and I have biracial grandchildren,” Ms. Hayman said.

After the forum, Ms. Cramer submitted to an interview with the local newspaper:

… Ms. Cramer expanded on her views to The Times Herald and said that Ms. Hayman’s family was “in the wrong” because it was multiracial.

“Husband and wife need to be the same race,” Ms. Cramer told the paper. “Same thing with the kids. That’s how it’s been from the beginning of, how can I say, when God created the heaven and the earth. He created Adam and Eve at the same time. But as far as me being against blacks, no I’m not.”

Ms. Cramer told The Times Herald on Friday that she would not have an issue if a black couple moved next door to her. “What is the issue is the biracial marriages, that’s the big problem,” Ms. Cramer said. “And there are a lot of people who don’t know it’s in the Bible and so they’re going outside of that.”

I find this brouhaha rather amusing because I’m familiar with Marysville, the population of which in 2010 was

97.5% White, 0.3% African American, 0.2% Native American, 0.6% Asian, 0.4% from other races, and 0.9% from two or more races. Hispanic or Latino of any race were 1.8% of the population.

Marysville is a “suburb” of Port Huron, a shrinking city of 30,000 souls. Among the reasons for the shrinkage of Port Huron’s population is its growing “blackness”. When I toured Port Huron on my last trip to Michigan (four years ago), I saw that neighborhoods which used to be all-white have changed complexion. Marysville was the original white-flight destination for Port Huronites. Other “suburbs” of Port Huron have grown, even as the city’s population shrinks, for much the same reason.

So Ms. Cramer is guilty of saying what residents of places around the nation — upscale and downscale — believe about keeping a “white community”. Her stated reason — a Biblical injunction against miscegenation — probably isn’t widely shared. But her objective — economic-social-cultural segregation — is widely shared, nonetheless.

The only newsworthy thing about Ms. Cramer’s statement is the hypocrisy of the cosseted editors and reporters of The New York Times and other big-media outlets in making a big deal of it.

“Movies” Updated

I have updated my “Movies” page. I was prompted to do so by having recently (and unusually) viewed two feature-length films (on consecutive evenings, no less): Darkest Hour and Goodbye Christopher Robin.

The former, despite Gary Oldman’s deservedly acclaimed, Oscar-winning impersonation of Winston Churchill, earned a rating of 7 from me. It was an entertaining film, but a rather trite offering of Hollywoodized history. The latter film, on the other hand, earned a rating of 8 from me for the quality of its script, excellent performances, and non-saccharine treatment of Christopher Robin’s boyhood and his parents’ failings as parents.

In any event, go to “Movies”. Even if you’ve been there before, you will find new material in the updated version. You will find at the bottom of the page an explanation of my use of the 10-point rating scale.

Conservatism’s Fundamental Dilemma: Markets vs. Morality

Conservatives rightly defend free markets because they exemplify the learning from trial and error that underlies the wisdom of voluntarily evolved social norms — norms that bind a people in mutual trust, respect, and forbearance.

Conservatives also rightly condemn free markets — or some of the produce of free markets — because that produce is often destructive of social norms.

Collaborationist Conservatives

Michael Anton, author of “The Flight 93 Election“, has coined the apt term Vichycons for collaborationist conservatives. (I wish I had thought of it first.) Anton nails them in “Vichycons and Mass Shootings“:

One prominent member of the species has called for “civility.” I’m all for “civility,” but it takes two to tango and the kind of “civility” on which he insists amounts—in the face of the Left’s intensifying power-hungry wrath—to unilateral disarmament. The Vichycons are like pearl-clutching old ladies somehow unperturbed by the ambient culture’s mass obscenity who upbraid their husbands for saying “damn.” They may claim to favor high standards for all, but in practice all their fire is consistently directed rightward….

Conservatives, as noted, are supposed to know something about nature, human nature, natural limits, politics, history, and permanent truths. That they do not is plainly evident from the fact that an alternative explanation for El Paso—and for other recent mass atrocities—is right under their collective nose and yet has never occurred to them. Or maybe it has but they’re too chicken to voice it. Again, I don’t know which would be worse.

Anton’s “alternative explanation” is the unraveling of social norms since the 1960s, which has led to greater violence and far less social harmony.

And Vichycons bear a big share of the responsibility for what has happened. Too many of them — especially in high and influential places — have been (and are) so anxious to seem “civil” and so eager to “get along” that they have failed to challenge the willful unraveling of social norms by the left. Theirs is a moral failing, though they don’t think of it as such because, for them, “image” and “connections” are far more important than actual adherence to principle. Perhaps it’s because, like Max Boot, they were never really conservative in the first place.

(See also “Corresponding with a Collaborator“, “‘Conservative’ Collabos“, and “Rooted in the Real World of Real People“.)

“Justice on Trial”: A Brief Review

I recently read Justice on Trial: The Kavanaugh Confirmation and the Future of the Supreme Court by Mollie Hemingway and Carrie Severino. The book augments and reinforces my understanding of the political battle royal that began a nanosecond after Justice Kennedy announced his retirement from the Supreme Court.

The book is chock-full of details that are damning to the opponents of the nomination of Brett Kavanaugh (or any other constitutionalist) to replace Kennedy. Rather, the opponents would consider the details to be damning if they had an ounce of honesty and integrity. What comes through — loudly, clearly, and well-documented — is the lack of honesty and integrity on the part of the opponents of the Kavanaugh nomination, which is to say most of the Democrats in the Senate, most of the media, and all of the many interest groups that opposed the nomination.

Unfortunately, it is unlikely that the authors’ evident conservatism and unflinching condemnation of the anti-Kavanaugh forces will convince anyone but the already-convinced, like me. The anti-Kavanaugh, anti-Constitution forces will redouble their efforts to derail the next Trump nominee (if there is one). As the authors say in the book’s closing paragraphs,

for all the hysteria, there is still no indication that anyone on the left is walking away from the Kavanaugh confirmation chastened by the electoral consequences or determined to prevent more damage to the credibility of the judiciary… [S]ooner or later there will be another vacancy on the Court, whether it is [RBG’s] seat or another justice’s. It’s hard to imagine how a confirmation battle could compete with Kavanaugh’s for ugliness. But if the next appointment portends a major ideological shift, it could be worse. When President Reagan had a chance to replace Lewis Powell, a swing vote, with Bork, Democrats went to the mat to oppose him. When Thurgood Marshall, one of the Court’s most liberal members, stood to be replaced by Clarence Thomas, the battle got even uglier. And trading the swing vote Sandra Day O’Connor for Alito triggered an attempted filibuster.

As ugly as Kavanaugh’s confirmation battle became, he is unlikely to shift the Court dramatically. Except on abortion and homosexuality, Justice Kennedy usually voted with the conservatives. If Justice Ginsburg were to retire while Trump was in the White House, the resulting appointment would probably be like the Thomas-for-Marshall trade. Compared with what might follow, the Kavanaugh confirmation might look like the good old days of civility.

Indeed.

Simple Economic Truths Worth Repeating

From “Keynesian Multiplier: Fiction vs. Fact“:

There are a few economic concepts that are widely cited (if not understood) by non-economists. Certainly, the “law” of supply and demand is one of them. The Keynesian (fiscal) multiplier is another; it is

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier….

To show why the math is phony, I’ll start with a derivation of the multiplier. The derivation begins with the accounting identity Y = C + I + G, which means that total output (Y) = consumption (C) + investment (I) + government spending (G)….

Now, let’s say that b = 0.8. This means that income-earners, on average, will spend 80 percent of their additional income on consumption goods (C), while holding back (saving, S) 20 percent of their additional income. With b = 0.8, k = 1/(1 – 0.8) = 1/0.2 = 5. That is, every $1 of additional spending — let us say additional government spending (∆G) rather than investment spending (∆I) — will yield ∆Y = $5. In short, ∆Y = k(∆G), as a theoretical maximum.
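The algebra elided above runs as follows: assume that consumption is a linear function of income, C = a + bY, substitute into Y = C + I + G, and solve to get Y = (a + I + G)/(1 − b), whence k = 1/(1 − b). A sketch of the arithmetic (the function and variable names are mine):

```python
def multiplier(b):
    """Textbook Keynesian multiplier, k = 1 / (1 - b)."""
    return 1.0 / (1.0 - b)

def income(i, g, b, a=0.0):
    """Solve Y = C + I + G with C = a + b*Y, i.e. Y = (a + I + G) / (1 - b)."""
    return (a + i + g) / (1.0 - b)

b = 0.8
k = multiplier(b)  # ≈ 5.0

# Raising G by 1 raises the computed Y by k, the "theoretical maximum":
delta_y = income(i=2.0, g=4.0, b=b) - income(i=2.0, g=3.0, b=b)  # ≈ 5.0
```

Note that the arithmetic works for any b between 0 and 1; the point at issue in what follows is not the algebra but what, if anything, the algebra says about causation.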

But:

[The multiplier] isn’t a functional representation — a model — of the dynamics of the economy. Assigning a value to b (the marginal propensity to consume) — even if it’s an empirical value — doesn’t alter the fact that the derivation is nothing more than the manipulation of a non-functional relationship, that is, an accounting identity.

Consider, for example, the equation for converting temperature Celsius (C) to temperature Fahrenheit (F): F = 32 + 1.8C. It follows that an increase of 10 degrees C implies an increase of 18 degrees F. This could be expressed as ∆F/∆C = k* , where k* represents the “Celsius multiplier”. There is no mathematical difference between the derivation of the investment/government-spending multiplier (k) and the derivation of the Celsius multiplier (k*). And yet we know that the Celsius multiplier is nothing more than a tautology; it tells us nothing about how the temperature rises by 10 degrees C or 18 degrees F. It simply tells us that when the temperature rises by 10 degrees C, the equivalent rise in temperature F is 18 degrees. The rise of 10 degrees C doesn’t cause the rise of 18 degrees F.

Therefore:

[T]he Keynesian investment/government-spending multiplier simply tells us that if ∆Y = $5 trillion, and if b = 0.8, then it is a matter of mathematical necessity that ∆C = $4 trillion and ∆I + ∆G = $1 trillion. In other words, a rise in I + G of $1 trillion doesn’t cause a rise in Y of $5 trillion; rather, Y must rise by $5 trillion for C to rise by $4 trillion and I + G to rise by $1 trillion. If there’s a causal relationship between ∆G and ∆Y, the multiplier doesn’t portray it.

In sum, the fiscal multiplier puts the cart before the horse. It begins with a non-functional, mathematical relationship, stipulates a hypothetical increase in GDP, and computes the increase in consumption (and other things) that would occur if that increase were to be realized.
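The tautological character of the “Celsius multiplier” is easy to verify numerically: k* is nothing but the slope of the conversion identity, and computing it reveals nothing about why the temperature rose.

```python
def fahrenheit(c):
    """Convert degrees Celsius to degrees Fahrenheit: F = 32 + 1.8C."""
    return 32.0 + 1.8 * c

# The "Celsius multiplier" k* is just the slope of the identity;
# it is the same for any pair of temperatures:
k_star = (fahrenheit(30.0) - fahrenheit(20.0)) / (30.0 - 20.0)  # 1.8
```

The value 1.8 falls out of the definition of the Fahrenheit scale, just as k falls out of the accounting identity; neither number describes a causal process.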

As economist Steve Landsburg explains in “The Landsburg Multiplier: How to Make Everyone Rich”,

Murray Rothbard … observed that the really neat thing about this [fiscal stimulus] argument is that you can do exactly the same thing with any accounting identity. Let’s start with this one:

Y = L + E

Here Y is economy-wide income, L is Landsburg’s income, and E is everyone else’s income. No disputing that one.

Next we observe that everyone else’s share of the income tends to be about 99.999999% of the total. In symbols, we have:

E = .99999999 Y

Combine these two equations, do your algebra, and voila:

Y = 100,000,000 L

That 100,000,000 there is the soon-to-be-famous “Landsburg multiplier”. Our equation proves that if you send Landsburg a dollar, you’ll generate $100,000,000 worth of income for everyone else.

Send me your dollars, yearning to be free.
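Landsburg’s algebra is exact: substituting E = .99999999Y into Y = L + E gives Y = L/(1 − .99999999) = 100,000,000 L. Exact rational arithmetic confirms it (the variable names are mine):

```python
from fractions import Fraction

# Everyone else's share of income: E = share * Y
share = Fraction(99999999, 100000000)

# Y = L + E = L + share*Y  =>  Y = L / (1 - share)
landsburg_multiplier = 1 / (1 - share)  # exactly 100,000,000
```

The calculation is formally identical to the derivation of k from Y = C + I + G, which is the whole joke: an accounting identity will “prove” any multiplier you like.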

Tax cuts may stimulate economic activity, but not nearly to the extent suggested by the multiplier. Moreover, if government spending isn’t reduced at the same time that taxes are cut, and if there is something close to full employment of labor and capital, the main result of a tax cut will be inflation.

Government spending (as shown in “Keynesian Multiplier: Fiction vs. Fact” and “Economic Growth Since World War II“) doesn’t stimulate the economy, and usually has the effect of reducing private consumption and investment. That may be to the liking of big-government worshipers, but it’s bad for most of us.

Has Humanity Reached Peak Intelligence?

That’s the title of a post at BBC Future by David Robson, a journalist who has written a book called The Intelligence Trap: Why Smart People Make Dumb Mistakes. Inasmuch as “humanity” isn’t a collective to which “intelligence” can be attached, the title is more titillating than informative about the substance of the post, wherein Mr. Robson says some sensible things; for example:

When the researcher James Flynn looked at [IQ] scores over the past century, he discovered a steady increase – the equivalent of around three points a decade. Today, that has amounted to 30 points in some countries.

Although the cause of the Flynn effect is still a matter of debate, it must be due to multiple environmental factors rather than a genetic shift.

Perhaps the best comparison is our change in height: we are 11cm (around 5 inches) taller today than in the 19th Century, for instance – but that doesn’t mean our genes have changed; it just means our overall health has changed.

Indeed, some of the same factors may underlie both shifts. Improved medicine, reducing the prevalence of childhood infections, and more nutritious diets, should have helped our bodies to grow taller and our brains to grow smarter, for instance. Some have posited that the increase in IQ might also be due to a reduction of the lead in petrol, which may have stunted cognitive development in the past. The cleaner our fuels, the smarter we became.

This is unlikely to be the complete picture, however, since our societies have also seen enormous shifts in our intellectual environment, which may now train abstract thinking and reasoning from a young age. In education, for instance, most children are taught to think in terms of abstract categories (whether animals are mammals or reptiles, for instance). We also lean on increasingly abstract thinking to cope with modern technology. Just think about a computer and all the symbols you have to recognise and manipulate to do even the simplest task. Growing up immersed in this kind of thinking should allow everyone [hyperbole alert] to cultivate the skills needed to perform well in an IQ test….

[Psychologist Robert Sternberg] is not alone in questioning whether the Flynn effect really represented a profound improvement in our intellectual capacity, however. James Flynn himself has argued that it is probably confined to some specific reasoning skills. In the same way that different physical exercises may build different muscles – without increasing overall “fitness” – we have been exercising certain kinds of abstract thinking, but that hasn’t necessarily improved all cognitive skills equally. And some of those other, less well-cultivated, abilities could be essential for improving the world in the future.

Here comes the best part:

You might assume that the more intelligent you are, the more rational you are, but it’s not quite this simple. While a higher IQ correlates with skills such as numeracy, which is essential to understanding probabilities and weighing up risks, there are still many elements of rational decision making that cannot be accounted for by a lack of intelligence.

Consider the abundant literature on our cognitive biases. Something that is presented as “95% fat-free” sounds healthier than “5% fat”, for instance – a phenomenon known as the framing bias. It is now clear that a high IQ does little to help you avoid this kind of flaw, meaning that even the smartest people can be swayed by misleading messages.

People with high IQs are also just as susceptible to the confirmation bias – our tendency to only consider the information that supports our pre-existing opinions, while ignoring facts that might contradict our views. That’s a serious issue when we start talking about things like politics.

Nor can a high IQ protect you from the sunk cost bias – the tendency to throw more resources into a failing project, even if it would be better to cut your losses – a serious issue in any business. (This was, famously, the bias that led the British and French governments to continue funding Concorde planes, despite increasing evidence that it would be a commercial disaster.)

Highly intelligent people are also not much better at tests of “temporal discounting”, which require you to forgo short-term gains for greater long-term benefits. That’s essential, if you want to ensure your comfort for the future.

Besides a resistance to these kinds of biases, there are also more general critical thinking skills – such as the capacity to challenge your assumptions, identify missing information, and look for alternative explanations for events before drawing conclusions. These are crucial to good thinking, but they do not correlate very strongly with IQ, and do not necessarily come with higher education. One study in the USA found almost no improvement in critical thinking throughout many people’s degrees.

Given these looser correlations, it would make sense that the rise in IQs has not been accompanied by a similarly miraculous improvement in all kinds of decision making.

So much for the bright people who promote and pledge allegiance to socialism and its various manifestations (e.g., the Green New Deal, and Medicare for All). So much for the bright people who suppress speech with which they disagree because it threatens the groupthink that binds them.

Robson, still using “we” inappropriately, also discusses evidence of dysgenic effects in IQ:

Whatever the cause of the Flynn effect, there is evidence that we may have already reached the end of this era – with the rise in IQs stalling and even reversing. If you look at Finland, Norway and Denmark, for instance, the turning point appears to have occurred in the mid-90s, after which average IQs dropped by around 0.2 points a year. That would amount to a seven-point difference between generations.

Psychologist (and intelligence specialist) James Thompson has addressed dysgenic effects at his blog on the website of The Unz Review. In particular, he had a lot to say about the work of an intelligence researcher named Michael Woodley. Here’s a sample from a post by Thompson:

We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.

Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with say, your grandparents.

Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.

It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”.  This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.

So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong?

From a later post:

The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

Woodley’s claim is based on a set of papers written since 2013, which have been recently reviewed by [Matthew] Sarraf.

The review is unusual, to say the least. It is rare to read so positive a judgment on a young researcher’s work, and it is extraordinary that one researcher has changed the debate about ability levels across generations, and all this in a few years since starting publishing in psychology.

The table in that review which summarizes the main findings is shown below. As you can see, the range of effects is very variable, so my rough estimate of 1 point per decade is a stab at calculating a median. It is certainly less than the Flynn Effect in the 20th Century, though it may now be part of the reason for the falling of that effect, now often referred to as a “negative Flynn effect”….

Here are the findings which I have arranged by generational decline (taken as 25 years).

  • Colour acuity, over 20 years (0.8 generation) 3.5 drop/decade.
  • 3D rotation ability, over 37 years (1.5 generations) 4.8 drop/decade.
  • Reaction times, females only, over 40 years (1.6 generations) 1.8 drop/decade.
  • Working memory, over 85 years (3.4 generations) 0.16 drop/decade.
  • Reaction times, over 120 years (4.8 generations) 0.57-1.21 drop/decade.
  • Fluctuating asymmetry, over 160 years (6.4 generations) 0.16 drop/decade.

Either the measures are considerably different, and do not tap the same underlying loss of mental ability, or the drop is unlikely to be caused by dysgenic decrements from one generation to another. Bar massive dying out of populations, changes do not come about so fast from one generation to the next. The drops in ability are real, but the reason for the falls are less clear. Gathering more data sets would probably clarify the picture, and there is certainly cause to argue that on various real measures there have been drops in ability. Whether this is dysgenics or some other insidious cause is not yet clear to me.

My view is that whereas formerly the debate was only about the apparent rise in ability, discussions are now about the co-occurrence of two trends: the slowing down of the environmental gains and the apparent loss of genetic quality. In the way that James Flynn identified an environmental/cultural effect, Michael Woodley has identified a possible genetic effect, and certainly shown that on some measures we are doing less well than our ancestors.

How will they be reconciled? Time will tell, but here is a prediction. I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.
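As a check on the arithmetic in Thompson's list (a sketch of my own; the spans are the ones quoted above, with a generation taken as 25 years):

```python
# Spans from the list above, with a generation taken as 25 years.
findings = {
    "Colour acuity": 20,
    "3D rotation ability": 37,
    "Reaction times, females only": 40,
    "Working memory": 85,
    "Reaction times": 120,
    "Fluctuating asymmetry": 160,
}
for name, years in findings.items():
    print(f"{name}: {years} years = {years / 25:.1f} generations")
```

The printed generation counts (0.8, 1.5, 1.6, 3.4, 4.8, and 6.4) match those in Thompson's list, confirming the 25-year convention.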

Here’s my hypothesis, which I offer on the assumption that the test-takers are demographically representative of the whole populations of the countries in which they were tested: The less-intelligent portions of the populace are breeding faster than the more-intelligent portions. That phenomenon is magnified by the rapid growth of the Muslim component of Europe’s population and the rapid growth of the Latino component of America’s population.

(See also “The Learning Curve and the Flynn Effect“, “More about Intelligence“, “Selected Writings about Intelligence“, and especially “Intelligence“.)

Colleges and Universities are Overrated

Keith Whittington writes at The Volokh Conspiracy about “The Partisan Split on Higher Education“:

A new Pew survey reveals that the partisan split that became visible a couple of years ago in public perceptions of American higher education has continued. In the long term, this cannot be good for American colleges and universities….

Colleges and universities are fairly distinctive in being non-political institutions that are nonetheless seen in increasingly partisan terms. There is an extensive conservative infrastructure now dedicated to publicizing the foibles of academia. Of course, the reality is that college professors and administrators lean heavily to the political left, though this has been true for decades. Republicans now perceive universities as politicized, partisan institutions….

[I]f Republicans continue to believe that on the whole universities are damaging American society, they are unlikely to try to defend them against misguided political interventions from the political left and are more likely to propose misguided political interventions of their own.

Colleges and universities are (and long have been) “political” institutions, as Whittington himself acknowledges. But that isn’t my quibble with Whittington.

His tone implies that he holds colleges and universities in higher regard than they should be held. But there isn’t anything sacred about colleges and universities. Free inquiry (which most of them no longer support) can go on without them. Advances in theoretical and applied science can go on without them, as long as there are free markets to support the development and application of scientific knowledge. In fact, colleges and universities have (on the whole) become so inimical to free markets that Americans would be better off with far fewer colleges and universities.

Sending kids to college has become conspicuous consumption. The practical value of colleges and universities is realized through courses that could be replicated by for-profit institutions. The rest — including the bloated, mostly leftist administrative apparatus — is waste.

(See also “Is College for Everyone?“, “College for Almost No One“, and “More Evidence against College for Everyone“.)


Megaprojects, Cost-Benefit Analysis, and “Social Welfare”

Timothy Taylor writes about “The Iron Law of Megaprojects vs. the Hiding Hand Principle“. He begins by quoting a piece by Bent Flyvbjerg in Cato Policy Report (January 2017):

Megaprojects are large-scale, complex ventures that typically cost a billion dollars or more, take many years to develop and build, involve multiple public and private stakeholders, are transformational, and impact millions of people. Examples of megaprojects are high-speed rail lines, airports, seaports, motorways, hospitals, national health or pension information and communications technology (ICT) systems, national broadband, the Olympics, largescale signature architecture, dams, wind farms, offshore oil and gas extraction, aluminum smelters, the development of new aircrafts, the largest container and cruise ships, high-energy particle accelerators, and the logistics systems used to run large supply-chain-based companies like Amazon and Maersk.

For the largest of this type of project, costs of $50-100 billion are now common, as for the California and UK high-speed rail projects, and costs above $100 billion are not uncommon, as for the International Space Station and the Joint Strike Fighter. If they were nations, projects of this size would rank among the world’s top 100 countries measured by gross domestic product. When projects of this size go wrong, whole companies and national economies suffer. …

If, as the evidence indicates, approximately one out of ten megaprojects is on budget, one out of ten is on schedule, and one out of ten delivers the promised benefits, then approximately one in a thousand projects is a success, defined as on target for all three. Even if the numbers were wrong by a factor of two, the success rate would still be dismal.

So far, so good. But then Taylor says this:

A common comeback to the Iron Law of Megaprojects is that if we pay attention to it, we will be so dissuaded by the costs and risks of megaprojects that nothing will ever get done. Albert O. Hirschman offered a sophisticated expression of this concern in his 1967 essay, “The Hiding Hand.” Hirschman argued that there is a rough balance in megaprojects: we tend to underestimate the costs and problems of megaprojects, but we also tend to underestimate the creativity with which people address the costs and problems that arise.

I will come to the irrelevance of Hirschman’s argument, but first a few more tidbits from Taylor:

[Flyvbjerg] argues that a number of prominent megaprojects have been completed on time and on budget. When choosing which megaprojects to pursue, it is useful to avoid underestimating costs and overestimating benefits. [Wow, what an astute observation.] …

Further, Flyvbjerg offers a reminder that even when a megaproject is eventually completed, and seems to be working well, the project may still have been uneconomic–and society may have been better off without it.

The second comment brings Taylor close to the heart of the matter. But he never gets there. Like most economists, he overlooks the major flaw in the application of cost-benefit analysis to government projects: Costs and benefits usually have different distributions across the population. At the extreme, the costs of benefits that accrue only to the indigent are borne almost entirely by the non-indigent. (The indigent may pay some sales taxes.)

Cost-benefit analysis (applied to government projects) effectively rests on the assumption of a social welfare function. If there were such a thing, then it would be all right for people to go around punching each other (and worse), as long as the aggressors derived more gains in “utility” than the losses suffered by the victims.
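A toy example, with hypothetical numbers of my own, shows how a project can pass an aggregate cost-benefit test while leaving one group decidedly worse off:

```python
# Hypothetical numbers: a project that passes the aggregate test.
costs    = {"taxpayers": 100, "beneficiaries": 5}    # who bears the costs
benefits = {"taxpayers": 20,  "beneficiaries": 120}  # who reaps the benefits

net_total = sum(benefits.values()) - sum(costs.values())
print(f"aggregate net benefit: {net_total}")  # positive, so the project "passes"

for group in costs:
    print(f"{group}: net {benefits[group] - costs[group]}")
# Taxpayers lose 80 while beneficiaries gain 115; the aggregate test silently
# assumes that a dollar lost to one group is offset by a dollar gained by another.
```

That silent assumption of interpersonal offsetting is the social-welfare-function assumption in miniature.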

Information Security in the Cyber-Age

No system is perfect, but I am doing the best that I can:

1. PCs and mobile devices protected by anti-virus and anti-malware programs.

2. Password-protected home network, wrapped in a virtual private network, which is also used by the mobile devices when they aren’t linked to the home network.

3. Different usernames and passwords for every one of dozens of sites that require (and need) them.

4. Passwords created by a complex spreadsheet routine that generates and selects from random series of upper-case letters, lower-case letters, digits, and special characters.

5. Passwords stored in a password-protected file, with paper backup in a secure container.

6. Master password required for access to passwords stored in browser.

Measures 4, 5, and 6 adopted in lieu of reliance on vulnerable vendors for password generation and storage.
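For what it’s worth, measure 4 can be replicated without a spreadsheet. Here is a minimal sketch in Python using the standard secrets module; the length and symbol set are my own arbitrary choices, not a recommendation:

```python
# A cryptographically random password drawn from all four character classes,
# approximating measure 4 above. Length and symbol set are arbitrary choices.
import secrets
import string

SYMBOLS = "!@#$%^&*-_=+"
ALPHABET = string.ascii_uppercase + string.ascii_lowercase + string.digits + SYMBOLS

def make_password(length: int = 20) -> str:
    while True:
        pw = "".join(secrets.choice(ALPHABET) for _ in range(length))
        # Keep only candidates containing at least one of each character class.
        if (any(c.isupper() for c in pw) and any(c.islower() for c in pw)
                and any(c.isdigit() for c in pw) and any(c in SYMBOLS for c in pw)):
            return pw

print(make_password())
```

Unlike the random functions in spreadsheets, secrets draws from the operating system's cryptographic randomness source, which is the appropriate tool for passwords.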

Suggestions for improvement are always welcome.

Social Security Is an Entitlement

Entitlement has come to mean the right to guaranteed benefits under a government program. In the nature of government programs, those who receive the benefits usually don’t pay the taxes required to fund those benefits.

I recently saw on Facebook (which I look at occasionally) a discussion to the effect that Social Security isn’t an entitlement program because “we (the discussants) paid into it”.

Well, paying into Social Security doesn’t mean that you paid your own way. First, the system is rigged so that persons in lower income brackets receive benefits that are disproportionately high relative to the payments that they (and their employers) made during their working years.
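That progressivity can be sketched in a few lines. The 90/32/15 percent replacement factors below are Social Security's actual ones; the bend-point dollar amounts are illustrative round numbers, since the real ones are adjusted every year:

```python
# Social Security's monthly benefit formula replaces a worker's average
# indexed monthly earnings (AIME) at 90%, then 32%, then 15% as earnings
# rise past two "bend points". The dollar bend points here are illustrative.
BEND1, BEND2 = 900, 5500

def monthly_benefit(aime: float) -> float:
    return (0.90 * min(aime, BEND1)
            + 0.32 * max(0.0, min(aime, BEND2) - BEND1)
            + 0.15 * max(0.0, aime - BEND2))

for aime in (1000, 3000, 9000):
    print(f"AIME {aime}: benefit {monthly_benefit(aime):.0f} "
          f"(replacement rate {monthly_benefit(aime) / aime:.0%})")
```

With these inputs the low earner's replacement rate is roughly 84 percent against roughly 31 percent for the high earner, which is the disproportion described above.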

Second, the money that a person pays into Social Security doesn’t earn anything. You are not buying a financial instrument that funds productive investments, which in turn reward you with a future stream of income.

True, there’s the mythical Social Security Trust Fund, which has been paying out benefits that have been defrayed in part by interest earned on “investments” in U.S. Treasury securities. Where does that interest come from? Not from the beneficiaries of Social Security. It comes from taxpayers who are, at the same time, also making payments into Social Security in exchange for the “promise” of future Social Security benefits. (I say “promise” because there is no binding contract for Social Security benefits; you get what Congress provides by law.)

So, yes, Social Security is an entitlement program. Paying into it doesn’t mean that the payer earns what he eventually receives from it. Quite the contrary. Most participants are feeding from the public trough.

The Unique “Me”

Children, at some age, will begin to understand that there is death, the end of a human life (in material form, at least). At about the same time, in my experience, they will begin to speculate about the possibility that they might have been someone else: a child born in China, for instance.

Death eventually loses its fascination, though it may come to mind from time to time as one grows old. (Will I wake up in the morning? Is this the day that my heart stops beating? Will I be able to break my fall when the heart attack happens, or will I just go down hard and die of a fractured skull?)

But after careful reflection, at some age, the question of having been born as someone else is answered in the negative.

For each person, there is only one “I”, the unique “me”. If I hadn’t been born, I wouldn’t be “I” — there wouldn’t be a “me”. I couldn’t have been born as someone else: a child born in China, for instance. A child born in China — or at any place and time other than where and when my mother gave birth to me — must be a different “I”, not the one I think of as “me”.

(Inspired by Sir Roger Scruton’s On Human Nature, for which I thank my son.)

An Antidote to Alienation

This much of Marx’s theory of alienation bears a resemblance to the truth:

The design of the product and how it is produced are determined, not by the producers who make it (the workers)….

[T]he generation of products (goods and services) is accomplished with an endless sequence of discrete, repetitive, motions that offer the worker little psychological satisfaction for “a job well done.”

These statements are true not only of assembly-line manufacturing. They’re also true of much “white collar” work — certainly routine office work and even a lot of research work that requires advanced degrees in scientific and semi-scientific disciplines (e.g., economics). They are certainly true of “blue collar” work that is rote, and in which the worker has no ownership stake.

There’s a relevant post at West Hunter which is short enough to quote in full:

Many have noted how difficult it is to persuade hunter-gatherers to adopt agriculture, or more generally, to get people to adopt a more intensive kind of agriculture.

It’s worth noting that, given the choice, few individuals pick the more intensive, more ‘civilized’ way of life, even when their ancestors have practiced it for thousands of years.

Benjamin Franklin talked about this. “When an Indian Child has been brought up among us, taught our language and habituated to our Customs, yet if he goes to see his relations and makes one Indian Ramble with them, there is no persuading him ever to return. [But] when white persons of either sex have been taken prisoners young by the Indians, and lived a while among them, tho’ ransomed by their Friends, and treated with all imaginable tenderness to prevail with them to stay among the English, yet in a Short time they become disgusted with our manner of life, and the care and pains that are necessary to support it, and take the first good Opportunity of escaping again into the Woods, from whence there is no reclaiming them.”

The life of the hunter-gatherer, however fraught, is less rationalized than the kind of life that’s represented by intensive agriculture, let alone modern manufacturing, transportation, wholesaling, retailing, and office work.

The hunter-gatherer isn’t a cog in a machine, he is the machine: the shareholder, the co-manager, the co-worker, and the consumer, all in one. His work with others is truly cooperative. It is like the execution of a game-winning touchdown by a football team, and unlike the passing of a product from stage to stage in an assembly line, or the passing of a virtual piece of paper from computer to computer.

The hunter-gatherer’s social milieu was truly societal:

Hunter-gatherer bands in the [Pleistocene] were in the range of 25 to 150 individuals: men, women, and children. These small bands would have sometimes formed larger agglomerations of up to a few thousand for the purpose of mate-seeking and defense, but this would have been unusual. The typically small size for bands meant that interactions within the group were face-to-face, with everyone knowing the name and something of the reputation and character of everyone else. Though group members would have engaged in some specializa­tion of labor beyond the normal sex distinctions (men as hunters, women as gatherers), specialization would not have been strict: all men, for example, would haft adzes, make spears, find game, kill, and dress it, and hunt in bands of ten to twenty individuals. [From Denis Dutton’s review of Paul Rubin’s Darwinian Politics: The Evolutionary Origin of Freedom.]

Nor is the limit of 150 unique to hunter-gatherer bands:

[C]ommunal societies — like those our ancestors lived in, or in any human group for that matter — tend to break down at about 150. Such is perhaps due to our limited brain capacity to know any more people that intimately, but it’s also due to the breakdown of reciprocal relationships like those discussed above — after a certain number (again, around 150).

A great example of this is given by Richard Stroup and John Baden in an old article about communal Hutterite colonies. (Hutterites are sort of like the Amish — or more broadly like Mennonites — but settled in different areas of North America.) Stroup, an economist at Montana State University, shared with me his Spring 1972 edition of Public Choice, wherein he and political scientist John Baden write:

In a relatively small colony, the proportional contribution of each member is greater. Likewise, surveillance of him by each of the others is more complete and an informal accounting of contribution is feasible. In a colony, there are no elaborate systems of formal controls over a person’s contribution. Thus, in general, the incentive and surveillance structures of a small or medium-size colony are more effective than those of a large colony and shirking is lessened.

Interestingly, according to Stroup and Baden, once the Hutterites reach Magic Number 150, they have a tradition of breaking off and forming another colony. This idea is echoed in Gladwell’s The Tipping Point, wherein he discusses successful companies that use 150 in their organizational models.

Had anyone known about this circa 1848, someone might have told Karl Marx that his theory [communism] could work, but only up to the Magic Number. [From Max Borders’s “The Stone Age Trinity“.]
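The arithmetic behind the Magic Number is worth spelling out: informal mutual surveillance of the kind Stroup and Baden describe must cover every pairwise relationship, and the number of pairs grows quadratically with group size. A minimal sketch:

```python
# Pairwise relationships in a group of n people: n * (n - 1) / 2.
def pairs(n: int) -> int:
    return n * (n - 1) // 2

print(pairs(25))    # 300 relationships in a small band
print(pairs(150))   # 11,175 at the oft-cited limit
print(pairs(1000))  # 499,500: far beyond face-to-face monitoring
```

A sixfold increase in group size, from 25 to 150, multiplies the relationships to be tracked by more than thirty-seven.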

What all of this means, of course, is that for the vast majority of people there’s no going back. How many among us are willing — really willing — to trade our creature comforts for the “simple life”? Few would be willing when faced with the reality of what the “simple life” means; for example, catching or growing your own food, dawn-to-post-dusk drudgery, nothing resembling culture as we know it (high or low), and lives that are far closer to nasty, brutish, and short than today’s norms.

Given that, it is important (nay, crucial) to cultivate an inner life of intellectual or spiritual satisfaction. Only that inner life — and the love and friendship of a small circle of fellows — can hold alienation at bay. Only that inner life — and love and close friendships — can give us serenity as civilization crumbles around us.

(See also “Alienation” and “Another Angle on Alienation“.)

The Paradox That Is Western Civilization

The main weakness of Western civilization is a propensity to tolerate ideas and actions that would undermine it. The paradox is that the main strength of Western civilization is a propensity to tolerate ideas and actions that would strengthen it. The survival and improvement of Western civilization requires carefully balancing the two propensities. It has long been evident in continental Europe and the British Isles that the balance has swung toward destructive toleration. The United States is rapidly catching up to Europe. At the present rate the intricate network of social relationships and norms that has made America great will be destroyed within a decade. Israel, if it remains staunchly defensive of its heritage, will be the only Western nation still worthy of the name.

(See also “Conservatism, Society, and the End of America” and “Another Take on the State of America“.)

Ad-Hoc Hypothesizing and Data Mining

An ad-hoc hypothesis is

a hypothesis added to a theory in order to save it from being falsified….

Scientists are often skeptical of theories that rely on frequent, unsupported adjustments to sustain them. This is because, if a theorist so chooses, there is no limit to the number of ad hoc hypotheses that they could add. Thus the theory becomes more and more complex, but is never falsified. This is often at a cost to the theory’s predictive power, however. Ad hoc hypotheses are often characteristic of pseudoscientific subjects.

An ad-hoc hypothesis can also be formed from an existing hypothesis (a proposition that hasn’t yet risen to the level of a theory) when the existing hypothesis has been falsified or is in danger of falsification. The (intellectually dishonest) proponents of the existing hypothesis seek to protect it from falsification by putting the burden of proof on the doubters rather than where it belongs, namely, on the proponents.

Data mining is “the process of discovering patterns in large data sets”. It isn’t hard to imagine the abuses that are endemic to data mining; for example, running regressions on the data until the “correct” equation is found, and excluding or adjusting portions of the data because their use leads to “counterintuitive” results.
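The regression-shopping abuse is easy to demonstrate. The following sketch (my own, not from any of the sources cited here) correlates pure noise against pure noise and reports the strongest "relationship" found:

```python
# Test enough random predictors against a random outcome and some will
# look "significant" purely by chance: the hazard of unchecked data mining.
import random

random.seed(1)
n, trials = 30, 100
y = [random.gauss(0, 1) for _ in range(n)]

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((z - mb) ** 2 for z in b) ** 0.5
    return cov / (sa * sb)

best = max(
    abs(corr(y, [random.gauss(0, 1) for _ in range(n)])) for _ in range(trials)
)
print(f"strongest 'relationship' among {trials} pure-noise predictors: r = {best:.2f}")
```

The best of the hundred noise predictors will typically show a correlation strong enough to clear a conventional significance test, even though nothing real connects it to the outcome.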

Ad-hoc hypothesizing and data mining are two sides of the same coin: intellectual dishonesty. The former is overt; the latter is covert. (At least, it is covert until someone gets hold of the data and the analysis, which is why many “scientists” and “scientific” journals have taken to hiding the data and obscuring the analysis.) Both methods are justified (wrongly) as being consistent with the scientific method. But the ad-hoc theorizer is just trying to rescue a falsified hypothesis, and the data miner is just trying to conceal information that would falsify his hypothesis.

From what I have seen, the proponents of the human activity>CO2>”global warming” hypothesis have been guilty of both kinds of quackery: ad-hoc hypothesizing and data mining (with a lot of data manipulation thrown in for good measure).

Another Take on the State of America

Samuel J. Abrams, writing at newgeography, alleges that “America’s Regional Variations Are Wildly Overstated“. According to Abrams,

[p]erhaps the most widely accepted and popular idea of regional differences comes from Colin Woodard who carves the country into 11 regional nations each with unique histories and distinct cultures that he believes has shaped the ideologies and politics at play today….

Woodard argues that regions project “[a] force that you feel that’s there, and those sort of assumptions and givens about politics, and culture, and different social relationships.” Yet the problem with Woodard’s argument is that while these histories and memoirs are fascinating, they are not necessarily representative of what drives politics and society among those living in various regions around the country. New data from the AEI survey on Community and Society makes it clear that recent accounts of America splintering does not hold up to empirical scrutiny and are appreciably overstated.

In what follows, you will see references to Woodard’s 11 “nations”, which look like this:

Abrams, drawing on the AEI survey of which he is a co-author, tries to show how alike the “nations” are statistically; for example:

The Deep South … is widely viewed as a conservative bastion given its electoral history but the data tells [sic] a different story[:] 39% of those in the Deep South identify as somewhat, very, or extremely conservative while 23% are somewhat, very or extremely liberal. There are more residents in the region who identify or even lean to the right compared to the left but 37% of Southerners assert themselves as moderate or do not think about themselves ideologically at all. Thus the South is hardly a conservative monoculture – almost a quarter of the population is liberal. Similarly, in the progressive northeast region that is Yankeedom, only 31% of its residents state that they are liberal to some degree compared to 26% conservative but plurality is in the middle with 43%….

Religion presents a similar picture where 47% of Americans nationally hold that religious faith is central or very important to their lives and 10 of the 11 regions are within a handful [of] points of the average except the Left Coast which drops to 26%….

The AEI survey asks about the number of close friends one has and 73% of Americans state that they have between 1 and 5 close friends today. Regional variation is minor here but what is notable is that Yankeedom with its urban history and density is actually the lowest at 68% while the Deep South and its sprawl has the highest rate of 81%.

Turning to communities specifically, the survey asks respondents about how well they know their neighbors. A majority, 54% of Americans, gave positive responses – very and fairly well. The Deep South, El Norte and Far West all came in at 49% – the low end – and at the high end was 61% for the Midlands and 58% for New England. The remaining regions were within a few points of the national average….

[T]he survey asked about helping out one’s neighbor by doing such things as watching each other’s children, helping with shopping, house sitting, picking up newspapers or packages, lending tools and other similar things. These are relatively small efforts and 38% of Americans help their neighbors a few times a month or more often. Once again, the regions hover around this average with the Far West, New Netherlands, and the Left Coast being right in the middle. Those in the Midlands and Yankeedom – New England – were at 41% and El Norte at 30% were the least helpful. As before, there are minor differences from the average but they are relatively small with no region being an outlier in terms of being far more or less engaged communally.

Actually, Abrams has admitted to some significant gaps:

The Deep South is 39 percent conservative; Yankeedom, only 26 percent.

The Left Coast is markedly less religious than the rest of the country.

Denizens of the Deep South have markedly more friends than do inhabitants of Yankeedom (a ratio of 81:68).

Residents of the Midlands and New England are much more neighborly than are residents of The Deep South, El Norte, and the Far West (ratios of 61:49 and 58:49).

Residents of The Midlands and Yankeedom are much more helpful to their neighbors than are residents of El Norte (ratio of 41:30).

It’s differences like those that distinguish the regions. Abrams’s effort to minimize the differences is akin to saying that humans and chimps are pretty much alike because 96 percent of human genes are the same as chimp genes.
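To put a number on that point, here is a back-of-the-envelope sketch (the percentages are those quoted above from Abrams’s summary of the AEI survey; the labels are my own shorthand) expressing each gap as a relative difference rather than a raw point spread:

```python
# A rough sketch: percentages are those quoted from Abrams's summary
# of the AEI survey; labels are my own shorthand.
gaps = {
    "conservative self-ID (Deep South vs. Yankeedom)": (39, 26),
    "religion central/very important (national vs. Left Coast)": (47, 26),
    "1-5 close friends (Deep South vs. Yankeedom)": (81, 68),
    "know neighbors well (Midlands vs. low-end regions)": (61, 49),
    "help neighbors monthly (Midlands/Yankeedom vs. El Norte)": (41, 30),
}

for label, (high, low) in gaps.items():
    # How far the high region exceeds the low one, in percent.
    relative_gap = (high - low) / low * 100
    print(f"{label}: {high}% vs. {low}% (relative gap: {relative_gap:.0f}%)")
```

On that reckoning, the “low” group trails the “high” group by anywhere from roughly a fifth (close friends) to four-fifths (religiosity), which is hardly evidence of uniformity.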

Moreover, Abrams hasn’t a thing to say about trends. Judging by the trends, it’s hard not to conclude that regional differences are growing.

Call me a cock-eyed pessimist.