Science and Understanding

Not Over the Hill

The Washington Post reports on some research about intelligence that is as irrelevant as the candle problem. Specifically:

[R]esearchers at Canada’s Simon Fraser University … have found that measurable declines in cognitive performance begin to occur at age 24. In terms of brainpower, you’re over the hill by your mid-20s.

The researchers measured this by studying the performance of thousands of players of Starcraft 2, a strategy video game….

Even worse news for those of us who are cognitively over-the-hill: the researchers find “no evidence that this decline can be attenuated by expertise.” Yes, we get wiser as we get older. But wisdom doesn’t substitute for speed. At best, older players can only hope to compensate “by employing simpler strategies and using the game’s interface more efficiently than younger players,” the authors say.

So there you have it: scientific evidence that we cognitively peak at age 24. At that point, you should probably abandon any pretense of optimism and accept that your life, henceforth, will be a steady descent into mediocrity, punctuated only by the bitter memories of the once seemingly-endless potential that you so foolishly squandered in your youth. Considering that the average American lives to be 80, you’ll have well over 50 years to do so! (Christopher Ingraham, “Your Brain Is Over the Hill by Age 24,” April 16, 2014)

Happily, Starcraft 2 is far from a representation of the real world. Take science, for example. I went to Wikipedia and obtained the list of all Nobel laureates in physics. It’s a long list, so I sampled it — taking the winners for the first five years (1901-1905), the middle five years (1955-1959) and the most recent five years (2009-2013). Here’s a list of the winners for those 15 years, and the approximate age of each winner at the time he or she did the work for which the prize was awarded:

1901 Wilhelm Röntgen (50)

1902 Hendrik Lorentz (43) and Pieter Zeeman (31)

1903 Henri Becquerel (44), Pierre Curie (37), and Marie Curie (29)

1904 Lord Rayleigh (52)

1955 Willis Lamb (34) and Polykarp Kusch (40)

1956 John Bardeen (39), Walter Houser Brattain (45), and William Shockley (37)

1957 Chen Ning Yang (27) and Tsung-Dao Lee (23)

1958 Pavel Cherenkov (30), Ilya Frank (26), and Igor Tamm (39)

1959 Emilio G. Segrè (50) and Owen Chamberlain (35)

2009 Charles K. Kao (33), Willard S. Boyle (45), and George E. Smith (39)

2010 Andre Geim (46) and Konstantin Novoselov (34)

2011 Saul Perlmutter (39), Adam G. Riess (29), and Brian Schmidt (31)

2012 Serge Haroche (40-50) and David J. Wineland (40-50)

2013 François Englert (32) and Peter W. Higgs (35)

There’s exactly one person within a year of age 24 (Tsung-Dao Lee, 23), and a few others who were still in their (late) 20s. Most of the winners were in their 30s and 40s when they accomplished their prize-worthy scientific feats. And three of the winners (Röntgen, Rayleigh, and Segrè) were in their 50s.

Let’s turn to so-called physical pursuits, which often combine brainpower (anticipation, tactical improvisation, hand-eye coordination) and pure physical skill (strength and speed). Baseball exemplifies such a pursuit. Do ballplayers go sharply downhill after the age of 24? Hardly. On average, they’re just entering their best years at age 24, and they perform at peak level for several years.

I’ll use two charts to illustrate the point about ballplayers. The first depicts normalized batting average vs. age for 86 of the leading hitters in the history of the American League*:

[Chart: Normalized batting average by age for 86 leading American League hitters]

Because of the complexity of the spreadsheet from which the numbers are taken, I was unable to derive a curve depicting mean batting average vs. age. But the density of the plot lines suggests that the peak age for batting average begins at 24 and extends into the early 30s. Further, with relatively few exceptions, batting performance doesn’t decline sharply until the late 30s.

Among a more select group of players, and by a different measure of performance, the peak years occur at ages 24-28, with a slow decline after 28**:

[Chart: Offensive average by age for 25 leading hitters]

The two graphs suggest to me that ballplayers readily compensate for physical decline (such as it is) by applying the knowledge they acquire in the course of playing the game. Such knowledge would include “reading” pitchers to make better guesses about the pitch that’s coming, knowing where to hit a ball in a certain ballpark against certain fielders, judging the right moment to attempt a stolen base against a certain pitcher-catcher combination, hitting to the opposite field on occasion instead of trying to pull the ball every time, and so on.

I strongly suspect that what is true in baseball is true in many walks of life: Wisdom — knowledge culled from experience — compensates for pure brainpower, and continues to do so for a long time. The Framers of the Constitution, who weren’t perfect but who were astute observers of the human condition, knew as much. That’s why they set 35 as the minimum age for election to the presidency. (Subsequent history — notably, the presidencies of TR, JFK, Clinton, and Obama — tells us that the Framers should have made it 50.)

I do grow weary of pseudo-scientific crap like the research reported in the Post. But it does give me something to write about. And most of the pseudo-science is harmless, unlike the statistical lies on which global-warming hysteria is based.

__________
* The numbers are drawn from the analysis described in detail here and here, which is based on statistics derived through the Play Index at Baseball-Reference.com. The bright red line represents Ty Cobb’s career, which deserves special mention because of Cobb’s unparalleled dominance as a hitter-for-average over a 24-year career, and especially for ages 22-32. I should add that Cobb’s dominance has been cemented by Ichiro Suzuki’s sub-par performance in the three seasons since I posted this, wherein I proclaimed Cobb the American League’s best all-time hitter for average, taking age into account. (There’s no reason to think that the National League has ever hosted Cobb’s equal.)

** This is an index, where 100 represents parity with the league average. I chose the 25 players represented here from a list of career leaders in OPS+ (on-base percentage plus slugging average, normalized for league averages and park factors). Because of significant changes in rules and equipment in the late 1800s and early years of the 1900s (see here, here, and here), players whose careers began before 1907 were eliminated, excepting Cobb, who didn’t become a regular player until 1906. Also eliminated were Barry Bonds and Mark McGwire, whose drug-fueled records don’t merit recognition, and Joey Votto, who has completed only eight seasons. Offensive Average (OA) avoids the double-counting inherent in OPS+, which also (illogically) sums two fractions with different denominators. OA measures a player’s total offensive contribution (TOC) per plate appearance (PA) in a season, normalized by the league average for that season. TOC = singles + doubles x 2 + triples x 3 + home runs x 4 + stolen bases – times caught stealing + walks – times grounded into a double play + sacrifice hits + sacrifice flies. In the graph, Cobb seems to disappear into the (elite) crowd after age 24, but that’s an artifact of Cobb’s preferred approach to the game — slapping hits and getting on base — not his ability to hit the long ball, for which extra credit is given in computing OA. (See this, for example.)
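
For the programming-minded, here is a minimal sketch (in Python) of the OA arithmetic just described. It is illustrative only: the function and field names are mine, not drawn from the spreadsheet behind the graph, and the 100-point scaling simply reflects the statement above that 100 represents parity with the league average.

```python
# Hypothetical sketch of the Offensive Average (OA) calculation described above.
# Field names and function signatures are illustrative, not taken from the source data.

def total_offensive_contribution(s):
    """Total offensive contribution (TOC) from a dict of season counting stats."""
    return (s["singles"]
            + 2 * s["doubles"]
            + 3 * s["triples"]
            + 4 * s["home_runs"]
            + s["stolen_bases"] - s["caught_stealing"]
            + s["walks"]
            - s["gidp"]               # times grounded into a double play
            + s["sac_hits"] + s["sac_flies"])

def offensive_average(player_season, league_season):
    """OA index: player's TOC per plate appearance, normalized so that 100 = league average."""
    player_rate = total_offensive_contribution(player_season) / player_season["plate_appearances"]
    league_rate = total_offensive_contribution(league_season) / league_season["plate_appearances"]
    return 100 * player_rate / league_rate
```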

The Limits of Science (II)

The material of the universe — be it called matter or energy — has three essential properties: essence, emanation, and effect. Essence — what things really “are” — is the most elusive of the properties, and probably unknowable. Emanations are the perceptible aspects of things, such as their detectable motions and electromagnetic properties. Effects are what things “do” to other things, as in the effect that a stream of photons has on paper when the photons are focused through a magnifying glass. (You’ve lived a bland life if you’ve never started a fire that way.)

Science deals in emanations and effects. It seems that these can be described without knowing what matter-energy “really” consists of. But can they?

Take a baseball. Assume, for the sake of argument, that it can’t be opened and separated into constituent parts, which are many. (See the video at this page for details.) Taking the baseball as a fundamental particle, its attributes (seemingly) can be described without knowing what’s inside it. Those attributes include the distance that it will travel when hit by a bat, when the ball and bat (of a certain weight) meet at certain velocities and at certain angles, given the direction and speed of rotation of the ball when it meets the bat, ambient temperature and relative humidity, and so on.

And yet, the baseball can’t be treated as if it were a fundamental particle. The distance that it will travel, everything else being the same, depends on the material at its core, the size of the core, the tightness of the windings of yarn around the core, the types of yarn used in the windings, the tightness of the cover, the flatness of the stitches that hold the cover in place, and probably several other things.

This suggests to me that the emanations and effects of an object depend on its essence — at least in the everyday world of macroscopic objects. If that’s so, why shouldn’t it be the same for the world of objects called sub-atomic particles?

Which leads to some tough questions: Is it really the case that all of the particles now considered elementary are really indivisible? Are there other elementary particles yet to be discovered or hypothesized, and will some of those be constituents of particles now thought to be elementary? And even if all of the truly elementary particles are discovered, won’t scientists still be in the dark as to what those particles really “are”?

The progress of science should be judged by how much scientists know about the universe and its constituents. By that measure — and despite what may seem to be a rapid pace of discovery — it is fair to say that science has a long way to go — probably forever.

Scientists, who tend to be atheists, like to refer to the God of the gaps, a “theological perspective in which gaps in scientific knowledge are taken to be evidence or proof of God’s existence.” The smug assumption implicit in the use of the phrase by atheists is that science will close the gaps, and that there will be no room left for God.

It seems to me that the shoe is really on the other foot. Atheistic scientists assume that the gaps in their knowledge are relatively small ones, and that science will fill them. How wrong they are.

*     *     *

Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
A Theory of Everything, Occam’s Razor, and Baseball
Evolution and Religion
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness

Home-Field Advantage

You know it’s real. Just how real? Consider this:

[Chart: Home-field advantage]
Derived from statistics available through the Play Index subscription service of Baseball-Reference.com.

In the years 1901 through 2013, major league teams won 54 percent of their home games and only 46 percent of their road games, for a home/road (H/R) ratio of 1.181. Only one team has lost more than half of its home games: San Diego, with 1,791 wins against 1,793 losses (which rounds to a W-L record of .500). The Padres have nevertheless done better at home than on the road, with an H/R ratio of 1.171.
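
For those who want to check the arithmetic, here is a minimal sketch of the H/R calculation, assuming (as the figures above imply) that the ratio is simply home W-L% divided by road W-L%. The function names are mine.

```python
# Hypothetical sketch: home/road (H/R) ratio as the ratio of home W-L% to road W-L%.

def wl_pct(wins, losses):
    """Won-lost percentage: W / (W + L)."""
    return wins / (wins + losses)

def home_road_ratio(home_wins, home_losses, road_wins, road_losses):
    return wl_pct(home_wins, home_losses) / wl_pct(road_wins, road_losses)

# Using the rounded league-wide percentages cited above (not the raw totals):
print(0.54 / 0.46)   # about 1.17-1.18, in line with the 1.181 figure, which rests on unrounded totals
```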

No team has done better on the road than at home. Two expansion teams — Los Angeles Angels (1961) and New York Mets (1962) — have come closest. But their home records are still significantly better than their road records: H/R ratios of 1.132 and 1.139, respectively.

The Colorado Rockies have the best H/R ratio — 1.381 — mainly because Rockies teams have been tailored to do well at their home park in mile-high Denver. Accordingly, they have done poorly at lower altitudes, where (for example) high fly balls don’t as often become home runs.

The New York Yankees, unsurprisingly, have been the best at home and on the road. Further, the Yankees franchise is the only one with a road record above .500 for the past 113 years.

The importance of playing at home is perhaps best captured by these averages for 1901-2013:

  • The mighty Yankees compiled their enviable home record by outscoring opponents by only 0.89 run per game.
  • The second- and third-best Giants and Red Sox bested visitors by only 0.49 and 0.46 runs per game, respectively.
  • The lopsided Rockies compiled by far the biggest home-minus-road scoring gap: 1.04 runs per game.
  • Eleven of the 30 franchises were outscored at home, but only the Padres had a (barely) losing record at home.
  • Only 4 of 30 franchises — Yankees, Giants, Dodgers, and Cardinals — outscored opponents on the road as well as at home.
  • Every franchise had a better average margin of victory at home than on the road.
  • Home teams (on average) outscored their opponents by only 0.16 runs.

Home-field advantage is a fragile but very real thing.

Not-So-Random Thoughts (IX)

Demystifying Science

In a post with that title, I wrote:

“Science” is an unnecessarily daunting concept to the uninitiated, which is to say, almost everyone. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — often are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

Just how unwarranted is the “authority” that is lent by publication in a scientific journal?

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think. . . .

In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one . . . paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” As he told the quadrennial International Congress on Peer Review and Biomedical Publication, held this September [2013] in Chicago, the problem has not gone away. (The Economist, “Trouble at the Lab,” October 19, 2013)

Tell me again about anthropogenic global warming.

The “Little Ice Age” Redux?

Speaking of AGW, remember the “Little Ice Age” of the 1970s?

George Will does. As do I.

One Sunday morning in January or February of 1977, when I lived in western New York State, I drove to the news stand to pick up my Sunday Times. I had to drive my business van because my car wouldn’t start. (Odd, I thought.) I arrived at the stand around 8:00 a.m. The temperature sign on the bank across the street then read -16 degrees (Fahrenheit). The proprietor informed me that when he opened his shop at 6:00 a.m. the reading was -36 degrees.

That was the nadir of the coldest winter I can remember. The village reservoir froze in January and stayed frozen until March. (The fire department had to pump water from the Genesee River to the village’s water-treatment plant.) Water mains were freezing solid, even though they were 6 feet below the surface. Many homeowners had to keep their faucets open a trickle to ensure that their pipes didn’t freeze. And, for the reasons cited in Will’s article, many scientists — and many Americans — thought that a “little ice age” had arrived and would be with us for a while.

But science is often inconclusive and just as often slanted to serve a political agenda. (Also, see this.) That’s why I’m not ready to sacrifice economic growth and a good portion of humanity on the altar of global warming and other environmental fads.

Well, the “Little Ice Age” may return, soon:

[A] paper published today in Advances in Space Research predicts that if the current lull in solar activity “endures in the 21st century the Sun shall enter a Dalton-like grand minimum. It was a period of global cooling.” (Anthony Watts, “Study Predicts the Sun Is Headed for a Dalton-like Solar Minimum around 2050,” Watts Up With That?, December 2, 2013)

The Dalton Minimum, named after English astronomer John Dalton, lasted from 1790 to 1830.

Bring in your pets and plants, cover your pipes, and dress warmly.

Madison’s Fatal Error

Timothy Gordon writes:

After reading Montesquieu’s most important admonitions in Spirit of the Laws, Madison decided that he could outsmart him. The Montesquieuan admonitions were actually limitations on what a well-functioning republic could allow, and thus, be. And Madison got greedy, not wanting to abide by those limitations.

First, Montesquieu required republican governments to maintain limited geographic scale. Second, Montesquieu required republican governments to preside over a univocal people of one creed and one mind on most matters. A “res publica” is a public thing valued by each citizen, after all. “How could this work when a republic is peopled diversely?” the faithful Montesquieuan asks. (Nowadays in America, for example, half the public values liberty and the other half values equality, its eternal opposite.) Thirdly—and most important—Montesquieu mandated that the three branches of government were to hold three distinct, separate types of power, without overlap.

Before showing just how correct Montesquieu was—and thus, how incorrect Madison was—it must be articulated that in the great ratification contest of 1787-1788, there operated only one faithful band of Montesquieu devotees: the Antifederalists. They publicly pointed out how superficial and misleading were the Federalist appropriations of Montesquieu within the new Constitution and its partisan defenses.

The first two of these Montesquieuan admonitions went together logically: a) limiting a republic’s size to a small confederacy, b) populated by a people of one mind. In his third letter, Antifederalist Cato made the case best:

“whoever seriously considers the immense extent of territory within the limits of the United States, together with the variety of its climates, productions, and number of inhabitants in all; the dissimilitude of interest, morals, and policies, will receive it as an intuitive truth, that a consolidated republican form of government therein, can never form a perfect union.”

Then, to bulwark his claim, Cato goes on to quote two sacred sources of inestimable worth: the Bible… and Montesquieu. Attempting to fit so many creeds and beliefs into such a vast territory, Cato says, would be “like a house divided against itself.” That is, it would not be a res publica, oriented at sameness. Then Cato goes on: “It is natural, says Montesquieu, to a republic to have only a small territory, otherwise it cannot long subsist.”

The teaching Cato references is simple: big countries of diverse peoples cannot be governed locally, qua republics, but rather require a nerve center like Washington D.C. wherefrom all the decisions shall be made. The American Revolution, Cato reminded his contemporaries, was fought over the principle of local rule.

To be fair, Madison honestly—if wrongly—figured that he had dialed up the answer, such that the United States could be both vast and pluralistic, without the consequent troubles forecast by Montesquieu. He viewed the chief danger of this combination to lie in factionalization. One can either “remove the cause [of the problem] or control its effects,” Madison famously prescribed in “Federalist 10.”

The former solution (“remove the cause”) suggests the Montesquieuan way: i.e. remove the plurality of opinion and the vastness of geography. Keep American confederacies small and tightly knit. After all, victory in the War of Independence left the thirteen colonies thirteen small, separate countries, contrary to President Lincoln’s rhetoric four score later. Union, although one possible option, was not logically necessary.

But Madison opted for the latter solution (“control the effects”), viewing union as vitally indispensable and thus, Montesquieu’s teaching as regrettably dispensable: allow size, diversity, and the consequent factionalization. Do so, he suggested, by reducing them to nothing…with hyper-pluralism. Madison deserves credit: for all its oddity, the idea actually seemed to work… for a time. . . . (“James Madison’s Nonsense-Coup Against Montesquieu (and the Classics Too),” The Imaginative Conservative, December 2013)

The rot began with the advent of the Progressive Era in the late 1800s, and it became irreversible with the advent of the New Deal, in the 1930s. As I wrote here, Madison’s

fundamental error can be found in . . . Federalist No. 51. Madison was correct in this:

. . . It is of great importance in a republic not only to guard the society against the oppression of its rulers, but to guard one part of the society against the injustice of the other part. Different interests necessarily exist in different classes of citizens. If a majority be united by a common interest, the rights of the minority will be insecure. . . .

But Madison then made the error of assuming that, under a central government, liberty is guarded by a diversity of interests:

[One method] of providing against this evil [is] . . . by comprehending in the society so many separate descriptions of citizens as will render an unjust combination of a majority of the whole very improbable, if not impracticable. . . . [This] method will be exemplified in the federal republic of the United States. Whilst all authority in it will be derived from and dependent on the society, the society itself will be broken into so many parts, interests, and classes of citizens, that the rights of individuals, or of the minority, will be in little danger from interested combinations of the majority.

In a free government the security for civil rights must be the same as that for religious rights. It consists in the one case in the multiplicity of interests, and in the other in the multiplicity of sects. The degree of security in both cases will depend on the number of interests and sects; and this may be presumed to depend on the extent of country and number of people comprehended under the same government. This view of the subject must particularly recommend a proper federal system to all the sincere and considerate friends of republican government, since it shows that in exact proportion as the territory of the Union may be formed into more circumscribed Confederacies, or States oppressive combinations of a majority will be facilitated: the best security, under the republican forms, for the rights of every class of citizens, will be diminished: and consequently the stability and independence of some member of the government, the only other security, must be proportionately increased. . . .

In fact, as Montesquieu predicted, diversity — in the contemporary meaning of the word — is inimical to civil society and thus to ordered liberty. Exhibit A is a story by Michael Jonas about a study by Harvard political scientist Robert Putnam, “E Pluribus Unum: Diversity and Community in the Twenty-first Century”:

It has become increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.

But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings. . . .

. . . Putnam’s work adds to a growing body of research indicating that more diverse populations seem to extend themselves less on behalf of collective needs and goals.

His findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis in June in the journal Scandinavian Political Studies, he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

“Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk,” wrote conservative commentator Ilana Mercer, in a recent Orange County Register op-ed titled “Greater diversity equals more misery.”. . .

The results of his new study come from a survey Putnam directed among residents in 41 US communities, including Boston. Residents were sorted into the four principal categories used by the US Census: black, white, Hispanic, and Asian. They were asked how much they trusted their neighbors and those of each racial category, and questioned about a long list of civic attitudes and practices, including their views on local government, their involvement in community projects, and their friendships. What emerged in more diverse communities was a bleak picture of civic desolation, affecting everything from political engagement to the state of social ties. . . .

. . . In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.” “People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes. . . . (“The Downside of Diversity,” The Boston Globe (boston.com), August 5, 2007)

See also my posts, “Liberty and Society,” “The Eclipse of ‘Old America’,” and “Genetic Kinship and Society.” And these: “Caste, Crime, and the Rise of Post-Yankee America” (Theden, November 12, 2013) and “The New Tax Collectors for the Welfare State” (Handle’s Haus, November 13, 2013).

Libertarian Statism

Finally, I refer you to David Friedman’s “Libertarian Arguments for Income Redistribution” (Ideas, December 6, 2013). Friedman notes that “Matt Zwolinski has recently posted some possible arguments in favor of a guaranteed basic income or something similar.” Friedman then dissects Zwolinski’s arguments.

Been there, done that. See my posts, “Bleeding-Heart Libertarians = Left-Statists” and “Not Guilty of Libertarian Purism,” wherein I tackle the statism of Zwolinski and some of his co-bloggers at Bleeding Heart Libertarians. In the second-linked post, I say that

I was wrong to imply that BHLs [Bleeding Heart Libertarians] are connivers; they (or too many of them) are just arrogant in their judgments about “social justice” and naive when they presume that the state can enact it. It follows that (most) BHLs are not witting left-statists; they are (too often) just unwitting accomplices of left-statism.

Accordingly, if I were to re-title ["Bleeding-Heart Libertarians = Left-Statists"] I would call it “Bleeding-Heart Libertarians: Crypto-Statists or Dupes for Statism?”.

*     *     *

Other posts in this series: I, II, III, IV, V, VI, VII, VIII

Evolution and Race

UPDATED 11/24/13 AND 02/11/14

Have you read about Skull 5, a 1.8-million-year-old fossil? Well, it has been in the news lately. Here’s some of the coverage:

Scientists trying to unravel the origins of humanity mostly study scraps — some ancient teeth here, a damaged bone there. But now a lucky research team has uncovered a fossil superstar: the first complete skull of an early human adult from the distant past.

The 1.8-million-year-old fossil, known as Skull 5, is like nothing seen before. It has a small brain case and a heavy, jutting jaw, as did some of humanity’s older, more apelike ancestors. But other bones linked to Skull 5 show its owner had relatively short arms and long legs, as does our own species, Homo sapiens. Those who’ve studied Skull 5 say it also provides support for the provocative idea that, 1.8 million years ago, only one kind of early human held sway, rather than the throng of different species listed in today’s textbooks….

Paleoanthropologist Susan Antón of New York University, while praising the new analysis, says the Dmanisi team didn’t compare fossil features, such as the anatomy around the front teeth, that differ most starkly between two different species of early humans. So the Dmanisi team’s hypothesis that there was only one lineage is not totally convincing, she says… (Traci Watson, “Skull Discovery Sheds Light on Human Species,” USA Today, October 17, 2013)

Here’s more:

In the eastern European nation of Georgia, a group of researchers has excavated a 1.8 million-year-old skull of an ancient human relative, whose only name right now is Skull 5. They report their findings in the journal Science, and say it belongs to our genus, called Homo.

“This is most complete early Homo skull ever found in the world,” said lead study author David Lordkipanidze, researcher at the Georgian National Museum in Tbilisi….

The variation in physical features among the Dmanisi hominid specimens is comparable to the degree of diversity found in humans today, suggesting that they all belong to one species, Lordkipanidze said….

Now it gets more controversial: Lordkipanidze and colleagues also propose that these individuals are members of a single evolving Homo erectus species, examples of which have been found in Africa and Asia. The similarities between the new skull from Georgia and Homo erectus remains from Java, Indonesia, for example, may mean there was genetic “continuity across large geographic distances,” the study said.

What’s more, the researchers suggest that the fossil record of what have been considered different Homo species from this time period — such as Homo ergaster, Homo rudolfensis and Homo habilis — could actually be variations on a single species, Homo erectus. That defies the current understanding of how early human relatives should be classified….

The Dmanisi fossils are a great find, say anthropology researchers not involved with the excavation. But they’re not sold on the idea that this is the same Homo erectus from both Africa and Asia — or that individual Homo species from this time period are really all one species.

“The specimen is wonderful and an important contribution to the hominin record in a temporal period where there are woefully too few fossils,” said Lee Berger, paleoanthropologist at the University of the Witwatersrand in Johannesburg, in an e-mail.

But the suggestion that these fossils prove an evolving lineage of Homo erectus in Asia and Africa, Berger said, is “taking the available evidence too far.”

…He criticized the authors of the new study for not comparing the fossils at Dmanisi to A. sediba or to more recent fossils found in East Africa…. (Elizabeth Landau, “Skull Sparks Human Evolution Controversy,” CNN, October 19, 2013)

I will go further and say this: Even if 1.8 million years ago there was a single species from which today’s human beings are descended, today’s human beings don’t necessarily belong to a single species or sub-species.

In fact, some reputable scientists have advanced a theory that is consistent with racial divergence:

Gregory Cochran and Henry Harpending begin The 10,000 Year Explosion [link added] with a remark from the paleontologist Stephen J. Gould, who said that “there’s been no biological change in humans for 40,000 or 50,000 years.” They also cite the evolutionist Ernst Mayr, who agrees that “man’s evolution towards manness suddenly came to a halt” in the same epoch. Such claims capture the consensus in anthropology, too, which dates the emergence of “behaviorally modern humans” — beings who acted much more like us than like their predecessors — to about 45,000 years ago.

But is the timeline right? Did human evolution really stop? If not, our sense of who we are — and how we got this way — may be radically altered. Messrs. Cochran and Harpending, both scientists themselves, dismiss the standard view. Far from ending, they say, evolution has accelerated since humans left Africa 40,000 years ago and headed for Europe and Asia.

Evolution proceeds by changing the frequency of genetic variants, known as “alleles.” In the case of natural selection, alleles that enable their bearers to leave behind more offspring will become more common in the next generation. Messrs. Cochran and Harpending claim that the rate of change in the human genome has been increasing in recent millennia, to the point of turmoil. Literally hundreds or thousands of alleles, they say, are under selection, meaning that our social and physical environments are favoring them over other — usually older — alleles. These “new” variants are sweeping the globe and becoming more common.

But genomes don’t just speed up their evolution willy-nilly. So what happened, the authors ask, to keep human evolution going in the “recent” past? Two crucial events, they contend, had to do with food production. As humans learned the techniques of agriculture, they abandoned their diffuse hunter-gatherer ways and established cities and governments. The resulting population density made humans ripe for infectious diseases like smallpox and malaria. Alleles that helped protect against disease proved useful and won out.

The domestication of cattle for milk production also led to genetic change. Among people of northern European descent, lactose intolerance — the inability to digest milk in adulthood — is unusual today. But it was universal before a genetic mutation arose about 8,000 years ago that made lactose tolerance continue beyond childhood. Since you can get milk over and over from a cow, but can get meat from it only once, you can harvest a lot more calories over time for the same effort if you are lactose tolerant. Humans who had this attribute would have displaced those who didn’t, all else being equal. (If your opponent has guns and you don’t, drinking milk won’t save you.)

To make their case for evolution having continued longer than is usually claimed, Messrs. Cochran and Harpending remind us that dramatic changes in human culture appeared about 40,000 years ago, resulting in painting, sculpture, and better tools and weapons. A sudden change in the human genome, they suggest, made for more creative, inventive brains. But how could such a change come about? The authors propose that the humans of 40,000 years ago occasionally mated with Neanderthals living in Europe, before the Neanderthals became extinct. The result was an “introgression” of Neanderthal alleles into the human lineage. Some of those alleles may have improved brain function enough to give their bearers an advantage in the struggle for survival, thus becoming common.

In their final chapter, Messrs. Cochran and Harpending venture into recorded history by observing two interesting facts about Ashkenazi Jews (those who lived in Europe after leaving the Middle East): They are disproportionately found among intellectual high-achievers — Nobel Prize winners, world chess champions, people who score well on IQ tests — and they are victims of rare genetic diseases, like Gaucher’s and Tay-Sachs. The authors hypothesize that these two facts are connected by natural selection.

Just as sickle-cell anemia results from having two copies of an allele that protects you against malaria if you have just one, perhaps each Ashkenazi disease occurs when you have two copies of an allele that brings about something useful when you have just one. That useful thing, according to Messrs. Cochran and Harpending, is higher cognitive ability. They argue that the rare diseases are unfortunate side-effects of natural selection for intelligence, which Messrs. Cochran and Harpending think happened during the Middle Ages in Europe, when Jews rarely intermarried with other Europeans. (Christopher F. Chabris, “Last-Minute Changes,” The Wall Street Journal, February 12, 2009)

It is said that, despite the differences across races, all human beings have in common 96 percent of their genes. Well, if I told you that humans and chimpanzees have about the same percentage of their genes in common, would you consider chimpanzees to be nothing more than superficially different human beings who belong to the same sub-species? Just remember this: The “species problem” remains unsolved.

So what if human beings belong to a variety of different sub-species? A candid scientific admission of that fact would put an end to the nonsense that “we’re all the same under the skin.” We’re not, and it’s long past time to own up to it, and to quit using the power of the state to strive for a kind of equality that is unattainable.

UPDATE (11/24/13):

And there are some who prefer to be sub-human.

UPDATE (02/11/14):

Although there are out-and-out disbelievers and cautious skeptics, some recent research in a field known as epigenetics suggests that behavioral conditioning can yield heritable traits. If true, it means that evolution is shaped by cultural influences, thus reinforcing positive traits (e.g., hard work and law-abidingness) among those people who possess and inculcate such traits, while also reinforcing negative traits (e.g., violence, shiftlessness) among those people who possess and inculcate such traits.

*     *     *

Related reading:
Gregory Gorelik and Todd D. Shackleford, “A Review of Gregory Cochran and Henry Harpending, The 10,000 Year Explosion: How Civilization Accelerated Human Evolution,” Evolutionary Psychology, 2010. 8(1): 113-118
Carl Zimmer, “Christening the Earliest Members of Our Genus,” The New York Times, October 24, 2013

Related posts:
Race and Reason: The Derbyshire Debacle
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications

Do Managers Make a Difference?

INTRODUCTION

The activity of managing ranges from the supervision of one other person in the performance of a menial task to the supervision of the executive branch of the government of the United States. (The latter is a fair description of a president’s constitutional responsibility.) And there are many criteria for judging managers, not all of which are unambiguous or conducive to precise quantification. It may be easy, for example, to determine whether a ditch was dug on time and within budget. But what if the manager’s methods alienated workers, causing some of them to quit when the job was done and requiring the company to recruit and train new workers at some expense?

Or consider the presidency. What determines whether an incumbent is doing a good job? Polls? They are mere opinions, mostly based on impressions and political preferences, not hard facts. The passage by Congress of legislation proposed by the president? By that measure, Obama earns points for the passage of the Affordable Care Act, which if not repealed will make health care less affordable and less available.

Given the impossibility of arriving at a general answer to the title question, I will turn — as is my wont — to the game of baseball. You might think that the plethora of baseball statistics would yield an unambiguous answer with respect to major-league managers. As you’ll see, that’s not so.

WHAT BASEBALL STATISTICS REVEAL (OR DON’T)

Data Source

According to this page at Baseball-Reference.com, 680 different men have managed teams in the history of major-league baseball, which is considered to have begun in 1871 with the founding of the National Association. Instead of reaching that far back into the past, when the game was primitive by comparison with today’s game, I focus on men whose managing careers began in 1920 or later. It was 1920 that marked the beginning of the truly modern era of baseball, with its emphasis on power hitting. (This modern era actually consists of six sub-eras. See this and this.) In this modern era, which now spans 1920 through 2013, 399 different men have managed major-league teams. That is a sizable sample from which I had hoped to draw firm judgments about whether baseball managers, or some of them, make a difference.

Won-Lost Record

The “difference” in question is a manager’s effect — or lack thereof — on the success of his team, as measured by its won-lost (W-L) record. For the benefit of non-fans, W-L record, usually denoted W-L%, is determined by the following simple equation: W/(W + L), that is, games won divided by games won plus games lost. (The divisor isn’t number of games played because sometimes, though rarely, a baseball game is played to a tie.) Thus a team that wins 81 of its 162 games in a season has a W-L record of .500 for that season. (In baseball statistics, it is customary to omit the “0” before the decimal point, contrary to mathematical convention.)

Quantifying Effectiveness

I’m about to throw some numbers at you. But I must say more about the samples that I used in my analysis. The aggregate-level analysis described in the next section draws on the records of a subset of the 399 men whose managerial careers are encompassed in the 1920-2013 period. The subset consists of the 281 men who managed at least 162 games, which (perhaps not coincidentally) has been the number of games in a regulation season since the early 1960s. I truncated the sample where I did because the W-L records of managers with 162 or more games are statistically better (significance level of 0.05) than the W-L records of managers with fewer than 162 games. In other words, a manager who makes it through a full season is likely to have passed a basic test of management ability: not losing “too many” games. (I address this subjective assessment later in the post.)

Following the aggregate-level analysis, I turn to an individual-level analysis of the records of those managers who led a team for at least five consecutive seasons. (I allowed into the sample some managers whose fifth full season consisted of a partial season in year 1 and a partial season in year 6, as long as the number of games in the two partial seasons added to the number of games in a full season, or nearly so. I also included a few managers whose service with a particular team was broken by three years or less.) Some managers led more than one team for at least five consecutive seasons, and each such occurrence is counted separately. For reasons that will become evident, the five seasons had to begin no earlier than 1923 and end no later than 2010.  The sample size for this analysis is 63 management tours accomplished by 47 different managers.

Results and Inferences: Aggregate Level

“Just the facts” about the sub-sample of 281 managers:

[Chart: Number of games managed vs. W-L record]

The exponential equation, though statistically significant, tells us that W-L record explains only about 21 percent of the variation in number of games managed, which spans 162 to 5,097.
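
For readers who want to reproduce this kind of curve, here is a minimal sketch of one way to fit an exponential relationship between W-L record and games managed, by regressing the log of games managed on W-L%. The data file and column names are placeholders, not the actual dataset behind the chart.

```python
# Hypothetical sketch of fitting an exponential curve (games = a * exp(b * W-L%))
# by linear regression on log(games). The CSV file and column names are placeholders.

import numpy as np

data = np.genfromtxt("managers.csv", delimiter=",", names=True)  # columns: wl_pct, games
x = data["wl_pct"]
y = np.log(data["games"])

b, log_a = np.polyfit(x, y, 1)            # ln(games) = ln(a) + b * W-L%
y_hat = log_a + b * x
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

print(f"games is roughly {np.exp(log_a):.1f} * exp({b:.2f} * W-L%), R-squared = {r_squared:.2f}")
```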

Looking closer, I found that the 28 managers in the top decile of games managed (2,368 to 5,097) have a combined W-L record of .526. But their individual W-L records range from .477 to .615, and eight of the managers compiled a career W-L record below .500. Perhaps the losers did the best they could with the teams they had. Perhaps, but it’s also quite possible that the winners were blessed with teams that made them look good. In any event, the length of a manager’s career may have little to do with his effectiveness as a manager.

Which brings me to the next topic.

Results and Inferences: Individual Level

This view is more complicated.  As mentioned above, I focused on those 47 managers who on 63 separate occasions led their respective teams for at least five consecutive seasons (with minor variations). To get at each manager’s success (or failure) during each management tour, I compared his W-L record during a tour with the W-L record of the same team in the preceding and following three seasons.

My aim in choosing five years for the minimum span of a manager’s tenure with a team was to avoid judging a manager’s performance on the basis of an atypical year or two. My aim in looking three years back and three years ahead was to establish a baseline against which to compare the manager’s performance. I could have chosen other time spans, of course, but a plausible story ensues from the choices that I made.

First, here is a graphical view of the relationship between each of the 63 managerial stints and the respective before-and-after records of the teams involved:

[Chart: Manager's W-L record vs. baseline]

A clue to deciphering the graph: Look at the data point toward the upper-left corner labeled “Sewell SLB 41-46.” The label gives the manager’s last name (Sewell for Luke Sewell, in this case), the team he managed (SLB = St. Louis Browns), and the years of his tenure (1941-46). (In the table below, all names, teams, and dates are spelled out, for all 63 observations.) During Sewell’s tenure, the Browns’ W-L record was .134 points above the average of .378 attained by the Browns in 1938-40 and 1947-49. That’s an impressive performance, and it stands well above the 68-percent confidence interval. (Confidence intervals represent the range within which certain percentages of observations are expected to fall.)
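
Here, in sketch form, is the comparison just described: a manager's average W-L% over his tenure minus the team's average W-L% over the three seasons before and the three seasons after. The data structure and the browns_records example are placeholders of my own, not the actual worksheet.

```python
# Hypothetical sketch of the tenure-vs-baseline comparison described above.
# season_records maps year -> (wins, losses) for one franchise; the data are placeholders.

def wl_pct(wins, losses):
    """Won-lost percentage: W / (W + L)."""
    return wins / (wins + losses)

def tenure_delta(season_records, first_year, last_year):
    """Mean W-L% during a manager's tenure minus mean W-L% in the three seasons
    before and the three seasons after the tenure (where those seasons exist)."""
    tenure_years = list(range(first_year, last_year + 1))
    baseline_years = [y for y in list(range(first_year - 3, first_year)) +
                               list(range(last_year + 1, last_year + 4))
                      if y in season_records]
    tenure = [wl_pct(*season_records[y]) for y in tenure_years]
    baseline = [wl_pct(*season_records[y]) for y in baseline_years]
    return sum(tenure) / len(tenure) - sum(baseline) / len(baseline)

# For example, tenure_delta(browns_records, 1941, 1946) should come out to about +.134
# for Luke Sewell's stint if browns_records reproduces the figures used in the post.
```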

The linear fit (equation in lower-left corner) indicates a statistically significant negative relationship between the change in a team’s fortunes during a manager’s tenure and the team’s baseline performance. The negative relationship means that there is a strong tendency to “regress toward the mean,” that is, toward a record that is consistent with the quality of a team’s players. In other words, the negative relationship indicates that a team’s outstanding or abysmal record may owe nothing (or very little) to a manager’s efforts.

In fact, relatively few managers succeeded in leading their teams significantly far (up or down) from baseline performance. Those managers are indicated by green (good) and red (bad) in the preceding graph.

The following table gives a rank-ordering of all 47 managers in their 63 management stints. The color-coding indicates the standing of a particular performance with respect to the trend (green = above trend, red = below trend). The shading indicates the standing of a particular performance with respect to the confidence intervals: darkest shading = above and below the 95-percent confidence interval; medium shading = between the 68-percent and 95-percent confidence intervals; lightest shading = within the 68-percent confidence interval.

[Table: Ranking of managers' performances]

Of the 63 performances, 4 of them (6.3 percent) lie outside the 95-percent confidence interval; 13 of them (20.6 percent) are between the 68-percent and 95-percent confidence intervals; the other 46 (73.0 percent) are in the middle, and statistically indistinguishable.

Billy Southworth’s tour as manager of the St. Louis Cardinals in 1940-45 (#1) stands alone above the 95-percent confidence interval. Two of Bucky Harris’s four stints rank near the bottom (#61 and #62) just above Ralph Houk’s truly abysmal performance as manager of the Detroit Tigers in 1974-78 (#63).

Southworth’s tenure with the Cardinals is of a piece with his career W-L record (.597), and with his above-average performance as manager of the Boston Braves in 1946-51 (# 18). Harris had a mixed career, as indicated by his overall W-L record of .493 and two above-average tours as manager (#22 and #26). Houk’s abysmal record with the Tigers was foretold by his below-average tour as manager of the Yankees, a broken tenure that spanned 1961-73 (#47).

Speaking of the Yankees, will the real Casey Stengel please stand up? Is he the “genius” with an above-average record as Yankees manager in 1949-60 (#13) or the “bum” with a dismal record as skipper of the Boston Bees/Braves in 1938-42 (#56)? (Stengel’s ludicrous three-and-a-half-year tour as manager of the hapless New York Mets of 1962-65 isn’t on the list because of its brevity. It should be noted, however, that the Mets improved gradually after Stengel’s departure, and won the World Series in 1969.)

Stengel is one of seven managers with a performance below the 68-percent confidence interval. Four of the seven — Harris, Houk, Stengel, and Tom Kelly (late of the Minnesota Twins) — are among the top decile on the games-managed list. The top decile also includes seven managers who turned in performances that rank above the 68-percent confidence interval: Earl Weaver, Bobby Cox, Al Lopez, Joe Torre, Sparky Anderson, Joe McCarthy, and Charlie Grimm (#s 2-4 and 6-9).

I could go on and on about games managed vs. performance, but it boils down to this: If there were a strong correlation between the rank-order of managers’ performances in the preceding table and the number of games they managed in their careers, it would approach -1.00. (Minus because the best performance is ranked #1 and the worst is ranked #63.) But the correlation between rank and number of games managed in a career is only -0.196, a “very weak” correlation in the parlance of statistics.
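
The correlation itself is a one-liner; the sketch below shows the computation, with the ranks and career game counts to be supplied from the table and career totals above.

```python
# Hypothetical sketch: correlation between performance rank (1 = best, 63 = worst)
# and career games managed for the manager behind each stint.

import numpy as np

def rank_vs_games_correlation(ranks, games_managed):
    """Pearson correlation between performance ranks and career games managed."""
    return np.corrcoef(ranks, games_managed)[0, 1]

# Fed the 63 ranks from the table above and the corresponding career game counts,
# this should return roughly -0.196, per the post: a very weak correlation.
```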

In summary, when it comes to specific management stints, Southworth’s performance in 1940-45 was clearly superlative; the performances of Harris (1929-33, 1935-42) and Houk (1974-78) were clearly awful. In between those great and ghastly performances lie a baker’s dozen that probably merit cheers or Bronx cheers. A super-majority of the performances (the 73 percent in the middle) probably have little to do with management skills and a lot to do with other factors, to which I will come.

The Bottom Line

It’s safe to say that the number of games managed is, at best, a poor reflection of managerial ability. What this means is that (a) few managers exert a marked influence on the performance of their teams and (b) managers, for the most part, are dismissed or kept around for reasons other than their actual influence on performance. Both points are supported by the two preceding sections.

More tellingly, both points are consistent with the time-tested observation that “they” couldn’t fire the team, so “they” fired the manager.

CLOSING THOUGHTS

The numbers confirm what I saw in 30 years of being managed and 22 (overlapping) years of managing: The selection of managers is at least as random as their influence on what they manage. This is true not only in baseball but wherever there are managers, that is, throughout the world of commerce (including its entertainment sectors), the academy, and government.

There is randomness for several reasons. First, there is the difficulty of specifying managerial objectives that are measurable and consistent. A manager’s basic task might be to attain a specific result (e.g., winning more games than the previous manager, winning at least a certain number of games, turning a loss into a profit). But a manager might also be expected to bring peace and harmony to a fractious workplace. And the manager might also be charged with maintaining a “diverse” workplace and avoiding charges of discrimination. Whatever the tasks, their specification is often arbitrary, and in large organizations it is often impossible to relate them to an overarching organizational goal (e.g., attaining a profit target).

Who knows if it’s possible to win more games or turn a loss into a profit, given the competition, the quality of the workforce, etc.? Is a harmonious workplace more productive than a fractious one, if fractiousness is a sign of productive competitiveness? How does one square “diversity,” and forbearance toward the failings of the “diverse” (to avoid discrimination charges), with turning a profit?

Given the complexity of management, at which I’ve only hinted, and the difficulty of judging managers, even when their “output” is well-defined (e.g., W-L record), it’s unsurprising that the ranks of managers are riddled with the ineffective and the incompetent. And such traits are often tolerated and even rewarded (e.g., raise, promotion, contract extension). Why? Here are some of the reasons:

  • Unwillingness to admit that it was a mistake to hire or promote a manager
  • A manager’s likeability or popularity
  • A manager’s connections to higher-ups
  • The cost and difficulty of firing a manager (e.g., severance pay, contract termination clauses, possibility of discrimination charges)
  • Inertia — Things seem to be going well enough, and no one has an idea of how well they should be going.

The good news is that relatively few managers make a big difference. The bad news is that the big difference is just as likely to be negative as it is to be positive. And for the reasons listed above, abysmal managers will not be rooted out until they have done a lot of damage.

So, yes, some managers — though relatively few — make a difference. But that difference is likely to prove disastrous. Just look at the course of the United States over the past 80 years.

A Human Person

A RERUN (WITH LIGHT EDITING) OF A POST AT MY OLD BLOG, FROM MAY 5, 2008

The ludicrous and (it seems) increasingly popular assertion that plants have rights should not distract us from the more serious issue of fetal rights. (My position on the issue can be found among these links.) Maverick Philosopher explains how abortion may be opposed for non-religious reasons:

It is often assumed that opposition to abortion can be based only on religious premises. This assumption is plainly false. To show that it is false, one need merely give an anti-abortion argument that does not invoke any religious tenet, for example:

1. Infanticide is morally wrong.
2. There is no morally relevant difference between abortion and infanticide.
Therefore
3. Abortion is morally wrong.

Whether one accepts this argument or not, it clearly invokes no religious premise. It is therefore manifestly incorrect to say or imply that all opposition to abortion must be religiously-based. Theists and atheists alike could make use of the above argument.

MP then links to a piece by Nat Hentoff, an atheist and Leftist. Hentoff writes, apropos Barack Obama and abortion, that

I admire much of Obama’s record, including what he wrote in “The Audacity of Hope” about the Founders’ “rejection of all forms of absolute authority, whether the king, the theocrat, the general, the oligarch, the dictator, the majority … George Washington declined the crown because of this impulse.”

But on abortion, Obama is an extremist. He has opposed the Supreme Court decision that finally upheld the Partial-Birth Abortion Ban Act against that form of infanticide. Most startlingly, for a professed humanist, Obama — in the Illinois Senate — also voted against the Born Alive Infant Protection Act….

Furthermore, as “National Right to Life News” (April issue) included in its account of Obama’s actual votes on abortion, he “voted to kill a bill that would have required an abortionist to notify at least one parent before performing an abortion on a minor girl from another state.”

These are conspiracies — and that’s the word — by pro-abortion extremists to transport a minor girl across state lines from where she lives, unbeknownst to her parents. This assumes that a minor fully understands the consequences of that irredeemable act. As I was researching this presidential candidate’s views on the unilateral “choice” that takes another’s life, I heard on the radio what Obama said during a Johnstown, Pa., town hall meeting on March 29 as he was discussing the continuing dangers of exposure to HIV/AIDS infections:

“When it comes specifically to HIV/AIDS, the most important prevention is education, which should include — which should include abstinence education and teaching children, you know, that sex is not something casual. But it should also include — it should also include other, you know, information about contraception because, look, I’ve got two daughters, 9 years old and 6 years old. I am going to teach them first of all about values and morals.

“But if they make a mistake,” Obama continued, “I don’t want them punished with a baby.”

Among my children and grandchildren are two daughters and three granddaughters; and when I hear anyone, including a presidential candidate, equate having a baby as punishment, I realize with particular force the impact that the millions of legal abortions in this country have had on respect for human life.

And that’s the crux of the issue: respect for human life.

Thus I turn to Peter Lawler’s “A Human Person, Actually,” in which Lawler reviews Embryo: A Defense of Human Life, by Robert P. George and Christopher Tollefsen:

The embryo, George and Tollefsen argue, is a whole being, possessing the integrated capability to go through all the phases of human development. An embryo has what it takes to be a free, rational, deliberating, and choosing being; it is naturally fitted to develop into a being who can be an “uncaused cause,” a genuinely free agent. Some will object, of course, that the embryo is only potentially human. The more precise version of this objection is that the embryo is human—not a fish or a member of some other species—but not yet a person. A person, in this view, is conscious enough to be a free chooser right now. Rights don’t belong to members of our species but to persons, beings free enough from natural determination to be able to exercise their rights. How could someone have rights if he doesn’t even know that he has them?…

Is the embryo a “who”? It’s true enough that we usually don’t bond with embryos or grieve when they die. Doubtless, that’s partly because of our misperception of who or what an embryo is. But it’s also because we have no personal or loving contact with them. We tend to think of persons as beings with brains and hearts; an embryo has neither. But personal significance can’t be limited to those we happen to know and love ourselves; my powers of knowing and loving other persons are quite limited, and given to the distortions of prejudice. Whether an embryo is by nature a “who” can be determined only by philosophical reflection about what we really know.

The evidence that George and Tollefsen present suggests that there are only two non-arbitrary ways to consider when a “what” naturally becomes a “who.” Either the embryo is incapable of being anything but a “who”; from the moment he or she comes to be, he or she is a unique and particular being capable of exhibiting all the personal attributes associated with knowing, loving, and choosing. Or a human being doesn’t become a “who” until he or she actually acquires the gift of language and starts displaying distinctively personal qualities. Any point in between these two extremes—such as the point at which a fetus starts to look like a human animal or when the baby is removed from the mother’s womb—is perfectly arbitrary. From a purely rational or scientific view, the price of being unable to regard embryos as “whos” is being unable to regard newborn babies as “whos”….

As I say here,

abortion is of a piece with selective breeding and involuntary euthanasia, wherein the state fosters eugenic practices that aren’t far removed from those of the Third Reich. And when those practices become the norm, what and who will be next? Libertarians, of all people, should be alert to such possibilities. Instead of reflexively embracing “choice” they should be asking whether “choice” will end with fetuses.

Most libertarians, alas, mimic “liberals” and “progressives” on the issue of abortion. But there are no valid libertarian arguments for abortion, just wrong-headed ones.

Are You Happy?

A RERUN OF A POST AT MY OLD BLOG, FROM MAY 6, 2008

Justin Wolfers (Freakonomics blog) has completed a series of six posts about the economics of happiness (here, here, here, here, here, and here). The bottom line, according to Wolfers:

1) Rich people are happier than poor people.
2) Richer countries are happier than poorer countries.
3) As countries get richer, they tend to get happier.

All of which should come as no surprise to anyone, even without the benefit of “happiness research.” Regarding which, I agree with Arnold Kling, who says:

My view is that happiness research implies Nothing. Zero. Zilch. Nada. I believe that you do not learn about economic behavior by watching what people say in response to a survey.

You learn about economic behavior by watching what people actually do.

And…you consult your “priors.” It is axiomatic that individuals prefer more to less; that is, more income yields more satisfaction because it affords access to goods and services of greater variety and higher quality. Moreover, income and the wealth that flows from it are valued for their own sake by most individuals. (That they might be valued because they enable philanthropic endeavors is a case in point.)

It is reasonable to conclude, therefore, that the “law” of diminishing marginal utility, which may apply to particular goods and services, does not generally apply to income or wealth in the aggregate. But, in any event, given that Wolfers’s first conclusion is self-evidently true, the second and third conclusions follow. And they follow logically, not from “happiness research.”

“Ensuring America’s Freedom of Movement”: A Review

Ensuring America’s Freedom of Movement: A National Security Imperative to Reduce U.S. Oil Dependence was issued by CNA in October 2011. (CNA, in this case, is a not-for-profit analytical organization located in Alexandria, Virginia, and is not to be mistaken for the Chicago-based insurance and financial services company.) Ensuring America’s Freedom of Movement is a product of CNA’s Military Advisory Board (MAB), and is the fourth report issued by the MAB. Accordingly, I refer to it in the rest of this review as MAB4.

This review may be somewhat out of date in places, though not in its thrust. I began writing it almost two years ago, when Ensuring… was published. I have not been in a hurry to post this review because Ensuring… is an inconsequential bit of fluff and unlikely to influence policy. But post I must, because the MAB and MAB4 are affronts to the distinguished intellectual heritage claimed by CNA.

*     *     *

A critical reader — someone who is not seeking support for preconceived policy prescriptions — will be disappointed in MAB4. If there are valid arguments for government initiatives to foster the development and use of alternatives to oil, they do not leap out of the pages of MAB4.

The main point of MAB4 is to urge

government … action to promote the use of a more diverse mix of transportation fuels and to drive wider public acceptance of these alternatives. (p. xiv, emphasis added)

And on cue, a day after the issuance of Obama’s plan to combat “climate change,” the MAB released a statement that ends with this:

The CNA MAB supports the President’s plan to act now to address the worst effects of climate change and to improve our nations’ energy posture and competitive advantage in clean energy markets. The CNA MAB continues to identify the security implications of climate change and to protect and enhance our energy, climate and national security today and for our future generations. (June 26, 2013)

Despite token acknowledgement of the power of markets to do the job, the authors consistently invoke the power of government, in the name of “stability.”

There is much pointing-with-alarm at the instability caused by “dependence” on imported oil — with a focus on the Middle East. But the only “hard” estimate of the price of instability is a poorly documented, questionable estimate of the effects of a 30-day closure of the Strait of Hormuz on GDP and the output and employment of the U.S. trucking industry. Empirical estimates of the effects of sudden reductions in oil imports (oil shocks) are available, but the authors of MAB4 did not use them — or perhaps did not know about them.

It would have been instructive to compare the cumulative losses to GDP resulting from actual oil shocks with (a) the costs of maintaining forces in the Middle East to deter overtly hostile shocks (e.g., the closure of the Strait of Hormuz by Iran) and (b) the costs to taxpayers and consumers of government subsidies and edicts to promote the development and require the use of alternative energy sources. But no such comparison is offered, so the critical reader has no idea whether efforts to wean the U.S. from oil — especially imported oil — make economic sense.

Moreover, the authors of MAB4 reject the possibility of drawing down U.S. forces in the Middle East, for “strategic” reasons, which means that (in the authors’ view) taxpayers should continue to foot the bill for Middle East forces while coughing up additional sums to subsidize the development and use of alternative energy sources. I am all in favor of a forward strategy that is aimed at deterring and countering adventurism on the part of America’s enemies and potential enemies. It would be foolish in the extreme to allow our enemies and potential enemies to aggrandize their power by denying America’s access to a vital resource, such as oil. (Iran and China, I am looking at you.) It would be (and is) doubly foolish to throw bad money after good by also succumbing to the lobbying efforts of corn-growers, makers of solar panels, and kindred rent-seekers.

In sum, MAB4 is a piece of advocacy, not objective analysis. True believers in the wisdom and infallibility of government will rejoice in MAB4 and hope, pray, plead, and work for the adoption of its recommendations by the federal government. Critical readers will check their wallets and wonder at the naivete and presumptuousness of the 13 retired flag and general officers who constituted the MAB when CNA extruded MAB4.

*   *   *

I will elaborate on the preceding observations in the rest of this review, which has six main parts:

  • I. Background: CNA and the MAB — This part is for the benefit of those readers — almost all of you, I’m sure — who know nothing of CNA or its Military Advisory Board.
  • II. An Overview of MAB4 — This part outlines the organization of MAB4 and summarizes its findings and recommendations, which come into play throughout the review.
  • III. The Hidden Foundation of MAB4 — MAB4’s findings and recommendations rest on a foundation of hidden assumptions — biases, if you will. Part III articulates those biases.
  • IV. The Analytical Superstructure of MAB4 — This part focuses on the facts and logic of the substantive portions of MAB4, namely, Chapters 1 and 2. They are found wanting.
  • V. MAB4 vs. CNA’s Standards — CNA proclaims itself an organization that upholds a long tradition of high standards and objectivity. Are the MAB and MAB4 consistent with that tradition? Part V answers that question in the negative.
  • VI. Summary Assessment — A final 534 words, for the benefit of readers who want to skip the gory details.

(The rest of this very long review is below the fold.)

Mind, Cosmos, and Consciousness

The appearance last year of Thomas Nagel’s Mind and Cosmos greatly irked atheistic materialists (a.k.a. naturalists). They need not have been irked, for Nagel’s argument fails to negate materialism.

To begin at the beginning, I turn to Andrew Ferguson’s recounting of the reactions of prominent atheistic materialists to Mind and Cosmos:

… The naturalistic project has been greatly aided by neo-Darwinism, the application of Darwin’s theory of natural selection to human behavior, including areas of life once assumed to be nonmaterial: emotions and thoughts and habits and perceptions….

… Thomas Nagel is a prominent and heretofore respected member of the country’s intellectual elite. And such men are not supposed to write books with subtitles like the one he tacked onto Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False….

Nagel’s tone is measured and tentative, but there’s no disguising the book’s renegade quality. There are flashes of exasperation and dismissive impatience. What’s exhilarating is that the source of Nagel’s exasperation is, so to speak, his own tribe: the “secular theoretical establishment and the contemporary enlightened culture which it dominates.” The establishment today, he says, is devoted beyond all reason to a “dominant scientific naturalism, heavily dependent on Darwinian explanations of practically everything, and armed to the teeth against attacks from religion.” …

Nagel follows the materialist chain of reasoning all the way into the cul de sac where it inevitably winds up…. He has no doubt that “we are products of the long history of the universe since the big bang, descended from bacteria through millions of years of natural selection.” And he assumes that the self and the body go together. “So far as we can tell,” he writes, “our mental lives, including our subjective experiences, and those of other creatures are strongly connected with and probably strictly dependent on physical events in our brains and on the physical interaction of our bodies with the rest of the physical world.” To believe otherwise is to believe, as the materialists derisively say, in “spooky stuff.” …

Materialism, then, is fine as far as it goes. It just doesn’t go as far as materialists want it to. It is a premise of science, not a finding. Scientists do their work by assuming that every phenomenon can be reduced to a material, mechanistic cause and by excluding any possibility of nonmaterial explanations….

… From a fruitful method, materialism becomes an axiom: If science can’t quantify something, it doesn’t exist, and so the subjective, unquantifiable, immaterial “manifest image” of our mental life is proved to be an illusion.

Here materialism bumps up against itself. Nagel insists that we know some things to exist even if materialism omits or ignores or is oblivious to them. Reductive materialism doesn’t account for the “brute facts” of existence—it doesn’t explain, for example, why the world exists at all, or how life arose from nonlife…. On its own terms, materialism cannot account for brute facts. Brute facts are irreducible, and materialism, which operates by breaking things down to their physical components, stands useless before them. “There is little or no possibility,” he writes, “that these facts depend on nothing but the laws of physics.” …

Among these remarkable, nonaccidental things are…. [c]onsciousness itself….

In a recent review in the New York Review of Books of Where the Conflict Really Lies, by the Christian philosopher Alvin Plantinga, Nagel told how instinctively he recoils from theism, and how hungry he is for a reasonable alternative. “If I ever found myself flooded with the conviction that what the Nicene Creed says is true,” he wrote, “the most likely explanation would be that I was losing my mind, not that I was being granted the gift of faith.” He admits that he finds the evident failure of materialism as a worldview alarming—precisely because the alternative is, for a secular intellectual, unthinkable. He calls this intellectual tic “fear of religion.”

“I speak from experience, being strongly subject to this fear,” he wrote not long ago in an essay called “Evolutionary Naturalism and the Fear of Religion.” “I want atheism to be true and am made uneasy by the fact that some of the most intelligent and well-informed people I know are religious believers. It isn’t just that I don’t believe in God and, naturally, hope that I’m right in my belief. It’s that I hope there is no God! I don’t want there to be a God; I don’t want the universe to be like that.”

Nagel believes this “cosmic authority problem” is widely shared among intellectuals, and I believe him. It accounts for the stubbornness with which they cling to materialism—and for the hostility that greets an intellectual who starts to wander off from the herd. Materialism must be true because it “liberates us from religion.” The positive mission Nagel undertakes in Mind and Cosmos is to outline, cautiously, a possible Third Way between theism and materialism, given that the first is unacceptable—emotionally, if not intellectually—and the second is untenable. Perhaps matter itself has a bias toward producing conscious creatures. Nature in that case would be “teleological”—not random, not fully subject to chance, but tending toward a particular end. Our mental life would be accounted for—phew!—without reference to God. (“The Heretic,” The Weekly Standard, March 25, 2013)

Nagel’s admission — “I hope there is no God!” — is admirable for its candor. Most atheistic materialists rationalize their disbelief by assuming that science can explain everything, including (quite wrongly) the mystery of existence. The false assumption that science can explain everything undermines (or should undermine) the credibility of every atheistic materialist. No one who assumes the answer for which he claims to be searching deserves to be taken seriously.

That said, I lend no more credence to Nagel’s “Third Way” than I do to the out-and-out materialism of Nagel’s critics. That is to say, Nagel and his critics are all incredible, in the proper meaning of that word.

Ironically, when it comes to consciousness, there’s no need for a nonmaterialistic explanation. To understand why, let us begin with Nagel’s reason for rejecting a materialistic accounting of consciousness:

… Since our mental lives evidently depend on our existence as physical organisms, especially on the functioning of our central nervous systems, it seems natural to think that the physical sciences can in principle provide the basis for an explanation of the mental aspects of reality as well — that physics can aspire finally to be a theory of everything.

However, I believe this possibility is ruled out by the conditions that have defined the physical sciences from the beginning. The physical sciences can describe organisms like ourselves as parts of the objective spatio-temporal order – our structure and behavior in space and time – but they cannot describe the subjective experiences of such organisms or how the world appears to their different particular points of view. There can be a purely physical description of the neurophysiological processes that give rise to an experience, and also of the physical behavior that is typically associated with it, but such a description, however complete, will leave out the subjective essence of the experience – how it is from the point of view of its subject — without which it would not be a conscious experience at all…. (“The Core of ‘Mind and Cosmos’,” The New York Times, August 18, 2013)

Nagel makes too much of subjectivity. Every person’s experience of any phenomenon is, of course, subjective. Why? Because every person is unique, and no instrument yet devised (or likely to be devised) is capable of capturing the unique way in which a particular person experiences something.

But this uniqueness has nothing to do with the essentially material basis of experience. It just means that no two persons can have the same “inner” experiences, just as no two persons (it is said) can have the same fingerprints.

I don’t mean to minimize the difficulty of finding a clear and convincing material explanation of consciousness. But such an explanation is possible, at least in outline; for example:

One way to think about the relationship between brain and consciousness is to break it down into two mysteries. I call them Arrow A and Arrow B. Arrow A is the mysterious route from neurons to consciousness. If I am looking at a blue sky, my brain doesn’t merely register blue as if I were a wavelength detector from Radio Shack. I am aware of the blue. Did my neurons create that feeling?

Arrow B is the mysterious route from consciousness back to the neurons. Arrow B attracts much less scholarly attention than Arrow A, but it is just as important. The most basic, measurable, quantifiable truth about consciousness is simply this: we humans can say that we have it. We can conclude that we have it, couch that conclusion into language and then report it to someone else. Speech is controlled by muscles, which are controlled by neurons. Whatever consciousness is, it must have a specific, physical effect on neurons, or else we wouldn’t be able to communicate anything about it. Consciousness cannot be what is sometimes called an epiphenomenon — a floating side-product with no physical consequences — or else I wouldn’t have been able to write this article about it.

Any workable theory of consciousness must be able to account for both Arrow A and Arrow B. Most accounts, however, fail miserably at both. Suppose that consciousness is a non-physical feeling, an aura, an inner essence that arises somehow from a brain or from a special circuit in the brain. The ‘emergent consciousness’ theory is the most common assumption in the literature. But how does a brain produce the emergent, non-physical essence? And even more puzzling, once you have that essence, how can it physically alter the behaviour of neurons, such that you can say that you have it? ‘Emergent consciousness’ theories generally stake everything on Arrow A and ignore Arrow B completely.

The attention schema theory does not suffer from these difficulties. It can handle both Arrow A and Arrow B. Consciousness isn’t a non-physical feeling that emerges. Instead, dedicated systems in the brain compute information. Cognitive machinery can access that information, formulate it as speech, and then report it. When a brain reports that it is conscious, it is reporting specific information computed within it. It can, after all, only report the information available to it. In short, Arrow A and Arrow B remain squarely in the domain of signal-processing. There is no need for anything to be transmuted into ghost material, thought about, and then transmuted back to the world of cause and effect.

Some people might feel disturbed by the attention schema theory. It says that awareness is not something magical that emerges from the functioning of the brain. When you look at the colour blue, for example, your brain doesn’t generate a subjective experience of blue. Instead, it acts as a computational device. It computes a description, then attributes an experience of blue to itself. The process is all descriptions and conclusions and computations. Subjective experience, in the theory, is something like a myth that the brain tells itself. The brain insists that it has subjective experience because, when it accesses its inner data, it finds that information. (Michael Graziano, “How the Light Gets Out,” Aeon, August 21, 2013)

I applaud Nagel’s skepticism about materialism. He is right to say that it doesn’t account for the “brute facts” of existence. But Nagel overshoots the mark, and discredits himself, when he tries to enlist consciousness and its products (e.g., emotions, moral reasoning) as “brute facts.”

Materialism is valid insofar as it extends to the workings of the universe and its components (human consciousness included). Materialism falls short when it comes to explaining how the universe came to be, and why its workings seem to obey “laws.”

*****

Related posts:
Atheism, Religion, and Science
The Limits of Science
Three Perspectives on Life: A Parable
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Evolution and Religion
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
The Legality of Teaching Intelligent Design
Science, Logic, and God
Capitalism, Liberty, and Christianity
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
Science, Axioms, and Economics
The Big Bang and Atheism
The Universe . . . Four Possibilities
Einstein, Science, and God
Atheism, Religion, and Science Redux
Pascal’s Wager, Morality, and the State
Evolution as God?
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Something from Nothing?
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
The Glory of the Human Mind
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design

AGW: The Death Knell

UPDATED 02/12/14 (related reading added)

I am loath to write again about AGW, so convinced am I that it is a “scientific” myth. But the myth has a large, vociferous, politically motivated, and almost-unshakeable following. So here goes…

Watts Up With That? notes

a Technical University of Denmark press release [about] what looks to be a significant confirmation of [Henrik] Svensmark’s theory of temperature modulation on Earth by cosmic ray interactions. The process is that when there are more cosmic rays, they help create more microscopic cloud nuclei, which in turn form more clouds, which reflect more solar radiation back into space, making Earth cooler than what it normally might be. Conversely, [fewer] cosmic rays mean less cloud cover and a warmer planet as indicated here.  The sun’s magnetic field is said to deflect cosmic rays when its solar magnetic dynamo is more active, and right around the last solar max, we were at an 8000 year high, suggesting more deflected cosmic rays, and warmer temperatures. Now the sun has gone into a record slump, and there are predictions of cooler temperatures ahead….

The new paper is:

“Response of cloud condensation nuclei (>50 nm) to changes in ion-nucleation,” H. Svensmark, Martin B. Enghoff, Jens Olaf Pepke Pedersen, Physics Letters A 377 (2013) 2343–2347….

FULL PAPER LINK PROVIDED IN THE PRESS RELEASE: https://dl.dropboxusercontent.com/u/51188502/PLA22068.pdf (open access PDF)

LOCAL COPY (for those having trouble with the link above): Svensmark_PLA22068 (PDF)

What about the seemingly high rate of increase in temperatures in recent decades? As I say here,

It should be quite evident by now that the warming trend of the past thirty-odd years merely coincides with the rise in human activity (as measured by population) but is not explained by the “greenhouse” effect…. There are compelling alternative explanations for the warming trend, including the influence of solar activity summarized above….

What else explains the apparent (but exaggerated) warming trend of the past thirty-odd years (a “trend” that ended more than 15 years ago)? Poorly sited weather stations and urban heating are the two main culprits.

P.S. So much for the prematurely predicted disappearance of the Arctic ice cap: here and here.

*****

Recommended book: Henrik Svensmark and Nigel Calder, The Chilling Stars: A New Theory of Climate Change, Totem Books, 2003

Related reading:
Anthony Watts, “The EPA Is Challenged in the Supreme Court over Greenhouse Gas Regulations,” Watts Up With That?, December 17, 2013
Chet Richards, “A Few Easy Tests to Debunk Global Warming Hysteria,” American Thinker, January 1, 2014
Ronald Bailey, “Ugly Climate Models,” reason.com, January 2014
Christopher Monckton, “IPCC Silently Slashes Its Global Warming Predictions in the Final AR5 Draft,” Watts Up With That?, January 1, 2014
Anthony Watts, “Could This Study on Honesty and Government Service Explain the EPA Climate Fraud and ‘Climategate’?,” Watts Up With That?, January 6, 2014
Don J. Easterbrook, “Setting the Record Straight on the ‘Cause of the Pause’ in Global Warming,” Watts Up With That?, January 21, 2014
Vincent Gray, “The Scientific Method and Climate Science,” Watts Up With That?, January 21, 2014

Related posts:
Hemibel Thinking
Climatology (a term that I use to distinguish phony climate science from the real thing)
Global Warming: Realities and Benefits
Another Blow to Climatology?
Another Blow to Chicken-Little Science
Global Warming and Life
Remember the “Little Ice Age”?
Science’s Anti-Scientific Bent
More Bad News for Global-Warming Zealots
“Warmism”: The Myth of Anthropogenic Global Warming
More Evidence against Anthropogenic Global Warming
Yet More Evidence against Anthropogenic Global Warming
Modeling Is Not Science
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
Demystifying Science
Analysis for Government Decision-Making: Demi-Science, Hemi-Demi-Science, and Sophistry

IQ, Political Correctness, and America’s Present Condition

This is a wandering post, in which I use a recent controversy about IQ to make some observations about political correctness, which leads to a tale of leftist subversion and America’s descent into statism.

Since my last post about IQ, more than a year ago, the biggest kerfuffle on the IQ front arose when Jason Richwine was chased from his job at Heritage Foundation. The proximate cause of Richwine’s departure from Heritage was the usual kind of witch hunt that accompanies the discovery of anything coming from a conservative source that might offend political correctness. Richwine was “guilty” of having penned a dissertation that contains unremarkable statements about ethnic differences in average IQ, including the IQ difference between Hispanics and non-Hispanic whites.

These are excerpts of John Derbyshire’s narration of l’affaire Richwine as it unfolded:

… Following the release of a report by the Heritage Foundation arguing that the Rubio-Schumer immigration bill will cost the nation $6.3 trillion, the Slave Power set their dwarf miners to digging.

They soon found gold. One of the co-authors of the study is twentysomething Jason Richwine, a Heritage analyst. Not just an analyst, but a quantitative analyst: “Heritage’s senior policy analyst in empirical studies.” …

After a few days’ digging the Nibelungs turned up Richwine’s Ph.D. thesis from Harvard University, title: “IQ and Immigration Policy.” The mother lode! (You can download it from here.)

The Washington Post ran a gleeful story on the find under the headline “Heritage study co-author opposed letting in immigrants with low IQs.” [By Dylan Matthews, May 8, 2013]. They note that:

Richwine’s dissertation asserts that there are deep-set differentials in intelligence between races.

Eek! A witch! …

Post columnist Jennifer Rubin, on secondment from Conservatism, Inc. to offer some pretense of “balance” at the Post, hastened to join the lynch mob. “It undermines the cause of all immigration opponents to have their prized work authored by such a character,” she wrote, reading Richwine out of respectable society….

She then brings in Jennifer S. Korn for a quote. Ms. Korn was Secretary for Hispandering in the George W. Bush White House….

What does Ms. Korn have to tell us?

Richwine’s comments are bigoted and ignorant. America is a nation of immigrants; to impugn the intelligence of immigrants is to offend each and every American and the foundation of our country….

Even if you take Ms. Korn’s usage of “impugn” to mean Richwine has stated that immigrants have lower mean IQ than natives, she is wrong. Table 2.2 in the thesis (p. 30) gives an average estimated mean IQ of 105.5 for immigrants from Northeast Asia….

And so another “anti-racist” witch hunt commences….

The forces of orthodoxy have identified a heretic. They’re marching on his hut with pitchforks and flaming brands. The cry echoes around the internet: “Burn the witch!” … (“‘Burn the Witch’: Heritage Foundation Scuttles Away from Jason Richwine–and the Cold, Hard Facts,” VDare.com, May 9, 2013)

The impetus for politically correct witch-hunting comes from the left, of course. This is unsurprising because leftists, on average, are dumber than conservatives and libertarians (see this and this, for example), which would explain their haste to take offense when the subject of IQ is raised.

But facts are facts, and Richwine summarizes them neatly in a recent (post-Heritage) essay; for example:

The American Psychological Association (APA) tried to set the record straight in 1996 with a report written by a committee of experts. Among the specific conclusions drawn by the APA were that IQ tests reliably measure a real human trait, that ethnic differences in average IQ exist, that good tests of IQ are not culturally biased against minority groups, and that IQ is a product of both genetic inheritance and early childhood environment. Another report signed by 52 experts, entitled “Mainstream Science on Intelligence,” stated similar facts and was printed in the Wall Street Journal. (“Why Can’t We Talk about IQ?,” Politico, August 9, 2013)

Richwine continues:

[W]hen Larry Summers, then the president of Harvard University, speculated in 2005 that women might be naturally less gifted in math and science, the intense backlash contributed to his ouster.

Two years later, when famed scientist James Watson noted the low average IQ scores of sub-Saharan Africans, he was forced to resign from his lab, taking his Nobel Prize with him.

When a Harvard law student was discovered in 2010 to have suggested in a private email that the black-white IQ gap might have a genetic component, the dean publicly condemned her amid a campus-wide outcry. Only profuse apologies seem to have saved her career.

In none of these cases did an appeal to science tamp down the controversy or help to prevent future ones. My own time in the media crosshairs would be no different.

So what did I write that created such a fuss? In brief, my dissertation shows that recent immigrants score lower than U.S.-born whites on a variety of cognitive tests. Using statistical analysis, it suggests that the test-score differential is due primarily to a real cognitive deficit rather than to culture or language bias. It analyzes how that deficit could affect socioeconomic assimilation, and concludes by exploring how IQ selection might be incorporated, as one factor among many, into immigration policy.

Because a large number of recent immigrants are from Latin America, I reviewed the literature showing that Hispanic IQ scores fall between white and black scores in the United States. This fact isn’t controversial among experts, but citing it seems to have fueled much of the media backlash.

Derbyshire follows up:

Jason, who can hardly be more than thirty, has not yet grasped an important thing about humanity at large: that most of our thinking is magical, superstitious, religious, social, and egotistical. Very little of it is empirical. I myself am as stone-cold an empiricist as you’ll meet in a month of Sundays; yet every day when I walk my dog there is a certain tree I have to pat as we pass it. (It’s on the wrong side of the road. The family joke is that I shall one day be hit by a truck while crossing the road to pat my lucky tree.)

Hence Jason’s puzzlement that 25 years after Snyderman and Rothman, 19 years after The Bell Curve and the follow-up “Mainstream Science on Intelligence” declaration, the public discourse even in quality outlets is dominated by innumerate journo-school graduates parroting half-remembered half-truths from Stephen Jay Gould’s The Mismeasure of Man, the greatest work of Cultural Marxist propaganda yet produced.

That’s how we are. That’s the shape of human nature. Alan Cromer explained it in his 1993 book Uncommon Sense: The Heretical Nature of Science. Not many people can think empirically much of the time. At the aggregate level, where the lowest common denominator takes over and social acceptance is at the front of everyone’s mind, empiricism doesn’t stand a chance unless it delivers some useful technology.

Nor is it quite the case that “emotion trumps reason.” What mostly trumps reason is the yearning for respectability, leading us to conform to ambient dogmas—in the present-day West, the dogmas of Cultural Marxism, which waft around us like a noxious vapor….

This is how we are: jumbles of superstition, emotion, self-deception, and social conformism, with reason and science trotting along behind trying to keep up.

Science insists that there is an external world beyond our emotions and wish-fulfillment fantasies. It claims that we can find out true facts about that world, including facts with no immediate technological application. The human sciences insist even more audaciously that we ourselves are part of that world and can be described as dispassionately as stars, rocks, and microbes. Perhaps one day it will be socially acceptable to believe this. (“Why We Can’t Talk about IQ,” Taki’s Magazine, August 15, 2013)

Much has been made of the “bland” 1950s and the supposed pressure to conform to the Ozzie and Harriet way of life, though I was never clear about the preferred alternative. On the evidence of the past 50 years, it seems to have been a potent mix of blue language, promiscuous sex, sodomy, broken families, drugs, violence, and ear-blasting “music.”

The true forces of conformity had begun their work many years before Ricky Nelson was a gleam in his father’s eye. There was, of course, the Progressive Era of the late 1800s and early 1900s, from which America was beginning to recover by the late 1920s. But then came the Great Depression, the New Deal, and the establishment in America of a fifth column dedicated to the suppression of liberty:

As recounted in [KGB: The Inside Story by KGB Colonel Oleg Gordievsky and Cambridge intelligence expert Christopher Andrew]  … Harry Hopkins — FDR’s confidant, advisor, and policy czar, who actually resided in the White House during World War II — was the Big Enchilada among American agents of influence working for the USSR. Gordievsky recounts attending a lecture early in his career by Iskhak Akhmerov, the KGB’s top “illegal” spy in the U.S. during the 1940s (In espionage parlance, “illegals” do not have legal cover if caught). According to Gordievsky, Akhmerov spoke for a long period about Hopkins, calling him the top Soviet asset in the US. Yet, Gordievsky and Andrew tiptoe around this allegation by representing that Hopkins was a naïve devotee who only courted Stalin to ensure victory over Hitler’s Germany.

Although I know Andrew well, and have met Gordievsky twice, I now doubt their characterization of Hopkins…. It does not ring true that Hopkins was an innocent dupe dedicated solely to defeating the Nazis. Hopkins comes over in history as crafty, secretive and no one’s fool, hardly the personality traits of a naïve fellow traveler. And his fingerprints are on the large majority of pro-Soviet policies implemented by the Roosevelt administration. [Diana] West [author of American Betrayal: Secret Assault on Our Nation's Character] deserves respect for cutting through the dross that obscures the evidence about Hopkins, and for screaming from the rooftops that the U.S. was the victim of a successful Soviet intelligence operation….

West mines Venona, the testimony of “Red spy queen” Elizabeth Bentley — who confessed her work for the communist underground to the FBI in 1945 — and the book Blacklisted by History by M. Stanton Evans, a re-examination of the McCarthy era using Venona and hundreds of other recently declassified documents from the FBI, CIA, and other agencies. And West lambastes the Truman administration for not revealing data from Venona that would have exonerated McCarthy and informed the nation that Soviet agents had indeed infiltrated key departments of the FDR administration….

The Rosenbergs, Alger Hiss, Harry Dexter White, Laurence Duggan, and 397 more American agents have been confirmed and verified as Soviet agents. West claims Harry Hopkins has been outed too in Venona, but Radosh and other scholars say this identification is bogus. But the Soviets also ran important agents of influence with great attention to the security of their identities. In essence, whether or not Hopkins is ever identified in Venona, he remains, as the cops say, a person of interest. (Bernie Reeves, “Reds under the Beds: Diana West Can’t Sleep,” American Thinker, August 10, 2013)

Influence flows downhill. What happened in Washington was repeated in many a city and State because the New Deal had made leftism respectable. By the end of World War II, which made nationalization the norm, the “mainstream” had shifted far to the left of where it had flowed before the Great Depression.

Influence also flows laterally. The growing respectability of leftism emboldened and empowered those institutions that naturally lean left: the media, academia, and the arts and letters. And so they went forth into the wilderness, amplifying the gospel according to Marx.

The most insidious influence has been the indoctrination of students — from pre-Kindergarten to graduate school — in the language and ideals of leftism: world government (i.e., anti-Americanism); redistributionism (as long as it hits only the “rich,” of course); favoritism for “minorities” (i.e., everyone but straight, white males); cultural diversity (any kind of crap in the arts, music, and literature, as long as it wasn’t produced by dead, white males); moral relativism (e.g., anti-feminism is bad, unless it’s practiced by Muslims). All of that, and much more, is the stuff of political correctness, which is an especially corrosive manifestation of social conformism, as Jason Richwine learned the hard way.

And then came the “pod people.” These are the masses of “ordinary people” who may have been deaf or impervious to indoctrination by teachers and professors, but who in vast numbers were (and continue to be) seduced into collaboration with the left by years and decades of post-educational exposure to leftist cant. Seduced by slanted opinionators — usually disguised as reporters. Seduced by novelists, screenwriters, playwrights, and other denizens of the world of arts and letters. Seduced by politicians (even “conservative” ones) trading “free lunches” and “local jobs” for votes.

It is more than a small wonder that there is such a sizable remnant of true conservatives and non-leftish libertarians (unlike this leftish one). But we are vastly outnumbered by staunch leftists, wishy-washy “moderates,” and “conservatives” whose first instinct is to defend sacred cows (Social Security and Medicare, for example) instead of defending liberty.

I will have more to say, in future posts, about the subversion of “Old America.” For now, I end with this observation from an earlier post:

If America was ever close to being a nation united and free, it has drifted far from that condition — arguably, almost as far as it had by 1861. And America’s condition will only worsen unless leaders emerge who will set the nation (or a large, independent portion of it) back on course. Barring the emergence of such leaders, America will continue to slide into baseness, divisiveness, and servitude.

*     *     *

Related posts:
Affirmative Action: Two Views from the Academy
Affirmative Action, One More Time
A Contrarian View of Segregation
After the Bell Curve
A Footnote . . .
Schelling and Segregation
Affirmative Action: Two Views from the Academy, Revisited
“Family Values,” Liberty, and the State
Is There Such a Thing as Society
Intellectuals and Capitalism
Secession
A New, New Constitution
Secession Redux
A New Cold War or Secession?
The Real Constitution and Civil Disobedience
A Declaration of Independence
First Principles
The Shape of Things to Come
The Near-Victory of Communism
The Constitution: Original Meaning, Corruption, and Restoration
“Intellectuals and Society”: A Review
Intelligence, Personality, Politics, and Happiness
The Left’s Agenda
The Left and Its Delusions
Intelligence as a Dirty Word
Crimes against Humanity
Abortion and Logic
The Myth That Same-Sex “Marriage” Causes No Harm
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
Abortion, Doublethink, and Left-Wing Blather
Reclaiming Liberty throughout the Land
Race and Reason: The Victims of Affirmative Action
Abortion, “Gay Rights,” and Liberty
Race and Reason: The Achievement Gap–Causes and Implications
Dan Quayle Was (Almost) Right
Tolerance on the Left
The Eclipse of “Old America”
Genetic Kinship and Society
Government in Macroeconomic Perspective
Keynesianism: Upside-Down Economics in the Collectivist Cause
Secession for All Seasons
Liberty and Society
Liberty as a Social Construct: Moral Relativism?
A Contrarian View of Universal Suffrage
Well-Founded Pessimism
America: Past, Present, and Future
Defending Liberty against (Pseudo) Libertarians
“Conversing” about Race
The Fallacy of Human Progress
Political Correctness vs. Civility

Spooky Numbers, Evolution, and Intelligent Design

“Spooky numbers” refers to Steven Landsburg’s position — expressed here in commenting on a post by Bob Murphy about intelligent design — that natural numbers just are. This encapsulates Landsburg’s thesis:

The natural numbers are irreducibly complex, moreso (by any reasonable definition) than anything in biology. But the natural numbers were not designed and did not evolve….

I previously addressed Landsburg’s claim about natural numbers, here; for example:

Why have humans, widely separated in time and space, agreed about numbers and the manipulation of numbers (mathematics)? Specifically, with respect to the natural numbers, why is there agreement that something called “one” or “un” or “ein” (and so on) is followed by something called “two” or “deux” or “zwei,” and so on? And why is there agreement that those numbers, when added, equal something called “three” or “trois” or “drei,” and so on? Is that evidence for the transcendent timelessness of numbers and mathematics, or is it nothing more than descriptive necessity?

By descriptive necessity, I mean that numbering things is just another way of describing them. If there are some oranges on a table, I can say many things about them; for example, they are spheroids, they are orange-colored, they contain juice and (usually) seeds, and their skins are bitter-tasting.

Another thing that I can say about the oranges is that there are a certain number of them — let us say three, in this case. But I can say that only because, by convention, I can count them: one, two, three. And if someone adds an orange to the aggregation, I can count again: one, two, three, four. And, by convention, I can avoid counting a second time by simply adding one (the additional orange) to three (the number originally on the table). Arithmetic is simply a kind of counting, and other mathematical manipulations are, in one way or another, extensions of arithmetic. And they all have their roots in numbering and the manipulation of numbers, which are descriptive processes.

But my ability to count oranges and perform mathematical operations based on counting does not mean that numbers and mathematics are timeless and transcendent. It simply means that I have used some conventions — devised and perfected by other humans over the eons — which enable me to describe certain facets of physical reality.
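To make the “arithmetic is a kind of counting” point concrete, here is a toy sketch of my own (not part of the quoted post), in which addition is defined as nothing more than repeated “count one more” steps:

```python
# A toy illustration (my own, not from the quoted post): addition as repeated counting.

def count_one_more(n):
    """The basic act of counting: move from a number to the next one."""
    return n + 1

def add_by_counting(a, b):
    """Add b to a by counting up from a, one step at a time -- a shortcut
    for recounting the whole aggregation from scratch."""
    total = a
    for _ in range(b):
        total = count_one_more(total)
    return total

# Three oranges on the table, one more is added:
print(add_by_counting(3, 1))  # 4 -- the same answer as recounting the pile
```

The sketch proves nothing about the metaphysics of numbers; it simply shows that the addition convention reproduces what recounting would find.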

Mathematics is merely a tool that can be useful in describing some aspects of the real world. Evolution and intelligent design, on the other hand, are theories about the real world. Though evolution and intelligent design are not complete theories of the real world, they are far more than mere mathematical descriptions of it.

To understand the distinction that I’m making, consider this: Some of the differences between apples and oranges can be described by resorting to the mathematics of color, taste, shape, and so on. But an apple or an orange — as an entity — is more than the sum of its various, partial descriptors. So, too, is the real world more than the sum of any number of mathematical descriptions, or of descriptors (physics, chemistry, biology, etc.) that have mathematical components. The real world encompasses love, hate, social customs, and religion — among many things that defy complete (or even partial) mathematical description.

Now, what about evolution and intelligent design? Are they reconcilable theories? Murphy implies that they are. He says that

Michael Behe–[a leading proponent of intelligent design] who (in)famously said that the bacterial flagellum exhibited too much design to have arisen through unguided evolution in the modern neo-Darwinian sense–does not have a problem with the idea that all of today’s cells share a common ancestor….

So yes, Behe is fine with the proposition that if we had a camera and a time machine, we could go observe the first cell on earth as it reproduced and yielded offspring. There would be nothing magical in these operations; they would obey the laws of physics, chemistry, and biology. The cells would further divide and so on, and then over billions of years there would be mutations and the environment would favor some of the mutants over their kin, such that natural selection over time would yield the bacterial flagellum and the human nervous system.

Yet Behe’s point is that when you look at what this process spits out at the end, you can’t deny that a guiding intelligence must be involved somehow.

The question-begging of that last sentence is what frustrates scientists. It says, in effect, that there must be a guiding intelligence, and the complexity of the products of evolution proves it.

No, it doesn’t prove it. God — as an entity apart from the material universe — cannot be shown to exist by pointing to particular aspects of the material universe, be they evolution or the Big Bang (to offer but two examples). God is a logical necessity, beyond empirical proof or disproof.

I greatly respect the sincerity of theists and the credence they give to sacred texts and accounts of visions and miracles. Their credence may be well-placed. But I am just too much of a doubting Thomas to rely on unfalsifiable, second-hand evidence about the nature of God and His role in the workings of the universe.

I will say this: Given the logical necessity of God, it follows that the universe operates in accordance with the “laws” that are inherent in His creation. Intelligent design, as an explanation for the forms taken by living creatures, is therefore something of a truism. But intelligent design cannot be proved by reference to products of evolution.

Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Free Will: A Proof by Example?
A Theory of Everything, Occam’s Razor, and Baseball
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
What Is Time?
Science’s Anti-Scientific Bent
The Tenth Dimension
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Are the Natural Numbers Supernatural?
Not-So-Random Thoughts (II) (first item)
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness

Pinker Commits Scientism

Steven Pinker, who seems determined to outdo Bryan Caplan in wrongheadedness, devotes “Science Is Not Your Enemy” (The New Republic, August 6, 2013) to the defense of scientism. Actually, Pinker doesn’t overtly defend scientism, which is indefensible; he just redefines it to mean science:

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.

After that slippery performance, it’s all smooth sailing — or so Pinker thinks — because all he has to do is point out all the good things about science. And if scientism=science, then scientism is good, right?

Wrong. Scientism remains indefensible, and there’s a lot of scientism in what passes for science. You don’t need to take my word for it; Pinker’s own words tell the tale.

But, first, let’s get clear about the meaning and fallaciousness of scientism. The various writers cited by Pinker describe it well, but Hayek probably offers the most thorough indictment of it; for example:

[W]e shall, wherever we are concerned … with slavish imitation of the method and language of Science, speak of “scientism” or the “scientistic” prejudice…. It should be noted that, in the sense in which we shall use these terms, they describe, of course, an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed. The scientistic as distinguished from the scientific view is not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it…..

The blind transfer of the striving for quantitative measurements to a field in which the specific conditions are not present which give it its basic importance in the natural sciences, is the result of an entirely unfounded prejudice. It is probably responsible for the worst aberrations and absurdities produced by scientism in the social sciences. It not only leads frequently to the selection for study of the most irrelevant aspects of the phenomena because they happen to be measurable, but also to “measurements” and assignments of numerical values which are absolutely meaningless. What a distinguished philosopher recently wrote about psychology is at least equally true of the social sciences, namely that it is only too easy “to rush off to measure something without considering what it is we are measuring, or what measurement means. In this respect some recent measurements are of the same logical type as Plato’s determination that a just ruler is 729 times as happy as an unjust one.”…

Closely connected with the “objectivism” of the scientistic approach is its methodological collectivism, its tendency to treat “wholes” like “society” or the “economy,” “capitalism” (as a given historical “phase”) or a particular “industry” or “class” or “country” as definitely given objects about which we can discover laws by observing their behavior as wholes. While the specific subjectivist approach of the social sciences starts … from our knowledge of the inside of these social complexes, the knowledge of the individual attitudes which form the elements of their structure, the objectivism of the natural sciences tries to view them from the outside; it treats social phenomena not as something of which the human mind is a part and the principles of whose organization we can reconstruct from the familiar parts, but as if they were objects directly perceived by us as wholes….

The belief that human history, which is the result of the interaction of innumerable human minds, must yet be subject to simple laws accessible to human minds is now so widely held that few people are at all aware what an astonishing claim it really implies. Instead of working patiently at the humble task of rebuilding from the directly known elements the complex and unique structures which we find in the world, and of tracing from the changes in the relations between the elements the changes in the wholes, the authors of these pseudo-theories of history pretend to be able to arrive by a kind of mental short cut at a direct insight into the laws of succession of the immediately apprehended wholes. However doubtful their status, these theories of development have achieved a hold on public imagination much greater than any of the results of genuine systematic study. “Philosophies” or “theories” of history (or “historical theories”) have indeed become the characteristic feature, the “darling vice” of the 19th century. From Hegel and Comte, and particularly Marx, down to Sombart and Spengler these spurious theories came to be regarded as representative results of social science; and through the belief that one kind of “system” must as a matter of historical necessity be superseded by a new and different “system,” they have even exercised a profound influence on social evolution. This they achieved mainly because they looked like the kind of laws which the natural sciences produced; and in an age when these sciences set the standard by which all intellectual effort was measured, the claim of these theories of history to be able to predict future developments was regarded as evidence of their pre-eminently scientific character. Though merely one among many characteristic 19th century products of this kind, Marxism more than any of the others has become the vehicle through which this result of scientism has gained so wide an influence that many of the opponents of Marxism equally with its adherents are thinking in its terms. (Friedrich A. Hayek, The Counter Revolution Of Science [Kindle Locations 120-1180], The Free Press.)

After a barrage like that (and this), what’s a defender of scientism to do? Pinker’s tactic is to stop using “scientism” and start using “science.” This makes it seem as if he really isn’t defending scientism, but rather trying to show how science can shed light onto subjects that are usually not in the province of science. In reality, Pinker preaches scientism by calling it science.

For example:

The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation.

There is nothing new in this, as Pinker admits by adverting to Madison. Nor was the understanding of human nature “submerged” except in the writings of scientistic social “scientists.” We ordinary mortals were never fooled. Moreover, Pinker’s idea of scientific political science seems to be data-dredging:

With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively.

As explained here, data-dredging is about as scientistic as it gets:

When enough hypotheses are tested, it is virtually certain that some falsely appear statistically significant, since every data set with any degree of randomness contains some spurious correlations. Researchers using data mining techniques, if they are not careful, can be easily misled by these apparently significant results, even though they are mere artifacts of random variation.
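The mechanics are easy to demonstrate. Here is a minimal sketch (my own illustration, not drawn from the quoted source) in which a couple of hundred pure-noise "predictors" are tested against a pure-noise outcome; at the conventional 5-percent significance level, roughly ten of them will look "significant" by chance alone.

```python
# A minimal sketch (not from the quoted source): test many pure-noise
# "predictors" against a pure-noise outcome and count how many appear
# "significant" at the 5-percent level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs, n_hypotheses, alpha = 100, 200, 0.05

outcome = rng.normal(size=n_obs)                     # random "outcome"
predictors = rng.normal(size=(n_hypotheses, n_obs))  # 200 random "predictors"

false_positives = sum(
    stats.pearsonr(pred, outcome)[1] < alpha for pred in predictors
)
print(f"{false_positives} of {n_hypotheses} noise correlations appear 'significant'")
# Expect roughly alpha * n_hypotheses, i.e., about 10 spurious "findings."
```

A dredger who reports only the handful of "hits" has discovered nothing but randomness.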

Turning to the humanities, Pinker writes:

[T]here can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate [sic] and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of a progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

What on earth is Pinker talking about? This is over-the-top bafflegab worthy of Professor Irwin Corey. But because it comes from the keyboard of a noted (self-promoting) academic, we are meant to take it seriously.

Yes, art, culture, and society are products of human brains. So what? Poker is, too, and it’s a lot more amenable to explication by the mathematical tools of science. But the successful application of those tools depends on traits that are more art than science (bluffing, spotting “tells,” avoiding “tells,” for example).

More “explanatory depth” in the humanities means a deeper pile of B.S. Great art, literature, and music aren’t concocted formulaically. If they could be, modernism and postmodernism wouldn’t have yielded mountains of trash.

Oh, I know: It will be different next time. As if the tools of science are immune to misuse by obscurantists, relativists, and practitioners of political correctness. Tell it to those climatologists who dare to challenge the conventional wisdom about anthropogenic global warming. Tell it to the “sub-human” victims of the Third Reich’s medical experiments and gas chambers.

Pinker anticipates this kind of objection:

At a 2011 conference, [a] colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

But the Tuskegee study was only a one-time failure in the sense that it was the only Tuskegee study. As a type of failure — the misuse of science (witting and unwitting) — it goes hand-in-hand with the advance of scientific knowledge. Should science be abandoned because of that? Of course not. But the hard fact is that science, qua science, is powerless against human nature, which defies scientific control.

Pinker plods on by describing ways in which science can contribute to the visual arts, music, and literary scholarship:

The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

I wonder how Rembrandt and the Impressionists (among other pre-moderns) managed to create visual art of such evident excellence without relying on the kinds of scientific mechanisms invoked by Pinker. I wonder what music scholars would learn about excellence in composition that isn’t already evident in the general loathing of audiences for most “serious” modern and contemporary music.

As for literature, great writers know instinctively and through self-criticism how to tell stories that realistically depict character, social psychology, culture, conflict, and all the rest. Scholars (and critics), at best, can acknowledge what rings true and has dramatic or comedic merit. Scientistic pretensions in scholarship (and criticism) may result in promotions and raises for the pretentious, but they do not add to the sum of human enjoyment — which is the real aim of literature.

Pinker inveighs against critics of scientism (science, in Pinker’s vocabulary) who cry “reductionism” and “simplification.” With respect to the former, Pinker writes:

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

It is reductionist to explain a complex happening in terms of a deeper principle when that principle fails to account for the complex happening. Pinker obscures that essential point by offering a silly and irrelevant example about World War I. This bit of misdirection is unsurprising, given Pinker’s foray into reductionism, The Better Angels of Our Nature: Why Violence Has Declined, which I examine here.

As for simplification, Pinker says:

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic.

Pinker again dodges the issue. Simplification is simplistic when the “general principles” fail to account adequately for the phenomenon in question.

If Pinker is right about anything, it is when he says that “the intrusion of science into the territories of the humanities has been deeply resented.” The resentment, though some of it may be wrongly motivated, is fully justified.

Related reading (added 08/10/13 and 09/06/13):
Bill Vallicella, “Steven Pinker on Scientism, Part One,” Maverick Philosopher, August 10, 2013
Leon Wieseltier, “Crimes Against Humanities,” The New Republic, September 3, 2013 (gated)

Related posts about Pinker:
Nonsense about Presidents, IQ, and War
The Fallacy of Human Progress

Related posts about modernism:
Speaking of Modern Art
Making Sense about Classical Music
An Addendum about Classical Music
My Views on Classical Music, Vindicated
But It’s Not Music
A Quick Note about Music
Modernism in the Arts and Politics
Taste and Art
Modernism and the Arts

Related posts about science:
Science’s Anti-Scientific Bent
Modeling Is Not Science
Physics Envy
We, the Children of the Enlightenment
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
Scientism, Evolution, and the Meaning of Life
The Candle Problem: Balderdash Masquerading as Science
Mysteries: Sacred and Profane
The Glory of the Human Mind

The Glory of the Human Mind

As an antidote to the bleakness of “Nothingness” and in tribute to the glory that is the human mind, I refer you to three old posts of mine: “Flow,” “The Purpose-Driven Life,” and “In Praise of Solitude.”

And I also refer you to every great artist, writer, and thinker, from Socrates and Shakespeare to Newton and Einstein to Bach and Dvorak to Nabokov and Nagel.

A particular mind may be evanescent, but the beauty, wisdom, and knowledge produced by the best minds are priceless. The sum total of beauty, wisdom, and knowledge that is available to us — though too often ignored and derided — is overwhelming. No one can possibly absorb and understand all of it, which means that no one need waste his time on mindless intellectual and artistic dreck.

That so much time is wasted on dreck — often whole lifetimes — is a greater tragedy than the inevitable death of any particular artist, writer, or thinker. Equally tragic is the rejection of civilizing traditions, which are also sublime products of the human mind. Thus:

I hate modern art that swaps form for dead sharks; and modern music that exchanges harmony for noise…. I hate religious leaders who think that God is found “in the spaces” and that worship is therapy. I hate our pornographic culture, our tasteless battery foods, and our TV that treats adults like children and children like adults. I hate our obsession with irony, as if a shrug of the shoulders is cleverer than serious inquiry. I hate the death of chivalry, manners and the doffed hats. I hate our promotion of sex over romance – today’s Brief Encounters are very different things. I hate the eradication of guilt and shame, very useful concepts that hold us back from indulgence. (Tim Stanley, “Conservatives: Don’t Despair of Our Corrupt, Decadent Age. Write about It,” The Telegraph, August 2, 2013)

Life needn’t be like that. When all else fails us, we can take refuge in our own minds, where beauty dwells — if we have cultivated our minds so that beauty thrives there.

The potentiality of the human mind allows us to be more — much more — than the “moist robots” of New Atheism. Thank God for that.

Nothingness

Edward Feser’s post, “Fifty Shades of Nothing,” prompts this one.

Preamble:

Nothing is the alternative to the existence of the universe,* that is, to the existence of something. Something is either caused by a self-existent, uncaused entity (i.e., God), or something simply exists. In the latter case, something must be uncaused and eternal, ruling out the possibility of nothing.

Therefore, given the necessity of God, nothing is possible, though there has been something for at least 14 billion years, according to the Big Bang theory. And there may have been something into the indefinite past, according to cyclic models of cosmology.

This suggests the following questions:

A. Given that nothing is possible, what can be said about it, other than that it is the alternative to something?

Consider:

1. Make a fist and then open it. What do you see? “Nothing” is the usual answer if you’re holding nothing in your hand. But the “nothing” that you see is in fact the absence of an object in your hand. (It would be mere pedantry to say that your hand is “holding” a column of air.) Therefore, you don’t see “nothing”; you see an open hand, which appears to be empty. Nothing, by contrast, can’t be described in terms of an open hand or the absence of an object in the space above an open hand. If there were nothing, there would be no open hand to begin with. A vacuum in a bottle or in outer space is of the same ilk; it is an apparent emptiness (lack of matter, though not necessarily of energy) that exists only because there is something.

2. If you are a philosophical materialist (i.e., disbeliever in supernatural phenomena or divine interventions),** you believe that a person ceases to exist when his brain ceases to function (if not when the person lapses into permanent unconsciousness). From your perspective, the cessation of brain function (or even of consciousness) puts an end to the things that made the person a particular being with a unique set of characteristics: personality, memory, habits, ways of talking, laughing, etc. You might even say that where there was a particular person there is now “nothing.” But that “nothing” is really an absence or negation of the particular person who existed before brain death (or permanent lapse into unconsciousness). It is not the kind of nothing that is understood as an alternative to the existence of the universe; it is the perceived absence of an erstwhile portion of that universe. In fact, by the laws of physics, that erstwhile portion of the universe (a particular person) actually continues to exist, though not in a form that you would label with the name of the particular person. Here again, we have “nothing” (i.e., absence of a person) only because there was something. But we do not have nothing; there is still brain matter, which has assumed a different electrochemical state than before brain death. (This is not to say that the residue of a person is the same thing as the person. Nor is it to deny those wondrous things — persons and their many positive accomplishments — that emerge from the activity of the brain.)

Generally:

Nothing more can be said of nothing than that it is the alternative to something. Nothing, by definition, has no characteristics. It is neither imaginable nor describable, despite the temptation to think and speak of it as some kind of empty blackness within which nothing exists. The image of an empty blackness is an image of something, not nothing.

B. Can nothing follow something, as death follows life?

Nothing can follow something only if something (i.e., the universe) is annihilated. Annihilation necessarily means the disappearance of all traces of matter and energy, and of the space that contains them. It doesn’t mean the conversion of matter, energy, and space to a mere blankness (black, white, or otherwise).

Annihilation is beyond the ability of humans, and beyond the forces of nature. It is a job for God.

C. Does the fact that there is something rule out the possibility of nothing?

No. See the preamble and the answer to B.

__________
* I use “universe” generally, to include the possibility of a spatial and/or temporal multiverse.

** I am a kind of philosophical materialist, but unlike most materialists I am not an atheist. Specifically, I believe that the universe was created by God. But I also doubt (regretfully) that God plays an active role in the workings of His creation, except to sustain it (as against the possibility of annihilation). As long as the universe is sustained, it (seemingly) operates according to “laws” that are (in theory) discoverable, though the ultimate nature of existence is not discoverable.

Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Free Will: A Proof by Example?
A Theory of Everything, Occam’s Razor, and Baseball
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
What Is Time?
Science’s Anti-Scientific Bent
The Tenth Dimension
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Not-So-Random Thoughts (II) (first item)
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology

The Fallacy of Human Progress

Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined is cited gleefully by leftists and cockeyed optimists as evidence that human beings, on the whole, are becoming kinder and gentler because of:

  • The Leviathan – the rise of the modern nation-state and judiciary “with a monopoly on the legitimate use of force,” which “can defuse the [individual] temptation of exploitative attack, inhibit the impulse for revenge, and circumvent…self-serving biases”;
  • Commerce – the rise of “technological progress [allowing] the exchange of goods and services over longer distances and larger groups of trading partners,” so that “other people become more valuable alive than dead” and “are less likely to become targets of demonization and dehumanization”;
  • Feminization – increasing respect for “the interests and values of women”;
  • Cosmopolitanism – the rise of forces such as literacy, mobility, and mass media, which “can prompt people to take the perspectives of people unlike themselves and to expand their circle of sympathy to embrace them”;
  • The Escalator of Reason – an “intensifying application of knowledge and rationality to human affairs,” which “can force people to recognize the futility of cycles of violence, to ramp down the privileging of their own interests over others’, and to reframe violence as a problem to be solved rather than a contest to be won.”

I can tell you that Pinker’s book is hogwash because two very bright leftists — Peter Singer and Will Wilkinson — have strongly and wrongly endorsed some of its key findings. Singer writes:

Pinker argues that enhanced powers of reasoning give us the ability to detach ourselves from our immediate experience and from our personal or parochial perspective, and frame our ideas in more abstract, universal terms. This in turn leads to better moral commitments, including avoiding violence. It is just this kind of reasoning ability that has improved during the 20th century. He therefore suggests that the 20th century has seen a “moral Flynn effect, in which an accelerating escalator of reason carried us away from impulses that lead to violence” and that this lies behind the long peace, the new peace, and the rights revolution. Among the wide range of evidence he produces in support of that argument is the tidbit that since 1946, there has been a negative correlation between an American president’s I.Q. and the number of battle deaths in wars involving the United States.

I disposed of this staggeringly specious correlation here:

There is the convenient cutoff point of 1946. Why 1946? Well, it enables Pinker-Singer to avoid the inconvenient fact that the Civil War, World War I, and World War II happened while the presidency was held by three men who [purportedly] had high IQs: Lincoln, Wilson, and FDR….

If you buy the brand of snake oil being peddled by Pinker-Singer, you must believe that the “dumbest” and “smartest” presidents are unlikely to get the U.S. into wars that result in a lot of battle deaths, whereas some (but, mysteriously, not all) of the “medium-smart” presidents (Lincoln, Wilson, FDR) are likely to do so….

Let us advance from one to two explanatory variables. The second explanatory variable that strongly suggests itself is political party. And because it is not good practice to omit relevant statistics (a favorite gambit of liars), I estimated an equation based on “IQ” and battle deaths for the 27 men who served as president from the first Republican presidency (Lincoln’s) through the presidency of GWB….

In other words, battle deaths rise at the rate of 841 per IQ point (so much for Pinker-Singer). But there will be fewer deaths with a Republican in the White House (so much for Pinker-Singer’s implied swipe at GWB)….

All of this is nonsense, of course, for two reasons: [the] estimates of IQ are hogwash, and the number of U.S. battle deaths is a meaningless number, taken by itself.

… [The] estimates of presidents’ IQs put every one of them — including the “dumbest,” U.S. Grant — in the top 2.3 percent of the population. And the mean of Simonton’s estimates puts the average president in the top 0.1 percent (one-tenth of one percent) of the population. That is literally incredible.
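For readers who want to see the form of that estimation, here is a hedged sketch of a two-variable ordinary-least-squares regression (battle deaths on estimated IQ plus a party dummy). The figures in it are placeholders of my own invention, not the presidential data or the 841-deaths-per-IQ-point estimate reported above.

```python
# A hedged sketch of the two-variable regression described above. The inputs
# are hypothetical placeholders, not the actual presidential data.
import numpy as np

iq = np.array([130.0, 140.0, 135.0, 150.0, 145.0, 128.0])   # estimated IQs (hypothetical)
republican = np.array([1, 0, 1, 0, 1, 0])                    # party dummy (hypothetical)
battle_deaths = np.array([2_000.0, 120_000.0, 500.0, 300_000.0, 36_000.0, 58_000.0])

# Ordinary least squares: deaths = b0 + b1*IQ + b2*Republican
X = np.column_stack([np.ones_like(iq), iq, republican])
b0, b1, b2 = np.linalg.lstsq(X, battle_deaths, rcond=None)[0]
print(f"intercept = {b0:.0f}, deaths per IQ point = {b1:.0f}, Republican effect = {b2:.0f}")
```

The point of the exercise is not the coefficients themselves but how easily such a specification can be made to say whatever the analyst wants, which is the point of the excerpt.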

As for Wilkinson, he praises statistics adduced by Pinker that show a decline in the use of capital punishment:

In the face of such a decisive trend in moral culture, we can say a couple different things. We can say that this is just change and says nothing in particular about what is really right or wrong, good or bad. Or we can say this is evidence of moral progress, that we have actually become better. I prefer the latter interpretation for basically the same reasons most of us see the abolition of slavery and the trend toward greater equality between races and sexes as progress and not mere morally indifferent change. We can talk about the nature of moral progress later. It’s tricky. For now, I want you to entertain the possibility that convergence toward the idea that execution is wrong counts as evidence that it is wrong.

My observation:

I would count convergence toward the idea that execution is wrong as evidence that it is wrong, if … that idea were (a) increasingly held by individuals who (b) had arrived at their “enlightenment” uninfluenced by operatives of the state (legislatures and judges), who take it upon themselves to flout popular support of the death penalty. What we have, in the case of the death penalty, is moral regress, not moral progress.

Moral regress because the abandonment of the death penalty puts innocent lives at risk. Capital punishment sends a message, and the message is effective when it is delivered: it deters homicide. And even if it didn’t, it would at least remove killers from our midst, permanently. By what standard of morality can one claim that it is better to spare killers than to protect innocents? For that matter, by what standard of morality is it better to kill innocents (in the womb) than to spare killers? Proponents of abortion (like Singer and Wilkinson) — who by and large oppose capital punishment — are completely lacking in moral authority.

Returning to Pinker’s thesis that violence has declined, I quote a review at Foseti:

Pinker’s basic problem is that he essentially defines “violence” in such a way that his thesis that violence is declining becomes self-fulfilling. “Violence” to Pinker is fundamentally synonymous with behaviors of older civilizations. On the other hand, modern practices are defined to be less violent than older practices.

A while back, I linked to a story about a guy in my neighborhood who’s been arrested over 60 times for breaking into cars. A couple hundred years ago, this guy would have been killed for this sort of vandalism after he got caught the first time. Now, we feed him and shelter him for a while and then we let him back out to do this again. Pinker defines the new practice as a decline in violence – we don’t kill the guy anymore! Someone from a couple hundred years ago would be appalled that we let the guy continue destroying other peoples’ property without consequence. In the mind of those long dead, “violence” has in fact increased. Instead of a decline in violence, this practice seems to me like a decline in justice – nothing more or less.

Here’s another example, Pinker uses creative definitions to show that the conflicts of the 20th Century pale in comparison to previous conflicts. For example, all the Mongol Conquests are considered one event, even though they cover 125 years. If you lump all these various conquests together and you split up WWI, WWII, Mao’s takeover in China, the Bolshevik takeover of Russia, the Russian Civil War, and the Chinese Civil War (yes, he actually considers this a separate event from Mao), you unsurprisingly discover that the events of the 20th Century weren’t all that violent compared to events in the past! Pinker’s third most violent event is the “Mideast Slave Trade” which he says took place between the 7th and 19th Centuries. Seriously. By this standard, all the conflicts of the 20th Century are related. Is the Russian Revolution or the rise of Mao possible without WWII? Is WWII possible without WWI? By this consistent standard, the 20th Century wars of Communism would have seen the worst conflict by far. Of course, if you fiddle with the numbers, you can make any point you like.

There’s much more to the review, including some telling criticisms of Pinker’s five reasons for the (purported) decline in violence. That the reviewer somehow still wants to believe in the rightness of Pinker’s thesis says more about the reviewer’s optimism than it does about the validity of Pinker’s thesis.

That thesis is fundamentally flawed, as Robert Epstein points out in a review at Scientific American:

[T]he wealth of data [Pinker] presents cannot be ignored—unless, that is, you take the same liberties as he sometimes does in his book. In two lengthy chapters, Pinker describes psychological processes that make us either violent or peaceful, respectively. Our dark side is driven by an evolution-based propensity toward predation and dominance. On the angelic side, we have, or at least can learn, some degree of self-control, which allows us to inhibit dark tendencies.

There is, however, another psychological process—confirmation bias—that Pinker sometimes succumbs to in his book. People pay more attention to facts that match their beliefs than those that undermine them. Pinker wants peace, and he also believes in his hypothesis; it is no surprise that he focuses more on facts that support his views than on those that do not. The SIPRI arms data are problematic, and a reader can also cherry-pick facts from Pinker’s own book that are inconsistent with his position. He notes, for example, that during the 20th century homicide rates failed to decline in both the U.S. and England. He also describes in graphic and disturbing detail the savage way in which chimpanzees—our closest genetic relatives in the animal world—torture and kill their own kind.

Of greater concern is the assumption on which Pinker’s entire case rests: that we look at relative numbers instead of absolute numbers in assessing human violence. But why should we be content with only a relative decrease? By this logic, when we reach a world population of nine billion in 2050, Pinker will conceivably be satisfied if a mere two million people are killed in war that year.

The biggest problem with the book, though, is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.

Pinker’s belief that violence is on the decline reminds me of “it’s different this time,” a phrase that was on the lips of hopeful stock-pushers, stock-buyers, and pundits during the stock-market bubble of the late 1990s. That bubble ended, of course, in the spectacular crash of 2000.

Predictions about the future of humankind are better left in the hands of writers who see human nature whole, and who are not out to prove that it can be shaped or contained by the kinds of “liberal” institutions that Pinker so obviously favors.

Consider this, from an article by Robert J. Samuelson at The Washington Post:

[T]he Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar. Amid the controversy over leaks from the National Security Agency, this looms as an even bigger downside.

By cyberwarfare, I mean the capacity of groups — whether nations or not — to attack, disrupt and possibly destroy the institutions and networks that underpin everyday life. These would be power grids, pipelines, communication and financial systems, business record-keeping and supply-chain operations, railroads and airlines, databases of all types (from hospitals to government agencies). The list runs on. So much depends on the Internet that its vulnerability to sabotage invites doomsday visions of the breakdown of order and trust.

In a report, the Defense Science Board, an advisory group to the Pentagon, acknowledged “staggering losses” of information involving weapons design and combat methods to hackers (not identified, but probably Chinese). In the future, hackers might disarm military units. “U.S. guns, missiles and bombs may not fire, or may be directed against our own troops,” the report said. It also painted a specter of social chaos from a full-scale cyberassault. There would be “no electricity, money, communications, TV, radio or fuel (electrically pumped). In a short time, food and medicine distribution systems would be ineffective.”

But Pinker wouldn’t count the resulting chaos as violence, as long as human beings were merely starving and dying of various diseases. That violence would ensue, of course, is another story, which is told by John Gray in The Silence of Animals: On Progress and Other Modern Myths. Gray’s book — published 18 months after Better Angels — could be read as a refutation of Pinker’s book, though Gray never mentions Pinker or his book.

The gist of Gray’s argument is faithfully recounted in a review of Gray’s book by Robert W. Merry at The National Interest:

The noted British historian J. B. Bury (1861–1927) … wrote, “This doctrine of the possibility of indefinitely moulding the characters of men by laws and institutions . . . laid a foundation on which the theory of the perfectibility of humanity could be raised. It marked, therefore, an important stage in the development of the doctrine of Progress.”

We must pause here over this doctrine of progress. It may be the most powerful idea ever conceived in Western thought—emphasizing Western thought because the idea has had little resonance in other cultures or civilizations. It is the thesis that mankind has advanced slowly but inexorably over the centuries from a state of cultural backwardness, blindness and folly to ever more elevated stages of enlightenment and civilization—and that this human progression will continue indefinitely into the future…. The U.S. historian Charles A. Beard once wrote that the emergence of the progress idea constituted “a discovery as important as the human mind has ever made, with implications for mankind that almost transcend imagination.” And Bury, who wrote a book on the subject, called it “the great transforming conception, which enables history to define her scope.”

Gray rejects it utterly. In doing so, he rejects all of modern liberal humanism. “The evidence of science and history,” he writes, “is that humans are only ever partly and intermittently rational, but for modern humanists the solution is simple: human beings must in future be more reasonable. These enthusiasts for reason have not noticed that the idea that humans may one day be more rational requires a greater leap of faith than anything in religion.” In an earlier work, Straw Dogs: Thoughts on Humans and Other Animals, he was more blunt: “Outside of science, progress is simply a myth.”

…Gray has produced more than twenty books demonstrating an expansive intellectual range, a penchant for controversy, acuity of analysis and a certain political clairvoyance.

He rejected, for example, Francis Fukuyama’s heralded “End of History” thesis—that Western liberal democracy represents the final form of human governance—when it appeared in this magazine in 1989. History, it turned out, lingered long enough to prove Gray right and Fukuyama wrong….

Though for decades his reputation was confined largely to intellectual circles, Gray’s public profile rose significantly with the 2002 publication of Straw Dogs, which sold impressively and brought him much wider acclaim than he had known before. The book was a concerted and extensive assault on the idea of progress and its philosophical offspring, secular humanism. The Silence of Animals is in many ways a sequel, plowing much the same philosophical ground but expanding the cultivation into contiguous territory mostly related to how mankind—and individual humans—might successfully grapple with the loss of both metaphysical religion of yesteryear and today’s secular humanism. The fundamentals of Gray’s critique of progress are firmly established in both books and can be enumerated in summary.

First, the idea of progress is merely a secular religion, and not a particularly meaningful one at that. “Today,” writes Gray in Straw Dogs, “liberal humanism has the pervasive power that was once possessed by revealed religion. Humanists like to think they have a rational view of the world; but their core belief in progress is a superstition, further from the truth about the human animal than any of the world’s religions.”

Second, the underlying problem with this humanist impulse is that it is based upon an entirely false view of human nature—which, contrary to the humanist insistence that it is malleable, is immutable and impervious to environmental forces. Indeed, it is the only constant in politics and history. Of course, progress in scientific inquiry and in resulting human comfort is a fact of life, worth recognition and applause. But it does not change the nature of man, any more than it changes the nature of dogs or birds. “Technical progress,” writes Gray, again in Straw Dogs, “leaves only one problem unsolved: the frailty of human nature. Unfortunately that problem is insoluble.”

That’s because, third, the underlying nature of humans is bred into the species, just as the traits of all other animals are. The most basic trait is the instinct for survival, which is placed on hold when humans are able to live under a veneer of civilization. But it is never far from the surface. In The Silence of Animals, Gray discusses the writings of Curzio Malaparte, a man of letters and action who found himself in Naples in 1944, shortly after the liberation. There he witnessed a struggle for life that was gruesome and searing. “It is a humiliating, horrible thing, a shameful necessity, a fight for life,” wrote Malaparte. “Only for life. Only to save one’s skin.” Gray elaborates:

Observing the struggle for life in the city, Malaparte watched as civilization gave way. The people the inhabitants had imagined themselves to be—shaped, however imperfectly, by ideas of right and wrong—disappeared. What were left were hungry animals, ready to do anything to go on living; but not animals of the kind that innocently kill and die in forests and jungles. Lacking a self-image of the sort humans cherish, other animals are content to be what they are. For human beings the struggle for survival is a struggle against themselves.

When civilization is stripped away, the raw animal emerges. “Darwin showed that humans are like other animals,” writes Gray in Straw Dogs, expressing in this instance only a partial truth. Humans are different in a crucial respect, captured by Gray himself when he notes that Homo sapiens inevitably struggle with themselves when forced to fight for survival. No other species does that, just as no other species has such a range of spirit, from nobility to degradation, or such a need to ponder the moral implications as it fluctuates from one to the other. But, whatever human nature is—with all of its capacity for folly, capriciousness and evil as well as virtue, magnanimity and high-mindedness—it is embedded in the species through evolution and not subject to manipulation by man-made institutions.

Fourth, the power of the progress idea stems in part from the fact that it derives from a fundamental Christian doctrine—the idea of providence, of redemption….

“By creating the expectation of a radical alteration in human affairs,” writes Gray, “Christianity . . . founded the modern world.” But the modern world retained a powerful philosophical outlook from the classical world—the Socratic faith in reason, the idea that truth will make us free; or, as Gray puts it, the “myth that human beings can use their minds to lift themselves out of the natural world.” Thus did a fundamental change emerge in what was hoped of the future. And, as the power of Christian faith ebbed, along with its idea of providence, the idea of progress, tied to the Socratic myth, emerged to fill the gap. “Many transmutations were needed before the Christian story could renew itself as the myth of progress,” Gray explains. “But from being a succession of cycles like the seasons, history came to be seen as a story of redemption and salvation, and in modern times salvation became identified with the increase of knowledge and power.”

Thus, it isn’t surprising that today’s Western man should cling so tenaciously to his faith in progress as a secular version of redemption. As Gray writes, “Among contemporary atheists, disbelief in progress is a type of blasphemy. Pointing to the flaws of the human animal has become an act of sacrilege.” In one of his more brutal passages, he adds:

Humanists believe that humanity improves along with the growth of knowledge, but the belief that the increase of knowledge goes with advances in civilization is an act of faith. They see the realization of human potential as the goal of history, when rational inquiry shows history to have no goal. They exalt nature, while insisting that humankind—an accident of nature—can overcome the natural limits that shape the lives of other animals. Plainly absurd, this nonsense gives meaning to the lives of people who believe they have left all myths behind.

In the Silence of Animals, Gray explores all this through the works of various writers and thinkers. In the process, he employs history and literature to puncture the conceits of those who cling to the progress idea and the humanist view of human nature. Those conceits, it turns out, are easily punctured when subjected to Gray’s withering scrutiny….

And yet the myth of progress is so powerful in part because it gives meaning to modern Westerners struggling, in an irreligious era, to place themselves in a philosophical framework larger than just themselves….

Much of the human folly catalogued by Gray in The Silence of Animals makes a mockery of the earnest idealism of those who later shaped and molded and proselytized humanist thinking into today’s predominant Western civic philosophy.

There was an era of realism, but it was short-lived:

But other Western philosophers, particularly in the realm of Anglo-Saxon thought, viewed the idea of progress in much more limited terms. They rejected the idea that institutions could reshape mankind and usher in a golden era of peace and happiness. As Bury writes, “The general tendency of British thought was to see salvation in the stability of existing institutions, and to regard change with suspicion.” With John Locke, these thinkers restricted the proper role of government to the need to preserve order, protect life and property, and maintain conditions in which men might pursue their own legitimate aims. No zeal here to refashion human nature or remake society.

A leading light in this category of thinking was Edmund Burke (1729–1797), the British statesman and philosopher who, writing in his famous Reflections on the Revolution in France, characterized the bloody events of the Terror as “the sad but instructive monuments of rash and ignorant counsel in time of profound peace.” He saw them, in other words, as reflecting an abstractionist outlook that lacked any true understanding of human nature. The same skepticism toward the French model was shared by many of the Founding Fathers, who believed with Burke that human nature isn’t malleable but rather potentially harmful to society. Hence, it needed to be checked. The central distinction between the American and French revolutions, in the view of conservative writer Russell Kirk, was that the Americans generally held a “biblical view of man and his bent toward sin,” whereas the French opted for “an optimistic doctrine of human goodness.” Thus, the American governing model emerged as a secular covenant “designed to restrain the human tendencies toward violence and fraud . . . [and] place checks upon will and appetite.”

Most of the American Founders rejected the French philosophes in favor of the thought and history of the Roman Republic, where there was no idea of progress akin to the current Western version. “Two thousand years later,” writes Kirk, “the reputation of the Roman constitution remained so high that the framers of the American constitution would emulate the Roman model as best they could.” They divided government powers among men and institutions and created various checks and balances. Even the American presidency was modeled generally on the Roman consular imperium, and the American Senate bears similarities to the Roman version. Thus did the American Founders deviate from the French abstractionists and craft governmental structures to fit humankind as it actually is—capable of great and noble acts, but also of slipping into vice and treachery when unchecked. That ultimately was the genius of the American system.

But, as the American success story unfolded, a new collection of Western intellectuals, theorists and utopians—including many Americans—continued to toy with the idea of progress. And an interesting development occurred. After centuries of intellectual effort aimed at developing the idea of progress as an ongoing chain of improvement with no perceived end into the future, this new breed of “Progress as Power” thinkers began to declare their own visions as the final end point of this long progression.

Gray calls these intellectuals “ichthyophils,” which he defines as “devoted to their species as they think it ought to be, not as it actually is or as it truly wants to be.” He elaborates: “Ichthyophils come in many varieties—the Jacobin, Bolshevik and Maoist, terrorizing humankind in order to remake it on a new model; the neo-conservative, waging perpetual war as a means to universal democracy; liberal crusaders for human rights, who are convinced that all the world longs to become as they imagine themselves to be.” He includes also “the Romantics, who believe human individuality is everywhere repressed.”

Throughout American politics, as indeed throughout Western politics, a large proportion of major controversies ultimately are battles between the ichthyophils and the Burkeans, between the sensibility of the French Revolution and the sensibility of American Revolution, between adherents of the idea of progress and those skeptical of that potent concept. John Gray has provided a major service in probing with such clarity and acuity the impulses, thinking and aims of those on the ichthyophil side of that great divide. As he sums up, “Allowing the majority of humankind to imagine they are flying fish even as they pass their lives under the waves, liberal civilization rests on a dream.”

And so it goes. On the left there are the ichthyophils of America, represented in huge numbers by “progressives” and their constituents and dupes (i.e., a majority of the public). They are given aid and comfort by a small but vociferous number of pseudo-libertarians (as discussed here, for example). On the right stands a throng of pseudo-conservatives — mainly identified with the Republican Party — who are prone to adopt the language and ideals of progressivism, out of power-lust and ignorance. Almost entirely muted by the sound and fury emanating from left and right — and relatively few in number — are the true libertarians: Burkean conservatives.

And so Leviathan grows, crushing the liberty envisioned by our Burkean Founders in the name of “progress” (i.e., social and economic engineering). And as Robert Samuelson points out, the growth of Leviathan doesn’t ensure our immunity to chaos and barbarity in the event of a debilitating attack on our fragile infrastructure. It is ironic that we would be better able to withstand such an attack without descending into chaos and barbarity had not Leviathan weakened and sundered many true social bonds, in the name of “progress.”

Our thralldom to an essentially impotent Leviathan is of no importance to Pinker, to “progressives,” or to the dupes and constituents of “progressivism.” They have struck their Faustian bargain with Leviathan, and they will pay the price, sooner or later. Unfortunately, all of us will pay the price — even those of us who despise and resist Leviathan.

Related posts:
Democracy vs. Liberty
Something Controversial
More about Democracy and Liberty
Yet Another Look at Democracy
Law, Liberty, and Abortion
Abortion and the Slippery Slope
Privacy: Variations on the Theme of Liberty
An Immigration Roundup
Illogic from the Pro-Immigration Camp
The Ruinous Despotism of Democracy
On Liberty
Illegal Immigration: A Note to Libertarian Purists
Inside-Outside
A Moralist’s Moral Blindness
Pseudo-Libertarian Sophistry vs. True Libertarianism
The Folly of Pacifism
Positivism, “Natural Rights,” and Libertarianism
What Are “Natural Rights”?
The Golden Rule and the State
Libertarian Conservative or Conservative Libertarian?
Bounded Liberty: A Thought Experiment
Evolution, Human Nature, and “Natural Rights”
More Pseudo-Libertarianism
More about Conservative Governance
The Meaning of Liberty
Positive Liberty vs. Liberty
On Self-Ownership and Desert
In Defense of Marriage
Understanding Hayek
Rethinking the Constitution: Freedom of Speech and of the Press
The Golden Rule as Beneficial Learning
Why I Am Not an Extreme Libertarian
Facets of Liberty
Burkean Libertarianism
Rights: Source, Applicability, How Held
The Folly of Pacifism, Again
What Is Libertarianism?
True Libertarianism, One More Time
Human Nature, Liberty, and Rationalism
Utilitarianism and Psychopathy
Privacy Is Not Sacred
A Declaration and Defense of My Prejudices about Governance
The Libertarian-Conservative Fusion Is Alive and Well
Libertarianism and Morality
Libertarianism and Morality: A Footnote
Merit Goods, Positive Rights, and Cosmic Justice
More about Merit Goods
What Is Bleeding-Heart Libertarianism?
Society and the State
Prohibition, Abortion, and “Progressivism”
Liberty, Negative Rights, and Bleeding Hearts
Cato, the Kochs, and a Fluke
Conservatives vs. “Liberals”
Not-So-Random Thoughts (II)
Why Conservatism Works
The Pool of Liberty and “Me” Libertarianism
Bleeding-Heart Libertarians = Left-Statists
Enough with the Bleeding Hearts, Already
Not Guilty of Libertarian Purism
Liberty and Society
The Eclipse of “Old America”
Genetic Kinship and Society
Liberty as a Social Construct: Moral Relativism?
A Contrarian View of Universal Suffrage
Well-Founded Pessimism
Defending Liberty against (Pseudo) Libertarians

Baseball Statistics and the Consumer Price Index

Faithful readers of this blog will have noticed that I like to invoke baseball when addressing matters far afield from America’s pastime. (See this, this, this, this, this, this, this, this, and this.) It lately occurred to me that baseball statistics, properly understood, illustrate the inherent meaninglessness of the Consumer Price Index (CPI).

What does the CPI purport to measure? The Bureau of Labor Statistics (BLS) — compiler of the index — says that it “is a measure of the average change over time in the prices paid by urban consumers for a market basket of consumer goods and services.” Read that statement carefully. The CPI does not measure the average change in prices of the goods and services purchased by every urban consumer; it measures the prices of a “market basket” of goods and services that is thought to represent the purchases of a “typical” consumer. Further, the composition of that “market basket” is assumed to change, over time, in accordance with the preferences of the “typical” consumer. (There is more about the CPI in the note at the bottom of this post.)

To understand the arbitrariness of the CPI — as regards the construction of the “market basket” and the estimation of the prices of its components — one need read no further than the Bureau’s own list of questions and answers, some of which I have reproduced in the footnote. As a measure of your cost of living — at any time or over time — the CPI is as useful as the statement that the average depth of a swimming pool is 5 feet; a non-swimmer who is 6 feet tall puts himself in danger of drowning if he jumps into the deep end of such a pool.
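To see how much rides on the choice of basket, consider a minimal sketch of a fixed-basket (Laspeyres-style) index with made-up quantities and prices. It illustrates the arithmetic only, not the BLS's actual procedures, which involve sampling, weighting, and periodic basket revisions.

```python
# A toy fixed-basket price index (Laspeyres-style) with made-up numbers;
# it shows the arithmetic only, not the BLS's actual methodology.
base_quantities = {"bread": 100, "gasoline": 500, "rent": 12}        # "typical" basket
base_prices     = {"bread": 2.50, "gasoline": 3.00, "rent": 900.00}  # base-period prices
current_prices  = {"bread": 2.75, "gasoline": 3.60, "rent": 1000.00} # current prices

base_cost    = sum(base_quantities[g] * base_prices[g] for g in base_quantities)
current_cost = sum(base_quantities[g] * current_prices[g] for g in base_quantities)
index = 100 * current_cost / base_cost
print(f"price index = {index:.1f} (base period = 100)")
# Change the basket weights and the index changes, even with identical prices;
# the "average" consumer's basket may look nothing like yours.
```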

The BLS nevertheless computes one version of the CPI back to January 1913. If you believe that prices in 1913 can be compared with prices in 2013, you must believe that baseball statistics yield meaningful comparisons of the performance of contemporary players and the players of bygone years. I enjoy making such comparisons, but I do not endorse their validity. As I will discuss later in this post, my reservations about cross-temporal comparisons of baseball statistics apply also to cross-temporal comparisons of prices.

Let us begin our journey into baseball statistics with three popular measures of batting prowess: batting average (BA), slugging percentage (SLG), and on-base plus slugging (OPS). The “normal” values of these statistics have varied widely:

Average major league batting statistics, 1901-2012
Source: League Year-by-Year Batting at Baseball-Reference.com.

Aside from the upward trends of SLG and OPS, which are unsurprising to anyone with a passing knowledge of baseball’s history, the most striking feature of these statistics is their synchronicity. Players (and fans) of the 1920s and 1930s enjoyed an upsurge in BA, SLG, and OPS that was echoed in the 1980s and 1990s. How can the three statistics rise in lockstep when BA usually suffers with emphasis on the long ball (captured in SLG and OPS)? The three statistics can rise in lockstep only because of changes in the conditions of play that allow batters to hit for a better average while also getting more long hits. By the same token, changes in conditions of play can have the opposite effect of causing offensive statistics to fall, across the board. But given constant conditions of play, there usually is a tradeoff between batting average and long hits. A key point, to which I will return, is the essential incommensurability of statistics gathered under different conditions of play (or economic activity).
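For readers who do not follow baseball, the three statistics are simple functions of a season's counting stats. Here is a minimal sketch with an illustrative batting line (not any particular player's):

```python
# BA, SLG, and OPS from a season's counting stats (illustrative numbers only).
def batting_stats(ab, h, doubles, triples, hr, bb, hbp=0, sf=0):
    singles = h - doubles - triples - hr
    total_bases = singles + 2 * doubles + 3 * triples + 4 * hr
    ba = h / ab                                   # batting average
    slg = total_bases / ab                        # slugging percentage
    obp = (h + bb + hbp) / (ab + bb + hbp + sf)   # on-base percentage
    return ba, slg, obp + slg                     # OPS = OBP + SLG

ba, slg, ops = batting_stats(ab=550, h=170, doubles=35, triples=3, hr=25, bb=60)
print(f"BA = {ba:.3f}, SLG = {slg:.3f}, OPS = {ops:.3f}")   # .309 / .520 / .897
```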

There are many variations in the conditions of play that have resulted in significant changes in offensive statistics. Among those changes are the use of cleaner and more tightly wound baseballs, the advent of night baseball, better lighting for night games, bigger gloves, lighter bats, bigger and stronger players, the expansion of the major leagues in fits and starts, the size of the strike zone, the height of the pitching mound, and — last but far from least in this list — the integration of black and Hispanic players into major league baseball. In addition to these structural variations, there are others that militate against the commensurability of statistics over time; for example, the rise and decline of each player’s skills, the skills of teammates (which can boost or depress a player’s performance), the characteristics of a player’s home ballpark (where players generally play half their games), and the skills of the opposing players who are encountered over the course of a career.

Despite all of these obstacles to commensurability, the urge to evaluate the relative performance of players from different teams, leagues, seasons, and eras is irrepressible. Baseball-Reference.com is rife with such evaluations; the Society for American Baseball Research (SABR) revels in them; many books offer them (e.g., this one); and I have succumbed to the urge more than once.

It is one thing to have fun with numbers. It is quite another thing to ascribe meanings to them that they cannot support. Consider the following cross-temporal comparison of baseball statistics:

Top-25 single-season offensive records
Source: Derived from the Play Index at Baseball-Reference.com. (Most baseball fans will recognize all of the names but one: Cy Seymour. His life and career are detailed in this article.)

Take, for example, the players ranked 17-25 in single-season BA. The range of BA for those 9 seasons (.384 to .388) is trivially small; it represents a maximum difference of only 4 hits per 1,000 times at bat. Given the vastly different conditions of play — and of the players — what does it mean to say that Rod Carew in 1977 and George Brett in 1980 had essentially the same BA as Honus Wagner in 1905 and 1908? It means nothing. The only thing that is essentially the same is the normalized BA that I concocted to represent those (and other) seasons. To offer normalized BA in evidence is to beg the question. In fact, any cross-temporal comparison of BA (or SLG or OPS) is essentially meaningless.
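
To put numbers on two of the claims in that paragraph: the spread from .384 to .388 is .004, or 4 hits per 1,000 at-bats; and normalizing a BA typically means expressing it relative to the league average for the same season, along the lines of the sketch below. The sketch illustrates the general idea only; it is not necessarily the exact method behind my normalized figures, and the league averages shown are invented.

# One common way to normalize: divide a player's BA by the league-average
# BA for the same season. The league averages here are invented.
def normalized_ba(player_ba, league_ba):
    return player_ba / league_ba

print(normalized_ba(0.385, 0.250))      # 1.54: a .385 season in a low-average era
print(normalized_ba(0.385, 0.280))      # 1.375: the same raw BA means less in a high-average era

# The .384-to-.388 spread cited above, expressed as hits per 1,000 at-bats
print(round((0.388 - 0.384) * 1000))    # 4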

By the same token, it means nothing to say that prices in 2013 are X times as high as prices in 1913, when — among many other things — consumers in 2013 have access to a vastly richer “market basket” of products and services. Further, the products and services of 2013 that bear a passing resemblance to those of 1913 (e.g., houses, automobiles, telephone service) are demonstrably superior in quality.

So, it is fun to play with numbers, but when it comes to using them to make cross-temporal comparisons — especially over a span of decades — be very wary. Better yet, resist the temptation to make those cross-temporal comparisons, except for the fun of it.
____________
A SELECTION OF QUESTIONS AND ANSWERS ABOUT THE CPI, FROM THIS PAGE AT THE WEBSITE OF THE BUREAU OF LABOR STATISTICS:

Whose buying habits does the CPI reflect?

The CPI reflects spending patterns for each of two population groups: all urban consumers and urban wage earners and clerical workers. The all urban consumer group represents about 87 percent of the total U.S. population. It is based on the expenditures of almost all residents of urban or metropolitan areas, including professionals, the self-employed, the poor, the unemployed, and retired people, as well as urban wage earners and clerical workers. Not included in the CPI are the spending patterns of people living in rural nonmetropolitan areas, farm families, people in the Armed Forces, and those in institutions, such as prisons and mental hospitals. Consumer inflation for all urban consumers is measured by two indexes, namely, the Consumer Price Index for All Urban Consumers (CPI-U) and the Chained Consumer Price Index for All Urban Consumers (C-CPI-U)….

The Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W) is based on the expenditures of households included in the CPI-U definition that also meet two requirements: more than one-half of the household’s income must come from clerical or wage occupations, and at least one of the household’s earners must have been employed for at least 37 weeks during the previous 12 months. The CPI-W population represents about 32 percent of the total U.S. population and is a subset, or part, of the CPI-U population….

Does the CPI measure my experience with price change?

Not necessarily. It is important to understand that BLS bases the market baskets and pricing procedures for the CPI-U and CPI-W populations on the experience of the relevant average household, not of any specific family or individual. It is unlikely that your experience will correspond precisely with either the national indexes or the indexes for specific cities or regions….

How is the CPI market basket determined?

The CPI market basket is developed from detailed expenditure information provided by families and individuals on what they actually bought. For the current CPI, this information was collected from the Consumer Expenditure Surveys for 2007 and 2008. In each of those years, about 7,000 families from around the country provided information each quarter on their spending habits in the interview survey. To collect information on frequently purchased items, such as food and personal care products, another 7,000 families in each of these years kept diaries listing everything they bought during a 2-week period….

What goods and services does the CPI cover?

The CPI represents all goods and services purchased for consumption by the reference population (U or W). BLS has classified all expenditure items into more than 200 categories, arranged into eight major groups. Major groups and examples of categories in each are as follows:

  • FOOD AND BEVERAGES (breakfast cereal, milk, coffee, chicken, wine, full service meals, snacks)
  • HOUSING (rent of primary residence, owners’ equivalent rent, fuel oil, bedroom furniture)
  • APPAREL (men’s shirts and sweaters, women’s dresses, jewelry)
  • TRANSPORTATION (new vehicles, airline fares, gasoline, motor vehicle insurance)
  • MEDICAL CARE (prescription drugs and medical supplies, physicians’ services, eyeglasses and eye care, hospital services)
  • RECREATION (televisions, toys, pets and pet products, sports equipment, admissions);
  • EDUCATION AND COMMUNICATION (college tuition, postage, telephone services, computer software and accessories);
  • OTHER GOODS AND SERVICES (tobacco and smoking products, haircuts and other personal services, funeral expenses)….

For each of the more than 200 item categories, using scientific statistical procedures, the Bureau has chosen samples of several hundred specific items within selected business establishments frequented by consumers to represent the thousands of varieties available in the marketplace. For example, in a given supermarket, the Bureau may choose a plastic bag of golden delicious apples, U.S. extra fancy grade, weighing 4.4 pounds to represent the Apples category….

How do I read or interpret an index?

An index is a tool that simplifies the measurement of movements in a numerical series. Most of the specific CPI indexes have a 1982-84 reference base. That is, BLS sets the average index level (representing the average price level) for the 36-month period covering the years 1982, 1983, and 1984 equal to 100. BLS then measures changes in relation to that figure. An index of 110, for example, means there has been a 10-percent increase in price since the reference period; similarly, an index of 90 means a 10-percent decrease….

Can the CPIs for individual areas be used to compare living costs among the areas?

No, an individual area index measures how much prices have changed over a specific period in that particular area; it does not show whether prices or living costs are higher or lower in that area relative to another. In general, the composition of the market basket and the relative prices of goods and services in the market basket during the expenditure base period vary substantially across areas….

The Value of Experience

UPDATED BELOW

Does experience count? You bet.

But experience is not an undifferentiated quality. Experience as a “community organizer” — which means “professional rabble rouser” — hardly qualifies someone for the responsibility of heading the executive branch of the U.S. government. As millions of Americans have learned (and other millions have not).

Relevant experience, on the other hand, counts for a lot. Consider the following graphs:


The New York Yankees have enjoyed three “dynastic” eras of dominance in the American League: 1921-1964, 1976-1981, and 1994-2012.

In the first era, which predates free agency, the Yankees relied mainly on the allure of the team’s initial successes to attract talented young prospects. (Those initial successes were due in large part to the acquisition of Babe Ruth in a trade that Boston Red Sox fans have ever since rued.) The best of the young prospects were then tested in the Yankees’ farm system, advanced (selectively) as they proved worthy, and brought up to the “big time” when they were deemed ready.

The second and third eras of the Yankees’ dominance followed and coincided with the dwindling of the minor leagues, the expansion of the major leagues, and the advent of free agency. Because of the first two developments, major-league teams have been providing on-the-job training to players who, in earlier decades, might never have made it to the big leagues. The Yankees adapted to this change by picking up proven players through trades and free-agent signings — after those players had acquired polish and displayed their skills while in the pay of other teams. Thus it is that the Yankees faded after 1981, as their teams became younger, and became successful again in the mid-1990s, as their teams became older (i.e., more experienced).

UPDATE 08/20/13:

I’ve taken a closer look at the relationships discussed above, and they hold up well.

The following equation applies to 1901-1976 (the years before free agency was in full force):

WL = 0.258 + 0.458 × WLP + 0.010 × BHO, where

WL = won-lost record in a season

WLP = won-lost record in the previous season

BHO = batting holdovers (the number of batters who appeared in 100 or more games in both the current and preceding seasons)

It makes sense for WL to be strongly correlated with WLP (continuity of players, playing conditions, opposition, etc.). The interesting wrinkle is the presence of BHO in the equation, at a high level of significance (0.07), given the strong cross-correlations between WL and WLP (0.61) and WLP and BHO (0.66).
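
For readers who want to reproduce this kind of estimate, here is a minimal sketch of fitting the equation by ordinary least squares in Python with statsmodels. The series below are randomly generated placeholders; the actual Yankees figures (drawn from Baseball-Reference.com) are not reproduced here.

# Sketch: estimate WL = a + b1*WLP + b2*BHO by ordinary least squares.
# The data are placeholders, not the actual Yankees figures.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
WLP = rng.uniform(0.40, 0.70, size=76)            # previous-season won-lost record (placeholder)
BHO = rng.integers(3, 9, size=76).astype(float)   # batting holdovers (placeholder)
WL = 0.26 + 0.46 * WLP + 0.01 * BHO + rng.normal(0.0, 0.05, size=76)

X = sm.add_constant(np.column_stack([WLP, BHO]))  # columns: intercept, WLP, BHO
fit = sm.OLS(WL, X).fit()
print(fit.params)    # estimated intercept and coefficients on WLP and BHO
print(fit.pvalues)   # significance of each term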

Since 1976, however, only WLP explains WL with any degree of significance. This is consistent with my hypothesis that after the advent of free agency the Yankees (unsurprisingly) became more dependent on free agents (i.e., veteran players).

In fact, there was a significant change in the correlation between WL and the relative age of players. For 1901-1976, the correlation is effectively zero (an insignificant -0.11). For 1977-2012, the correlation is a highly significant 0.50. Moreover, the correlation between WLP and BHO drops significantly after the advent of free agency, from 0.66 to 0.22.
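
The era-split correlations can be checked along the same lines; again, the series below are placeholders, and rel_age stands for the team’s average age relative to the league.

# Sketch of the era-split correlation check; the series are placeholders.
import numpy as np

rng = np.random.default_rng(1)
WL = rng.uniform(0.35, 0.70, size=112)        # seasons 1901-2012 (placeholder)
rel_age = rng.normal(0.0, 1.5, size=112)      # relative team age (placeholder)

print(np.corrcoef(WL[:76], rel_age[:76])[0, 1])   # 1901-1976
print(np.corrcoef(WL[76:], rel_age[76:])[0, 1])   # 1977-2012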

To summarize: Before free agency, the Yankees depended largely on the retention of proven veterans, but the team remained relatively young (on average) because of the constant acquisition and cultivation of young players, some of whom became valuable veterans. Since free agency, the team has relied less on “growing its own” and more on veteran players who had proved themselves elsewhere.

In any event, experience is valuable. It’s just that it’s acquired in a different way than it was before free agency.

Related posts:
Moral Luck
The Residue of Choice
Can Money Buy Excellence in Baseball?
Inventing “Liberalism”
Randomness Is Over-Rated
Fooled by Non-Randomness
Accountants of the Soul
Rawls Meets Bentham
Social Justice
Positive Liberty vs. Liberty
More Social Justice
Luck-Egalitarianism and Moral Luck
Nature Is Unfair
Elizabeth Warren Is All Wet
Luck and Baseball, One More Time
The Candle Problem: Balderdash Masquerading as Science
More about Luck and Baseball
Barack Channels Princess SummerFall WinterSpring
Obama’s Big Lie
Pseudoscience, “Moneyball,” and Luck