Pattern-Seeking

UPDATED 09/04/17

Scientists and analysts are reluctant to accept the “stuff happens” explanation for similar but disconnected events. The blessing and curse of the scientific-analytic mind is that it always seeks patterns, even where there are none to be found.

UPDATE 1

The version of this post that appears at Ricochet includes the following comments and replies:

Comment — Cool stuff, but are you thinking of any pattern/maybe-not-pattern in particular?

My reply — The example that leaps readily to mind is “climate change”, the gospel of which is based on the fleeting (25-year) coincidence of rising temperatures and rising CO2 emissions. That, in turn, leads to the usual kind of hysteria about “climate change” when something like Harvey occurs.

Comment — It’s not a coincidence when the numbers are fudged.

My reply — The temperature numbers have been fudged to some extent, but even qualified skeptics accept the late 20th century temperature rise and the long-term rise in CO2. What’s really at issue is the cause of the temperature rise. The true believers seized on CO2 to the near-exclusion of other factors. How else could they then justify their puritanical desire to control the lives of others, or (if not that) their underlying anti-scientific mindset, which seeks patterns instead of truths?

Another example, which applies to non-scientists and (some) scientists, is the identification of random arrangements of stars as “constellations”, simply because they “look” like something. Yet another example is the penchant for invoking conspiracy theories to explain (or rationalize) notorious events.

Returning to science, it is pattern-seeking which drives scientists to develop explanations that are later discarded and even discredited as wildly wrong. I list a succession of such explanations in my post “The Science Is Settled”.

UPDATE 2

Political pundits, sports writers, and sports commentators are notorious for making predictions that rely on tenuous historical parallels. I herewith offer an example, drawn from this very blog.

Here is the complete text of “A Baseball Note: The 2017 Astros vs. the 1951 Dodgers”, which I posted on the 14th of last month:

If you were following baseball in 1951 (as I was), you’ll remember how that season’s Brooklyn Dodgers blew a big lead, wound up tied with the New York Giants at the end of the regular season, and lost a 3-game playoff to the Giants on Bobby Thomson’s “shot heard ’round the world” in the bottom of the 9th inning of the final playoff game.

On August 11, 1951, the Dodgers took a doubleheader from the Boston Braves and gained their largest lead over the Giants — 13 games. The Dodgers at that point had a W-L record of 70-36 (.660), and would top out at .667 two games later. But their W-L record for the rest of the regular season was only .522. So the Giants caught them and went on to win what is arguably the most dramatic playoff in the history of professional sports.

The 2017 Astros peaked earlier than the 1951 Dodgers, attaining a season-high W-L record of .682 on July 5, and leading the second-place team in the AL West by 18 games on July 28. The Astros’ lead has dropped to 12 games, and the team’s W-L record since the July 5 peak is only .438.

The Los Angeles Angels might be this year’s version of the 1951 Giants. The Angels have come from 19 games behind the Astros on July 28, to trail by 12. In that span, the Angels have gone 11-4 (.733).

Hold onto your hats.

Since I wrote that, the Angels have gone 10-9, while the Astros have gone 12-8 and increased their lead over the Angels to 13.5 games. It’s still possible that the Astros will collapse and the Angels will surge. But the contest between the two teams no longer resembles the Dodgers-Giants duel of 1951, when the Giants had closed to 5.5 games behind the Dodgers at this point in the season.
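For readers who want to check the arithmetic, here is a minimal sketch in Python, using only the win-loss figures cited above, of how winning percentage and games behind are computed:

```python
def winning_pct(wins, losses):
    """Winning percentage: wins divided by games played."""
    return wins / (wins + losses)

def games_behind(leader_w, leader_l, trailer_w, trailer_l):
    """Games behind: average of the gap in wins and the gap in losses.
    (The 13.5-game lead cited above is computed this way.)"""
    return ((leader_w - trailer_w) + (trailer_l - leader_l)) / 2

# 1951 Dodgers at their largest lead (August 11): 70-36
print(f"{winning_pct(70, 36):.3f}")   # 0.660

# Records since my August 14 post: Astros 12-8, Angels 10-9
print(f"{winning_pct(12, 8):.3f}")    # 0.600
print(f"{winning_pct(10, 9):.3f}")    # 0.526
```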

My “model” of the 2017 contest between the Astros and Angels was on a par with the disastrously wrong models that “prove” the inexorability of catastrophic anthropogenic global warming. Those models are not merely wrong but disastrous, because they are being used to push government policy in counterproductive directions: wasting money on “green energy” while shutting down efficient sources of energy, at the cost of real jobs and economic growth.


Related posts:
Hemibel Thinking
The Limits of Science
The Thing about Science
Words of Caution for Scientific Dogmatists
What’s Wrong with Game Theory
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
Science’s Anti-Scientific Bent
Mathematical Economics
Modeling Is Not Science
Beware the Rare Event
Physics Envy
What Is Truth?
The Improbability of Us
We, the Children of the Enlightenment
In Defense of Subjectivism
The Atheism of the Gaps
The Ideal as a False and Dangerous Standard
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Luck and Baseball, One More Time
Are the Natural Numbers Supernatural?
The Candle Problem: Balderdash Masquerading as Science
More about Luck and Baseball
Combinatorial Play
Pseudoscience, “Moneyball,” and Luck
The Fallacy of Human Progress
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
Verbal Regression Analysis, the “End of History,” and Think-Tanks
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
The “Marketplace” of Ideas
Time and Reality
My War on the Misuse of Probability
Ty Cobb and the State of Science
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
Is Science Self-Correcting?
Taleb’s Ruinous Rhetoric
Words Fail Us
Fine-Tuning in a Wacky Wrapper
Tricky Reasoning
Modeling Revisited
Bayesian Irrationality
The Fragility of Knowledge

“Science” vs. Science: The Case of Evolution, Race, and Intelligence

If you were to ask those people who marched for science if they believe in evolution, they would answer with a resounding “yes”. Ask them if they believe that all branches of the human race evolved identically and you will be met with hostility. The problem, for them, is that an admission of the obvious — differential evolution, resulting in broad racial differences — leads to a fact that they don’t want to admit: there are broad racial differences in intelligence, differences that must have evolutionary origins.

“Science” — the cherished totem of left-wing ideologues — isn’t the same thing as science. The totemized version consists of whatever set of facts and hypotheses suit the left’s agenda. In the case of “climate change”, for example, the observation that in the late 1900s temperatures rose for a period of about 25 years coincident with a reported rise in the level of atmospheric CO2 occasioned the hypothesis that the generation of CO2 by humans causes temperatures to rise. This is a reasonable hypothesis, given the long-understood, positive relationship between temperature and so-called greenhouse gases. But it comes nowhere close to confirming what leftists seem bent on believing and “proving” with hand-tweaked models, which is that if humans continue to emit CO2, and do so at a higher rate than in the past, temperatures will rise to the point that life on Earth will become difficult if not impossible to sustain. There is ample evidence to support the null hypothesis (that “climate change” isn’t catastrophic) and the alternative view (that recent warming is natural and caused mainly by things other than human activity).

Leftists want to believe in catastrophic anthropogenic global warming because it suits the left’s puritanical agenda, as did Paul Ehrlich’s discredited thesis that population growth would outstrip the availability of food and resources, leading to mass starvation and greater poverty. Population control therefore became a leftist mantra, and remains one despite the generally rising prosperity of the human race and the diminution of scarcity (except where leftist governments, like Venezuela’s, create misery).

Why are leftists so eager to believe in problems that portend catastrophic consequences which “must” be averted through draconian measures, such as enforced population control, taxes on soft drinks above a certain size, the prohibition of smoking not only in government buildings but in all buildings, and decreed reductions in CO2-emitting activities (which would, in fact, help to impoverish humans)? The common denominator of such measures is control. And yet, by the process of psychological projection, leftists are always screaming “fascist” at libertarians and conservatives who resist control.

Returning to evolution, why are leftists so eager to embrace it or, rather, what they choose to believe about it? My answers are that (a) it’s “science” (it’s only science when it’s spelled out in detail, uncertainties and all) and (b) it gives leftists (who usually are atheists) a stick with which to beat “creationists”.

But when it comes to race, leftists insist on denying what’s in front of their eyes: evolutionary disparities in such phenomena as skin color, hair texture, facial structure, running and jumping ability, cranial capacity, and intelligence.

Why? Because the urge to control others is of a piece with the superiority with which leftists believe they’re endowed because they are mainly white persons of European descent and above-average intelligence (just smart enough to be dangerous). Blacks and Hispanics who vote left do so mainly for the privileges it brings them. White leftists are their useful idiots.

Leftism, in other words, is a manifestation of “white privilege”, which white leftists feel compelled to overcome through paternalistic condescension toward blacks and other persons of color. (But not East Asians or the South Asians who have emigrated to the U.S., because the high intelligence of those groups is threatening to white leftists’ feelings of superiority.) What could be more condescending, and less scientific, than to deny what evolution has wrought in order to advance a political agenda?

Leftist race-denial, which has found its way into government policy, is akin to Stalin’s support of Lysenkoism, which its author cleverly aligned with Marxism. Lysenkoism

rejected Mendelian inheritance and the concept of the “gene”; it departed from Darwinian evolutionary theory by rejecting natural selection.

This brings me to Stephen Jay Gould, a leading neo-Lysenkoist and a fraudster of “science” who did much to deflect science from the question of race and intelligence:

[In The Mismeasure of Man] Gould took the work of a 19th century physical anthropologist named Samuel George Morton and made it ridiculous. In his telling, Morton was a fool and an unconscious racist — his project of measuring skull sizes of different ethnic groups conceived in racism and executed in same. Why, Morton clearly must have thought Caucasians had bigger brains than Africans, Indians, and Asians, and then subconsciously mismeasured the skulls to prove they were smarter.

The book then casts the entire project of measuring brain function — psychometrics — in the same light of primitivism.

Gould’s antiracist book was a hit with reviewers in the popular press, and many of its ideas about the morality and validity of testing intelligence became conventional wisdom, persisting today among the educated folks. If you’ve got some notion that IQ doesn’t measure anything but the ability to take IQ tests, that intelligence can’t be defined or may not be real at all, that multiple intelligences exist rather than a general intelligence, you can thank Gould….

Then, in 2011, a funny thing happened. Researchers at the University of Pennsylvania went and measured old Morton’s skulls, which turned out to be just the size he had recorded. Gould, according to one of the co-authors, was nothing but a “charlatan.”

The study itself couldn’t matter, though, could it? Well, recent work using MRI technology has established that descendants of East Asia have slightly more cranial capacity than descendants of Europe, who in turn have a little more than descendants of Africa. Another meta-analysis finds a mild correlation between brain size and IQ performance.

You see where this is going, especially if you already know about the racial disparities in IQ testing, and you’d probably like to hit the brakes before anybody says… what, exactly? It sounds like we’re perilously close to invoking science to argue for genetic racial superiority.

Am I serious? Is this a joke?…

… The reason the joke feels dangerous is that it incorporates a fact that is rarely mentioned in public life. In America, white people on average score higher than black people on IQ tests, by a margin of 12-15 points. And there’s one man who has been made to pay the price for that fact — the scholar Charles Murray.

Murray didn’t come up with a hypothesis of racial disparity in intelligence testing. He simply co-wrote a book, The Bell Curve, that publicized a fact well known within the field of psychometrics, a fact that makes the rest of us feel tremendously uncomfortable.

Nobody bears more responsibility for the misunderstanding of Murray’s work than Gould, who reviewed The Bell Curve savagely in the New Yorker. The IQ tests couldn’t be explained away — here he is acknowledging the IQ gap in 1995 — but the validity of IQ testing could be challenged. That was no trouble for the old Marxist.

Gould should have known that he was dead wrong about his central claim — that general intelligence, or g, as psychologists call it, was unreal. In fact, “Psychologists generally agree that the greatest success of their field has been in intelligence testing,” biologist Bernard D. Davis wrote in the Public Interest in 1983, in a long excoriation of Gould’s strange ideas.

Psychologists have found that performance on almost any test of cognition will have some correlation to other tests of cognition, even in areas that might seem distant from pure logic, such as recognizing musical notes. The more demanding tests have a higher correlation, or a high g load, as they term it.

IQ is very closely related to this measure, and turns out to be extraordinarily predictive not just for how well one does on tests, but on all sorts of real-life outcomes.

Since the publication of The Bell Curve, the data have demonstrated not just those points, but that intelligence is highly heritable (around 50 to 80 percent, Murray says), and that there’s little that can be done to permanently change the part that’s dependent on the environment….

The liberal explainer website Vox took a swing at Murray earlier this year, publishing a rambling 3,300-word hit job on Murray that made zero references to the scientific literature….

Vox might have gotten the last word, but a new outlet called Quillette published a first-rate rebuttal this week, which sent me down a three-day rabbit hole. I came across some of the most troubling facts I’ve ever encountered — IQ scores by country — and then came across some more reassuring ones from Thomas Sowell, suggesting that environment could be the main or exclusive factor after all.

The classic analogy from the environment-only crowd is of two handfuls of genetically identical seed corn, one planted in Iowa and the other in the Mojave Desert. One group flourishes; the other is stunted. While all of the variation within one group will be due to genetics, its flourishing relative to the other group will be strictly due to environment.

Nobody doubts that the United States is richer soil than Equatorial Guinea, but the analogy doesn’t prove the case. The idea that there exists a mean for human intelligence and that all racial subgroups would share it given identical environments remains a metaphysical proposition. We may want this to be true quite desperately, but it’s not something we know to be true.

For all the lines of attack, all the brutal slander thrown Murray’s way, his real crime is having an opinion on this one key issue that’s open to debate. Is there a genetic influence on the IQ testing gap? Murray has written that it’s “likely” genetics explains “some” of the difference. For this, he’s been crucified….

Murray said [in a recent interview] that the assumption “that everyone is equal above the neck” is written into social policy, employment policy, academic policy and more.

He’s right, of course, especially as ideas like “disparate impact” come to be taken as proof of discrimination. There’s no scientifically valid reason to expect different ethnic groups to have a particular representation in this area or that. That much is utterly clear.

The universities, however, are going to keep hollering about institutional racism. They are not going to accept Murray’s views, no matter what develops. [Jon Cassidy, “Mau Mau Redux: Charles Murray Comes in for Abuse, Again“, The American Spectator, June 9, 2017]

And so it goes in the brave new world of alternative facts, most of which seem to come from the left. But the left, with its penchant for pseudo-intellectualism (“science” vs. science), calls it postmodernism:

Postmodernists … eschew any notion of objectivity, perceiving knowledge as a construct of power differentials rather than anything that could possibly be mutually agreed upon…. [S]cience therefore becomes an instrument of Western oppression; indeed, all discourse is a power struggle between oppressors and oppressed. In this scheme, there is no Western civilization to preserve—as the more powerful force in the world, it automatically takes on the role of oppressor and therefore any form of equity must consequently then involve the overthrow of Western “hegemony.” These folks form the current Far Left, including those who would be described as communists, socialists, anarchists, Antifa, as well as social justice warriors (SJWs). These are all very different groups, but they all share a postmodernist ethos. [Michael Aaron, “Evergreen State and the Battle for Modernity“, Quillette, June 8, 2017]


Other related reading (listed chronologically):

Molly Hensley-Clancy, “Asians With “Very Familiar Profiles”: How Princeton’s Admissions Officers Talk About Race“, BuzzFeed News, May 19, 2017

Warren Meyer, “Princeton Appears To Penalize Minority Candidates for Not Obsessing About Their Race“, Coyote Blog, May 24, 2017

B. Wineguard et al., “Getting Voxed: Charles Murray, Ideology, and the Science of IQ“, Quillette, June 2, 2017

James Thompson, “Genetics of Racial Differences in Intelligence: Updated“, The Unz Review: James Thompson Archive, June 5, 2017

Raymond Wolters, “We Are Living in a New Dark Age“, American Renaissance, June 5, 2017

F. Roger Devlin, “A Tactical Retreat for Race Denial“, American Renaissance, June 9, 2017

Scott Johnson, “Mugging Mr. Murray: Mr. Murray Speaks“, Power Line, June 9, 2017


Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Round Up the Usual Suspects
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race
Affirmative Action Comes Home to Roost
The IQ of Nations
Race and Social Engineering
Some Notes about Psychology and Intelligence

Economists As Scientists

This is the third entry in a series of loosely connected posts on economics. The first entry is here and the second entry is here. (Related posts by me are noted parenthetically throughout this one.)

Science is something that some people “do” some of the time. There are full-time human beings and part-time scientists. And the part-timers are truly scientists only when they think and act in accordance with the scientific method.*

Acting in accordance with the scientific method is a matter of attitude and application. The proper attitude is one of indifference about the correctness of a hypothesis or theory. The proper application rejects a hypothesis if it can’t be tested, and rejects a theory if it’s refuted (falsified) by relevant and reliable observations.

Regarding attitude, I turn to the most famous person who was sometimes a scientist: Albert Einstein. This is from the Wikipedia article about the Bohr-Einstein debate:

The quantum revolution of the mid-1920s occurred under the direction of both Einstein and [Niels] Bohr, and their post-revolutionary debates were about making sense of the change. The shocks for Einstein began in 1925 when Werner Heisenberg introduced matrix equations that removed the Newtonian elements of space and time from any underlying reality. The next shock came in 1926 when Max Born proposed that mechanics were to be understood as a probability without any causal explanation.

Einstein rejected this interpretation. In a 1926 letter to Max Born, Einstein wrote: “I, at any rate, am convinced that He [God] does not throw dice.” [Apparently, Einstein also used the line in Bohr’s presence, and Bohr replied, “Einstein, stop telling God what to do.” — TEA]

At the Fifth Solvay Conference held in October 1927 Heisenberg and Born concluded that the revolution was over and nothing further was needed. It was at that last stage that Einstein’s skepticism turned to dismay. He believed that much had been accomplished, but the reasons for the mechanics still needed to be understood.

Einstein’s refusal to accept the revolution as complete reflected his desire to see developed a model for the underlying causes from which these apparent random statistical methods resulted. He did not reject the idea that positions in space-time could never be completely known but did not want to allow the uncertainty principle to necessitate a seemingly random, non-deterministic mechanism by which the laws of physics operated.

It’s true that quantum mechanics was inchoate in the mid-1920s, and that it took a couple of decades to mature into quantum field theory. But there’s more than a trace of “attitude” in Einstein’s refusal to accept quantum mechanics, to stay abreast of developments in the theory, and to search quixotically for his own theory of everything, which he hoped would obviate the need for a non-deterministic explanation of quantum phenomena.

Improper application of the scientific method is rife. See, for example, the Wikipedia article about the replication crisis and John Ioannidis’s article, “Why Most Published Research Findings Are False.” (See also “Ty Cobb and the State of Science” and “Is Science Self-Correcting?”) For a thorough analysis of the roots of the crisis, read Michael Hart’s book, Hubris: The Troubling Science, Economics, and Politics of Climate Change.

A bad attitude and improper application are both found among the so-called scientists who declare that the “science” of global warming is “settled,” and that human-generated CO2 emissions are the primary cause of the apparent rise in global temperatures during the last quarter of the 20th century. The bad attitude is the declaration of “settled science.” In “The Science Is Never Settled” I give many prominent examples of the folly of declaring it to be “settled.”

The improper application of the scientific method with respect to global warming began with the hypothesis that the “culprit” is CO2 emissions generated by the activities of human beings — thus anthropogenic global warming (AGW). There’s no end of evidence to the contrary, some of which is summarized in these posts and many of the links found therein. There’s enough evidence, in my view, to have rejected the CO2 hypothesis many times over. But there’s a great deal of money and peer-approval at stake, so the rush to judgment became a stampede. And attitude rears its ugly head when pro-AGW “scientists” shun the real scientists who are properly skeptical about the CO2 hypothesis, or at least about the degree to which CO2 supposedly influences temperatures. (For a depressingly thorough account of the AGW scam, read Michael Hart’s Hubris: The Troubling Science, Economics, and Politics of Climate Change.)

I turn now to economists, as I have come to know them in more than fifty years of being taught by them, working with them, and reading their works. Scratch an economist and you’re likely to find a moralist or reformer just beneath a thin veneer of rationality. Economists like to believe that they’re objective. But they aren’t; no one is. Everyone brings to the table a large serving of biases that are incubated in temperament, upbringing, education, and culture.

Economists bring to the table a heaping helping of tunnel vision. “Hard scientists” do, too, but their tunnel vision is generally a good thing, because it’s actually aimed at a deeper understanding of the inanimate and subhuman world rather than the advancement of a social or economic agenda. (I make a large exception for “hard scientists” who contribute to global-warming hysteria, as discussed above.)

Some economists, especially behavioralists, view the world through the lens of wealth-and-utility-maximization. Their great crusade is to force everyone to make rational decisions (by their lights), through “nudging.” It almost goes without saying that government should be the nudger-in-chief. (See “The Perpetual Nudger” and the many posts linked to therein.)

Other economists — though far fewer than in the past — have a thing about monopoly and oligopoly (the domination of a market by one or a few sellers). They’re heirs to the trust-busting of the late 1800s and early 1900s, a movement led by non-economists who sought to blame the woes of working-class Americans on the “plutocrats” (Rockefeller, Carnegie, Ford, etc.) who had merely made life better and more affordable for Americans, while also creating jobs for millions of them and reaping rewards for the great financial risks that they took. (See “Monopoly and the General Welfare” and “Monopoly: Private Is Better than Public.”) As it turns out, the biggest and most destructive monopoly of all is the federal government, so beloved and trusted by trust-busters — and too many others. (See “The Rahn Curve Revisited.”)

Nowadays, a lot of economists are preoccupied by income inequality, as if it were something evil and not mainly an artifact of differences in intelligence, ambition, education, and the like. And inequality — the prospect of earning rather grand sums of money — is what drives a lot of economic endeavor, to the good of workers and consumers. (See “Mass (Economic) Hysteria: Income Inequality and Related Themes” and the many posts linked to therein.) Remove inequality and what do you get? The Soviet Union and Communist China, in which everyone is equal except party operatives and their families, friends, and favorites.

When the inequality-preoccupied economists are confronted by the facts of life, they usually turn their attention from inequality as a general problem to the (inescapable) fact that an income distribution has a top one-percent and top one-tenth of one-percent — as if there were something especially loathsome about people in those categories. (Paul Krugman shifted his focus to the top one-tenth of one percent when he realized that he’s in the top one percent, so perhaps he knows that he’s loathsome and wishes to deny it, to himself.)

Crony capitalism is trotted out as a major cause of very high incomes. But that’s hardly a universal cause, given that a lot of very high incomes are earned by athletes and film stars beside whom most investment bankers and CEOs make peanuts. Moreover, as I’ve said on several occasions, crony capitalists are bright and driven enough to be in the stratosphere of any income distribution. Further, the fertile soil of crony capitalism is the regulatory power of government, which makes it possible.

Many economists became such, it would seem, in order to promote big government and its supposed good works — income redistribution being one of them. Joseph Stiglitz and Paul Krugman are two leading exemplars of what I call the New Deal school of economic thought, which amounts to throwing government and taxpayers’ money at every perceived problem, that is, every economic outcome that is deemed unacceptable by accountants of the soul. (See “Accountants of the Soul.”)

Stiglitz and Krugman — both Nobel laureates in economics — are typical “public intellectuals” whose intelligence breeds in them a kind of arrogance. (See “Intellectuals and Society: A Review.”) It’s the kind of arrogance that I mentioned in the preceding post in this series: a penchant for deciding what’s best for others.

New Deal economists like Stiglitz and Krugman carry it a few steps further. They ascribe to government an impeccable character, an intelligence to match their own, and a monolithic will. They then assume that this infallible and wise automaton can and will do precisely what they would do: Create the best of all possible worlds. (See the many posts in which I discuss the nirvana fallacy.)

New Deal economists, in other words, live their intellectual lives in a dream-world populated by the likes of Jiminy Cricket (“When You Wish Upon a Star”), Dorothy (“Somewhere Over the Rainbow”), and Mary Jane of a long-forgotten comic book (“First I shut my eyes real tight, then I wish with all my might! Magic words of poof, poof, piffles, make me just as small as [my mouse] Sniffles!”).

I could go on, but you should by now have grasped the point: What too many economists want to do is change human nature, channel it in directions deemed “good” (by the economist), or simply impose their view of “good” on everyone. To do such things, they must rely on government.

It’s true that government can order people about, but it can’t change human nature, which has an uncanny knack for thwarting Utopian schemes. (Obamacare, whose chief architect was economist Jonathan Gruber, is exhibit A this year.) And government (inconveniently for Utopians) really consists of fallible, often unwise, contentious human beings. So government is likely to march off in a direction unsought by Utopian economists.

Nevertheless, it’s hard to thwart the tax collector. The regulator can and does make things so hard for business that a firm which does get off the ground can’t create as much prosperity and as many jobs as it would in the absence of regulation. And the redistributor only makes things worse by penalizing success. Tax, regulate, and redistribute should have been the mantra of the New Deal and most presidential “deals” since.

I hold economists of the New Deal stripe partly responsible for the swamp of stagnation into which the nation’s economy has descended. (See “Economic Growth Since World War II.”) Largely responsible, of course, are opportunistic if not economically illiterate politicians who pander to rent-seeking, economically illiterate constituencies. (Yes, I’m thinking of old folks and the various “disadvantaged” groups with which they have struck up an alliance of convenience.)

The distinction between normative economics and positive economics is of no particular use in sorting economists between advocates and scientists. A lot of normative economics masquerades as positive economics. The work of Thomas Piketty and his comrades-in-arms comes to mind, for example. (See “McCloskey on Piketty.”) Almost everything done to quantify and defend the Keynesian multiplier counts as normative economics, inasmuch as the work is intended (wittingly or not) to defend an intellectual scam of 80 years’ standing. (See “The Keynesian Multiplier: Phony Math,” “The True Multiplier,” and “Further Thoughts about the Keynesian Multiplier.”)
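For readers unfamiliar with the concept, here is a minimal sketch of the textbook Keynesian multiplier that such work tries to quantify; the assumed marginal propensity to consume is purely illustrative:

```python
def keynesian_multiplier(mpc):
    """Textbook multiplier: k = 1 / (1 - MPC), where MPC is the
    marginal propensity to consume (the fraction of an extra dollar
    of income that is spent rather than saved)."""
    return 1.0 / (1.0 - mpc)

# With an assumed MPC of 0.75, a dollar of "autonomous" spending is
# said to generate four dollars of total spending: the mechanical,
# hydraulic relationship criticized in the text.
print(keynesian_multiplier(0.75))  # 4.0
```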

Enough said. If you want to see scientific economics in action, read Regulation. Not every article in it exemplifies scientific inquiry, but a good many of them do. It’s replete with articles about microeconomics, in which the authors use real-world statistics to validate and quantify the many axioms of economics.

A final thought is sparked by Arnold Kling’s post, “Ed Glaeser on Science and Economics.” Kling writes:

I think that the public has a sort of binary classification. If it’s “science,” then an expert knows more than the average Joe. If it’s not a science, then anyone’s opinion is as good as anyone else’s. I strongly favor an in-between category, called a discipline. Think of economics as a discipline, where it is possible for avid students to know more than ordinary individuals, but without the full use of the scientific method.

On this rare occasion I disagree with Kling. The accumulation of knowledge about economic variables, or pseudo-knowledge such as estimates of GDP (see “Macroeconomics and Microeconomics“), either leads to well-tested, verified, and reproducible theories of economic behavior or it leads to conjectures, of which there are so many opposing ones that it’s “take your pick.” If that’s what makes a discipline, give me the binary choice between science and story-telling. Most of economics seems to be story-telling. “Discipline” is just a fancy word for it.

Collecting baseball cards and memorizing the statistics printed on them is a discipline. Most of economics is less useful than collecting baseball cards — and a lot more destructive.

Here’s my hypothesis about economists: There are proportionally as many of them who act like scientists as there are baseball players who have career batting averages of at least .300.
__________
* Richard Feynman, a physicist and real scientist, had a different view of the scientific method than Karl Popper’s standard taxonomy. I see Feynman’s view as complementary to Popper’s, not at odds with it. What is “constructive skepticism” (Feynman’s term) but a gentler way of saying that a hypothesis or theory might be falsified and that the act of falsification may point to a better hypothesis or theory?

Economics and Science

This is the second entry in what I expect to be a series of loosely connected posts on economics. The first entry is here.

Science is unnecessarily daunting to the uninitiated, which is to say, the vast majority of the populace. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

Here I will dissect science, then turn to economics and begin a discussion of its scientific and non-scientific aspects. It has both, though at least one non-scientific aspect (the Keynesian multiplier) draws an inordinate amount of attention, and has many true believers within the profession.

Science is knowledge, but not all knowledge is science. A scientific body of knowledge is systematic; that is, the granular facts or phenomena which comprise the body of knowledge must be connected in patterned ways. The purported facts or phenomena of a science must represent reality, things that can be observed and measured in some way. Scientists may hypothesize the existence of an unobserved thing (e.g., the ether, dark matter), in an effort to explain observed phenomena. But the unobserved thing stands outside scientific knowledge until its existence is confirmed by observation, or because it remains standing as the only plausible explanation of observable phenomena. Hypothesized things may remain outside the realm of scientific knowledge for a very long time, if not forever. The Higgs boson, for example, was hypothesized in 1964 and has been tentatively (but not conclusively) confirmed since its “discovery” in 2012.

Science has other key characteristics. Facts and patterns must be capable of validation and replication by persons other than those who claim to have found them initially. Patterns should have predictive power; thus, for example, if the sun fails to rise in the east, the model of Earth’s movements which says that it will rise in the east is presumably invalid and must be rejected or modified so that it correctly predicts future sunrises or the lack thereof. Creating a model or tweaking an existing model just to account for a past event (e.g., the failure of the Sun to rise, the apparent increase in global temperatures from the 1970s to the 1990s) proves nothing other than an ability to “predict” the past with accuracy.

Models are usually clothed in the language of mathematics and statistics. But those aren’t scientific disciplines in themselves; they are tools of science. Expressing a theory in mathematical terms may lend the theory a scientific aura, but a theory couched in mathematical terms is not a scientific one unless (a) it can be tested against facts yet to be ascertained and events yet to occur, and (b) it is found to accord with those facts and events consistently, by rigorous statistical tests.
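To make the distinction concrete, here is a minimal sketch, with made-up data and therefore purely illustrative, of the difference between fitting the past and testing a model against observations that played no part in the fitting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "history": y depends roughly linearly on x, plus noise.
x = np.linspace(0.0, 10.0, 60)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=x.size)

# Fit a straight line to the first 40 observations only...
slope, intercept = np.polyfit(x[:40], y[:40], deg=1)

# ...then judge the model by how well it predicts the 20 observations
# it never saw. A model tweaked merely to reproduce the past would be
# judged, wrongly, by its fit to the first 40 points alone.
predicted = slope * x[40:] + intercept
rmse = np.sqrt(np.mean((predicted - y[40:]) ** 2))
print(f"out-of-sample RMSE: {rmse:.2f}")
```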

A science may be descriptive rather than mathematical. In a descriptive science (e.g., plant taxonomy), particular phenomena sometimes are described numerically (e.g., the number of leaves on the stem of a species), but the relations among various phenomena are not reducible to mathematics. Nevertheless, a predominantly descriptive discipline will be scientific if the phenomena within its compass are connected in patterned ways, can be validated, and are applicable to newly discovered entities.

Non-scientific disciplines can be useful, whereas some purportedly scientific disciplines verge on charlatanism. Thus, for example:

  • History, by my reckoning, is not a science because its account of events and their relationships is inescapably subjective and incomplete. But a knowledge of history is valuable, nevertheless, for the insights it offers into the influence of human nature on the outcomes of economic and political processes.
  • Physics is a science in most of its sub-disciplines, but there are some (e.g., cosmology) where it descends into the realm of speculation. It is informed, fascinating speculation to be sure, but speculation all the same. The idea of multiverses, for example, can’t be tested, inasmuch as human beings and their tools are bound to the known universe.
  • Economics is a science only to the extent that it yields empirically valid insights about specific economic phenomena (e.g., the effects of laws and regulations on the prices and outputs of specific goods and services). Then there are concepts like the Keynesian multiplier, about which I’ll say more in this series. It’s a hypothesis that rests on a simplistic, hydraulic view of the economic system. (Other examples of pseudo-scientific economic theories are the labor theory of value and historical determinism.)

In sum, there is no such thing as “science,” writ large; that is, no one may appeal, legitimately, to “science” in the abstract. A particular discipline may be a science, but it is a science only to the extent that it comprises a factual and replicable body of patterned knowledge. Patterned knowledge includes theories with predictive power.

A scientific theory is a hypothesis that has thus far been confirmed by observation. Every scientific theory rests eventually on axioms: self-evident principles that are accepted as true without proof. The principle of uniformity (which can be traced to Galileo) is an example of such an axiom:

Uniformitarianism is the assumption that the same natural laws and processes that operate in the universe now have always operated in the universe in the past and apply everywhere in the universe. It refers to invariance in the metaphysical principles underpinning science, such as the constancy of causal structure throughout space-time, but has also been used to describe spatiotemporal invariance of physical laws. Though an unprovable postulate that cannot be verified using the scientific method, uniformitarianism has been a key first principle of virtually all fields of science

Thus, for example, if observer B is moving away from observer A at a certain speed, observer A will perceive that he is moving away from observer B at that speed. It follows that an observer cannot determine either his absolute velocity or direction of travel in space. The principle of uniformity is a fundamental axiom of modern physics, most notably of Einstein’s special and general theories of relativity.

There’s a fine line between an axiom and a theory. Was the idea of a geocentric universe an axiom or a theory? If it was taken as axiomatic — as it surely was by many scientists for about 2,000 years — then it’s fair to say that an axiom can give way under the pressure of observational evidence. (Such an event is what Thomas Kuhn calls a paradigm shift.) But no matter how far scientists push the boundaries of knowledge, they must at some point rely on untestable axioms, such as the principle of uniformity. There are simply deep and (probably) unsolvable mysteries that science is unlikely to fathom.

This brings me to economics, which — in my view — rests on these self-evident axioms:

1. Each person strives to maximize his or her sense of satisfaction, which may also be called well-being, happiness, or utility (an ugly word favored by economists). Striving isn’t the same as achieving, of course, because of lack of information, emotional decision-making, buyer’s remorse, etc.

2. Happiness can and often does include an empathic concern for the well-being of others; that is, one’s happiness may be served by what is usually labelled altruism or self-sacrifice.

3. Happiness can be and often is served by the attainment of non-material ends. Not all persons (perhaps not even most of them) are interested in the maximization of wealth, that is, claims on the output of goods and services. In sum, not everyone is a wealth maximizer. (But see axiom number 12.)

4. The feeling of satisfaction that an individual derives from a particular product or service is situational — unique to the individual and to the time and place in which the individual undertakes to acquire or enjoy the product or service. Generally, however, there is a (situationally unique) point at which the acquisition or enjoyment of additional units of a particular product or service during a given period of time tends to offer less satisfaction than would the acquisition or enjoyment of units of other products or services that could be obtained at the same cost. (A simple illustration of this point follows the list of axioms.)

5. The value that a person places on a product or service is subjective. Products and services don’t have intrinsic values that apply to all persons at a given time or period of time.

6. The ability of a person to acquire products and services, and to accumulate wealth, depends (in the absence of third-party interventions) on the valuation of the products and services that are produced in part or whole by the person’s labor (mental or physical), or by the assets that he owns (e.g., a factory building, a software patent). That valuation is partly subjective (e.g., consumers’ valuation of the products and services, an employer’s qualitative evaluation of the person’s contributions to output) and partly objective (e.g., an employer’s knowledge of the price commanded by a product or service, an employer’s measurement of an employee’s contribution to the quantity of output).

7. The persons and firms from which products and services flow are motivated by the acquisition of income, with which they can acquire other products and services, and accumulate wealth for personal purposes (e.g., to pass to heirs) or business purposes (e.g., to expand the business and earn more income). So-called profit maximization (seeking to maximize the difference between the cost of production and revenue from sales) is a key determinant of business decisions but far from the only one. Others include, but aren’t limited to, being a “good neighbor,” providing employment opportunities for local residents, and underwriting philanthropic efforts.

8. The cost of production necessarily influences the price at which a product or service will be offered for sale, but doesn’t solely determine the price at which it will be sold. Selling price depends on the subjective valuation of the product or service, prospective buyers’ incomes, and the prices of other products and services, including those that are direct or close substitutes and those to which users may switch, depending on relative prices.

9. The feeling of satisfaction that a person derives from the acquisition and enjoyment of the “basket” of products and services that he is able to buy, given his income, etc., doesn’t necessarily diminish, as long as the person has access to a great variety of products and services. (This axiom and axiom 12 put paid to the myth of diminishing marginal utility of income.)

10. Work may be a source of satisfaction in itself or it may simply be a means of acquiring and enjoying products and services, or acquiring claims to them by accumulating wealth. Even when work is satisfying in itself, it is subject to the “law” of diminishing marginal satisfaction.

11. Work, for many (but not all) persons, is no longer worth the effort if they become able to subsist comfortably enough by virtue of the wealth that they have accumulated, the availability of redistributive schemes (e.g., Social Security and Medicare), or both. In such cases the accumulation of wealth often ceases and reverses course, as it is “cashed in” to defray the cost of subsistence (which may be far more than minimal).

12. However, there are not a few persons whose “work” is such a great source of satisfaction that they continue doing it until they are no longer capable of doing so. And there are some persons whose “work” is the accumulation of wealth, without limit. Such persons may want to accumulate wealth in order to “do good” or to leave their heirs well off or simply for the satisfaction of running up the score. The justification matters not. There is no theoretical limit to the satisfaction that a particular person may derive from the accumulation of wealth. Moreover, many of the persons (discussed in axiom 11) who aren’t able to accumulate wealth endlessly would do so if they had the ability and the means to take the required risks.

13. Individual degrees of satisfaction (happiness, etc.) are ephemeral, nonquantifiable, and incommensurable. There is no such thing as a social welfare function that a third party (e.g., government) can maximize by taking from A to give to B. If there were such a thing, its value would increase if, for example, A were to punch B in the nose and derive a degree of pleasure that somehow more than offsets the degree of pain incurred by B. (The absurdity of a social-welfare function that allows As to punch Bs in their noses ought to be enough to shame inveterate social engineers into quietude — but it won’t. They derive great satisfaction from meddling.) Moreover, one of the primary excuses for meddling is that income (and thus wealth) has a diminishing marginal utility, so it makes sense to redistribute from those with higher incomes (or more wealth) to those who have less of either. Marginal utility is, however, unknowable (see axioms 4 and 5), and may not always be diminishing (see axioms 9 and 12).

14. Whenever a third party (government, do-gooders, etc.) intervenes in the affairs of others, that third party is merely imposing its preferences on those others. The third party sometimes claims to know what’s best for “society as a whole,” etc., but no third party can know such a thing. (See axiom 13.)

15. It follows from axiom 13 that the welfare of “society as a whole” can’t be aggregated or measured. An estimate of the monetary value of the economic output of a nation’s economy (Gross Domestic Product) is by no means an estimate of the welfare of “society as a whole.” (Again, see axiom 13.)
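Here is the simple illustration of axiom 4 promised above. The functional form and the numbers are purely illustrative; nothing in the axiom depends on them:

```python
import math

def satisfaction(units):
    """Illustrative satisfaction from consuming `units` of some product
    in a given period; any increasing, concave function would serve."""
    return math.log(1 + units)

# The additional satisfaction from each successive unit shrinks...
for n in range(1, 5):
    gain = satisfaction(n) - satisfaction(n - 1)
    print(f"unit {n}: adds {gain:.2f}")

# ...so at some point the same money is better spent on units of
# other products or services (axiom 4).
```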

That may seem like a lot of axioms, which might give you pause about my claim that some aspects of economics are scientific. But economics is inescapably grounded in axioms such as the ones that I propound. This aligns me (mainly) with the Austrian economists, whose leading light was Ludwig von Mises. Gene Callahan writes about him at the website of the Ludwig von Mises Institute:

As I understand [Mises], by categorizing the fundamental principles of economics as a priori truths and not contingent facts open to empirical discovery or refutation, Mises was not claiming that economic law is revealed to us by divine action, like the ten commandments were to Moses. Nor was he proposing that economic principles are hard-wired into our brains by evolution, nor even that we could articulate or comprehend them prior to gaining familiarity with economic behavior through participating in and observing it in our own lives. In fact, it is quite possible for someone to have had a good deal of real experience with economic activity and yet never to have wondered about what basic principles, if any, it exhibits.

Nevertheless, Mises was justified in describing those principles as a priori, because they are logically prior to any empirical study of economic phenomena. Without them it is impossible even to recognize that there is a distinct class of events amenable to economic explanation. It is only by pre-supposing that concepts like intention, purpose, means, ends, satisfaction, and dissatisfaction are characteristic of a certain kind of happening in the world that we can conceive of a subject matter for economics to investigate. Those concepts are the logical prerequisites for distinguishing a domain of economic events from all of the non-economic aspects of our experience, such as the weather, the course of a planet across the night sky, the growth of plants, the breaking of waves on the shore, animal digestion, volcanoes, earthquakes, and so on.

Unless we first postulate that people deliberately undertake previously planned activities with the goal of making their situations, as they subjectively see them, better than they otherwise would be, there would be no grounds for differentiating the exchange that takes place in human society from the exchange of molecules that occurs between two liquids separated by a permeable membrane. And the features which characterize the members of the class of phenomena singled out as the subject matter of a special science must have an axiomatic status for practitioners of that science, for if they reject them then they also reject the rationale for that science’s existence.

Economics is not unique in requiring the adoption of certain assumptions as a pre-condition for using the mode of understanding it offers. Every science is founded on propositions that form the basis rather than the outcome of its investigations. For example, physics takes for granted the reality of the physical world it examines. Any piece of physical evidence it might offer has weight only if it is already assumed that the physical world is real. Nor can physicists demonstrate their assumption that the members of a sequence of similar physical measurements will bear some meaningful and consistent relationship to each other. Any test of a particular type of measurement must pre-suppose the validity of some other way of measuring against which the form under examination is to be judged.

Why do we accept that when we place a yardstick alongside one object, finding that the object stretches across half the length of the yardstick, and then place it alongside another object, which only stretches to a quarter its length, that this means the first object is longer than the second? Certainly not by empirical testing, for any such tests would be meaningless unless we already grant the principle in question. In mathematics we don’t come to know that 2 + 2 always equals 4 by repeatedly grouping two items with two others and counting the resulting collection. That would only show that our answer was correct in the instances we examined — given the assumption that counting works! — but we believe it is universally true. [And it is universally true by the conventions of mathematics. If what we call “5” were instead called “4,” 2 + 2 would always equal 5. — TEA] Biology pre-supposes that there is a significant difference between living things and inert matter, and if it denied that difference it would also be denying its own validity as a special science. . . .

The great fecundity from such analysis in economics is due to the fact that, as acting humans ourselves, we have a direct understanding of human action, something we lack in pondering the behavior of electrons or stars. The contemplative mode of theorizing is made even more important in economics because the creative nature of human choice inherently fails to exhibit the quantitative, empirical regularities, the discovery of which characterizes the modern, physical sciences. (Biology presents us with an interesting intermediate case, as many of its findings are qualitative.) . . .

[A] person can be presented with scores of experiments supporting [the claim that] a particular scientific theory is sound, but no possible experiment ever can demonstrate to him that experimentation is a reasonable means by which to evaluate a scientific theory. Only his intuitive grasp of its plausibility can bring him to accept that proposition. (Unless, of course, he simply adopts it on the authority of others.) He can be led through hundreds of rigorous proofs for various mathematical theorems and be taught the criteria by which they are judged to be sound, but there can be no such proof for the validity of the method itself. (Kurt Gödel famously demonstrated that a formal system of mathematical deduction that is complex enough to model even so basic a topic as arithmetic might avoid either incompleteness or inconsistency, but always must suffer at least one of those flaws.) . . .

This ultimate, inescapable reliance on judgment is illustrated by Lewis Carroll in Alice Through the Looking Glass. He has Alice tell Humpty Dumpty that 365 minus one is 364. Humpty is skeptical, and asks to see the problem done on paper. Alice dutifully writes down:

365 – 1 = 364

Humpty Dumpty studies her work for a moment before declaring that it seems to be right. The serious moral of Carroll’s comic vignette is that formal tools of thinking are useless in convincing someone of their conclusions if he hasn’t already intuitively grasped the basic principles on which they are built.

All of our knowledge ultimately is grounded on our intuitive recognition of the truth when we see it. There is nothing magical or mysterious about the a priori foundations of economics, or at least nothing any more magical or mysterious than there is about our ability to comprehend any other aspect of reality.

(Callahan has more to say here. For a technical discussion of the science of human action, or praxeology, read this. Some glosses on Gödel’s incompleteness theorem are here.)

I omitted an important passage from the preceding quotation, in order to single it out. Callahan says also that

Mises’s protégé F.A. Hayek, while agreeing with his mentor on the a priori nature of the “logic of action” and its foundational status in economics, still came to regard investigating the empirical issues that the logic of action leaves open as a more important undertaking than further examination of that logic itself.

I agree with Hayek. It’s one thing to know axiomatically that the speed of light is constant; it is quite another (and useful) thing to know experimentally that the speed of light (in empty space) is about 671 million miles an hour. Similarly, it is one thing to deduce from the axioms of economics that demand curves generally slope downward; it is quite another (and useful) thing to estimate specific demand functions.
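A minimal sketch of the second kind of exercise, estimating a specific demand function from made-up price and quantity observations by a log-log regression, in which the slope estimates the price elasticity of demand:

```python
import numpy as np

# Made-up observations: prices charged and quantities sold.
price    = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
quantity = np.array([95.0, 80.0, 70.0, 62.0, 55.0, 50.0, 46.0])

# Fit ln(q) = a + b*ln(p); b is the estimated price elasticity of demand.
b, a = np.polyfit(np.log(price), np.log(quantity), deg=1)
print(f"estimated price elasticity: {b:.2f}")  # negative: the demand curve slopes downward
```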

But one must always be mindful of the limitations of quantitative methods in economics. As James Sheehan writes at the website of the Mises Institute,

economists are prone to error when they ascribe excessive precision to advanced statistical techniques. They assume, falsely, that a voluminous amount of historical observations (sample data) can help them to make inferences about the future. They presume that probability distributions follow a bell-shaped pattern. They make no provision for the possibility that past correlations between economic variables and data were coincidences.

Nor do they account for the possibility, as economist Robert Lucas demonstrated, that people will incorporate predictable patterns into their expectations, thus canceling out the predictive value of such patterns. . . .

As [Nassim Nicholas] Taleb points out [in Fooled by Randomness], the popular Monte Carlo simulation “is more a way of thinking than a computational method.” Employing this way of thinking can enhance one’s understanding only if its weaknesses are properly understood and accounted for. . . .

Taleb’s critique of econometrics is quite compatible with Austrian economics, which holds that dynamic human actions are too subjective and variegated to be accurately modeled and predicted.

In some parts of Fooled by Randomness, Taleb almost sounds Austrian in his criticisms of economists who worship “the efficient market religion.” Such economists are misguided, he argues, because they begin with the flawed hypothesis that human beings act rationally and do what is mathematically “optimal.” . . .

As opposed to a Utopian Vision, in which human beings are rational and perfectible (by state action), Taleb adopts what he calls a Tragic Vision: “We are faulty and there is no need to bother trying to correct our flaws.” It is refreshing to see a highly successful practitioner of statistics and finance adopt a contrarian viewpoint towards economics.
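
Sheehan's point about the bell-shaped presumption is easy to demonstrate. Here is a hypothetical Monte Carlo sketch in Python (mine, not Taleb's or Sheehan's): it draws a million simulated "returns" from a normal distribution and from a fat-tailed Student-t distribution scaled to the same standard deviation, then counts how often each produces a move beyond four standard deviations. The fat-tailed world produces such "impossible" days far more often than the bell curve says it should.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                 # simulated daily "returns"
df = 3                        # degrees of freedom: heavy tails

normal_draws = rng.standard_normal(n)

# Student-t draws, rescaled so that their standard deviation is also 1.
# (For df > 2, the t distribution has variance df / (df - 2).)
t_draws = rng.standard_t(df, n) / np.sqrt(df / (df - 2))

for name, draws in (("normal", normal_draws), ("fat-tailed t", t_draws)):
    share = np.mean(np.abs(draws) > 4.0)   # share of 4-sigma-or-worse days
    print(f"{name:>12}: {share:.5%} of draws beyond 4 standard deviations")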

Yet, as Arnold Kling explains, many (perhaps most) economists have lost sight of the axioms of economics in their misplaced zeal to emulate the methods of the physical sciences:

The most distinctive trend in economic research over the past hundred years has been the increased use of mathematics. In the wake of Paul Samuelson’s (Nobel 1970) Ph.D dissertation, published in 1948, calculus became a requirement for anyone wishing to obtain an economics degree. By 1980, every serious graduate student was expected to be able to understand the work of Kenneth Arrow (Nobel 1972) and Gerard Debreu (Nobel 1983), which required mathematics several semesters beyond first-year calculus.

Today, the “theory sequence” at most top-tier graduate schools in economics is controlled by math bigots. As a result, it is impossible to survive as an economics graduate student with a math background that is less than that of an undergraduate math major. In fact, I have heard that at this year’s American Economic Association meetings, at a seminar on graduate education one professor quite proudly said that he ignored prospective students’ grades in economics courses, because their math proficiency was the key predictor of their ability to pass the coursework required to obtain an advanced degree.

The raising of the mathematical bar in graduate schools over the past several decades has driven many intelligent men and women (perhaps women especially) to pursue other fields. The graduate training process filters out students who might contribute from a perspective of anthropology, biology, psychology, history, or even intense curiosity about economic issues. Instead, the top graduate schools behave as if their goal were to produce a sort of idiot-savant, capable of appreciating and adding to the mathematical contributions of other idiot-savants, but not necessarily possessed of any interest in or ability to comprehend the world to which an economist ought to pay attention.

. . . The basic question of What Causes Prosperity? is not a question of how trading opportunities play out among a given array of goods. Instead, it is a question of how innovation takes place or does not take place in the context of institutional factors that are still poorly understood.

Mathematics, as I have said, is a tool of science; it is not science in itself. Dressing hypothetical relationships in the garb of mathematics doesn’t validate them.

Where, then, is the science in economics? And where is the nonsense? I’ve given you some hints (and more than hints). There’s more to come.

Is Science Self-Correcting?

A long-time colleague, in response to a provocative article about the sins of scientists, characterized it as “garbage” and asserted that science is self-correcting.

I should note here that my colleague abhors “extreme” views, and would cross the street to avoid a controversy. As a quondam scientist, he thinks of a challenge to the integrity of science as “extreme.” Which strikes me as an unscientific attitude.

Science is self-correcting only on a time scale of decades, or even centuries. Wrong-headed theories can persist for a very long time. And the problem has become worse in the past six decades.

What has changed in the past six decades? Sputnik spurred a (relatively) massive increase in government-funded research. This created a new and compelling incentive: produce research that comports with the party line. The party line isn’t necessarily the line of the party then in power, but the line favored by the bureaucrats in charge of doling out money.

On top of that, politically incorrect research is generally frowned upon. And when it surfaces it is attacked en masse by academicians who are eager to prove their political correctness.

Thus it is that the mere coincidence of a rise in CO2 emissions and a rise in temperatures in the latter part of the 20th century became the basis for kludgey models which “prove” AGW — preferably of the “catastrophic” kind — while essentially ignoring eons of evidence to the contrary. Skeptics (i.e., scientists doing what scientists should do) are attacked viciously when they aren’t simply ignored. The attackers are, all too often, people who call themselves scientists.

And thus it is that research into the connection between race and intelligence has been discouraged and even suppressed at universities. This despite truckloads of evidence that there is such a connection.

Those two examples don’t represent all of science, to be sure, but they’re a sad commentary on the state of science — in some fields, at least.

There are many more examples in Politicizing Science: The Alchemy of Policy-Making, edited by Michael Gough. I haven’t read the book, but I’m familiar with most of the cases documented by the contributors. The cases are about scientists behaving badly, and about non-scientists misusing science and advocating policies that lack firm scientific backing.

Scientists have behaved badly since the dawn of science, though — as discussed above — there are now more (or different) incentives to behave badly than there were in the past. But non-scientists (especially politicians) will behave badly regardless of and contrary to scientific knowledge. So I won’t blame science or scientists for that behavior, except to the extent that scientists are actively abetting the bad behavior of non-scientists.

Which brings me to the matter of science being self-correcting. I am an avid (perhaps rabid) anti-reificationist. So I must say here that there is no such thing as “science.” There’s only what scientists “do” and claim to know.

It’s possible, though not certain, that future scientists will correct the errors of their predecessors — whether those errors arose from honest mistakes or bias. But, in the meantime, the errors persist and are used to abet policies that have costly, harmful, and even fatal consequences for multitudes of people. And most of that damage can’t be undone.

So, in this age of weaponized science, I take no solace in the idea that the errors of its practitioners and abusers might, someday, be recognized. The errors of knowledge might be corrected, but the errors of application are (mostly) beyond remedy.

Here’s an analogy: The errors of the builders, owners, captain, and crew of RMS Titanic seem to have been corrected, in that there hasn’t been a repetition of the conditions and events that led to the ship’s sinking. But that doesn’t make up for the loss of 1,514 lives, the physical and emotional suffering of the 710 survivors, the loss of a majestic ship, the loss of much valuable property, or the grief of the families and friends of those who were lost.

In sum, the claim that science is self-correcting amounts to a fatuous excuse for the irreparable damage that is often done in the name of science.


Related reading: Nathan Cofnas, “Science Is Not Always Self-Correcting,” Foundations of Science 21(3): 477-492 (2016)


Related posts:
Demystifying Science
Scientism, Evolution, and the Meaning of Life
The Fallacy of Human Progress
Pinker Commits Scientism
AGW: The Death Knell (with many links to related readings and earlier posts)
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Not-So-Random Thoughts (XIV) (second item)
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
Further Pretensions of Knowledge
“And the Truth Shall Set You Free”
AGW in Austin? (II)

Ty Cobb and the State of Science

This post was inspired by “Layman’s Guide to Understanding Scientific Research” at bluebird of bitterness.

The thing about history is that it’s chock full of lies. Well, a lot of the lies are just incomplete statements of the truth. Think of history as an artificially smooth surface, where gaps in knowledge have been filled by assumptions and guesses, and where facts that don’t match the surrounding terrain have been sanded down. Charles Leerhsen offers an excellent example of the lies that became “history” in his essay “Who Was Ty Cobb? The History We Know That’s Wrong.” (I’m now reading the book on which the essay is based, and it tells the same tale, at length.)

Science is much like history in its illusory certainty. Stand back from things far enough and you see a smooth, mathematical relationship. Look closer, however, and you find rough patches. A classic example is found in physics, where the big picture of general relativity doesn’t mesh with the small picture of quantum mechanics.

Science is based on guesses, also known as hypotheses. The guesses are usually informed by observation, but they are guesses nonetheless. Even when a guess has been lent credence by tests and observations, it only becomes a theory — a working model of a limited aspect of physical reality. A theory is never proven; it can only be disproved.

Science, in other words, is never “settled.” Napoleon is supposed to have said “What is history but a fable agreed upon?” It seems, increasingly, that so-called scientific facts are nothing but a fable that some agree upon because they wish to use those “facts” as a weapon with which to advance their careers and political agendas. Or they simply wish to align themselves with the majority, just as Barack Obama’s popularity soared (for a few months) after he was re-elected.

*     *     *

Related reading:

Wikipedia, “Replication Crisis”

John P. A. Ioannidis, “Why Most Published Research Findings Are False,” PLOS Medicine, August 30, 2005

Liberty Corner, “Science’s Anti-Scientific Bent,” April 12, 2006

Politics & Prosperity, “Modeling Is Not Science,” April 8, 2009

Politics & Prosperity, “Physics Envy,” May 26, 2010

Politics & Prosperity, “Demystifying Science,” October 5, 2011 (also see the long list of related posts at the bottom)

Politics & Prosperity, “The Science Is Settled,” May 25, 2014

Politics & Prosperity, “The Limits of Science, Illustrated by Scientists,” July 28, 2014

Steven E. Koonin, “Climate Science Is Not Settled,” WSJ.com, September 19, 2014

Joel Achenbach, “No, Science’s Reproducibility Problem Is Not Limited to Psychology,” The Washington Post, August 28, 2015

William A. Wilson, “Scientific Regress,” First Things, May 2016

Jonah Goldberg, “Who Are the Real Deniers of Science?,” AEI.org, May 20, 2016

Steven Hayward, “The Crisis of Scientific Credibility,” Power Line, May 25, 2016

There’s a lot more here.

Rationalism, Empiricism, and Scientific Knowledge

Take a very large number, say, 1 quintillion. Written out, it looks like this: 1,000,000,000,000,000,000. It can also be expressed as 10^18 or 1.0E+18.

I doubt that any human being has ever discerned 1 quintillion discrete objects in a single moment. Including the constituents of all of the stars and planets, there may be more than 1 quintillion particles of matter in the visible portion of the sky on a clear night. But no person may reasonably claim to have seen all of those particles of matter as individual objects.

I doubt, further, that any human being has ever discerned 1 million objects in a lifetime, even a very long lifetime. And if I’m wrong about that, it’s certainly possible to conjure a number high enough to be well beyond the experiential capacity of any human being; 10^1000, for instance.

Despite the impossibility of experiencing 10^1000 things, it is possible to write the number and to perform mathematical operations which involve the number. So, in some sense, very large numbers “exist.” But they exist only because human beings are capable of thinking of them. They are not “real” in the same way that a sky full of stars and planets is real.
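
A small illustration of the sense in which such numbers “exist” as constructs: Python’s integers are of arbitrary precision, so a number far beyond anyone’s experiential capacity can be written down and manipulated exactly. A trivial sketch:

# 10 to the 1000th power, written and manipulated exactly,
# though no one will ever perceive that many discrete objects.
big = 10 ** 1000

print(len(str(big)))   # 1001 digits: a 1 followed by 1,000 zeros
print(big % 7)         # ordinary arithmetic still works (the remainder is 4)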

Numbers and mathematics are rational constructs of the minds of human beings. Stars and planets are observed; that is, there is empirical evidence of their existence.

Thus there are two[1] types of scientific knowledge: rational[2] and empirical. They are related in the following ways:

1. Rational knowledge builds on empirical knowledge. Astronomical observations enabled Copernicus to devise a mathematical heliocentric model of the universe, which was an improvement on the geocentric model.

2. Empirical knowledge builds on rational knowledge. Observations aimed at verifying the heliocentric model led eventually to the discovery that the Sun is not at the center of the universe.

3. Empirical knowledge may affirm or contradict rational knowledge. Einstein’s general theory of relativity, which was presented in a paper written in 1915, says that light is deflected (bent) by gravity. Astronomical observations made in 1919 affirmed the effect of gravity on light. Had the observations contradicted the postulated effect, the general theory (if there were one at all) might be markedly different from the one set forth in 1915. (A scientific theory is more than a hypothesis; it has been substantiated, though it always remains open to refutation.)

4. Rational knowledge may lead to empirical knowledge. One of the postulates that underlies Einstein’s special theory of relativity is the constancy of the speed of light; that is, the speed of light is independent of the motion of the source or the observer. This is unlike (for example) the speed of a ball that is thrown inside a moving train car, in the direction of the train car’s motion. An observer who is stationary relative to the train car will see the speed of the ball as the sum of (a) its speed relative to the thrower and (b) the speed of the train car relative to the observer. Einstein’s postulate, which drew on James Clerk Maxwell’s empirically based theory of electromagnetism, was subsequently verified experimentally.
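
Point 4 can be made concrete with a few lines of arithmetic. The sketch below (mine, not Einstein’s) compares everyday Galilean velocity addition, w = u + v, with the relativistic composition of velocities, w = (u + v) / (1 + uv/c^2). For a ball thrown inside a train the two formulas agree to a vanishingly small difference; for a beam of light they diverge, and the relativistic formula returns exactly c, as the postulate requires.

C = 299_792_458.0   # speed of light in meters per second

def galilean(u, v):
    """Everyday velocity addition: speeds simply add."""
    return u + v

def relativistic(u, v):
    """Relativistic composition of velocities: (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / C**2)

# A ball thrown at 30 m/s inside a train moving at 40 m/s:
print(galilean(30, 40), relativistic(30, 40))          # both about 70 m/s

# A light beam emitted from the same train:
print(galilean(C, 40) / C, relativistic(C, 40) / C)    # about 1.00000013 vs. 1.0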

These reflections lead me to four conclusions:

  • Knowledge is provisional. Human beings often don’t know what to make of the things that they perceive, and what they make of those things is often found to be wrong.
  • When it comes to science, rational and empirical knowledge are intertwined, and their effects are cumulative.
  • Rational knowledge that can’t be or hasn’t been put to an empirical test is merely a hypothesis. The hypothesis may be correct, but it doesn’t represent knowledge.
  • Empirical knowledge necessarily precedes rational knowledge because hypotheses draw on empirical knowledge and must be substantiated by empirical knowledge.[3]

*     *     *

Related reading:
Thomas M. Lennon and Shannon Dea, “Continental Rationalism,” Stanford Encyclopedia of Philosophy, April 14, 2012 (substantive revision)
Peter Markie, “Rationalism vs. Empiricism,” Stanford Encyclopedia of Philosophy, March 21, 2013 (substantive revision)

Related posts:
Hemibel Thinking
What Is Truth?
Demystifying Science
Are the Natural Numbers Supernatural?
Pinker Commits Scientism
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists

__________
1. This post focuses on scientific knowledge and ignores other phenomena that are sometimes classified as branches of knowledge, such as emotional knowledge.

2. In this context, rational means by virtue of reason, not lucid or sane. The discussion of rational knowledge is restricted to knowledge that derives from and is a logical extension of observed phenomena, as in the example with which the post begins. I will not, in this post, deal with intuition, innate knowledge, or innate concepts, which are also treated under the heading of rational knowledge.

3. Unless it is true that human beings are born with certain kinds of knowledge, or with certain concepts that can be filled in by knowledge. The article by Markie treats these possibilities at some length.


The Pretence of Knowledge

Friedrich Hayek, in his Nobel Prize lecture of 1974, “The Pretence of Knowledge,” observes that

the great and rapid advance of the physical sciences took place in fields where it proved that explanation and prediction could be based on laws which accounted for the observed phenomena as functions of comparatively few variables.

Hayek’s particular target was the scientism then (and still) rampant in economics. In particular, there was (and is) a quasi-religious belief in the power of central planning (e.g., regulation, “stimulus” spending, control of the money supply) to attain outcomes superior to those that free markets would yield.

But, as Hayek says in closing,

There is danger in the exuberant feeling of ever growing power which the advance of the physical sciences has engendered and which tempts man to try, “dizzy with success” … to subject not only our natural but also our human environment to the control of a human will. The recognition of the insuperable limits to his knowledge ought indeed to teach the student of society a lesson of humility which should guard him against becoming an accomplice in men’s fatal striving to control society – a striving which makes him not only a tyrant over his fellows, but which may well make him the destroyer of a civilization which no brain has designed but which has grown from the free efforts of millions of individuals.

I was reminded of Hayek’s observations by John Cochrane’s post, “Groundhog Day” (The Grumpy Economist, May 11, 2014), wherein Cochrane presents this graph:

[Chart: “The Fed’s forecasting models are broken”]

Cochrane adds:

Every serious forecast looked like this — Fed, yes, but also CBO, private forecasters, and the term structure of forward rates. Everyone has expected bounce-back growth and rise in interest rates to start next year, for the last 6 years. And every year it has not happened. Welcome to the slump. Every year, Sonny and Cher wake us up, and it’s still cold, and it’s still grey. But we keep expecting spring tomorrow.

Whether the corrosive effects of government microeconomic and regulatory policy, or a failure of those (unprintable adjectives) Republicans to just vote enough wasted-spending Keynesian stimulus, or a failure of the Fed to buy another $3 trillion of bonds, the question of the day really should be why we have this slump — which, let us be honest, no serious forecaster expected.

(I add the “serious forecaster” qualification on purpose. I don’t want to hear randomly mined quotes from bloviating prognosticators who got lucky once, and don’t offer a methodology or a track record for their forecasts.)

The Fed’s forecasting models are nothing more than sophisticated charlatanism — a term that Hayek applied to pseudo-scientific endeavors like macroeconomic modeling. Nor is charlatanism confined to economics and the other social “sciences.” It’s rampant in climate “science,” as Roy Spencer has shown. Consider, for example, this graph from Spencer’s post, “95% of Climate Models Agree: The Observations Must Be Wrong” (Roy Spencer, Ph.D., February 7, 2014):

[Chart: “95% of climate models agree: the observations must be wrong”]

Spencer has a lot more to say about the pseudo-scientific aspects of climate “science.” This example is from “Top Ten Good Skeptical Arguments” (May 1, 2014):

1) No Recent Warming. If global warming science is so “settled”, why did global warming stop over 15 years ago (in most temperature datasets), contrary to all “consensus” predictions?

2) Natural or Manmade? If we don’t know how much of the warming in the longer term (say last 50 years) is natural, then how can we know how much is manmade?

3) IPCC Politics and Beliefs. Why does it take a political body (the IPCC) to tell us what scientists “believe”? And when did scientists’ “beliefs” translate into proof? And when was scientific truth determined by a vote…especially when those allowed to vote are from the Global Warming Believers Party?

4) Climate Models Can’t Even Hindcast. How did climate modelers, who already knew the answer, still fail to explain the lack of a significant temperature rise over the last 30+ years? In other words, how do you botch a hindcast?

5) …But We Should Believe Model Forecasts? Why should we believe model predictions of the future, when they can’t even explain the past?

6) Modelers Lie About Their “Physics”. Why do modelers insist their models are based upon established physics, but then hide the fact that the strong warming their models produce is actually based upon very uncertain “fudge factor” tuning?

7) Is Warming Even Bad? Who decided that a small amount of warming is necessarily a bad thing?

8) Is CO2 Bad? How did carbon dioxide, necessary for life on Earth and only 4 parts in 10,000 of our atmosphere, get rebranded as some sort of dangerous gas?

9) Do We Look that Stupid? How do scientists expect to be taken seriously when their “theory” is supported by both floods AND droughts? Too much snow AND too little snow?

10) Selective Pseudo-Explanations. How can scientists claim that the Medieval Warm Period (which lasted hundreds of years), was just a regional fluke…yet claim the single-summer (2003) heat wave in Europe had global significance?

11) (Spinal Tap bonus) Just How Warm is it, Really? Why is it that every subsequent modification/adjustment to the global thermometer data leads to even more warming? What are the chances of that? Either a warmer-still present, or cooling down the past, both of which produce a greater warming trend over time. And none of the adjustments take out a gradual urban heat island (UHI) warming around thermometer sites, which likely exists at virtually all of them — because no one yet knows a good way to do that.

It is no coincidence that leftists believe in the efficacy of central planning and cling tenaciously to a belief in catastrophic anthropogenic global warming. The latter justifies the former, of course. And both beliefs exemplify the left’s penchant for magical thinking, about which I’ve written several times (e.g., here, here, here, here, and here).

Magical thinking is the pretense of knowledge in the nth degree. It conjures “knowledge” from ignorance and hope. And no one better exemplifies magical thinking than our hopey-changey president.

*     *     *

Related posts:
Modeling Is Not Science
The Left and Its Delusions
Economics: A Survey
AGW: The Death Knell
The Keynesian Multiplier: Phony Math
Modern Liberalism as Wishful Thinking

The Limits of Science (II)

The material of the universe — be it called matter or energy — has three essential properties: essence, emanation, and effect. Essence — what things really “are” — is the most elusive of the properties, and probably unknowable. Emanations are the perceptible aspects of things, such as their detectible motions and electromagnetic properties. Effects are what things “do” to other things, as in the effect that a stream of photons has on paper when the photons are focused through a magnifying glass. (You’ve lived a bland life if you’ve never started a fire that way.)

Science deals in emanations and effects. It seems that these can be described without knowing what matter-energy “really” consists of. But can they?

Take a baseball. Assume, for the sake of argument, that it can’t be opened and separated into constituent parts, which are many. (See the video at this page for details.) Taking the baseball as a fundamental particle, its attributes (seemingly) can be described without knowing what’s inside it. Those attributes include the distance that it will travel when hit by a bat, when the ball and bat (of a certain weight) meet at certain velocities and at certain angles, given the direction and speed of rotation of the ball when it meets the bat, ambient temperature and relative humidity, and so on.

And yet, the baseball can’t be treated as if it were a fundamental particle. The distance that it will travel, everything else being the same, depends on the material at its core, the size of the core, the tightness of the windings of yarn around the core, the types of yarn used in the windings, the tightness of the cover, the flatness of the stitches that hold the cover in place, and probably several other things.

This suggests to me that the emanations and effects of an object depend on its essence — at least in the everyday world of macroscopic objects. If that’s so, why shouldn’t it be the same for the world of objects called sub-atomic particles?

Which leads to some tough questions: Is it really the case that all of the particles now considered elementary are really indivisible? Are there other elementary particles yet to be discovered or hypothesized, and will some of those be constituents of particles now thought to be elementary? And even if all of the truly elementary particles are discovered, won’t scientists still be in the dark as to what those particles really “are”?

The progress of science should be judged by how much scientists know about the universe and its constituents. By that measure — and despite what may seem to be a rapid pace of discovery — it is fair to say that science has a long way to go — probably forever.

Scientists, who tend to be atheists, like to refer to the God of the gaps, a “theological perspective in which gaps in scientific knowledge are taken to be evidence or proof of God’s existence.” The smug assumption implicit in the use of the phrase by atheists is that science will close the gaps, and that there will be no room left for God.

It seems to me that the shoe is really on the other foot. Atheistic scientists assume that the gaps in their knowledge are relatively small ones, and that science will fill them. How wrong they are.

*     *     *

Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
A Theory of Everything, Occam’s Razor, and Baseball
Evolution and Religion
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness

Not-So-Random Thoughts (IX)

Links to the other posts in this occasional series may be found at “Favorite Posts,” just below the list of topics.

Demystifying Science

In a post with that title, I wrote:

“Science” is an unnecessarily daunting concept to the uninitiated, which is to say, almost everyone. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — often are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

Just how unwarranted is the “authority” that is lent by publication in a scientific journal?

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think. . . .

In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one . . . paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” As he told the quadrennial International Congress on Peer Review and Biomedical Publication, held this September [2013] in Chicago, the problem has not gone away. (The Economist, “Trouble at the Lab,” October 19, 2013)

Tell me again about anthropogenic global warming.
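
The “statistical logic” that Ioannidis invokes can be sketched with a bit of arithmetic. The numbers below are illustrative assumptions of my own, not Ioannidis’s: if only a small fraction of the hypotheses a field tests are actually true, then even a conventional 0.05 significance threshold yields a crop of “positive” findings in which false positives rival or outnumber true ones.

def share_of_positives_that_are_true(prior, power, alpha=0.05):
    """Fraction of statistically 'significant' findings that reflect real effects.

    prior -- fraction of tested hypotheses that are actually true
    power -- probability that a study detects a real effect
    alpha -- false-positive rate for a single test
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# A few hypothetical fields, from cautious to speculative:
for prior, power in ((0.50, 0.80), (0.10, 0.60), (0.02, 0.40)):
    ppv = share_of_positives_that_are_true(prior, power)
    print(f"prior = {prior:.2f}, power = {power:.2f}: "
          f"{ppv:.0%} of 'positive' findings are true")

In the last, speculative case most published “discoveries” would be false even before publication bias and data-dredging are taken into account.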

The “Little Ice Age” Redux?

Speaking of AGW, remember the “Little Ice Age” of the 1970s?

George Will does. As do I.

One Sunday morning in January or February of 1977, when I lived in western New York State, I drove to the news stand to pick up my Sunday Times. I had to drive my business van because my car wouldn’t start. (Odd, I thought.) I arrived at the stand around 8:00 a.m. The temperature sign on the bank across the street then read -16 degrees (Fahrenheit). The proprietor informed me that when he opened his shop at 6:00 a.m. the reading was -36 degrees.

That was the nadir of the coldest winter I can remember. The village reservoir froze in January and stayed frozen until March. (The fire department had to pump water from the Genesee River to the village’s water-treatment plant.) Water mains were freezing solid, even though they were 6 feet below the surface. Many homeowners had to keep their faucets open a trickle to ensure that their pipes didn’t freeze. And, for the reasons cited in Will’s article, many scientists — and many Americans — thought that a “little ice age” had arrived and would be with us for a while.

But science is often inconclusive and just as often slanted to serve a political agenda. (Also, see this.) That’s why I’m not ready to sacrifice economic growth and a good portion of humanity on the altar of global warming and other environmental fads.

Well, the “Little Ice Age” may return, soon:

[A] paper published today in Advances in Space Research predicts that if the current lull in solar activity “endures in the 21st century the Sun shall enter a Dalton-like grand minimum. It was a period of global cooling.” (Anthony Watts, “Study Predicts the Sun Is Headed for a Dalton-like Solar Minimum around 2050,” Watts Up With That?, December 2, 2013)

The Dalton Minimum, named after the English meteorologist John Dalton, lasted from 1790 to 1830.

Bring in your pets and plants, cover your pipes, and dress warmly.

Madison’s Fatal Error

Timothy Gordon writes:

After reading Montesquieu’s most important admonitions in Spirit of the Laws, Madison decided that he could outsmart him. The Montesquieuan admonitions were actually limitations on what a well-functioning republic could allow, and thus, be. And Madison got greedy, not wanting to abide by those limitations.

First, Montesquieu required republican governments to maintain limited geographic scale. Second, Montesquieu required republican governments to preside over a univocal people of one creed and one mind on most matters. A “res publica” is a public thing valued by each citizen, after all. “How could this work when a republic is peopled diversely?” the faithful Montesquieuan asks. (Nowadays in America, for example, half the public values liberty and the other half values equality, its eternal opposite.) Thirdly—and most important—Montesquieu mandated that the three branches of government were to hold three distinct, separate types of power, without overlap.

Before showing just how correct Montesquieu was—and thus, how incorrect Madison was—it must be articulated that in the great ratification contest of 1787-1788, there operated only one faithful band of Montesquieu devotees: the Antifederalists. They publicly pointed out how superficial and misleading were the Federalist appropriations of Montesquieu within the new Constitution and its partisan defenses.

The first two of these Montesquieuan admonitions went together logically: a) limiting a republic’s size to a small confederacy, b) populated by a people of one mind. In his third letter, Antifederalist Cato made the case best:

“whoever seriously considers the immense extent of territory within the limits of the United States, together with the variety of its climates, productions, and number of inhabitants in all; the dissimilitude of interest, morals, and policies, will receive it as an intuitive truth, that a consolidated republican form of government therein, can never form a perfect union.”

Then, to bulwark his claim, Cato goes on to quote two sacred sources of inestimable worth: the Bible… and Montesquieu. Attempting to fit so many creeds and beliefs into such a vast territory, Cato says, would be “like a house divided against itself.” That is, it would not be a res publica, oriented at sameness. Then Cato goes on: “It is natural, says Montesquieu, to a republic to have only a small territory, otherwise it cannot long subsist.”

The teaching Cato references is simple: big countries of diverse peoples cannot be governed locally, qua republics, but rather require a nerve center like Washington D.C. wherefrom all the decisions shall be made. The American Revolution, Cato reminded his contemporaries, was fought over the principle of local rule.

To be fair, Madison honestly—if wrongly—figured that he had dialed up the answer, such that the United States could be both vast and pluralistic, without the consequent troubles forecast by Montesquieu. He viewed the chief danger of this combination to lie in factionalization. One can either “remove the cause [of the problem] or control its effects,” Madison famously prescribed in “Federalist 10.”

The former solution (“remove the cause”) suggests the Montesquieuan way: i.e. remove the plurality of opinion and the vastness of geography. Keep American confederacies small and tightly knit. After all, victory in the War of Independence left the thirteen colonies thirteen small, separate countries, contrary to President Lincoln’s rhetoric four score later. Union, although one possible option, was not logically necessary.

But Madison opted for the latter solution (“control the effects”), viewing union as vitally indispensable and thus, Montesquieu’s teaching as regrettably dispensable: allow size, diversity, and the consequent factionalization. Do so, he suggested, by reducing them to nothing…with hyper-pluralism. Madison deserves credit: for all its oddity, the idea actually seemed to work… for a time. . . . (“James Madison’s Nonsense-Coup Against Montesquieu (and the Classics Too),” The Imaginative Conservative, December 2013)

The rot began with the advent of the Progressive Era in the late 1800s, and it became irreversible with the advent of the New Deal, in the 1930s. As I wrote here, Madison’s

fundamental error can be found in . . . Federalist No. 51. Madison was correct in this:

. . . It is of great importance in a republic not only to guard the society against the oppression of its rulers, but to guard one part of the society against the injustice of the other part. Different interests necessarily exist in different classes of citizens. If a majority be united by a common interest, the rights of the minority will be insecure. . . .

But Madison then made the error of assuming that, under a central government, liberty is guarded by a diversity of interests:

[One method] of providing against this evil [is] . . . by comprehending in the society so many separate descriptions of citizens as will render an unjust combination of a majority of the whole very improbable, if not impracticable. . . . [This] method will be exemplified in the federal republic of the United States. Whilst all authority in it will be derived from and dependent on the society, the society itself will be broken into so many parts, interests, and classes of citizens, that the rights of individuals, or of the minority, will be in little danger from interested combinations of the majority.

In a free government the security for civil rights must be the same as that for religious rights. It consists in the one case in the multiplicity of interests, and in the other in the multiplicity of sects. The degree of security in both cases will depend on the number of interests and sects; and this may be presumed to depend on the extent of country and number of people comprehended under the same government. This view of the subject must particularly recommend a proper federal system to all the sincere and considerate friends of republican government, since it shows that in exact proportion as the territory of the Union may be formed into more circumscribed Confederacies, or States, oppressive combinations of a majority will be facilitated: the best security, under the republican forms, for the rights of every class of citizens, will be diminished: and consequently the stability and independence of some member of the government, the only other security, must be proportionately increased. . . .

In fact, as Montesquieu predicted, diversity, in the contemporary meaning of the word, is inimical to civil society and thus to ordered liberty. Exhibit A is a story by Michael Jonas about a study by Harvard political scientist Robert Putnam, “E Pluribus Unum: Diversity and Community in the Twenty-first Century”:

It has become increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.

But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings. . . .

. . . Putnam’s work adds to a growing body of research indicating that more diverse populations seem to extend themselves less on behalf of collective needs and goals.

His findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis in June in the journal Scandinavian Political Studies, he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

“Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk,” wrote conservative commentator Ilana Mercer, in a recent Orange County Register op-ed titled “Greater diversity equals more misery.”. . .

The results of his new study come from a survey Putnam directed among residents in 41 US communities, including Boston. Residents were sorted into the four principal categories used by the US Census: black, white, Hispanic, and Asian. They were asked how much they trusted their neighbors and those of each racial category, and questioned about a long list of civic attitudes and practices, including their views on local government, their involvement in community projects, and their friendships. What emerged in more diverse communities was a bleak picture of civic desolation, affecting everything from political engagement to the state of social ties. . . .

. . . In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.” “People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes. . . . (“The Downside of Diversity,” The Boston Globe (boston.com), August 5, 2007)

See also my posts, “Liberty and Society,” “The Eclipse of ‘Old America’,” and “Genetic Kinship and Society.” And these: “Caste, Crime, and the Rise of Post-Yankee America” (Theden, November 12, 2013) and “The New Tax Collectors for the Welfare State,” (Handle’s Haus, November 13, 2013).

Libertarian Statism

Finally, I refer you to David Friedman’s “Libertarian Arguments for Income Redistribution” (Ideas, December 6, 2013). Friedman notes that “Matt Zwolinski has recently posted some possible arguments in favor of a guaranteed basic income or something similar.” Friedman then dissects Zwolinski’s arguments.

Been there, done that. See my posts, “Bleeding-Heart Libertarians = Left-Statists” and “Not Guilty of Libertarian Purism,” wherein I tackle the statism of Zwolinski and some of his co-bloggers at Bleeding Heart Libertarians. In the second-linked post, I say that

I was wrong to imply that BHLs [Bleeding Heart Libertarians] are connivers; they (or too many of them) are just arrogant in their judgments about “social justice” and naive when they presume that the state can enact it. It follows that (most) BHLs are not witting left-statists; they are (too often) just unwitting accomplices of left-statism.

Accordingly, if I were to re-title [“Bleeding-Heart Libertarians = Left-Statists”] I would call it “Bleeding-Heart Libertarians: Crypto-Statists or Dupes for Statism?”.

*     *     *

Other posts in this series: I, II, III, IV, V, VI, VII, VIII

Pinker Commits Scientism

Steven Pinker, who seems determined to outdo Bryan Caplan in wrongheadedness, devotes “Science Is Not Your Enemy” (The New Republic, August 6, 2013) to the defense of scientism. Actually, Pinker doesn’t overtly defend scientism, which is indefensible; he just redefines it to mean science:

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.

After that slippery performance, it’s all smooth sailing — or so Pinker thinks — because all he has to do is point out all the good things about science. And if scientism=science, then scientism is good, right?

Wrong. Scientism remains indefensible, and there’s a lot of scientism in what passes for science. You don’t need to take my word for it; Pinker’s own words tell the tale.

But, first, let’s get clear about the meaning and fallaciousness of scientism. The various writers cited by Pinker describe it well, but Hayek probably offers the most thorough indictment of it; for example:

[W]e shall, wherever we are concerned … with slavish imitation of the method and language of Science, speak of “scientism” or the “scientistic” prejudice…. It should be noted that, in the sense in which we shall use these terms, they describe, of course, an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed. The scientistic as distinguished from the scientific view is not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it…..

The blind transfer of the striving for quantitative measurements to a field in which the specific conditions are not present which give it its basic importance in the natural sciences, is the result of an entirely unfounded prejudice. It is probably responsible for the worst aberrations and absurdities produced by scientism in the social sciences. It not only leads frequently to the selection for study of the most irrelevant aspects of the phenomena because they happen to be measurable, but also to “measurements” and assignments of numerical values which are absolutely meaningless. What a distinguished philosopher recently wrote about psychology is at least equally true of the social sciences, namely that it is only too easy “to rush off to measure something without considering what it is we are measuring, or what measurement means. In this respect some recent measurements are of the same logical type as Plato’s determination that a just ruler is 729 times as happy as an unjust one.”…

Closely connected with the “objectivism” of the scientistic approach is its methodological collectivism, its tendency to treat “wholes” like “society” or the “economy,” “capitalism” (as a given historical “phase”) or a particular “industry” or “class” or “country” as definitely given objects about which we can discover laws by observing their behavior as wholes. While the specific subjectivist approach of the social sciences starts … from our knowledge of the inside of these social complexes, the knowledge of the individual attitudes which form the elements of their structure, the objectivism of the natural sciences tries to view them from the outside ; it treats social phenomena not as something of which the human mind is a part and the principles of whose organization we can reconstruct from the familiar parts, but as if they were objects directly perceived by us as wholes….

The belief that human history, which is the result of the interaction of innumerable human minds, must yet be subject to simple laws accessible to human minds is now so widely held that few people are at all aware what an astonishing claim it really implies. Instead of working patiently at the humble task of rebuilding from the directly known elements the complex and unique structures which we find in the world, and of tracing from the changes in the relations between the elements the changes in the wholes, the authors of these pseudo-theories of history pretend to be able to arrive by a kind of mental short cut at a direct insight into the laws of succession of the immediately apprehended wholes. However doubtful their status, these theories of development have achieved a hold on public imagination much greater than any of the results of genuine systematic study. “Philosophies” or “theories” of history (or “historical theories”) have indeed become the characteristic feature, the “darling vice” of the 19th century. From Hegel and Comte, and particularly Marx, down to Sombart and Spengler these spurious theories came to be regarded as representative results of social science; and through the belief that one kind of “system” must as a matter of historical necessity be superseded by a new and different “system,” they have even exercised a profound influence on social evolution. This they achieved mainly because they looked like the kind of laws which the natural sciences produced; and in an age when these sciences set the standard by which all intellectual effort was measured, the claim of these theories of history to be able to predict future developments was regarded as evidence of their pre-eminently scientific character. Though merely one among many characteristic 19th century products of this kind, Marxism more than any of the others has become the vehicle through which this result of scientism has gained so wide an influence that many of the opponents of Marxism equally with its adherents are thinking in its terms. (Friedrich A. Hayek, The Counter Revolution Of Science [Kindle Locations 120-1180], The Free Press.)

After a barrage like that (and this), what’s a defender of scientism to do? Pinker’s tactic is to stop using “scientism” and start using “science.” This makes it seem as if he really isn’t defending scientism, but rather trying to show how science can shed light onto subjects that are usually not in the province of science. In reality, Pinker preaches scientism by calling it science.

For example:

The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation.

There is nothing new in this, as Pinker admits by adverting to Madison. Nor was the understanding of human nature “submerged” except in the writings of scientistic social “scientists.” We ordinary mortals were never fooled. Moreover, Pinker’s idea of scientific political science seems to be data-dredging:

With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively.

As explained here, data-dredging is about as scientistic as it gets:

When enough hypotheses are tested, it is virtually certain that some falsely appear statistically significant, since every data set with any degree of randomness contains some spurious correlations. Researchers using data mining techniques if they are not careful can be easily misled by these apparently significant results, even though they are mere artifacts of random variation.
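
The phenomenon is easy to reproduce. Here is a hypothetical Python sketch (mine, not from the passage just quoted): it generates an outcome and a hundred candidate predictors that are all pure noise, then counts how many of those unrelated predictors nonetheless correlate with the outcome at the conventional p < 0.05 threshold. Dredge enough variables and a handful of “significant” relationships appear on cue.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_obs, n_predictors = 200, 100

outcome = rng.standard_normal(n_obs)                      # pure noise
predictors = rng.standard_normal((n_predictors, n_obs))   # more pure noise

significant = 0
for x in predictors:
    _, p_value = pearsonr(x, outcome)   # correlation of noise with noise
    if p_value < 0.05:
        significant += 1

print(f"{significant} of {n_predictors} unrelated predictors "
      f"look 'significant' at p < 0.05")   # typically around 5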

Turning to the humanities, Pinker writes:

[T]here can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate [sic] and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of a progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

What on earth is Pinker talking about? This is over-the-top bafflegab worthy of Professor Irwin Corey. But because it comes from the keyboard of a noted (self-promoting) academic, we are meant to take it seriously.

Yes, art, culture, and society are products of human brains. So what? Poker is, too, and it’s a lot more amenable to explication by the mathematical tools of science. But the successful application of those tools depends on traits that are more art than science (bluffing, spotting “tells,” avoiding “tells,” for example).

More “explanatory depth” in the humanities means a deeper pile of B.S. Great art, literature, and music aren’t concocted formulaically. If they could be, modernism and postmodernism wouldn’t have yielded mountains of trash.

Oh, I know: It will be different next time. As if the tools of science are immune to misuse by obscurantists, relativists, and practitioners of political correctness. Tell it to those climatologists who dare to challenge the conventional wisdom about anthropogenic global warming. Tell it to the “sub-human” victims of the Third Reich’s medical experiments and gas chambers.

Pinker anticipates this kind of objection:

At a 2011 conference, [a] colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

But the Tuskegee study was only a one-time failure in the sense that it was the only Tuskegee study. As a type of failure — the misuse of science (witting and unwitting) — it goes hand-in-hand with the advance of scientific knowledge. Should science be abandoned because of that? Of course not. But the hard fact is that science, qua science, is powerless against human nature, which defies scientific control.

Pinker plods on by describing ways in which science can contribute to the visual arts, music, and literary scholarship:

The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

I wonder how Rembrandt and the Impressionists (among other pre-moderns) managed to create visual art of such evident excellence without relying on the kinds of scientific mechanisms invoked by Pinker. I wonder what music scholars would learn about excellence in composition that isn’t already evident in the general loathing of audiences for most “serious” modern and contemporary music.

As for literature, great writers know instinctively and through self-criticism how to tell stories that realistically depict character, social psychology, culture, conflict, and all the rest. Scholars (and critics), at best, can acknowledge what rings true and has dramatic or comedic merit. Scientistic pretensions in scholarship (and criticism) may result in promotions and raises for the pretentious, but they do not add to the sum of human enjoyment — which is the real aim of literature.

Pinker inveighs against critics of scientism (science, in Pinker’s vocabulary) who cry “reductionism” and “simplification.” With respect to the former, Pinker writes:

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

It is reductionist to explain a complex happening in terms of a deeper principle when that principle fails to account for the complex happening. Pinker obscures that essential point by offering a silly and irrelevant example about World War I. This bit of misdirection is unsurprising, given Pinker’s foray into reductionism, The Better Angels of Our Nature: Why Violence Has Declined, which I examine here.

As for simplification, Pinker says:

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic.

Pinker again dodges the issue. Simplification is simplistic when the “general principles” fail to account adequately for the phenomenon in question.

If Pinker is right about anything, it is when he says that “the intrusion of science into the territories of the humanities has been deeply resented.” The resentment, though some of it may be wrongly motivated, is fully justified.

Related reading (added 08/10/13 and 09/06/13):
Bill Vallicella, “Steven Pinker on Scientism, Part One,” Maverick Philosopher, August 10, 2013
Leon Wieseltier, “Crimes Against Humanities,” The New Republic, September 3, 2013 (gated)

Related posts about Pinker:
Nonsense about Presidents, IQ, and War
The Fallacy of Human Progress

Related posts about modernism:
Speaking of Modern Art
Making Sense about Classical Music
An Addendum about Classical Music
My Views on Classical Music, Vindicated
But It’s Not Music
A Quick Note about Music
Modernism in the Arts and Politics
Taste and Art
Modernism and the Arts

Related posts about science:
Science’s Anti-Scientific Bent
Modeling Is Not Science
Physics Envy
We, the Children of the Enlightenment
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
Scientism, Evolution, and the Meaning of Life
The Candle Problem: Balderdash Masquerading as Science
Mysteries: Sacred and Profane
The Glory of the Human Mind

Further Thoughts about Metaphysical Cosmology

I have stated my metaphysical cosmology:

1. There is necessarily a creator of the universe, which comprises all that exists in “nature.”

2. The creator is not part of nature; that is, he stands apart from his creation and is neither of its substance nor governed by its laws. (I use “he” as a term of convenience, not to suggest that the creator is some kind of human or animate being, as we know such beings.)

3. The creator designed the universe, if not in detail then in its parameters. The parameters are what we know as matter-energy (substance) and its various forms, motions, and combinations (the laws that govern the behavior of matter-energy).

4. The parameters determine everything that is possible in the universe. But they do not necessarily dictate precisely the unfolding of events in the universe. Randomness and free will are evidently part of the creator’s design.

5. The human mind and its ability to “do science” — to comprehend the laws of nature through observation and calculation — are artifacts of the creator’s design.

6. Two things probably cannot be known through science: the creator’s involvement in the unfolding of natural events; the essential character of the substance on which the laws of nature operate.

It follows that science can neither prove nor disprove the preceding statements. If that is so, why can I not say, with equal certainty, that the universe is made of pea soup and supported by undetectable green giants?

There are two answers to that question. The first answer is that my cosmology is based on logical necessity; there is nothing of logic or necessity in the claims about pea soup and undetectable green giants. The second and related answer is that claims about pea soup and green giants — and their ilk — are obviously outlandish. There is an essential difference between (a) positing a creator and making limited but reasonable claims about his role and (b) engaging in obviously outlandish speculation.

What about various mythologies (e.g., Norse and Greek) and creation legends, which nowadays seem outlandish even to persons who believe in a creator? Professional atheists (e.g., Richard Dawkins, Daniel Dennett, Christopher Hitchens, and Lawrence Krauss) point to the crudeness of those mythologies and legends as a reason to reject the idea of a creator who set the universe and its laws in motion. (See, for example, “Russell’s Teapot,” discussed here.) But logic is not on the side of the professional atheists. The crudeness of a myth or legend, when viewed through the lens of contemporary knowledge, cannot be taken as evidence against creation. The crudeness of a myth or legend merely reflects the crudeness of the state of knowledge when the myth or legend arose.

Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Free Will: A Proof by Example?
A Theory of Everything, Occam’s Razor, and Baseball
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
What Is Time?
Science’s Anti-Scientific Bent
The Tenth Dimension
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Not-So-Random Thoughts (II) (first item)
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology

My Metaphysical Cosmology

This post is a work in progress. It draws on and extends the posts listed at the bottom.

1. There is necessarily a creator of the universe, which comprises all that exists in “nature.”

2. The creator is not part of nature; that is, he stands apart from his creation and is neither of its substance nor governed by its laws. (I use “he” as a term of convenience, not to suggest that the creator is some kind of human or animate being, as we know such beings.)

3. The creator designed the universe, if not in detail then in its parameters. The parameters are what we know as matter-energy (substance) and its various forms, motions, and combinations (the laws that govern the behavior of matter-energy).

4. The parameters determine everything that is possible in the universe. But they do not necessarily dictate precisely the unfolding of events in the universe. Randomness and free will are evidently part of the creator’s design.

5. The human mind and its ability to “do science” — to comprehend the laws of nature through observation and calculation — are artifacts of the creator’s design.

6. Two things probably cannot be known through science: the creator’s involvement in the unfolding of natural events; the essential character of the substance on which the laws of nature operate.

Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Free Will: A Proof by Example?
A Theory of Everything, Occam’s Razor, and Baseball
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
What Is Time?
Science’s Anti-Scientific Bent
The Tenth Dimension
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Not-So-Random Thoughts (II) (first item)
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing

Not-So-Random Thoughts (III)

Links to the other posts in this occasional series may be found at “Favorite Posts,” just below the list of topics.

Apropos Science

In the vein of “Something from Nothing?” there is this:

[Stephen] Meyer also argued [in a recent talk at the University Club in D.C.] that biological evolutionary theory, which “attempts to explain how new forms of life evolved from simpler pre-existing forms,” faces formidable difficulties. In particular, the modern version of Darwin’s theory, neo-Darwinism, also has an information problem.

Mutations, or copying errors in the DNA, are analogous to copying errors in digital code, and they supposedly provide the grist for natural selection. But, Meyer said: “What we know from all codes and languages is that when specificity of sequence is a condition of function, random changes degrade function much faster than they come up with something new.”…

The problem is comparable to opening a big combination lock. He asked the audience to imagine a bike lock with ten dials and ten digits per dial. Such a lock would have 10 billion possibilities with only one that works. But the protein alphabet has 20 possibilities at each site, and the average protein has about 300 amino acids in sequence….

Remember: Not just any old jumble of amino acids makes a protein. Chimps typing at keyboards will have to type for a very long time before they get an error-free, meaningful sentence of 150 characters. “We have a small needle in a huge haystack.” Neo-Darwinism has not solved this problem, Meyer said. “There’s a mathematical rigor to this which has not been a part of the so-called evolution-creation debate.”…

“[L]eading U.S. biologists, including evolutionary biologists, are saying we need a new theory of evolution,” Meyer said. Many increasingly criticize Darwinism, even if they don’t accept design. One is the cell biologist James Shapiro of the University of Chicago. His new book is Evolution: A View From the 21st Century. He’s “looking for a new evolutionary theory.” David Depew (Iowa) and Bruce Weber (Cal State) recently wrote in Biological Theory that Darwinism “can no longer serve as a general framework for evolutionary theory.” Such criticisms have mounted in the technical literature. (Tom Bethell, “Intelligent Design at the University Club,” American Spectator, May 2012)
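
Meyer’s arithmetic is easy to check. The short Python sketch below is mine, not Meyer’s or Bethell’s; it uses only the figures given in the quoted passage (ten dials of ten digits each, a 20-letter amino-acid alphabet, and a protein of about 300 residues) and illustrates nothing more than the relative sizes of the two search spaces, which is where the quoted argument begins, not where it ends.

```python
from math import log10

# Bike-lock analogy from the quoted passage: 10 dials, 10 digits per dial.
lock_combinations = 10 ** 10                 # 10 billion possibilities, only one of which opens the lock

# Protein analogy: 20 possible amino acids at each of roughly 300 sites.
log10_protein_space = 300 * log10(20)        # size of the sequence space, expressed as a power of ten

print(f"Lock combinations: 10^{log10(lock_combinations):.0f}")        # prints 10^10
print(f"Protein sequence space: about 10^{log10_protein_space:.0f}")  # prints about 10^390
```

The lock’s 10 billion combinations amount to 10^10; the protein sequence space works out to roughly 10^390. That disparity is the whole force of the analogy.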

And this:

[I]t is startling to realize that the entire brief for demoting human beings, and organisms in general, to meaningless scraps of molecular machinery — a demotion that fuels the long-running science-religion wars and that, as “shocking” revelation, supposedly stands on a par with Copernicus’s heliocentric proposal — rests on the vague conjunction of two scarcely creditable concepts: the randomness of mutations and the fitness of organisms. And, strangely, this shocking revelation has been sold to us in the context of a descriptive biological literature that, from the molecular level on up, remains almost nothing but a documentation of the meaningfully organized, goal-directed stories of living creatures.

Here, then, is what the advocates of evolutionary mindlessness and meaninglessness would have us overlook. We must overlook, first of all, the fact that organisms are masterful participants in, and revisers of, their own genomes, taking a leading position in the most intricate, subtle, and intentional genomic “dance” one could possibly imagine. And then we must overlook the way the organism responds intelligently, and in accord with its own purposes, to whatever it encounters in its environment, including the environment of its own body, and including what we may prefer to view as “accidents.” Then, too, we are asked to ignore not only the living, reproducing creatures whose intensely directed lives provide the only basis we have ever known for the dynamic processes of evolution, but also all the meaning of the larger environment in which these creatures participate — an environment compounded of all the infinitely complex ecological interactions that play out in significant balances, imbalances, competition, cooperation, symbioses, and all the rest, yielding the marvelously varied and interwoven living communities we find in savannah and rainforest, desert and meadow, stream and ocean, mountain and valley. And then, finally, we must be sure to pay no heed to the fact that the fitness, against which we have assumed our notion of randomness could be defined, is one of the most obscure, ill-formed concepts in all of science.

Overlooking all this, we are supposed to see — somewhere — blind, mindless, random, purposeless automatisms at the ultimate explanatory root of all genetic variation leading to evolutionary change. (Stephen L. Talbott, “Evolution and the Illusion of Randomness,” The New Atlantis, Fall 2011)

My point is not to suggest that the writers are correct in their conjectures. Rather, the force of their conjectures shows that supposedly “settled” science is (a) always far from settled (on big questions, at least) and (b) necessarily incomplete because it can never reach ultimate truths.

Trayvon, George, and Barack

Recent revelations about the case of Trayvon Martin and George Zimmerman suggest the following:

  • Martin was acting suspiciously and smelled of marijuana.
  • Zimmerman was rightly concerned about Martin’s behavior, given the history of break-ins in Zimmerman’s neighborhood.
  • Martin attacked Zimmerman, had him on the ground, was punching his face, and had broken his nose.
  • Zimmerman shot Martin in self-defense.

Whether the encounter was “ultimately avoidable,” as a police report asserts, is beside the point.  Zimmerman acted in self-defense, and the case against him should be dismissed. The special prosecutor should be admonished by the court for having succumbed to media and mob pressure in bringing a charge of second-degree murder against Zimmerman.

What we have here is the same old story: Black “victim”–>media frenzy to blame whites (or a “white Hispanic”), without benefit of all relevant facts–>facts exonerate whites. To paraphrase Shakespeare: The first thing we should do after the revolution is kill all the pundits (along with the lawyers).

Obama famously said, “If I had a son, he would look like Trayvon.” Given the thuggish similarity between Trayvon and Obama (small sample here), it is more accurate to say that if Obama had a son, he would be like Trayvon.

Creepy People

Exhibit A is Richard Thaler, a self-proclaimed libertarian who is nothing of the kind. Thaler defends the individual mandate that is at the heart of Obamacare (by implication, at least), when he attacks the “slippery slope” argument against it. Annon Simon nails Thaler:

Richard Thaler’s NYT piece from a few days ago, Slippery-Slope Logic, Applied to Health Care, takes conservatives to task for relying on a “slippery slope” fallacy to argue that Obamacare’s individual mandate should be invalidated. Thaler believes that the hypothetical broccoli mandate — used by opponents of Obamacare to show that upholding the mandate would require the Court to acknowledge congressional authority to do all sorts of other things — would never be adopted by Congress or upheld by a federal court. This simplistic view of the Obamacare litigation obscures legitimate concerns over the amount of power that the Obama administration is claiming for the federal government. It also ignores the way creative judges can use previous cases as building blocks to justify outcomes that were perhaps unimaginable when those building blocks were initially formed….

[N]ot all slippery-slope claims are fallacious. The Supreme Court’s decisions are often informed by precedent, and, as every law student learned when studying the Court’s privacy cases, a decision today could be used by a judge ten years from now to justify outcomes no one had in mind.

In 1965, the Supreme Court in Griswold v. Connecticut, referencing penumbras and emanations, recognized a right to privacy in marriage that mandated striking down an anti-contraception law.

Seven years later, in Eisenstadt v. Baird, this right expanded to individual privacy, because after all, a marriage is made of individuals, and “[i]f the right of privacy means anything, it is the right of the individual . . . to be free from unwarranted governmental intrusion into matters so fundamentally affecting a person as the decision whether to bear or beget a child.”

By 1973 in Roe v. Wade, this precedent, which had started out as a right recognized in marriage, had mutated into a right to abortion that no one could really trace to any specific textual provision in the Constitution. Slippery slope anyone?

This also happened in Lawrence v. Texas in 2003, where the Supreme Court struck down an anti-sodomy law. The Court explained that the case did not involve gay marriage, and Justice O’Connor’s concurrence went further, distinguishing gay marriage from the case at hand. Despite those pronouncements, later decisions enshrining gay marriage as a constitutionally protected right have relied upon Lawrence. For instance, Goodridge v. Department of Public Health (Mass. 2003) cited Lawrence 9 times, Varnum v. Brien (Iowa 2009) cited Lawrence 4 times, and Perry v. Brown (N.D. Cal, 2010) cited Lawrence 9 times.

However the Court ultimately rules, there is no question that this case will serve as a major inflection point in our nation’s debate about the size and scope of the federal government. I hope it serves to clarify the limits on congressional power, and not as another stepping stone on the path away from limited, constitutional government. (“The Supreme Court’s Slippery Slope,” National Review Online, May 17, 2012)

Simon could have mentioned Wickard v. Filburn (1942), in which the Supreme Court brought purely private, intrastate activity within the reach of Congress’s power to regulate interstate commerce. The downward slope from Wickard v. Filburn to today’s intrusive regulatory regime has been not merely slippery but precipitous.

Then there is Brian Leiter, some of whose statist musings I have addressed in the past. It seems that Leiter has taken to defending the idiotic Elizabeth Warren for her convenient adoption of a Native American identity. Todd Zywicki tears Leiter a new one:

I was out of town most of last week and I wasn’t planning on blogging any more on the increasingly bizarre saga of Elizabeth Warren’s claim to Native American ancestry, which as of the current moment appears to be entirely unsubstantiated.  But I was surprised to see Brian Leiter’s post doubling-down in his defense of Warren–and calling me a “Stalinist” to boot (although I confess it is not clear why or how he is using that term).  So I hope you will indulge me while I respond.

First, let me say again what I expressed at the outset–I have known from highly-credible sources for a decade that in the past Warren identified herself as a Native American in order to put herself in a position to benefit from hiring preferences (I am certain that Brian knows this now too).  She was quite outspoken about it at times in the past and, as her current defenses have suggested, she believed that she was entitled to claim it.  So there would have been no reason for her to not identify as such and in fact she was apparently quite unapologetic about it at the time….

Second, Brian seems to believe for some reason that the issue here is whether Warren actually benefited from a hiring preference.  Of course it is not (as my post makes eminently clear).  The issue I raised is whether Warren made assertions as part of the law school hiring process in order to put herself in a position to benefit from a hiring preference for which she had no foundation….

Third, regardless of why she did it, Warren herself actually had no verifiable basis for her self-identification as Native American.  At the very least her initial claim was grossly reckless and with no objective foundation–it appears that she herself has never had any foundation for the claim beyond “family lore” and her “high cheekbones.”… Now it turns out that the New England Historical Genealogical Society, which had been the source for the widely-reported claim that she might be 1/32 Cherokee, has rescinded its earlier conclusion and now says “We have no proof that Elizabeth Warren’s great great great grandmother O.C. Sarah Smith either is or is not of Cherokee descent.”  The story adds, “Their announcement came in the wake of an official report from an Oklahoma county clerk that said a document purporting to prove Warren’s Cherokee roots — her great great great grandmother’s marriage license application — does not exist.”  A Cherokee genealogist has similarly stated that she can find no evidence to support Warren’s claim.  At this point her claim appears to be entirely unsupported as an objective matter and it appears that she herself had no basis for it originally.

Fourth, Brian’s post also states the obvious–that there is plenty of bad blood between Elizabeth and myself.  But, of course, the only reason that this issue is interesting and relevant today is because Warren is running for the U.S. Senate and is the most prominent law professor in America at this moment.

So, I guess I’ll conclude by asking the obvious question: if a very prominent conservative law professor (say, for example, John Yoo) had misrepresented himself throughout his professorial career in the manner that Elizabeth Warren has would Brian still consider it to be “the non-issue du jour“?  Really?

I’m not sure what a “Stalinist” is.  But I would think that ignoring a prominent person’s misdeeds just because you like her politics, and attacking the messenger instead, just might fit the bill. (“New England Genealogical Historical Society Rescinds Conclusion that Elizabeth Warren Might Be Cherokee,” The Volokh Conspiracy, May 17, 2012)

For another insight into Leiter’s character, read this and weep not for him.

Tea Party Sell-Outs

Business as usual in Washington:

This week the Club for Growth released a study of votes cast in 2011 by the 87 Republicans elected to the House in November 2010. The Club found that “In many cases, the rhetoric of the so-called “Tea Party” freshmen simply didn’t match their records.” Particularly disconcerting is the fact that so many GOP newcomers cast votes against spending cuts.

The study comes on the heels of three telling votes taken last week in the House that should have been slam-dunks for members who possess the slightest regard for limited government and free markets. Alas, only 26 of the 87 members of the “Tea Party class” voted to defund both the Economic Development Administration and the president’s new Advanced Manufacturing Technology Consortia program (see my previous discussion of these votes here) and against reauthorizing the Export-Import Bank (see my colleague Sallie James’s excoriation of that vote here).

I assembled the following table, which shows how each of the 87 freshmen voted. The 26 who voted for liberty in all three cases are highlighted. Only 49 percent voted to defund the EDA. Only 56 percent voted to defund a new corporate welfare program requested by the Obama administration. And only a dismal 44 percent voted against reauthorizing “Boeing’s bank.” That’s pathetic. (Tad DeHaven, “Freshman Republicans Switch from Tea to Kool-Aid,” Cato@Liberty, May 17, 2012)

Lesson: Never trust a politician who seeks a position of power, unless that person earns trust by divesting the position of power.

PCness

Just a few of the recent outbreaks of PCness that enraged me:

“Michigan Mayor Calls Pro-Lifers ‘Forces of Darkness’” (reported by LifeNews.com on May 11, 2012)

“US Class Suspended for Its View on Islam” (reported by CourierMail.com.au, May 11, 2012)

“House Democrats Politicize Trayvon Martin” (posted at Powerline, May 8, 2012)

“Chronicle of Higher Education Fires Blogger for Questioning Seriousness of Black Studies Depts.” (posted at Reason.com/hit & run, May 8, 2012)

Technocracy, Externalities, and Statism

From a review of Robert Frank’s The Darwin Economy:

In many ways, economics is the discipline best suited to the technocratic mindset. This has nothing to do with its traditional subject matter. It is not about debating how to produce goods and services or how to distribute them. Instead, it relates to how economics has emerged as an approach that distances itself from democratic politics and provides little room for human agency.

Anyone who has done a high-school course in economics is likely to have learned the basics of its technocratic approach from the start. Students have long been taught that economics is a ‘positive science’ – one based on facts rather than values. Politicians are entitled to their preferences, so the argument went, but economists are supposed to give them impartial advice based on an objective examination of the facts.

More recently this approach has been taken even further. The supposedly objective role of the technocrat-economist has become supreme, while the role of politics has been sidelined….

The starting point of The Darwin Economy is what economists call the collective action problem: the divergence between individual and collective interests. A simple example is fishermen fishing in a lake. For each individual, it might be rational to catch as many fish as possible, but if all fishermen follow the same path the lake will eventually be empty. It is therefore deemed necessary to find ways to negotiate this tension between individual and group interests.

Those who have followed the discussion of behavioural economics will recognise that this is an alternative way of viewing humans as irrational. Behavioural economists focus on individuals behaving in supposedly irrational ways. For example, they argue that people often do not invest enough to secure themselves a reasonable pension. For Frank, in contrast, individuals may behave rationally but the net result of group behaviour can still be irrational….

…From Frank’s premises, any activity considered harmful by experts could be deemed illegitimate and subjected to punitive measures….

…[I]t is … wrong to assume that there is no more scope for economic growth to be beneficial. Even in the West, there is a long way to go before scarcity is limited. This is not just a question of individuals having as many consumer goods as they desire – although that has a role. It also means having the resources to provide as many airports, art galleries, hospitals, power stations, roads, schools, universities and other facilities as are needed. There is still ample scope for absolute improvements in living standards…. (Daniel Ben-ami, “Delving into the Mind of the Technocrat,” The Spiked Review of Books, February 2012)

There is much to disagree with in the review, but the quoted material is right on. It leads me to quote myself:

…[L]ife is full of externalities — positive and negative. They often emanate from the same event, and cannot be separated. State action that attempts to undo negative externalities usually results in the negation or curtailment of positive ones. In terms of the preceding example, state action often is aimed at forcing the attractive woman to be less attractive, thus depriving quietly appreciative men of a positive externality, rather than penalizing the crude man if his actions cross the line from mere rudeness to assault.

The main argument against externalities is that they somehow result in something other than a “social optimum.” This argument is pure, economistic hokum. It rests on the unsupportable belief in a social-welfare function, which requires the balancing (by an omniscient being, I suppose) of the happiness and unhappiness that results from every action that affects another person, either directly or indirectly….

A believer in externalities might respond by saying that they are of “economic” importance only as they are imposed on bystanders as a spillover from economic transactions, as in the case of emissions from a power plant that can cause lung damage in susceptible persons. Such a reply is of a kind that only an omniscient being could make with impunity. What privileges an economistic thinker to say that the line of demarcation between relevant and irrelevant acts should be drawn in a certain place? The authors of campus speech codes evidently prefer to draw the line in such a way as to penalize the behavior of the crude man in the above example. Who is the economistic thinker to say that the authors of campus speech codes have it wrong? And who is the legalistic thinker to say that speech should be regulated by deferring to the “feelings” that it arouses in persons who may hear or read it?

Despite the intricacies that I have sketched, negative externalities are singled out for attention and rectification, to the detriment of social and economic intercourse. Remove the negative externalities of electric-power generation and you make more costly (and even inaccessible) a (perhaps the) key factor in America’s economic growth in the past century. Try to limit the supposed negative externality of human activity known as “greenhouse gases” and you limit the ability of humans to cope with that externality (if it exists) through invention, innovation, and entrepreneurship. Limit the supposed negative externality of “offensive” speech and you quickly limit the range of ideas that may be expressed in political discourse. Limit the supposed externalities of suburban sprawl and you, in effect, sentence people to suffer the crime, filth, crowding, contentiousness, heat-island effects, and other externalities of urban living.

The real problem is not externalities but economistic and legalistic reactions to them….

The main result of rationalistic thinking — because it yields vote-worthy slogans and empty promises to fix this and that “problem” — is the aggrandizement of the state, to the detriment of civil society.

The fundamental error of rationalists is to believe that “problems” call for collective action, and to identify collective action with state action. They lack the insight and imagination to understand that the social beings whose voluntary, cooperative efforts are responsible for mankind’s vast material progress are perfectly capable of adapting to and solving “problems,” and that the intrusions of the state simply complicate matters, when not making them worse. True collective action is found in voluntary social and economic intercourse, the complex, information-rich content of which rationalists cannot fathom. They are as useless as a blind man who is shouting directions to an Indy 500 driver….

Theodore Dalrymple

If you do not know of Theodore Dalrymple, you should. His book, In Praise of Prejudice: The Necessity of Preconceived Ideas, inspired  “On Liberty,” the first post at this blog. Without further ado, I commend these recent items by and about Dalrymple:

“Rotting from the Head Down” (an article by Dalrymple about the social collapse of Britain, City Journal, March 8, 2012)

“Symposium: Why Do Progressives Love Criminals?” (Dalrymple and others, FrontPageMag.com, March 9, 2012)

“Doctors Should Not Vote for Industrial Action,” that is, a strike, in American parlance (a post by Dalrymple, The Social Affairs Unit, March 22, 2012)

The third item ends with this:

The fact is that there has never been, is never, and never will be any industrial action over the manifold failures of the public service to provide what it is supposed to provide. Whoever heard of teachers going on strike because a fifth of our children emerge from 11 years of compulsory education unable to read fluently, despite large increases in expenditure on education?

If the doctors vote for industrial action, they will enter a downward spiral of public mistrust of their motives. They should think twice before doing so.

Amen.

The Higher-Education Bubble

The title of a post at The Right Coast tells the tale: “Under 25 College Educated More Unemployed than Non-college Educated for First Time.” As I wrote here,

When I entered college [in 1958], I was among the 28 percent of high-school graduates then attending college. It was evident to me that about half of my college classmates didn’t belong in an institution of higher learning. Despite that, the college-enrollment rate among high-school graduates has since doubled.

(Also see this.)

American taxpayers should be up in arms over the subsidization of an industry that wastes their money on the useless education of masses of ineducable persons. Then there is the fact that taxpayers are forced to subsidize the enemies of liberty who populate university faculties.

The news about unemployment among college grads may hasten the bursting of the higher-ed bubble. It cannot happen too soon.

Something from Nothing?

I do not know if Lawrence Krauss typifies scientists in his logical obtuseness, but he certainly exemplifies the breed of so-called scientists who proclaim atheism as a scientific necessity.  According to a review by David Albert of Krauss’s recent book, A Universe from Nothing,

the laws of quantum mechanics have in them the makings of a thoroughly scientific and adamantly secular explanation of why there is something rather than nothing.

Albert’s review, which I have quoted extensively elsewhere, comports with Edward Feser’s analysis:

The bulk of the book is devoted to exploring how the energy present in otherwise empty space, together with the laws of physics, might have given rise to the universe as it exists today. This is at first treated as if it were highly relevant to the question of how the universe might have come from nothing—until Krauss acknowledges toward the end of the book that energy, space, and the laws of physics don’t really count as “nothing” after all. Then it is proposed that the laws of physics alone might do the trick—though these too, as he implicitly allows, don’t really count as “nothing” either.

Bill Vallicella puts it this way:

[N]o one can have any objection to a replacement of the old Leibniz question — Why is there something rather than nothing? … — with a physically tractable question, a question of interest to cosmologists and one amenable to a  physics solution. Unfortunately, in the paragraph above, Krauss provides two different replacement questions while stating, absurdly, that the second is a more succinct version of the first:

K1. How can a physical universe arise from an initial condition in which there are no particles, no space and perhaps no time?

K2. Why is there ‘stuff’ instead of empty space?

These are obviously distinct questions.  To answer the first one would have to provide an account of how the universe originated from nothing physical: no particles, no space, and “perhaps” no time.  The second question would be easier to answer because it presupposes the existence of space and does not demand that empty space be itself explained.

Clearly, the questions are distinct.  But Krauss conflates them. Indeed, he waffles between them, reverting to something like the first question after raising the second.  To ask why there is something physical as opposed to nothing physical is quite different from asking why there is physical “stuff” as opposed to empty space.

Several years ago, I explained the futility of attempting to decide the fundamental question of creation and its cause on scientific grounds:

Consider these three categories of knowledge (which long pre-date their use by Secretary of Defense Donald Rumsfeld): known knowns, known unknowns, and unknown unknowns. Here’s how that trichotomy might be applied to a specific aspect of scientific knowledge, namely, Earth’s rotation about the Sun:

1. Known knowns — Earth rotates about the Sun, in accordance with Einstein’s theory of general relativity.

2. Known unknowns — Earth, Sun, and the space between them comprise myriad quantum phenomena (e.g., matter and its interactions in, on, and above the Earth and Sun; the transmission of light from Sun to Earth). We don’t know whether and how quantum phenomena influence Earth’s rotation about the Sun; that is, whether Einsteinian gravity is a partial explanation of a more complete theory of gravity that has been dubbed quantum gravity.

3. Unknown unknowns — Other things might influence Earth’s rotation about the Sun, but we don’t know what those other things are, if there are any.

For the sake of argument, suppose that scientists were as certain about the origin of the universe in the Big Bang as they are about the fact of Earth’s rotation about the Sun. Then, I would write:

1. Known knowns — The universe was created in the Big Bang, and the universe — in the large — has since been “unfolding” in accordance with Einsteinian relativity.

2. Known unknowns — The Big Bang can be thought of as a meta-quantum event, but we don’t know if that event was a manifestation of quantum gravity. (Nor do we know how quantum gravity might be implicated in the subsequent unfolding of the universe.)

3. Unknown unknowns — Other things might have caused the Big Bang, but we don’t know if there were such things or what those other things were — or are.

Thus — to a scientist qua scientist — God and Creation are unknown unknowns because, as unfalsifiable hypotheses, they lie outside the scope of scientific inquiry. Any scientist who pronounces, one way or the other, on the existence of God and the reality of Creation has — for the moment, at least — ceased to be a scientist.

Which is not to say that the question of creation is immune to logical analysis; thus:

To say that the world as we know it is the product of chance — and that it may exist only because it is one of vastly many different (but unobservable) worlds resulting from chance — is merely to state a theoretical possibility. Further, it is a possibility that is beyond empirical proof or disproof; it is on a par with science fiction, not with science.

If the world as we know it — our universe — is not the product of chance, what is it? A reasonable answer is found in another post of mine, “Existence and Creation.” Here is the succinct version:

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

There is no reasonable basis — and certainly no empirical one — on which to prefer atheism to deism or theism. Strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.

Another blogger once said this about the final sentence of that quotation, which I lifted from another post of mine:

I would have to disagree with the last sentence. The problem is epistemology — how do we know what we know? Atheists, especially ‘scientistic’ atheists, take the position that the modern scientific methodology of observation, measurement, and extrapolation from observation and measurement, is sufficient to detect anything that Really Exists — and that the burden of proof is on those who propose that something Really Exists that cannot be reliably observed and measured; which is of course impossible within that mental framework. They have plenty of logic and science on their side, and their ‘evidence’ is the commonly-accepted maxim that it is impossible to prove a negative.

I agree that the problem of drawing conclusions about creation from science (as opposed to logic) is epistemological. The truth and nature of creation is an “unknown unknown” or, more accurately, an “unknowable unknown.” With regard to such questions, scientists do not have logic and science on their side when they assert that the existence of the universe is possible without a creator, as a matter of science (as Krauss does, for example). Moreover, it is scientists who are trying to prove a negative: that there is neither a creator nor the logical necessity of one.

“Something from nothing” is possible, but only if there is a creator who is not part of the “something” that is the proper subject of scientific exploration and explanation.

Related posts:
Atheism, Religion, and Science
The Limits of Science
Three Perspectives on Life: A Parable
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Evolution and Religion
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
The Legality of Teaching Intelligent Design
Science, Logic, and God
Capitalism, Liberty, and Christianity
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
Science, Axioms, and Economics
The Big Bang and Atheism
The Universe . . . Four Possibilities
Einstein, Science, and God
Atheism, Religion, and Science Redux
Pascal’s Wager, Morality, and the State
Evolution as God?
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps

Scientism, Evolution, and the Meaning of Life

Scientism is “the uncritical application of scientific or quasi-scientific methods to inappropriate fields of study or investigation.” When scientists proclaim truths outside the realm of their expertise, they are guilty of practicing scientism. Two notable scientistic scientists, of whom I have written several times (e.g., here and here), are Richard Dawkins and Peter Singer. It is unsurprising that Dawkins and Singer are practitioners of scientism. Both are strident atheists, and strident atheists, as I have said, “merely practice a ‘religion’ of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.”

Dawkins, Singer, and many other scientistic atheists share an especially “religious” view of evolution. In brief, they seem to believe that evolution rules out God. Evolution rules out nothing. Evolution may be true in outline but it does not bear close inspection. On that point, I turn to the late David Stove, a noted Australian philosopher and atheist. This is from his essay, “So You Think You Are a Darwinian?“:

Of course most educated people now are Darwinians, in the sense that they believe our species to have originated, not in a creative act of the Divine Will, but by evolution from other animals. But believing that proposition is not enough to make someone a Darwinian. It had been believed, as may be learnt from any history of biology, by very many people long before Darwinism, or Darwin, was born.

What is needed to make someone an adherent of a certain school of thought is belief in all or most of the propositions which are peculiar to that school, and are believed either by all of its adherents, or at least by the more thoroughgoing ones. In any large school of thought, there is always a minority who adhere more exclusively than most to the characteristic beliefs of the school: they are the ‘purists’ or ‘ultras’ of that school. What is needed and sufficient, then, to make a person a Darwinian, is belief in all or most of the propositions which are peculiar to Darwinians, and believed either by all of them, or at least by ultra-Darwinians.

I give below ten propositions which are all Darwinian beliefs in the sense just specified. Each of them is obviously false: either a direct falsity about our species or, where the proposition is a general one, obviously false in the case of our species, at least. Some of the ten propositions are quotations; all the others are paraphrases. The quotations are all from authors who are so well-known, at least in Darwinian circles, as spokesmen for Darwinism or ultra-Darwinism, that their names alone will be sufficient evidence that the proposition is a Darwinian one. Where the proposition is a paraphrase, I give quotations or other information which will, I think, suffice to establish its Darwinian credentials.

My ten propositions are nearly in reverse historical order. Thus, I start from the present day, and from the inferno-scene – like something by Hieronymus Bosch – which the ‘selfish gene’ theory makes of all life. Then I go back a bit to some of the falsities which, beginning in the 1960s, were contributed to Darwinism by the theory of ‘inclusive fitness’. And finally I get back to some of the falsities, more pedestrian though no less obvious, of the Darwinism of the 19th or early-20th century.

1. The truth is, ‘the total prostitution of all animal life, including Man and all his airs and graces, to the blind purposiveness of these minute virus-like substances’, genes.

This is a thumbnail-sketch, and an accurate one, of the contents of The Selfish Gene (1976) by Richard Dawkins….

2. ‘…it is, after all, to [a mother’s] advantage that her child should be adopted’ by another woman….

This quotation is from Dawkins’ The Selfish Gene, p. 110.

Obviously false though this proposition is, from the point of view of Darwinism it is well-founded

3. All communication is ‘manipulation of signal-receiver by signal-sender.’

This profound communication, though it might easily have come from any used-car salesman reflecting on life, was actually sent by Dawkins, (in The Extended Phenotype, (1982), p. 57), to the readers whom he was at that point engaged in manipulating….

9. The more privileged people are the more prolific: if one class in a society is less exposed than another to the misery due to food-shortage, disease, and war, then the members of the more fortunate class will have (on the average) more children than the members of the other class.

That this proposition is false, or rather, is the exact reverse of the truth, is not just obvious. It is notorious, and even proverbial….

10. If variations which are useful to their possessors in the struggle for life ‘do occur, can we doubt (remembering that many more individuals are born than can possibly survive), that individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating their kind? On the other hand, we may feel sure that any variation in the least degree injurious would be rigidly destroyed.’

This is from The Origin of Species, pp. 80-81. Exactly the same words occur in all the editions….

Since this passage expresses the essential idea of natural selection, no further evidence is needed to show that proposition 10 is a Darwinian one. But is it true? In particular, may we really feel sure that every attribute in the least degree injurious to its possessors would be rigidly destroyed by natural selection?

On the contrary, the proposition is (saving Darwin’s reverence) ridiculous. Any educated person can easily think of a hundred characteristics, commonly occurring in our species, which are not only ‘in the least degree’ injurious to their possessors, but seriously or even extremely injurious to them, which have not been ‘rigidly destroyed’, and concerning which there is not the smallest evidence that they are in the process of being destroyed. Here are ten such characteristics, without even going past the first letter of the alphabet. Abortion; adoption; fondness for alcohol; altruism; anal intercourse; respect for ancestors; susceptibility to aneurism; the love of animals; the importance attached to art; asceticism, whether sexual, dietary, or whatever.

Each of these characteristics tends, more or less strongly, to shorten our lives, or to lessen the number of children we have, or both. All of them are of extreme antiquity. Some of them are probably older than our species itself. Adoption, for example is practised by some species of chimpanzees: another adult female taking over the care of a baby whose mother has died. Why has not this ancient and gross ‘biological error’ been rigidly destroyed?…

The cream of the jest, concerning proposition 10, is that Darwinians themselves do not really believe it. Ask a Darwinian whether he actually believes that the fondness for alcoholic drinks is being destroyed now, or that abortion is, or adoption – and watch his face. Well, of course he does not believe it! Why would he? There is not a particle of evidence in its favour, and there is a great mountain of evidence against it. Absolutely the only thing it has in its favour is that Darwinism says it must be so. But (as Descartes said in another connection) ‘this reasoning cannot be presented to infidels, who might consider that it proceeded in a circle’.

What becomes, then, of the terrifying giant named Natural Selection, which can never sleep, can never fail to detect an attribute which is, even in the least degree, injurious to its possessors in the struggle for life, and can never fail to punish such an attribute with rigid destruction? Why, just that, like so much else in Darwinism, it is an obvious fairytale, at least as far as our species is concerned.

A science cannot be wrong in so many important ways and yet be taken seriously as a God-substitute.

Frederick Turner has this to say in “Darwin and Design: The Evolution of a Flawed Debate“:

Does the theory of evolution make God unnecessary to the very existence of the world?…

The polemical evolutionists are right about the truth of evolution. But the rightness of their cause has been deeply compromised by their own version of the creationists’ sin. The evolutionists’ sin, as I see it, is even greater, because it is three sins rolled into one….

The third sin is … dishonesty. In many cases it is clear that the beautiful and hard-won theory of evolution, now proved beyond reasonable doubt, is being cynically used by some — who do not much care about it as such — to support an ulterior purpose: a program of atheist indoctrination, and an assault on the moral and spiritual goals of religion. A truth used for unworthy purposes is quite as bad as a lie used for ends believed to be worthy. If religion can be undermined in the hearts and minds of the people, then the only authority left will be the state, and, not coincidentally, the state’s well-paid academic, legal, therapeutic and caring professions. If creationists cannot be trusted to give a fair hearing to evidence and logic because of their prior commitment to religious doctrine, some evolutionary partisans cannot be trusted because they would use a general social acceptance of the truth of evolution as a way to set in place a system of helpless moral license in the population and an intellectual elite to take care of them.

And that is my issue, not only with the likes of Dawkins and Singer but also with any so-called scientist who believes that evolution — or, more broadly, scientific knowledge — somehow justifies atheism.

Science is only about the knowable, and much of life’s meaning lies where science cannot reach. Maverick Philosopher puts it this way in “Why Science Will Never Put Religion Out of Business“:

We suffer from a lack of existential meaning, a meaning that we cannot supply from our own resources since any subjective acts of meaning-positing are themselves (objectively) meaningless….

…[T]he salvation religion promises is not to be understood in some crass physical sense the way the typical superficial and benighted atheist-materialist would take it but as salvation from meaninglessness, anomie, spiritual desolation, Unheimlichkeit, existential insecurity, Angst, ignorance and delusion, false value-prioritizations, moral corruption irremediable by any human effort, failure to live up to ideals, the vanity and transience of our lives, meaningless sufferings and cravings and attachments, the ultimate pointlessness of all efforts at moral and intellectual improvement in the face of death . . . .

…[I]t is self-evident that there are no technological solutions to moral evil, moral ignorance, and the apparent absurdity of life.  Is a longer life a morally better life?  Can mere longevity confer meaning? The notion that present or future science can solve the problems that religion addresses is utterly chimerical.

Related posts:
Atheism, Religion, and Science
The Limits of Science
Three Perspectives on Life: A Parable
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Evolution and Religion
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
The Legality of Teaching Intelligent Design
Science, Logic, and God
Capitalism, Liberty, and Christianity
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
Science, Axioms, and Economics
The Big Bang and Atheism
The Universe . . . Four Possibilities
Einstein, Science, and God
Atheism, Religion, and Science Redux
Pascal’s Wager, Morality, and the State
Evolution as God?
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science

Analysis for Government Decision-Making: Demi-Science, Hemi-Demi-Science, and Sophistry

Taking a “hard science” like classical mechanics as an epitome of science and, say, mechanical engineering as a rigorous application of it, one travels a goodly conceptual distance before arriving at operations research (OR). Philip M. Morse and George E. Kimball, pioneers of OR in World War II, put it this way:

[S]uccessful application of operations research usually results in improvements by factors of 3 or 10 or more…. In our first study of any operation we are looking for these large factors of possible improvement…. They can be discovered if the [variables] are given only one significant figure,…any greater accuracy simply adds unessential detail.

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. (Philip M. Morse and George E. Kimball, Methods of Operations Research, originally published as Operations Evaluation Group Report 54, 1946, p. 38)

This is science-speak for the following proposition: Where there is much variability in the particular circumstances of combat, there is much uncertainty about the contributions of various factors (human, mechanical, and meteorological) to the outcome of combat. It is therefore difficult to assign precise numerical values to the various factors.

OR, even in wartime, is therefore, and at best, a demi-science. From there, we descend to cost-effectiveness analysis and its constituent branches: techniques for designing and estimating the costs of systems that do not yet exist and the effectiveness of such systems in combat. These methods, taken separately and together, are (to coin a term) hemi-demi-scientific — a fact that the application of “rigorous” mathematical and statistical techniques cannot alter.

There is no need to elaborate on the wild inaccuracy of estimates about the costs and physical performance of government-owned and operated systems, whether they are intended for military or civilian use. The gross errors of estimation have been amply documented in the public press for decades.

What is less well known is the difficulty of predicting the performance of systems — especially combat systems — years before they are commanded, operated, and maintained by human beings, under conditions that are likely to be far different than those envisioned when the systems were first proposed. A paper that I wrote thirty years ago gives my view of the great uncertainty that surrounds estimates of the effectiveness of systems that have yet to be developed, or built, or used in combat:

Aside from a natural urge for certainty, faith in quantitative models of warfare springs from the experience of World War II, when they seemed to lead to more effective tactics and equipment. But the foundation of this success was not the quantitative methods themselves. Rather, it was the fact that the methods were applied in wartime. Morse and Kimball put it well:

Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. [Methods of Operations Research, p. 10]

Contrast this attitude with the attempts of analysts … to evaluate weapons, forces, and strategies with abstract models of combat. However elegant and internally consistent the models, they have remained as untested and untestable as the postulates of theology.

There is, of course, no valid test to apply to a warfare model. In peacetime, there is no enemy; in wartime, the enemy’s actions cannot be controlled. Morse and Kimball, accordingly, urge “hemibel thinking”:

Having obtained the constants of the operations under study… we compare the value of the constants obtained in actual operations with the optimum theoretical value, if this can be computed. If the actual value is within a hemibel (…a factor of 3) of the theoretical value, then it is extremely unlikely that any improvement in the details of the operation will result in significant improvement. [When] there is a wide gap between the actual and theoretical results … a hint as to the possible means of improvement can usually be obtained by a crude sorting of the operational data to see whether changes in personnel, equipment, or tactics produce a significant change in the constants. [Ibid., p. 38]

….

Much as we would like to fold the many different parameters of a weapon, a force, or a strategy into a single number, we can not. An analyst’s notion of which variables matter and how they interact is no substitute for data. Such data as exist, of course, represent observations of discrete events — usually peacetime events. It remains for the analyst to calibrate the observations, but without a benchmark to go by. Calibration by past battles is a method of reconstruction — of cutting one of several coats to fit a single form — but not a method of validation. Lacking pertinent data, an analyst is likely to resort to models of great complexity. Thus, if useful estimates of detection probabilities are unavailable, the detection process is modeled; if estimates of the outcomes of dogfights are unavailable, aerial combat is reduced to minutiae. Spurious accuracy replaces obvious inaccuracy; untestable hypotheses and unchecked calibrations multiply apace. Yet the analyst claims relative if not absolute accuracy, certifying that he has identified, measured, and properly linked, a priori, the parameters that differentiate weapons, forces, and strategies….

Should we really attach little significance to differences of less than a hemibel? Consider a five-parameter model, involving the conditional probabilities of detecting, shooting at, hitting, and killing an opponent — and surviving, in the first place, to do any of these things. Such a model might easily yield a cumulative error of a hemibel, given a twenty-five percent error in each parameter. My intuition is that one would be lucky if relative errors in the probabilities assigned to alternative weapons and forces were as low as twenty-five percent.
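A quick check of that intuition (my sketch, not part of the quoted paper): a twenty-five percent error in each of five multiplicative parameters compounds to roughly a factor of 3, which is about a hemibel.

```python
# Compounding a 25% relative error across five multiplicative parameters
# (the probabilities of detecting, shooting, hitting, killing, and surviving).
relative_error = 1.25
n_parameters = 5

compounded_error = relative_error ** n_parameters
hemibel = 10 ** 0.5

print(f"compounded error: {compounded_error:.2f}")   # ~3.05
print(f"one hemibel:      {hemibel:.2f}")            # ~3.16
# The model's bottom line is therefore uncertain by about a hemibel,
# the very threshold below which differences are deemed insignificant.
```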

The further that one travels from an empirical question, such as the likely effectiveness of an extant weapon system under specific, quantifiable conditions, the more likely one is to encounter the kind of sophistry known as policy analysis. It is this kind of analysis that one encounters, more often than not, in the context of broad policy issues (e.g., government policy toward health care, energy, or defense spending). Such analysis is constructed so that it favors the prejudices of the analyst or his client, or supports the client’s political case for a certain policy.

Policy analysis often seems credible, especially on first hearing or reading it. But, on inspection, it is usually found to have at least two of these characteristics:

  • It stipulates or quickly arrives at a preferred policy, then marshals facts, calculations, and opinions that are selected because they support the preferred policy.
  • If it offers and assesses alternative policies, they are not placed on an equal footing with the preferred policy. They are, for example, assessed against criteria that favor the preferred policy, while other criteria (which might be important ones) are ignored or given short shrift.
  • It is wrapped in breathless prose, dripping with words and phrases like “aggressive action,” “grave consequences,” and “sense of urgency.”

No discipline or quantitative method is rigorous enough to redeem policy analysis, but two disciplines are especially suited to it: political “science” and macroeconomics. Both are couched in the language of real science, but both lend themselves perfectly to the old adage: garbage in, garbage out.

Do I mean to suggest that broad policy issues should not be addressed as analytically as possible? Not at all. What I mean to suggest is that because such issues cannot be illuminated with scientific rigor, they are especially fertile ground for sophists with preconceived positions.

In that respect, the model of cost-effectiveness analysis, with all of its limitations, is to be emulated. Put simply, it is to:

  • state a clear objective in a way that does not drive the answer;
  • reveal the assumptions underlying the analysis;
  • state the relevant variables (factors influencing the attainment of the objective);
  • disclose fully the data, the sources of data, and the analytic methods; and
  • explore openly and candidly the effects of variations in key assumptions and critical variables.
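By way of illustration only, here is a bare-bones sketch of the last step. The measure (cost per target killed), the function, and every number in it are hypothetical; the point is the open variation of a key assumption, not the particular figures.

```python
# A toy cost-effectiveness calculation. All names and numbers are hypothetical.

def cost_per_kill(unit_cost, sorties_per_unit, kill_prob_per_sortie):
    """Expected cost per target killed for a notional system."""
    expected_kills = sorties_per_unit * kill_prob_per_sortie
    return unit_cost / expected_kills

UNIT_COST = 50e6         # assumed acquisition cost per unit ($)
SORTIES_PER_UNIT = 100   # assumed sorties flown over the unit's service life

# Vary the most uncertain assumption -- the kill probability per sortie --
# and show, openly, how sensitive the answer is to it.
for p in (0.1, 0.2, 0.4):
    cost = cost_per_kill(UNIT_COST, SORTIES_PER_UNIT, p)
    print(f"kill probability {p:.1f} -> cost per kill ${cost / 1e6:.1f}M")
```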

Demystifying Science

“Science” is an unnecessarily daunting concept to the uninitiated, which is to say, almost everyone. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — often are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

WHAT IS SCIENCE?

Science is knowledge, but not all knowledge is science. A scientific body of knowledge is systematic; that is, the granular facts or phenomena which comprise the body of knowledge are connected in patterned ways. Moreover, the facts or phenomena represent reality; they are not mere concepts, which may be tools of science but are not science. Beyond that, science — unless it is a purely descriptive body of knowledge — is predictive about the characteristics of as-yet unobserved phenomena. These may be things that exist but have not yet been measured (in terms of the applicable science), or things that are yet to be (as in the effects of a new drug on a disease).

Above all, science is not a matter of “consensus” — AGW zealots to the contrary notwithstanding. Science is a matter of rigorously testing theories against facts, and doing it openly. Imagine the state of physics today if Galileo had been unable to question Aristotle’s theory of gravitation, if Newton had been unable to extend and generalize Galileo’s work, and if Einstein had deferred to Newton. The effort to “deny” a prevailing or popular theory is as old as science. There have been “deniers” in the thousands, each of them responsible for advancing some aspect of knowledge. Not all “deniers” have been as prominent as Einstein (consider Dan Shechtman, for example), but each is potentially as important as Einstein.

It is hard for scientists to rise above their human impulses. Einstein, for example, so much wanted quantum physics to be deterministic rather than probabilistic that he said “God does not play dice with the universe.” To which Niels Bohr replied, “Einstein, stop telling God what to do.” But the human urge to be “right” or to be on the “right side” of an issue does not excuse anti-scientific behavior, such as that of so-called scientists who have become invested in AGW.

There are many so-called scientists who subscribe to AGW without having done relevant research. Why? Because AGW is the “in” thing, and they do not wish to be left out. This is the stuff of which “scientific consensus” is made. If you would not buy a make of automobile just because it is endorsed by a celebrity who knows nothing about automotive engineering, why would you “buy” AGW just because it is endorsed by a herd of so-called scientists who have never done research that bears directly on it?

There are two lessons to take from this. The first is that no theory is ever proven. (A theory may, if it is well and openly tested, be a useful guide to action in certain rigorous disciplines, such as engineering and medicine.) Any theory — to be a truly scientific one — must be capable of being tested, even by (and especially by) others who are skeptical of the theory. Those others must be able to verify the facts upon which the theory is predicated, and to replicate the tests and calculations that seem to validate the theory. So-called scientists who restrict access to their data and methods are properly thought of as cultists with a political agenda, not scientists. Their theories are not to be believed — and certainly are not to be taken as guides to action.

The second lesson is that scientists are human and fallible. It is in the best tradition of science to distrust their claims and to dismiss their non-scientific utterances.

THE ROLE OF MATHEMATICS AND STATISTICS IN SCIENCE

Mathematics and statistics are not sciences, despite their vast and organized complexity. They offer ways of thinking about and expressing knowledge, but they are not knowledge. They are languages that enable scientists to converse with each other and outsiders who are fluent in the same languages.

Expressing a theory in mathematical terms may lend the theory a scientific aura. But a theory couched in mathematics (or its verbal equivalent) is not a scientific one unless (a) it can be tested against observable facts by rigorous statistical methods, (b) it is found, consistently, to accord with those facts, and (c) the introduction of new facts does not require adjustment or outright rejection of the theory. If the introduction of new facts requires the adjustment of a theory, then it is a new theory, which must be tested against new facts, and so on.

This “inconvenient fact” — that an adjusted theory is a new theory — is ignored routinely, especially in the application of regression analysis to a data set for the purpose of quantifying relationships among variables. If a “model” thus derived does a poor job when applied to data outside the original set, it is not an uncommon practice to combine the original and new data and derive a new “model” based on the combined set. This practice (sometimes called data-mining) does not yield scientific theories with predictive power; it yields information (of dubious value) about the data employed in the regression analysis. As a critic of regression models once put it: Regression is a way of predicting the past with great certainty.
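A synthetic illustration of the point (my sketch, with made-up data): an over-fitted regression “predicts” the past almost perfectly and the future hardly at all; pooling old and new data and re-fitting merely produces a new, equally untested model.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "past": a noisy but simple linear process.
x_old = np.linspace(0, 10, 20)
y_old = 2.0 * x_old + rng.normal(0, 4, x_old.size)

# Over-fit the past with a high-degree polynomial "model".
coeffs = np.polyfit(x_old, y_old, deg=9)

# The "future": new data from the same underlying process.
x_new = np.linspace(10, 15, 10)
y_new = 2.0 * x_new + rng.normal(0, 4, x_new.size)

def rmse(x, y):
    """Root-mean-square error of the fitted model on (x, y)."""
    return np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))

print(f"error on the past (in-sample):       {rmse(x_old, y_old):.1f}")   # small
print(f"error on the future (out-of-sample): {rmse(x_new, y_new):.1f}")   # huge
# Re-fitting on the combined data would again "explain" the past,
# but the result would be a new model, untested against anything beyond it.
```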

A science may be descriptive rather than mathematical. In a descriptive science (e.g., plant taxonomy), particular phenomena sometimes are described numerically (e.g., the number of leaves on the stem of a species), but the relations among various phenomena are not reducible to mathematics. Nevertheless, a predominantly descriptive discipline will be scientific if the phenomena within its compass are connected in patterned ways.

NON-SCIENCE, SCIENCE, AND PSEUDO-SCIENCE

Non-scientific disciplines can be useful, whereas some purportedly scientific disciplines verge on charlatanism. Thus, for example:

  • History, by my reckoning, is not a science. But a knowledge of history is valuable, nevertheless, for the insights it offers into the influence of human nature on the outcomes of economic and political processes. I call the lessons of history “insights,” not scientific relationships, because history is influenced by so many factors that it does not allow for the rigorous testing of hypotheses.
  • Physics is a science in most of its sub-disciplines, but there are some (e.g., cosmology and certain interpretations of quantum mechanics) where it descends into the realm of speculation. Informed, fascinating speculation to be sure, but speculation all the same. It avoids being pseudo-scientific only because it might give rise to testable hypotheses.
  • Economics is a science only to the extent that it yields valid, statistical insights about specific microeconomic issues (e.g., the effects of laws and regulations on the prices and outputs of goods and services). The postulates of macroeconomics, except to the extent that they are truisms, have no demonstrable validity. (See, for example, my treatment of the Keynesian multiplier.) Macroeconomics is a pseudo-science.

CONCLUSION

There is no such thing as “science,” writ large; that is, no one may appeal, legitimately, to “science” in the abstract. A particular discipline may be a science, but it is a science only to the extent that it comprises a factual body of knowledge and testable theories. Further, its data and methods must be open to verification and testing. And only a particular theory — one that has been put to the proper tests — can be called a scientific one.

For the reasons adduced in this post, scientists who claim to “know” that there is no God are not practicing science when they make that claim. They are practicing the religion that is known as atheism. The existence or non-existence of God is beyond testing, at least by any means yet known to man.

Related posts:
About Economic Forecasting
Is Economics a Science?
Economics as Science
Hemibel Thinking
Climatology
Physics Envy
Global Warming: Realities and Benefits
Words of Caution for the Cautious
Scientists in a Snit
Another Blow to Climatology?
A Telling Truth
Proof That “Smart” Economists Can Be Stupid
Bad News for Politically Correct Science
Another Blow to Chicken-Little Science
Same Old Story, Same Old Song and Dance
Atheism, Religion, and Science
The Limits of Science
Three Perspectives on Life: A Parable
Beware of Irrational Atheism
The Hockey Stick Is Broken
Talk about Brainwaves!
The Creation Model
The Thing about Science
Science in Politics, Politics in Science
Global Warming and Life
Evolution and Religion
Speaking of Religion…
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Global Warming and the Liberal Agenda
Science, Logic, and God
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
This Is Objectivism?
Objectivism: Tautologies in Search of Reality
Science’s Anti-Scientific Bent
Science, Axioms, and Economics
Global Warming in Perspective
Mathematical Economics
Economics: The Dismal (Non) Science
The Big Bang and Atheism
More Bad News for Global Warming Zealots
The Universe . . . Four Possibilities
Einstein, Science, and God
Atheism, Religion, and Science Redux
Warming, Anyone?
“Warmism”: The Myth of Anthropogenic Global Warming
Re: Climate “Science”
More Evidence against Anthropogenic Global Warming
Yet More Evidence against Anthropogenic Global Warming
A Non-Believer Defends Religion
Evolution as God?
Modeling Is Not Science
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
Beware the Rare Event
Landsburg Is Half-Right
Physics Envy
The Unreality of Objectivism
What Is Truth?
Evolution, Human Nature, and “Natural Rights”
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
We, the Children of the Enlightenment
Probability, Existence, and Creation
The Atheism of the Gaps
Probability, Existence, and Creation: A Footnote

What Is Truth?

There are four kinds of truth: physical, logical-mathematical, psychological-emotional, and judgmental. The first two are closely related, as are the last two. After considering each of the two closely related pairs, I will link all four kinds of truth.

PHYSICAL AND LOGICAL-MATHEMATICAL TRUTH

Physical truth is, seemingly, the most straightforward of the lot. Physical truth seems to consist of that which humans are able to apprehend with their senses, aided sometimes by instruments. And yet, widely accepted notions of physical truth have changed drastically over the eons, not only because of improvements in the instruments of observation but also because of changes in the interpretation of data obtained with the aid of those instruments.

The latter point brings me to logical-mathematical truth. It is logic and mathematics that translates specific physical truths — or what are taken to be truths — into constructs (theories) such as quantum mechanics, general relativity, the Big Bang, and evolution. Of the relationship between specific physical truth and logical-mathematical truth, G.K. Chesterton said:

Logic and truth, as a matter of fact, have very little to do with each other. Logic is concerned merely with the fidelity and accuracy with which a certain process is performed, a process which can be performed with any materials, with any assumption. You can be as logical about griffins and basilisks as about sheep and pigs. On the assumption that a man has two ears, it is good logic that three men have six ears, but on the assumption that a man has four ears, it is equally good logic that three men have twelve. And the power of seeing how many ears the average man, as a fact, possesses, the power of counting a gentleman’s ears accurately and without mathematical confusion, is not a logical thing but a primary and direct experience, like a physical sense, like a religious vision. The power of counting ears may be limited by a blow on the head; it may be disturbed and even augmented by two bottles of champagne; but it cannot be affected by argument. Logic has again and again been expended, and expended most brilliantly and effectively, on things that do not exist at all. There is far more logic, more sustained consistency of the mind, in the science of heraldry than in the science of biology. There is more logic in Alice in Wonderland than in the Statute Book or the Blue Books. The relations of logic to truth depend, then, not upon its perfection as logic, but upon certain pre-logical faculties and certain pre-logical discoveries, upon the possession of those faculties, upon the power of making those discoveries. If a man starts with certain assumptions, he may be a good logician and a good citizen, a wise man, a successful figure. If he starts with certain other assumptions, he may be an equally good logician and a bankrupt, a criminal, a raving lunatic. Logic, then, is not necessarily an instrument for finding truth; on the contrary, truth is necessarily an instrument for using logic—for using it, that is, for the discovery of further truth and for the profit of humanity. Briefly, you can only find truth with logic if you have already found truth without it. [Thanks to The Fourth Checkraise for making me aware of Chesterton’s aperçu.]

To put it another way, logical-mathematical truth is only as valid as the axioms (principles) from which it is derived. Given an axiom, or a set of them, one can deduce “true” statements (assuming that one’s logical-mathematical processes are sound). But axioms are not pre-existing truths with independent existence (like Platonic ideals). They are products, in one way or another, of observation and reckoning. The truth of statements derived from axioms depends, first and foremost, on the truth of the axioms, which is the thrust of Chesterton’s aperçu.

It is usual to divide reasoning into two types of logical process:

  • Induction is “The process of deriving general principles from particular facts or instances.” That is how scientific theories are developed, in principle. A scientist begins with observations and devises a theory from them. Or a scientist may begin with an existing theory, note that new observations do not comport with the theory, and devise a new theory to fit all the observations, old and new.
  • Deduction is “The process of reasoning in which a conclusion follows necessarily from the stated premises; inference by reasoning from the general to the specific.” That is how scientific theories are tested, in principle. A theory (a “stated premise”) should lead to certain conclusions (“observations”). If it does not, the theory is falsified. If it does, the theory lives for another day.

But the stated premises (axioms) of a scientific theory (or exercise in logic or mathematical operation) do not arise out of nothing. In one way or another, directly or indirectly, they are the result of observation and reckoning (induction). Get the observation and reckoning wrong, and what follows is wrong; get them right and what follows is right. Chesterton, again.

PSYCHOLOGICAL-EMOTIONAL AND JUDGMENTAL TRUTH

A psychological-emotional truth is one that depends on more than physical observations. A judgmental truth is one that arises from a psychological-emotional truth and results in a consequential judgment about its subject.

A common psychological-emotional truth, one that finds its way into judgmental truth, is an individual’s conception of beauty.  The emotional aspect of beauty is evident in the tendency, especially among young persons, to consider their lovers and spouses beautiful, even as persons outside the intimate relationship would find their judgments risible.

A more serious psychological-emotional truth — or one that has public-policy implications — has to do with race. There are persons who simply have negative views about races other than their own, for reasons that are irrelevant here. What is relevant is the close link between the psychological-emotional views about persons of other races — that they are untrustworthy, stupid, lazy, violent, etc. — and judgments that adversely affect those persons. Those judgments range from refusal to hire a person of a different race (still quite common, if well disguised to avoid legal problems) to unjust convictions and executions that result from prejudices held by victims, witnesses, police officers, prosecutors, judges, and jurors. (My examples point to anti-black prejudices on the part of whites, but there are plenty of others to go around: anti-white, anti-Latino, anti-Asian, etc. Nor do I mean to impugn prudential judgments that implicate race, as in the avoidance by whites of certain parts of a city.)

A close parallel is found in the linkage between the psychological-emotional truth that underlies a jury’s verdict and the legal truth of a judge’s sentence. There is an even tighter linkage between psychological-emotional truth and legal truth in the deliberations and rulings of higher courts, which operate without juries.

PUTTING TRUTH AND TRUTH TOGETHER

Psychological-emotional proclivities, and the judgmental truths that arise from them, impinge on physical and mathematical-logical truth. Because humans are limited (by time, ability, and inclination), they often accept as axiomatic statements about the world that are tenuous, if not downright false. Scientists, mathematicians, and logicians are not exempt from the tendency to credit dubious statements. And that tendency can arise not just from expediency and ignorance but also from psychological-emotional proclivities.

Albert Einstein, for example, refused to believe that very small particles of matter-energy (quanta) behave probabilistically, as described by the branch of physics known as quantum mechanics. Put simply, sub-atomic particles do not seem to behave according to the same physical laws that describe the actions of the visible universe; their behavior is discontinuous (“jumpy”) and described probabilistically, not by the kinds of continuous (“smooth”) mathematical formulae that apply to the macroscopic world.

Einstein refused to believe that different parts of the same universe could operate according to different physical laws. Thus he saw quantum mechanics as incomplete and in need of reconciliation with the rest of physics. At one point in his long-running debate with the defenders of quantum mechanics, Einstein wrote: “I, at any rate, am convinced that He [God] does not throw dice.” And yet, quantum mechanics — albeit refined and elaborated from the version Einstein knew — survives and continues to describe the sub-atomic world with accuracy.

Ironically, Einstein’s two greatest contributions to physics — special and general relativity — were met with initial skepticism by other physicists. Special relativity rejects absolute space-time; general relativity depicts a universe whose “shape” depends on the masses and motions of the bodies within it. These are not intuitive concepts, given man’s instinctive preference for certainty.

The point of the vignettes about Einstein is that science is not a sterile occupation; it can be (and often is) fraught with psychological-emotional visions of truth. What scientists believe to be true depends, to some degree, on what they want to believe is true. Scientists are simply human beings who happen to be more capable than the average person when it comes to the manipulation of abstract concepts. And yet, scientists are like most of their fellow beings in their need for acceptance and approval. They are fully capable of subscribing to a “truth” if to do otherwise would subject them to the scorn of their peers. Einstein was willing and able to question quantum mechanics because he had long since established himself as a premier physicist, and because he was among that rare breed of humans who are (visibly) unaffected by the opinions of their peers.

Such are the scientists who, today, question their peers’ psychological-emotional attachment to the hypothesis of anthropogenic global warming (AGW). The questioners are not “deniers” or “skeptics”; they are scientists who are willing to look deeper than the facile hypothesis that, more than two decades ago, gave rise to the AGW craze.

It was then that a scientist noted the coincidence of an apparent rise in global temperatures since the late 1800s (or is it since 1975?) and an apparent increase in the atmospheric concentration of CO2. And thus a hypothesis was formed. It was embraced and elaborated by scientists (and others) eager to be au courant, to obtain government grants (conveniently aimed at research “proving” AGW), to be “right” by being in the majority, and — let it be said — to curtail or stamp out human activities which they find unaesthetic. Evidence to the contrary be damned.

Where else have we seen this kind of behavior, albeit in a more murderous guise? At the risk of invoking Hitler, I must answer with this link: Nazi Eugenics. Again, science is not a sterile occupation, exempt from human flaws and foibles.

CONCLUSION

What is truth? Is it an absolute reality that lies beyond human perception? Is it those “answers” that flow logically or mathematically from unproven assumptions? Is it the “answers” that, in some way, please us? Or is it the ways in which we reshape the world to conform it with those “answers”?

Truth, as we are able to know it, is like the human condition: fragile and prone to error.